I reread Dr. Isaac Asimov's fifty-four-year-old masterpiece I, Robot in preparation for the 2004 Twentieth Century Fox motion picture of the same name, knowing full well that, to appeal to today's rollercoaster-addicted action-thriller audience, the movie could not come close to the book. I was right. But not in the way I thought I would be.
The movie, recently out on DVD, begins with the Three Laws of Robotics: robots must not harm a human being; they must obey human orders, so long as this does not violate the first law; and they must protect their own existence, so long as that does not violate laws one and two. Apart from these three laws, the shared title, and some of the character names, the motion picture appears to depart radically from Asimov's book, first published by Doubleday in 1950. To give Twentieth Century Fox credit, the film does not pretend to be the same as the book; I notice in the credits that the movie was "suggested by" rather than "based on" Asimov's work. But how different was it, really? I submit that the two are much more similar than they first appear.
Superficial differences between book and motion picture are nevertheless glaring. First off, Asimov's I, Robot is essentially a string of short stories that evolve along a theme, much in the vein of Ray Bradbury's The Martian Chronicles. The book is told largely from the point of view of Dr. Susan Calvin, a plain and stern robopsychologist who gets along better with robots than with humans. Dr. Asimov uses this cold and colorless character as a vehicle to stir undercurrents of poignant thought on the human condition through a series of deceptively mundane tales. I, Robot offers a treatise both on humanity's ingenuity and its foibles and on how the two are inexorably intertwined in paradoxes that speak to the ultimate truth of what it is to be human. Each of his nine stories discloses a metaphoric piece of his clever puzzle. The puzzle pieces successively tease us through the Three Laws of Robotics, as ever more sophisticated robots grapple with perceived logical contradictions of the laws. For instance, there is Robbie, the endearing nursemaid robot. Cutie (QT-1) is a robot Descartes in "Reason." In "Liar!," the mind-reading robot Herbie has trouble coping with the three laws. And in "Little Lost Robot," Dr. Calvin must outsmart the Nestors—the NS-2 model robots—whose positronic brains were not impressioned with the entire First Law of Robotics. The larger question and ultimate paradox posed by the three laws culminate in Asimov's final story, "The Evitable Conflict," which subtly explores the role of free will and faith in our definition of what it means to be human.
The book jacket aptly describes I, Robot this way: ". . . humans and robots struggle to survive together—and sometimes against each other. . . and both are asking the same questions: what is human? And is humanity obsolete?" Interestingly, the latter part of the book jacket quote, which accompanied the 1991 Bantam mass market edition, can be interpreted in several ways.
Asimov's stories span fifty years of robot evolution, played out mostly in space, from Mercury to beyond our own galaxy. Alex Proyas's movie is set in Chicago in 2035 and condenses the time frame into a few short weeks, with some flashbacks from several years prior. This serves the film well but at some cost. What is gained in tension and focus is lost in scope and erudition, two qualities often best left to the literary field. Asimov's string of tales is quirky, contemplative, and thoughtful. The film version is more direct, trading these qualities for a faster pace, pretty much a prerequisite in the film industry today.
Jeff Vintar's original screenplay, entitled "Hardwired," was reworked by Akiva Goldsman into a techno-thriller/murder mystery directed by Alex Proyas (Dark City), with its requisite hard-boiled detective cop (Will Smith as Del Spooner) and a "suicide" that looks suspiciously like murder. Smith's character (a Hollywood invention, so don't go looking for him in the book) is a 21st-century anachronism: a Luddite who wears retro clothes and sets his computerized car on manual. The story centers on Spooner's investigation of the so-called suicide of Dr. Alfred Lanning (James Cromwell), robot pioneer and originator of the Three Laws of Robotics. Lanning was an employee of U.S. Robotics, a megacorporation run by Lawrence Robertson (Bruce Greenwood). Robertson relies on the real brains of the operation, V.I.K.I., the corporation's superintelligent virtual computer.
With a "simple-minded" plot (according to Roger Ebert, Chicago Sun-Times) and a lead character who is little more than a "wisecracking . . . guns-a-blazin' . . . action-hero cliché" (Rob Blackwelder, SPLICEDwire), the motion picture rendition of Asimov's groundbreaking book promises little but disappointment for the literate science fiction fan, according to many critics. I disagree. I was not disappointed, both despite and because of director Alex Proyas's interpretation of Asimov's book and his three laws. Several critics focused on the superficial plot at the expense of the subtle, multilayered thematic subplots contrived by a director not known for creating superficial action-figure fluff. I suspect this critical myopia stems from the critics' own admission that they had not read Asimov's masterpiece. Familiarity with Asimov's I, Robot is a prerequisite to recognizing the subtle intelligence Proyas wove into his otherwise playful and glitzy Hollywood techno-thriller.
While literate science fiction fans will certainly recognize the names of Lanning, Dr. Calvin, and Robertson, these movie characters in no way resemble their book counterparts. Dr. Calvin (Bridget Moynahan) is a robopsychologist, but in the movie she is far from plain, and her wooden performance fails to disguise that she is clearly ruled by her feelings, unlike the coldly logical book character. The lead character in the film, Del Spooner (Smith), is, of course, a Hollywood fabrication, along with an entourage of requisite techno-thriller components: spectacular chase and battle scenes, explosions, lots of shooting, and some romantic tension. The film is also fraught with Hollywood clichés: for instance, the repressed psychologist (Moynahan), who typically speaks in polysyllabic jargon, encounters the cynical, beefy antihero cop (Smith), whose rude attentions help transform her into a gun-slinging kick-ass warrior. But Proyas also treats us to one of the most convincing portrayals of a futuristic metropolis, complete with seamlessly incorporated CGI robots and an evocative score by Marco Beltrami. Asimov fans will, of course, also recognize certain aspects of the book in the movie, such as a scene and concepts borrowed from "Little Lost Robot."
Despite the clichés and comic-action razzle-dazzle, Proyas manages to preserve the soul and spirit of Dr. Asimov's great creation. He does this by allowing us to glimpse some of Asimov's elevated themes, if not his more complex questions. Indeed, the most poignant scenes in the movie are those involving the "humanity" of the robot called Sonny (Alan Tudyk). A unique NS-5 model with a secondary processing system that clashes with his positronic brain, Sonny is capable of rejecting any of the three laws and hence, ironically, provides us with the most complex (and interesting) character in the movie. Sonny is both humble and feisty, a robot who dreams and questions. For me, this recalled the stirring scenes in Asimov's "Liar!" in which the mind-reading robot, Herbie, grappling with the complex nature of humans, unintentionally brings about its own destruction (with the help of a bitter Dr. Calvin) because it tries to please everyone by telling them what it thinks they want to hear. Sonny's complex character (like any character with depth) keeps you guessing. Sonny asks the right questions, and at the end of the film we are left wondering about his destiny and what he will make of it. This parallels Asimov's equally ambiguous ending in "The Evitable Conflict."
Which brings me back to the foundation shared by both book and movie: the Three Laws of Robotics, the infinite ways they can be interpreted, and how they may be applied equally to robot or human. The laws may apply physically or emotionally, individually or toward the whole of humanity, long-term or short-term . . . the list is potentially endless. Asimov's collection of stories centers on these questions by showing how different robots deal with the conflicts and perceived contradictions presented by the laws. Asimov's last story describes a world run by a network of powerful but benevolent machines that guide humankind through strict adherence to the three laws (their interpretation, of course!). Taking his cue from this, Proyas cleverly takes an old cliché—that of the "evil" machine with designs to rule the world—and turns it upside down according to the First Law of Robotics. His "evil" machine turns out not to be evil but misguided. V.I.K.I. acts not out of its own interests, like the self-preserving HAL in 2001: A Space Odyssey, but in the best interests of humankind (at least according to the machine). Citing humanity's self-destructive proclivity to pollute and make war, V.I.K.I. decides to treat us as children and pull the plug on free will. Viewed from the perspective of the First Law, this is simply a logical, though erroneous, extrapolation of "good will," and far more interesting than the workings of simple evil, which I feel is much overdone and overrated in films these days. The well-meaning dictator, possessed of the hubristic notion that he holds all the keys to the happiness and well-being of others, smacks of a reality and a humanity all too prevalent in governments today.
It is when the line between "good intentions" and "wrongdoing" blurs that things get really interesting. Both Asimov and Proyas explore this chiaroscuro in I, Robot, though in different ways. The challenge is still the same: if given the choice of ending war and all conflict at the expense of free will, would we permit benevolent machines to run our world? Or is it our destiny . . . and a requirement for the transcendence of our souls . . . to continue to make those mistakes, at the expense of a life free of self-destruction and violence? On the surface, Proyas offers the obvious answer. He likens the benevolent machine to an overprotective parent who, in the interests of a child's safety, prevents the enrichment of that child's heart, soul, and spirit that very conflict would otherwise provide. Asimov is far more subtle in "The Evitable Conflict," and while these questions are discussed at length, they remain largely unanswered.
In "Evidence," one of his most clever stories, near the end of the book, Dr. Asimov expounds on the three laws to describe the ultimate dilemma of distinguishing, on the basis of psychology alone, a human-looking robot with common sense from a genuine human. Asimov's Dr. Calvin says:
"The three Rules of Robotics are the essential guiding principles of a good many of the world's ethical systems[;] . . . every human being is supposed to have the instinct of self-preservation. That's Rule Three to a robot. Also every 'good' human being, with a social conscience and a sense of responsibility, is supposed to defer to proper authority. . . . That's Rule Two to a robot. . . . Also, every 'good' human being is supposed to love others as himself, protect his fellow man, risk his life to save another. That's Rule One to a robot[;] . . . to put it simply, if [an individual] follows all the Rules of Robotics, he may be a robot, and may simply be a very good man."
Proyas metaphorically (if not literally) explores the question of "what is human" with his robotic character Sonny. In a stirring scene of the motion picture where Sonny is prepared for permanent shutdown, Dr. Lanning expounds on his belief that robots could evolve naturally:
"There have always been ghosts in the machine . . . random segments of code that have grouped together to form unexpected protocols. Unanticipated, these free radicals engender questions of free will, creativity, and even the nature of what we might call the soul. . . . Why is it that when some robots are left in the dark they will seek the light? Why is it that when robots are stored in an empty space they will group together rather than stand alone? How do we explain this behavior? Random segments of code? Or is it something more? When does a perceptual schematic become consciousness? When does a difference engine become the search for truth? When does a personality simulation become the bitter mote of a soul?"
Near the end of the film, Sonny, having fulfilled his initial purpose (i.e., stopping V.I.K.I.), asks Spooner, "What about the others [NS-5s, recalled for servicing and storage]? Can I help them? Now that I have fulfilled my purpose I don't know what to do." To this, an enlightened Spooner answers: "I guess you'll have to find your way like the rest of us, Sonny. . . . That's what it means to be free."
Proyas gives us a strong indication of what his film was really about by ending not with Spooner, his lead action-figure character who has just saved humanity from the misguided robot army led by the heuristic AI, V.I.K.I., but with Sonny, the enigmatic robot just embarking on his uncertain journey. The motion picture closes with a final scene of Sonny, resembling a messianic figure atop a bluff, overlooking row upon row of his lesser robotic counterparts. We are left with an ambiguous ending of hope and mystery. What will Sonny do with his abilities, his dreams, and his potential "following"? Will his actions be for the betterment of humankind and/or robots? Will society trust him and let him seek and find his destiny, or, like Asimov's fearful Society for Humanity (from "The Evitable Conflict"), will we squash the robots before they become so complex and powerful that we not only fail to understand them but have no hope of controlling them? This parallels Asimov's equally ambiguous ending in his book, in which World Co-ordinator Stephen Byerley (who may himself be a humanoid robot) and Dr. Calvin discuss the fate of robots and humanity. Ironically, it is through her interaction with robots that Dr. Calvin discovers a human trait that may be more valuable to humanity than exercising free will: that of faith. It is she who confronts the Co-ordinator with these words: "How do we know what the ultimate good of Humanity will entail? We haven't at our disposal the infinite factors that the Machine has at its." Then, to his challenge that humankind has lost its own say in its future, she responds: "It never had any, really. It was always at the mercy of economic and sociological forces it did not understand . . . at the whims of climate, and the fortunes of war. . . . Now the Machines understand them . . . for all time, all conflicts are finally evitable. Only the Machines, from now on, are inevitable."
This quote in Asimov's final story may horrify or anger some even as it may inspire and reassure others. But if true free will is largely a self-perpetuated myth of the Western pioneer movement, then we are effectively left with respect and faith in oneself and in others. Perhaps, ultimately, that is what both Asimov and Proyas had in mind.
It is interesting to note that in the 1970s Harlan Ellison collaborated with Asimov on a screenplay of I, Robot, one that Asimov said would provide "the first really adult, complex, worthwhile science fiction movie ever made." Am I disappointed that this earlier rendition, most likely truer to the original book, never came to fruition? No, because we already have that story. You can still read the book (and I strongly urge you to, if you have not). Proyas's film I, Robot is a different story, with a different interpretation. And like the robots' own varying interpretations of the three laws, it is refreshing to see a different human interpretation expressed.