Roger Zelazny's Hugo- and Nebula-nominated novelette "This Moment of the Storm" (1966) opens with the narrator explaining how an old philosophy professor of his once saved himself the trouble of teaching a class by asking his students, "What is a man?" Although the narrator assures us that only his own answer holds true over the course of the story, he first enumerates the various responses of his classmates: "I learned that Man is a Reasoning Animal, Man is the One Who Laughs, Man is greater than beasts but less than angels, Man is the one who watches himself watching himself doing things he knows are absurd (this from a Comparative Lit gal), Man is the culture-transmitting animal, Man is the spirit which aspires, affirms, loves, the one who uses tools, buries his dead, devises religions, and the one who tries to define himself. [ . . . ] I'd said, 'Man is the sum total of everything he has done, wishes to do or not to do, and wishes he had done, or hadn't'" (pp. 239-40). The two works reviewed here take up the vexed definition of humanity, but in the context of both impaired or partial artificial intelligences and the contemporary social and economic landscapes of digital worlds: these books are largely meditations on "AI 2.0."
In The Lifecycle of Software Objects, Ted Chiang approaches the eternal SF question of "what makes a man" with particular attention to the implications of our being eternally unable to answer it. His (semi-?) sentient virtual Tamagotchis—"digients"—in fact meet all of Zelazny's overlapping criteria for Humanhood, with the exception of devising religions and burying the dead: the digients don't die, not precisely, and none of them seem much into metaphysics, in spite of their evolving powers of abstract reasoning. Yet, while the digients do satisfy so many of these definitions, their status remains confused throughout the novella—morally, legally, and ontologically, and to their trainers, their owners, and the readers. If nothing else, Chiang's narrative superbly demonstrates that, if, at long last, we were to develop such artificial intelligences, we wouldn't know quite what to do with the things. When the constant sense of perplexity running through the novella does not spill over into the narrative itself, as admittedly it sometimes does, it becomes one of the book's chief strengths.
We feel the first unsettling frisson of subverted expectations in the first chapter, when we learn that the most recent personnel acquisition for an artificial intelligence project is in fact none other than a zookeeper. Ana, one of two central characters, is an underemployed Ph.D. working at a zoo after great apes have become all but extinct in the wild; because of her background in primate communication, however, she is tapped for a special project that uses a "genomic engine" called Neuroblast to breed low-intelligence AIs, which will soon be marketed as interactive pets on Data Earth, a next-generation virtual world modeled on Second Life. The novella chronicles the rise and fall of the digients' popularity over a decade or so of the near-future economic cycle, with an intimate emphasis on the challenges and joys of the few humans who continue to raise them; a short sentence on the third page goes a long way towards summing up the concerns of the entire book: "Their avatars hug" (p. 9). Lifecycle, in other words, is a story of love in the time of cyberspace—not the love between lonely soulmates who meet on internet dating sites, but that between human trainers and their digients. Chiang's refusal to pin down exactly what the digients are remains one of the great pleasures and great frustrations of the novella: we are left guessing—and, I think, left to decide—whether the intelligence of these new virtual beings is best considered comparable to that of genius apes, very young children, mentally handicapped adults, or something else entirely. The main problem here is that the ethical implications of each option are very different, and ethical questions dominate the narrative; so, even as Chiang infuses the issues he raises with an appealing sense of complexity, this indeterminacy has an unattractive side as well, in that the novella can simply feel not fully worked out.
There are also smaller frustrations: uncharacteristically for Chiang, the writing style itself is quite stripped down, and unfortunately the characters and character drama can seem that way as well. Since Lifecycle is by far Chiang's longest work to date, it's especially disappointing that the novella isn't quite up to his own high standards—though, it must be conceded, they are almost impossibly high standards. What little plot there is revolves entirely around the initial premise, that novum of the digients' existence, as does all of the character development and drama. Indeed, it is surely no coincidence that, for both Ana and Derek—an animator for the project and the novella's other main character—one of their few strong personality traits should be a single-minded obsessiveness about the premise, the "raising" of this new form of life. Chiang's decision to structure his narrative around several gaps of months or years heightens this sense that his characters' lives are dominated by the novum, rather than simply altered by it. For example, Derek goes through an offstage divorce from a spouse we never meet; the narration of this event occupies Chiang for the space of a single short paragraph, and what reason is given for the split but—of course—the digients? She just doesn't understand, and I'll admit that sometimes I don't, either. In part, I have such difficulty comprehending Ana and Derek's motivations because even the most complex glimpses we get of their interiorities strike me as more than a little simplistic: "If he weren't already married, he might ask Ana out, but there's no point in speculating about that now. The most they can be is friends, and that's good enough" (p. 52); "She sees that Derek has a very different idea of high expectations than she has. More than that, she realizes that his is actually the better one" (pp. 73-74).
In general—not simply when relating thoughts such as these—there is quite a bit of "telling not showing," to use the old terms from creative writing class; I suspect that Chiang has intentionally emphasized "telling" as part of the matter-of-fact, wide-angle style he has adopted here, but I don't think it always works, and the perpetual present tense can make it stand out all the more: "[Ana] let Derek borrow the body last week so Marco and Polo could play in it and now he's returning it. She's going to spend time in the park, letting other owners' digients have a turn in the body" (p. 48). There is nothing really "wrong" with these sentences, but without a strong plot to support the at times lackluster language, the narrative can fall flat.
Since the strengths of Lifecycle do not lie in its characterization, plot, or verbal pyrotechnics, I suppose we should be inclined to evaluate it more on the basis of its treatment of its novum: this is an unabashedly idea-driven novella, even if the central ideas all focus on the relationship of a loving parental human figure and a non-human consciousness. Forgive me for quoting one more time the back-cover blurb that seems to travel with the book everywhere, but it explains quite neatly and compactly what Chiang is up to with his revisitation of the old birth-of-AI story:
What's the best way to create artificial intelligence? In 1950, Alan Turing wrote, "Many people think that a very abstract activity, like the playing of chess, would be best. It can also be maintained that it is best to provide the machine with the best sense organs that money can buy, and then teach it to understand and speak English. This process could follow the normal teaching of a child. Things would be pointed out and named, etc. Again I do not know what the right answer is, but I think both approaches should be tried."
Much of the strength of Lifecycle lies in Chiang's often ingenious debates with himself about the implications of this second approach, especially for the person doing the teaching. Chiang has a way of anticipating every objection one might make to his (characters') arguments, even if the reader must ultimately disagree, and the second half of the novella shows a consistently supple navigation of some very complex issues, especially after the "sexual turn": the question arises of whether the digients should be allowed to choose for themselves if they would like to participate in an economic venture that would make some copies of them available for purchase for sexual purposes. Its imperfections notwithstanding, Lifecycle raises important and provocative questions about personhood and "domesticated" AIs that I haven't seen raised anywhere except, perhaps, in the occasional Asimov robot story. But even Asimov couldn't have predicted the Internet, not in all its current complexity, and Chiang has the advantage not only of writing in the contemporary moment, but of being extremely well-versed in the various contemporary matrices in which AI would be most likely to flourish.
Yet, for all of Chiang's obvious expertise in both digital technologies themselves and the increasingly byzantine economic climates in which they thrive (and go bust), I would argue that at the heart of Lifecycle lies a fantasy, not an extrapolative science fiction story. In fact, the novella began to work much better for me when I decided to think of it as a fantasy novel rather than a typical "hard" SF treatment of the problems of artificial intelligence. For Chiang gives us not only a fantasy of the AI that still eludes us—surely an even more relentless and urgent desire than the often-cited collective yearning for that jetpack—but also a simpler fantasy of impossible anthropomorphism: what if our computer programs, obsolete and likely to be abandoned after years or months, nevertheless had feelings, even feelings like our dogs and cats have feelings? I do not mean to diminish the novella by arbitrarily reassigning its genre in this fashion, and Chiang's approach gives him certain advantages over the strict hard SF formula: as a fantasy, Lifecycle can bracket the question of why and how such a thing would come into being, and focus on the consequences that would follow if it did. In this way, Chiang can examine the preoccupations of "harder" AI narratives through an interesting filter, even if, when all is said and done, I fundamentally disagree with the assertion that "[e]very quality that [makes] a person more valuable than a database [is] a product of experience" (p. 131). It seems clear to me that we have already designed many tools more useful than data storage and retrieval systems, and assuming that a silicon-based life form would need face-to-face instruction to ever exceed us meat-bags in any metric of intelligence strikes me as curiously hidebound.
To be fair, Chiang may have a different definition of value in mind when he speaks so dismissively of databases, not "usefulness" at all, but perhaps the more nebulous value we attach to a human life. Also, these thoughts on experience are all focalized through Ana's consciousness, so I don't know that Chiang himself sanctions such views completely, that "if you want to create the common sense that comes from twenty years of being in the world, you need to devote twenty years to the task," or that ultimately "experience is algorithmically incompressible" (p. 131). Some interviews, however, would suggest that he more or less agrees with this position, and such would seem to be the contention of the novella. Or, at least, the algorithmic incompressibility of experience would seem to hold true within the world of the novella, which supports my sense that the book is perhaps best understood first as a fantasy; in any case, it will continue to work as a fantasy even if one disagrees with Chiang about the essential nature of consciousness. I suppose I find most dubious not only the posited direct correlation between quantity of experience gathered and quantity of "common sense" constructible, but also this insistence on a constant rate of learning. For example, I see no reason why, say, language acquisition could not be made at least somewhat easier for even an artificial intelligence of the type Chiang describes: curiously, the digients make precisely the same grammatical and lexical mistakes that human children do. Can they not be equipped with tools an organic intelligence could not be given, such as some kind of internal lexicon to check themselves against, or a more stable memory system that would allow them to remember that "tringle"—as cute and endearing as the mistake is—is not in fact the correct way to say "triangle"?
In the world of the novella, the answer is no, but I think largely because such cute and endearing details cement the fantasy of newborn AIs as transposable with young children. In fact, the novella ends with Ana having a self-consciously fantastic vision of a future in which digients and humans live as equals in peace and harmony, a future which she acknowledges may not even be possible. Thus, Chiang concludes not with any sort of Campbellian optimism that mankind, equipped with the tools of science and reason, will find a way, but a strain of optimism more commonly found in fantasy, rooted instead in the sense that it would be nice if mankind did find a way, along with a recognition that for now we have what we have: our fantasies.
If The Lifecycle of Software Objects is admirable in its way, but something of a disappointment coming from Chiang, Zendegi is a more solid entry in the Egan canon, perhaps less innovative and mind-blowing than some of his more prominent earlier works, but also a little more "human." To speak of the two books comparatively for a moment, Chiang's is the more ambitious in its unresolved ethical dialectics and examinations of alternative models of intelligence, but Egan's makes a great deal more sense and proves more successful in sustaining emotional and psychological drama. By way of (brief) illustration, Martin, the protagonist of Zendegi, also undergoes an off-stage divorce, but this event serves as prelude to his character arc, rather than intruding awkwardly into it and leaving little lasting impact. The setting and plot of the novel's first section are also wonderfully rich in detail, some of it presumably autobiographical, since the Iranian setting appears to have been researched as thoroughly as any other Egan "high concept." In fact, most of the "infodumping" in the novel takes the form of Iranian history lessons, rather than the relentlessly technical explanations for which Egan is notorious. This is not to say that Zendegi lacks a serious science-fictional interest; much like Lifecycle, the novel explores the implications of incomplete AIs, in this case virtual characters generated based on maps of the human brain. Unlike in Chiang's novella, however, Egan's new life forms become more like ghosts or revenants than chimps or puppies, and the ethical issues Zendegi raises ultimately take on a more unsettling character. 
At the same time, unlike so many other Frankenstein stories, Egan's own nuanced meditation on the ethical implications of artificial life is not by any means a shrill indictment of science, nor is it overly moralizing; there are only a few instances of real preachiness in the novel, and fortunately they seem restricted to single sentences, for example, advocating vegetarianism (p. 83), or criticizing Australian immigration detention camps (p. 103), or the bureaucratic "denial" of atheism's existence (p. 116).
The way in which Egan introduces his central concern with the quasi-Frankenstein-like preservation of parts of the human personality may come across as a little heavy-handed, but the scene provides an excellent illustration of how Egan is reorienting his approach to the standard science fiction narrative: a man sitting next to him on the plane to Iran remarks of Martin's disastrous attempt to digitize his vinyl collection, "With time and patience, everything could be preserved, but no one really has the patience" (p. 13). The conversation crescendos to a more direct allusion to what Egan will attempt in Zendegi: "'We're at the doorway to a new kind of world,' Haroun said. 'And we have the chance to make it extraordinary. But if we spend all our time gazing at the wonders ahead without remembering where we're standing right now, we're going to trip and fall flat on our faces, over and over'" (p. 13). Admittedly, these lines, which conclude the first chapter, read a little too much like a thesis statement, but I would emphasize that Egan hasn't chosen a bad thesis statement after all; indeed, the author's move away from posthumanist and/or postsingularity preoccupations arguably resembles, say, William Gibson's gradual retreat from cyberfutures in favor of cyberpresents. If one is willing to follow Egan back to the present before moving forward again, Zendegi offers a well-grounded analysis of some important contemporary trends—though some fans of Egan more interested in the mind-uploading and the heavy-duty scientific extrapolation may feel the first section runs too long, and yawn a bit over the emphasis on speculative Iranian politics. Only about a third of the way into the novel does Egan skip ahead a decade and a half and devote himself, in a more familiar way, to speculation about the human implications of new technologies.
I'll be the first to admit that Egan's way of arriving at the science fiction story in Zendegi is somewhat elliptical. Martin is an Australian journalist used to "hardship assignments" who eventually decides to move to Iran after his divorce leaves him with nothing left in Sydney. While Martin documents—and comes rather close to participating in—turbulent political upheavals in a near-future Iran, the narrative switches to the perspective of Nasim, an estranged second cousin of the Iranian woman Martin will later marry. At the beginning of the novel, Nasim lives in America and works as a researcher for the Human Connectome Project, an effort to map the entire human brain using a composite of neural scans from many, many volunteers. Frustrated, however, by the constant delays impeding the project's progress and encouraged by the new political situation in Iran, Nasim eventually decides to leave the project and return home, where she begins to work on, well, something completely different, a popular virtual reality social and gaming platform called Zendegi. Only later does she realize that she can apply her original research on the HCP to develop far more human-like "Proxies" for Zendegi, something like the equivalent of NPCs, or rather avatars that aren't avatars of anyone at all, but personalities used for advertising or to enhance gaming experiences. One of Zendegi's first great successes, for example, is a brain scan of the captain of Iran's national football team: the scan allows users to interact with a Proxy that possesses the motor skills of the star athlete, because the exact neural pathways were pulled from his brain. It is important to note here that his personality is not copied onto the Proxy, not only because of limiting legal agreements, but because the HCP is nowhere near the point of what we think of as mind-uploading or complete mind-copying.
While Nasim enjoys increasing commercial success with Zendegi, Martin's story begins to focus on his family life and his own explorations in the virtual world with his son; after Egan first introduces Nasim in the fourth chapter, he tells her story alongside Martin's in approximately alternating chapters up until their paths finally cross late in the novel, when Martin must turn to Nasim—and her work with the HCP—for help.
As in Lifecycle, it's not immediately clear to anyone what the Proxy brains are actually good for beyond their specialized commercial applications in Zendegi: Nasim thinks ruefully to herself that "[a] digital mosaic of corpse brains that read Dickens would look about as promising to most people as a car engine based on Galvani's twitching frog legs" (p. 164). Indeed, Nasim's work certainly never approaches the digital immortality desired by Caplan, a millionaire who attempts to enlist Nasim and her work in his own personal quest for transcendence. In fact, the novel appears to take a fairly dim view of any such attempts at immortality, or of the possibility of our achieving a technological singularity through artificial intelligence. Like the world of Lifecycle, the near-future world of Zendegi no longer appears to believe in HAL 9000 or other traditional "thinking machines," and Egan consistently pokes fun at the blue-sky artificial intelligence project competing with the HCP for funds, even in the name he's given it—the "Benign Superintelligence Bootstrap Project," which Nasim usually refers to as "Bullshit Squared." Yet the process of brain scanning or "side-loading" that Nasim continues to develop does not rule out all kinds of immortality, if one is willing to settle for a less-than-perfect upload; by the time he comes to Nasim, Martin has figured all of this out. We learn that Martin desperately wants to become Nasim's newest test subject in order to preserve something of himself as a Zendegi Proxy in the event of his death, ostensibly to ensure that his son is raised according to his wishes. For Martin, living abroad in Iran where different value systems prevail even among his closest friends renders this desire all the more urgent, but Egan taps a universal impulse in describing Martin's plight.
One wonders, indeed, if Martin's wish is exclusively connected to his son's future, or whether his own fear of death may also be driving him to preserve at least a digital ghost of himself. In its final acts, Zendegi becomes the story of the limitations of the "survival by proxy" that Nasim's techniques can offer, and its insights on the implications of fragmented consciousness are numerous and sometimes quite moving; perhaps not so surprisingly, although it is a novel primarily concerned with the preservation of some essential humanity, it becomes a powerful meditation on loss.
A final word on how Egan succeeds where I think Chiang could have done somewhat better: Lifecycle, I noticed, unfortunately sidesteps certain issues like the problem of "passing" that would inevitably arise with artificial intelligences—especially if they began to use the Internet to interact with organic intelligences—an issue that Egan rightly brings up as early as page 3. Although Chiang earlier acknowledges the dismal truth that some people would want to purchase the artificial pets only to torture them, he seems far too optimistic in his explanation of how the digients earn easy acceptance in fan clubs and other online communities: "The adolescents who dominate these communities seem unconcerned with the fact that the digients aren't human, treating them as just another kind of online friend they are unlikely to meet in person" (p. 84). Of course, artificial passing in Zendegi is perhaps little more than tenuously connected to the question of Martin's racial passing as a Westerner abroad in Iran—"he'd always look foreign close up in good light" (pp. 72-3)—but the issue of the unavoidable uncanniness of artificial intelligence becomes a major point of interest in Zendegi, beginning with Nasim's initial sense that she is dabbling in Frankenstein brains, "trying to shake off the eerie feeling that she'd stitched together something gruesome from the corpses of the birds and could now feel the awakened result fluttering its wings in her hands" (p. 64). In fact, Egan's interest in the questions of passing and failing to pass for human points us to an entirely different set of ethical questions absent from Lifecycle, since Nasim's composite beings are actually based on genuine human brains. Not only, then, is the nature of the consciousness they experience in question, but these beings were, in some way and at some point, actually human, rather than simply "like" humans.
When Nasim first receives the funding to try to help Martin, her source—the would-be immortal Caplan—asks her, "If it doesn't work, are you prepared to clean up the mess? To put your botched creation out of its misery?" (p. 213). When Martin's personal Proxy reacts unpredictably, even disturbingly, in some of the test runs, Nasim and Martin are forced to confront the fact that their hideous progeny is not fully human, nor can it be. Worse, however, is the fact that it is not fully not-human either, and here we return to a central question that Chiang does take up: the digients are not simply data, but a collection of experiences—and it looks like we are back to Zelazny's narrator's beloved definition! In a sense, both Zendegi and The Lifecycle of Software Objects speak to the futility of any attempt to outflank the question of "What is a human being?" by negative or exclusionary methods, that is, defining a human being by pointing to everything that is not human. Even if our future does not hold the Frankensteins we have always desired and feared, both books warn of an increasing number of in-betweens on our horizon, and insist, of course, that the question of how we should treat them will always have been, in large part, the question of how we should treat ourselves.
T.S. Miller is currently a graduate student at the University of Notre Dame, where he studies Middle English literature. Of course, genre fiction has been the secret vice of many a medievalist before him. His nonfiction has also appeared on The Internet Review of Science Fiction, and another article is forthcoming in Science Fiction Studies.