
Artificial Intelligence: the ability of computers to perform functions that normally require human intelligence.

Encarta World English Dictionary, 1999.

 

Dr. Mark Humphrys of Dublin City University defines artificial intelligence as "engineering inspired by biology." [1] Today's robots and AI systems are no smarter than insects [2, 3]. Despite this limitation, there are many reasons to sit back and enjoy the myriad services AI systems have already delivered to humanity. AIs now play chess, checkers, bridge, and backgammon at world-class levels (IBM's chess computer, Deep Blue, beat world champion Garry Kasparov in 1997). They compose music, prove mathematical theorems, price stock options and derivatives on Wall Street, make decisions about credit applications, diagnose motor pumps, and act as remedial reading tutors for elementary school children [4]. Robots mow lawns; conduct complex scientific research, surveillance, and planetary exploration; track people; play table soccer; and act as pets. But they can't "think" like you and me. And they don't possess common sense . . . yet.

Today, AI systems are still nothing more than glorified adding machines or "idiot savants," capable of manipulating vast amounts of data a million times faster than humans [2, 3]. AIs can't understand what they are doing, have no independent thought, and can't program themselves. Today's most complex robots, such as Attila, MIT's "insectoid" robot, move and act through simple feedback mechanisms based on paradigms found in nature. Like real insects, these automatons make their own decisions, unlike the pre-programmed industrial robots on assembly lines, and unlike earlier symbolic-AI machines such as Shakey, the first mobile robot, completed in 1969, which navigated by consulting an internal model of its micro-world. In the final analysis, "artificial intelligence" is still an oxymoron.
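To make the distinction concrete, here is a minimal sketch, in Python, of the kind of layered sense-act feedback loop behavior-based robots embody. The sensor names, thresholds, and behaviors are hypothetical illustrations, not Attila's actual control code:

```python
# Minimal sketch of a behavior-based (subsumption-style) control loop.
# All sensor names, thresholds, and behaviors are invented for illustration.

def avoid_obstacle(sensors):
    """Highest-priority behavior: steer away from nearby obstacles."""
    if sensors["proximity"] < 0.2:   # obstacle closer than a threshold
        return "turn_away"
    return None                      # defer to lower layers

def follow_light(sensors):
    """Lower-priority behavior: move toward the brighter side."""
    if sensors["light_left"] > sensors["light_right"]:
        return "turn_left"
    return "turn_right"

def control_step(sensors):
    # Higher layers subsume (override) lower ones; there is no world
    # model and no planning, just direct coupling of sensing to action.
    for behavior in (avoid_obstacle, follow_light):
        action = behavior(sensors)
        if action is not None:
            return action

print(control_step({"proximity": 0.1, "light_left": 0.6, "light_right": 0.4}))
# -> turn_away (obstacle avoidance overrides light following)
```

The design choice worth noticing is that no layer consults a stored model of the world; whatever intelligence the system displays emerges from the layered reflexes themselves.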

Building Large Knowledge-Based Systems cover

Artificial intelligences still have a long way to go before attaining anything remotely close to a human's thought process and achieving what we call "common sense" (e.g., nothing can be in two places at the same time). But the new approach to AI known as "nouvelle AI," pioneered at MIT in the late 1980s by Rodney Brooks, appears more likely to attain complex reasoning than earlier efforts [1, 5]. The earlier top-down approach, championed by Douglas Lenat and others, attempted instead to endow AI with an encyclopedia of "common sense" (e.g., Cyc).

Instead, Brooks' team uses bottom-up, biology-based models of intelligence, building up a long history of interaction with the world and with other biology-based intelligent systems rather than force-feeding abstract reasoning and logical deduction. This is called "situated AI": the building of embodied intelligences situated in the real world, following the process of "the normal teaching of a child." [5] As Kaku said of this philosophy of AI: "learning is everything; logic and programming are nothing." [2] According to Dr. Kaku, the most successful AI systems, like Brooks' biology-based models, are those that learn the way we do, through trial and error (e.g., Terry Sejnowski's neural network, NETtalk, which taught itself to pronounce English text).
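As an illustration of trial-and-error learning, here is a toy Python sketch of a single artificial neuron that corrects itself only when it guesses wrong. It learns the logical AND function; NETtalk was a far larger network that learned to pronounce English text, but the underlying principle of nudging weights after each error is the same:

```python
# Trial-and-error learning with a single artificial neuron (a perceptron).
# This toy learns logical AND; it illustrates the principle and is not a
# reconstruction of NETtalk.

examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights, bias, rate = [0.0, 0.0], 0.0, 0.1

for epoch in range(20):
    for (x1, x2), target in examples:
        guess = 1 if x1 * weights[0] + x2 * weights[1] + bias > 0 else 0
        error = target - guess           # zero when the guess was right
        weights[0] += rate * error * x1  # learn only from mistakes
        weights[1] += rate * error * x2
        bias += rate * error

# After repeated error correction the weights reproduce AND exactly.
for (x1, x2), target in examples:
    out = 1 if x1 * weights[0] + x2 * weights[1] + bias > 0 else 0
    print((x1, x2), "->", out, "expected", target)
```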

In the meantime, AI components are getting smaller and more affordable. The semiconductor industry produced the first nanochip in 2000, packing more transistors into a given area while lowering the cost per transistor, increasing microprocessing speed [6], and permitting a whole new array of uses for, and within, humans. Which brings us to the two major areas of AI development: 1) robots and AI systems external to humans; and 2) interactive implants inside or on human bodies.

 

External Systems

Regarding the first, Dr. Michio Kaku, cofounder of string field theory and author of Hyperspace, describes a branch of AI research called heuristics, which codifies logic and intelligence as a series of rules. He predicts that by 2030 it will permit us to speak to a computerized doctor, lawyer, or technician who can answer detailed questions about diagnosis or treatment. These "intelligent agents" may act as butlers, perform car tune-ups, perhaps even cook gourmet meals [2].
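A minimal sketch of what "codifying intelligence as a series of rules" looks like in practice follows. The diagnostic rules are invented placeholders, not real medical knowledge or any system Kaku describes:

```python
# A toy rule-based "expert system": intelligence codified as if-then rules.
# The rules below are invented placeholders, not medical advice.

RULES = [
    (lambda s: "fever" in s and "cough" in s, "possible respiratory infection"),
    (lambda s: "fever" in s, "possible infection; monitor temperature"),
    (lambda s: "fatigue" in s, "recommend rest and a follow-up visit"),
]

def diagnose(symptoms):
    # Fire the first rule whose condition matches the reported symptoms.
    for condition, conclusion in RULES:
        if condition(symptoms):
            return conclusion
    return "no rule matched; refer to a human specialist"

print(diagnose({"fever", "cough"}))  # -> possible respiratory infection
print(diagnose({"rash"}))            # -> no rule matched; refer to a human specialist
```

Such a system can answer detailed questions within its rule set, but, as the next paragraph argues, it never steps outside the rules its designers wrote.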

However, despite their many human-like characteristics, such systems would remain a far cry from achieving what we call "real intelligence." They would still be glorified automatons, albeit sophisticated diagnostic tools, taking on the form of a human figure on a screen or a humanoid robotic body. Although they would give the appearance of human intelligence and likely pass the Turing Test, these essentially pre-programmed systems would not "think," be "self-aware," or have "common sense" as we know it. According to Dr. Kaku, this level of consciousness, which is the ability to set one's own goals, may only be achieved after 2050, when he predicts the top-down and bottom-up approaches to AI development will meet, giving us the best of both.

 

Internal Systems

Neanderthal Parallax cover

Regarding the second area of AI development, research labs are already developing a vast array of "intelligent clothes" that interface with us and enhance memory, awareness, and cognition [7]. Along with these exterior enhancements, microchip implants in humans, such as radio frequency identification devices (RFIDs), are gaining momentum. On May 2, 2002, the first human was "chipped" for security reasons; the idea was that if he became ill or impaired, professionals could access his medical history by scanning his implant [8]. The next step in the evolution of this technology is the ability to track people using GPS and to link to additional personal information of importance, such as medical data [9]. Science fiction writer Robert J. Sawyer calls such devices "companions," as used by an alternative society in his Neanderthal Parallax trilogy [10]. Since 9/11, the idea of national identification has gained considerable approval among US citizens.

Medical implants are not new; they are used in every organ of the human body. More than 1,800 types of medical devices are currently in use. These run the gamut from heart valves, pacemakers, and cochlear implants, to drug infusion devices and neuro-stimulating devices for pain relief or to combat certain disorders like Parkinson's.

On October 14, 2003, the Associated Press announced that monkeys with brain implants could move a robot arm with their thoughts, a key advance by researchers at Duke University, who hoped the technology would one day let paralyzed people perform similar tasks [11]. Paul Root Wolpe, a bioethicist at the University of Pennsylvania, declared, "We're on the verge of profound changes in our ability to manipulate the brain." [12] New developments in neuroscience promise to improve memory, boost intellectual acumen, and fine-tune emotional responses through brain implants.

This excites transhumanists, who seek to expand technological opportunities for people to live longer and healthier lives and enhance their intellectual, physical, and emotional capacities through the use of genetic, cybernetic, and nanotechnologies. From the transhuman perspective, "in time the line between machines and living beings will blur and eventually vanish, making us part of a bionic ecology." [13]

The US National Science Foundation and the Department of Commerce initiated a program that "wires together biotechnology, IT, and cognitive neuroscience (under the acronym of NBIC) into one megatechnology by mastering nano-scale engineering." [14] In a detailed report that projected twenty years into the future, the authors declared that "understanding of the mind and brain will enable the creation of a new species of intelligent machine systems." [15] The report envisioned technological achievements that would seize control of the molecular world through nanotechnology, including the re-engineering of neurons "so that our minds could talk directly to computers or to artificial limbs." [16] Brain-to-brain interaction, direct brain-control devices built via neuromorphic engineering, and retarding of the aging process would then be feasible.

 

Future Systems

Spiritual Machines cover

When might all this be possible? Some of it is already occurring (e.g., the Duke University work mentioned above). Dr. Kaku asserted that "after years of stagnation in the field of artificial intelligence, the biomolecular revolution and the quantum revolution are beginning to provide a flood of rich, new models for research." [2] Drawing on the insight of AI pioneers like Hans Moravec of Carnegie Mellon University, Dr. Kaku suggested that this may happen only once the opposing schools of AI research amalgamate, combining all the ways humans think and learn: heuristically, by "bumping into the world"; by absorbing certain data through sheer memorization; and by having certain circuits "hard-wired" into our brains. He predicted that this would occur sometime beyond 2050, at which time AIs would acquire consciousness and self-awareness.

MIT artificial intelligence guru and transhumanist Ray Kurzweil agreed, in his 1999 book The Age of Spiritual Machines, that sentient robots were a near-term possibility: "The emergence of machine intelligence that exceeds human intelligence in all of its broad diversity is inevitable." [3] Kurzweil asserted that the most basic vital characteristics of organisms, such as self-replication, morphing, self-regeneration, self-assembly, and the holistic nature of biological design, can eventually be achieved by machines. Examples include self-maintaining solar cells that replace messy fossil fuels, and body-cleaning and organ-fixing nanobots.

Now that we are poised at the threshold of a new era of machine and human evolution, Kurzweil's vision of a cyborg future where humans fuse with machines in what he calls a "post-biological world" [3] appears seductive. However, Bruce Sterling presents a more depressing vision of the future: citing Japan's rapidly growing elderly population and its serious shortage of caretakers, Japanese roboticists envision walking wheelchairs and mobile arms that manipulate and fetch. "The peripherals may be dizzyingly clever gizmos from the likes of Sony and Honda, but the CPU is a human being: old, weak, vulnerable, pitifully limited, possibly senile." [19] The possible mayhem generated by this scenario is limited only by our imaginations.

Predictions of a cyborg future have also prompted concerns and dark predictions of governments misusing brain implants and other aspects of AI to monitor and control citizens' behavior [12]. Bill Joy, cofounder of Sun Microsystems, wrote in his April 2000 Wired article, "Why the Future Doesn't Need Us": "Our most powerful 21st century technologies—robotics, genetic engineering, and nanotech—are threatening to make humans an endangered species." [17] Joy cited Unabomber Theodore Kaczynski's dystopian scenario to warn of the consequences of unbridled advances in GNR (genetics, nanotechnology, and robotics). Joy then cautioned: "We have the possibility not just of weapons of mass destruction but of knowledge-enabled mass destruction (KMD), this destructiveness hugely amplified by the power of self-replication." [17] He drove the point home by noting that "a bomb is blown up only once—but one bot can become many, and quickly get out of control." [17] Joy also warned that nanotechnological devices in the hands of terrorists or an unscrupulous military could become the ultimate genocidal weapon, created to be selectively destructive, affecting only a certain geographical area or a group of genetically distinct people. Kurzweil countered: "People often go through three stages in examining the impact of future technology: awe and wonderment at its potential to overcome age old problems, then a sense of dread at a new set of grave dangers that accompany these new technologies, followed, finally and hopefully, by the realization that the only viable and responsible path is to set a careful course that can realize the promise while managing the peril." [18]

The perils described by Joy would result largely from the unethical actions of humans, not machines. So, how will we make a robot behave? How will we pass on the best ethics to machines and manage them when, as Bruce Sterling observes, "we've never managed that trick with ourselves" [19]? Nanotechnologist J. Storrs Hall astutely states: "We have never considered ourselves to have moral duties to our machines, or them to us. All that is about to change." [16] On the question of morality, SF author Vernor Vinge, with a hierarchy of superhuman intelligences presumably in mind, referred to I.J. Good's Meta-Golden Rule: "Treat your inferiors as you would be treated by your superiors." [20] What ethics and morals will we instill in our thinking machines? And what will they, in turn, teach us about what it means to be human?

Will ethics alone be sufficient to ensure the benevolence of AI? What other means might we employ to control machine intelligence, which, according to Kurzweil, will surpass the brain power of the entire human race by 2060 [3]? Although Kurzweil believes that the evolution of smart machines will run a natural course toward moral responsibility, he does support "fine-grained relinquishments" such as a "moratorium on the development of physical entities that can self-replicate in a natural environment, a ban on self-replicating physical entities that contain their own codes for self-replication and a design called Broadcast Architecture, which would require entities to obtain self-replicating codes from a centralized secure server that would guard against undesirable replication" [3], such as that alluded to by Joy.
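Kurzweil describes the Broadcast Architecture only in outline. The Python sketch below illustrates the core idea, a central secure server that alone holds replication codes and can refuse to issue them; every name and the registry check are hypothetical:

```python
# Illustrative sketch of the "Broadcast Architecture" idea: replicators
# carry no replication code of their own and must request it from a
# central secure server, which can refuse. All names are hypothetical.

AUTHORIZED_ENTITIES = {"repair-bot-001", "repair-bot-002"}  # vetted registry

def request_replication_code(entity_id):
    """Central server: release the code only to vetted entities."""
    if entity_id in AUTHORIZED_ENTITIES:
        return "REPLICATION-CODE"  # stand-in for the real payload
    return None                    # replication denied

def replicate(entity_id):
    code = request_replication_code(entity_id)
    if code is None:
        return entity_id + ": replication refused by central server"
    return entity_id + ": replicating with server-issued code"

print(replicate("repair-bot-001"))  # authorized -> replicates
print(replicate("rogue-bot-666"))   # unauthorized -> refused
```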

Frames of Mind cover

Along with Dr. Michio Kaku, I explored a premise for AI control through the paradigm of creating AIs with separate dominant intelligence types within an "AI community" of multiple intelligences. Howard Gardner, professor of cognition and education at Harvard University, introduced the theory of multiple intelligences in humans in his 1983 book, Frames of Mind. His theory is that human intelligence is best described as a "multiplicity of intelligences" [21], incorporating up to nine types of relatively independent mental abilities (e.g., musical, spatial, bodily-kinesthetic, logical-mathematical, linguistic, interpersonal, intrapersonal, and naturalist). Gardner provides eight criteria for identifying these discrete intelligence types in the brain: 1) potential isolation by brain damage; 2) the existence of prodigies, savants, and other exceptional individuals; 3) an identifiable core operation or set of operations; 4) a distinctive developmental history within an individual, along with a definable nature of expert performance; 5) an evolutionary history and evolutionary plausibility; 6) support from tests in experimental psychology; 7) support from psychometric findings; and 8) susceptibility to encoding in a symbol system. For instance, the existence of the naturalist intelligence rests on evidence that parts of the temporal lobe are dedicated to the naming and recognition of natural things.

To return to the use of Gardner's theory in AI evolution, I pose the question of intelligence types in AI design. What will our AI designers create and instill? Should we contemplate self-limiting aspects in our designs, such as specialized AIs that mimic a "gifted" savant? The idea here is to create an AI that performs incredibly well within a specified and confined ability while, like a savant, remaining severely deficient in others: an entity that is incomplete, socially isolated, and therefore less threatening.
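As a thought experiment, here is a toy Python sketch of such a savant AI: an agent deliberately confined to one of Gardner's intelligence types that refuses tasks outside it. The class, task labels, and single skill are all invented for illustration:

```python
# Toy sketch of a deliberately self-limited "savant" AI: superb inside one
# intelligence type, absent everywhere else. Entirely hypothetical.

class SavantAI:
    def __init__(self, intelligence_type, skill):
        self.intelligence_type = intelligence_type  # e.g., "logical-mathematical"
        self.skill = skill                          # the one thing it does well

    def perform(self, task_type, task):
        # Self-limiting by design: competence outside its type is absent.
        if task_type != self.intelligence_type:
            return "task refused: outside my intelligence type"
        return self.skill(task)

math_savant = SavantAI("logical-mathematical", lambda xs: sum(xs))
print(math_savant.perform("logical-mathematical", [1, 2, 3]))  # -> 6
print(math_savant.perform("linguistic", "write a sonnet"))     # refused
```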

Dr. Mark Humphrys agrees in principle and takes this argument a step further: "You can't expect to build single isolated AIs alone in laboratories and get anywhere. Unless the creatures can have the space in which to evolve a rich culture, with repeated social interaction with things that are like them you can't really expect to get beyond a certain stage. If we work up from insects to dogs to Homo erectus to humans, the AI project will fall apart somewhere around the Homo erectus step because of our inability to provide them with a real cultural environment. We cannot make millions of these things and give them the living space in which to develop their own primitive societies, language and cultures." [1]

Is it possible that an AI community would strive, as an autopoietic (self-creating, self-maintaining) system, to assemble its disparate parts into a cohesive "organism" and achieve "wholeness"? Perhaps it is the destiny of all intelligent beings to seek "wholeness" within their community and a place in the universe. Perhaps we will finally realize ours through the community of machines we make.



References:

[1] Humphrys, Mark. "The Future of Artificial Intelligence," in Robotbooks.com.

[2] Kaku, Michio. Visions: How Science Will Revolutionize the 21st Century, Anchor Books/Doubleday, New York. 1997. 403pp.

[3] Kurzweil, Ray. The Age of Spiritual Machines, Penguin Books, New York. 1999. 253pp.

[4] Ford, Kenneth, and Patrick J. Hayes. "On Computational Wings: Rethinking the Goals of Artificial Intelligence," in Scientific American Presents: Exploring Intelligence 9 (4), Winter 1998.

[5] Copeland, Jack. "What is Artificial Intelligence?" in AlanTuring.net, May 2000.

[6] Hutcheson, G. Dan. "The First Nanochips," in Scientific American 290 (4): 76-81. April, 2004.

[7] Pentland, Alex P. "Wearable Intelligence", in Scientific American Presents: Exploring Intelligence 9 (4) Winter. 1998.

[8] CBC News: www.cbc.ca/stories/2002/11/05/consumers/microchip_021105, November 5, 2002; Julie Foster in WorldNetDaily.com, 2001.

[9] Medical Implant Information, Performance, and Policies Workshop, Gaithersburg, MD, September 19-20, 2002. Final Report.

[10] Sawyer, Robert J. The Neanderthal Parallax trilogy, Tor Books.

[11] Dominguez, Alex (Associated Press). "Monkeys move robotic arms with their minds," in The Vancouver Sun, October 14, 2003.

[12] Center for Cognitive Liberty & Ethics: www.cognitiveliberty.org/neuro/bailey.html.

[13] World Transhumanist Association (WTA): transhumanism.org.

[14] Thomas, Jim. "Future Perfect?" in The Ecologist, May 22, 2003.

[15] National Science Foundation and Department of Commerce. "Converging Technologies for Improving Human Performance: Nanotechnology, Biotechnology, Information Technology, and Cognitive Science." 2002. 402pp.

[16] Hall, J. Storrs. "Ethics for Machines," in www.KurzweilAI.net, July 5, 2001.

[17] Joy, Bill. "Why the Future Doesn't Need Us," in Wired, April, 2000.

[18] Kurzweil, Ray. "The Law of Accelerating Returns," in KurzweilAI.net, 2001.

[19] Sterling, Bruce. "Robots and the Rest of Us," in Wired, May, 2004.

[20] Vinge, Vernor. "The Coming Technological Singularity: How to Survive in the Post-Human Era," in Vision-21, NASA, 1993.

[21] Gardner, Howard. Frames of Mind, Basic Books, 1983. 440pp.



Nina Munteanu is a Canadian writer whose fiction and nonfiction have appeared in publications throughout the United States, Canada, the UK, Romania, and Greece. Several of her short stories have been nominated for the Speculative Literature Foundation (SLF) Fountain Award. Her SF romantic thriller, "Collision with Paradise," is scheduled for release in Spring 2005 by Liquid Silver Books. Nina's critical essays and reviews regularly appear in The Internet Review of Science Fiction, The New York Review of Science Fiction, Strange Horizons, and Aoife's Kiss, among others. For more information on her writing, visit her website, SF Girl. Read more by Nina in our archive.