Another View of the Singularity Summit: Is the "Singularity" for Everyone?


Dawn M. Willow, J.D.
Legal Fellow
Institute on Biotechnology & the Human Future and the Center on Nanotechnology & Society at Chicago-Kent College of Law/Illinois Institute of Technology



When world chess champion Garry Kasparov was defeated by a computer in 1997, and when computers became capable of mimicking the styles of Bach, Mozart, and Chopin so closely that music aficionados could hardly distinguish the human-composed from the machine-composed scores, the distinction between man and machine became obscured: computers seemed to demonstrate creativity, a quality long thought to be uniquely human. But while computers may become capable of more and more human-like tasks, or may exceed certain human cognitive abilities, can artificial intelligence ever capture human ingenuity? Are we approaching technological changes that will merge biological and non-biological intelligence, fuse the man-machine relationship, and blur the lines between reality and virtual reality? These questions and others were the topics of discussion at the Singularity Summit, held on May 13, 2006, at Stanford University. The event drew more than 1,800 technophiles, scientists, academics, entrepreneurs, and curious observers from across the country.

Ray Kurzweil, the pre-eminent inventor and author of The Singularity Is Near: When Humans Transcend Biology, keynoted the event. He spoke non-dogmatically about accelerating change, exponential growth, and the paradigm shifts of science that are, in theory, leading to the "Singularity." One panelist referred to the "Singularity" as the "intelligence explosion." Put another way, the concept of the "Singularity," as described by the Singularity Institute for Artificial Intelligence at the Stanford summit, is defined as the "capab[ility] of technologically creating smarter-than-human intelligence, perhaps through enhancement of the human brain, direct links between computers and the brain, or Artificial Intelligence. This event is called the 'Singularity' by analogy with the singularity at the center of a black hole: just as our current model of physics breaks down when it attempts to describe the center of a black hole, our model of the future breaks down once the future contains smarter-than-human minds."
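Kurzweil's argument turns on compounding: a quantity that doubles on a fixed schedule grows modestly at first and then explosively. The short Python sketch below illustrates only that arithmetic; the starting value and one-year doubling period are hypothetical placeholders, not figures presented at the summit.

    # Illustration of "accelerating change": steady doubling compounds
    # into enormous growth. The starting value and doubling period are
    # hypothetical placeholders, not figures from the summit.

    def projected_value(start, years, doubling_period):
        """Value after `years` of doubling every `doubling_period` years."""
        return start * 2 ** (years / doubling_period)

    for year in (1, 5, 10, 20, 30):
        print(f"year {year:>2}: {projected_value(1.0, year, 1.0):,.0f}x")

    # Thirty one-year doublings yield roughly a billionfold increase;
    # this is the intuition behind talk of an "intelligence explosion."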

The field of artificial intelligence (A.I.) draws on neuroanatomy, evolutionary psychology, mathematics, nanotechnology, and computing, among many other disciplines. A.I. technology is used to create culturally liberating inventions such as real-time language translators and self-driving cars. But will other applications create a technological trap for their creators? To answer both technical questions concerning the advancement of A.I. and philosophical questions concerning the future of humankind in an A.I.-dominated world, speakers addressed: the ways the brain differs from conventional computers; what distinguishes humans from other primates; and how the laws of technological acceleration may transform humanity.

For example, Kurzweil pointed out that much of human intelligence is based on pattern recognition, the quintessential example of self-organization (i.e., the process by which the internal organization of a system increases in complexity without being guided or managed by an outside source), and that replicating this skill via A.I. presents a great challenge for scientists. However, through reverse engineering of the human brain and the leveraging of pattern recognition, Kurzweil ambitiously predicted that A.I. will surpass the human mind in just a few decades. He explained that reverse engineering the auditory cortex could yield its principles of operation, which can be expressed mathematically and simulated in computer programs, and that the brain's essential design could thereby be compressed into about 20 megabytes of data. He further predicted that, by 2020, the power of the human brain could be contained in a $1,000 personal computer.
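The flavor of machine pattern recognition can be conveyed, in drastically reduced form, by the classic perceptron: a single artificial "neuron" that learns a pattern from examples rather than from explicit programming. The Python sketch below is a toy illustration under invented assumptions (the data, learning rate, and epoch count are arbitrary), not Kurzweil's brain-modeling method, and unlike true self-organization it relies on labeled examples.

    # Toy perceptron: a single threshold "neuron" learns logical OR
    # from labeled examples. Purely illustrative; the data, learning
    # rate, and epoch count are arbitrary choices.

    def train_perceptron(samples, labels, lr=0.1, epochs=20):
        """Return weights and bias for a single threshold neuron."""
        w = [0.0] * len(samples[0])
        b = 0.0
        for _ in range(epochs):
            for x, target in zip(samples, labels):
                # Fire (output 1) if the weighted sum crosses the threshold.
                pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
                err = target - pred
                # Nudge the weights toward misclassified examples; the
                # "pattern" emerges from data, not hand-written rules.
                w = [wi + lr * err * xi for wi, xi in zip(w, x)]
                b += lr * err
        return w, b

    X = [(0, 0), (0, 1), (1, 0), (1, 1)]
    y = [0, 1, 1, 1]  # logical OR of the two inputs
    w, b = train_perceptron(X, y)
    for x in X:
        print(x, "->", 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0)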

These Singularity predictions were welcomed by many of the transhumanists and post-human futurists in attendance, who seem to be working toward the ultimate goal of a utopia characterized by immortality, which they postulate may be realized by "uploading" the contents of one's brain onto a computational substrate in order to exist in non-corporeal form somewhere in cyberspace, where one could "live" forever. Nevertheless, Douglas Hofstadter, professor of cognitive science and computer science, and adjunct professor of history and philosophy of science, philosophy, comparative literature, and psychology at Indiana University, offered a critique of Singularity predictions that blur the line between science and science fiction. Hofstadter expressed concern about the idea of humans becoming software entities inside of computing hardware. For instance, if sentient life can exist in silicon substrates, could the Singularity bring about a "planned obsolescence" of humans as we exist today?

The idea of existing, or of achieving happiness, in a transhuman state facilitated by A.I. raises questions about what it means to be human, about the purpose and essence of our existence, and about how, how long, and where we should exist. During the question-and-answer session, one audience member asked the panel whether any religious studies had been undertaken to address the implications of A.I. After a moment of silence (not in reverence), one panelist wryly stated something along the lines of: "I hope not."

Perhaps a transhumanist would find no need for religion in a world where we can upload ourselves into cyber-heaven. While some scientists may regard religion as a brake on technological progress, and while science and religion may arrive at different conclusions about our human nature, Singularity enthusiasts and people of faith share a strong and hopeful vision for the future of humanity, although the means by which that vision should become reality differ.

Dawn M. Willow, J.D., is a legal fellow at the Institute on Biotechnology and the Human Future and the Center on Nanotechnology and Society at Chicago-Kent College of Law/Illinois Institute of Technology. During the Spring 2006 Legislative Session, she served as Legislative Counsel in the Office of the Speaker of the Illinois House of Representatives.