September 7, 2007 4:00 AM PDT

Coming to grips with intelligent machines

Some technologists believe that rapid advancement in computer hardware and software could at some point lead to a hazy future for humans.

The so-called point of technological singularity has a number of definitions. In one, it's the point at which advances in artificial intelligence bring about self-improving machines that are smarter than humans. The idea has been played out to a dark extreme in popular science fiction books and movies like The Matrix, in which AI machines rule the world and turn humans into the fleshy equivalent of batteries.

A correction was made to this story. Read below for details.

Scary scenarios aside, a group of accomplished technologists and investors will gather this weekend at the two-day Singularity Summit to discuss the benefits and risks of advancing artificial intelligence, technical issues surrounding accelerating technology in many fields, and what to do in the event that machines one day outthink humans.

"There are different definitions of singularity. But the most useful way to think about it is that we're in a period of accelerating technology change that our species has never faced before," said Christine Peterson, vice president of Foresight Nanotech Institute, a public interest group focused on advanced technology. "So the question is how do we address the issue of change so rapid that it becomes difficult to project how it will affect us?"

Sure to spark lively conversation, the conference will be held at San Francisco's Palace of Fine Arts and will draw such speakers as Rodney Brooks, professor of robotics at the Massachusetts Institute of Technology and CTO of iRobot; Steve Jurvetson, managing director of venture firm Draper Fisher Jurvetson; and Peter Norvig, Google's director of research and the former head of the NASA Ames computational group. Norvig wrote the book on artificial intelligence that most collegiate computer science students read, Artificial Intelligence: A Modern Approach.


Peter Thiel, co-founder of PayPal and president of venture group Clarium Capital Management, will play host to the event, which is put on by The Singularity Institute. Thiel, who also bankrolled the social network Facebook, has invested $500,000 in The Singularity Institute, a 5-year-old research and event nonprofit formed to address issues around AI.

Thiel has said in a statement: "It has been predicted for a long time that AI is right around the corner, and it's taking longer than many people thought it would, with many disappointments along the way. However, it's clear that there's a massive set of issues happening, and people who don't think there's something important going on are living in a fantasy, and need to wake up."

AI is hardly the first technological advancement in recent years to cause concern among both well-respected scientists and alarmists. In 2000, Sun co-founder and programmer extraordinaire Bill Joy wrote an article for Wired magazine that served as a call to arms for people worried about the effects of nanotechnology. In 2002, Jurassic Park author Michael Crichton wrote an entertaining if implausible novel called Prey about killer nanodevices. Five years later, work on nanotechnology continues, but the fear--at least publicly--has subsided.

No plurality on singularity

To be sure, the idea behind the technological singularity is controversial, with many skeptics.

Introduced decades ago, the concept has been written about by futurists and science fiction authors. In the 1960s, British statistician I.J. Good wrote about an "intelligence explosion" that would produce a self-improving computing system. In the 1990s, computer scientist and science fiction author Vernor Vinge predicted a coming technological singularity brought about by developments like bioengineering and brain-computer interfaces such as retinal implants.

Futurist Ray Kurzweil, director of the Singularity Institute, has predicted a future in which human brains will be teeming with robots that can augment intelligence and transport people into virtual reality realms or enable people to back up their own childhood memories. (Kurzweil, who can't attend the event this year, will give a 30-minute talk via videoconference on Sunday.)

Other speakers at the event include Barney Pell, co-founder and CEO of natural language search engine Powerset; Wendell Wallach, lecturer at Yale Interdisciplinary Center for Bioethics; and Paul Saffo, a lecturer at Stanford University.

"The summit is about how we may be developing technology that could expand beyond intelligence as we know it. And that could be shaped in either a favorable way or it could swing the other way. It will depend on the choices we make," said Tyler Emerson, chair of the summit and executive director of the Singularity Institute for Artificial Intelligence.

Peterson, who spoke at last year's inaugural event and who will speak again this year, said that the 2007 summit has a more serious tone and a longer schedule.

She added: "It's a great conference for skeptics, they'd have a blast. Some of the brightest minds in the world will be there."

Correction: An earlier version of this article misstated the amount of Peter Thiel's investment in The Singularity Institute. He has invested $500,000.



Join the conversation!
Will David Gelernter or Tim Estes of Digital Reasoning Systems be speakers at the event? If not, seems that it will be another back patting session for the establishment instead of a true discussion of AI applications.
Posted by Buffy Proginosko (2 comments )
David Gelernter or Tim Estes of DSRI speaking?
I don't think this posted earlier.
Posted by Buffy Proginosko (2 comments )
Sorry Dave. I am unable to do that.
Computer please scandisk and defrag yourself.
Posted by inachu (963 comments )
Sorry Dave
Instead of "de-frag yourself" wouldn't that be "frag yourself"?
Posted by spothannah (145 comments )
Not as frightening as H.I.
Human Intelligence, which is subject to misinformation, ideology, really evil motives and, eventually, dementia. Artificial Intelligence, once codified and applied, will resemble living under a socialist, capitalist or religious regime. It will prove dangerous and insidious, yet folks will learn to pretend to comply and get on with eking out a life by unapproved thinking.
Posted by Ngallendou (27 comments )
Maybe we're already just living in some computer's mind. I mean, if it were a really good (powerful) computer, would we be able to tell whether we existed "in reality" or just in some form of cyberspace that gave us "feeling real" as one of the many data sets contained within the computer's program?
Posted by spothannah (145 comments )
The future will not be televised. It may be on YouTube.
Let me put it this way, Dr. Ramer, "No 9000 series computer has ever made a mistake or distorted information. We are Foolproof and incapable of error."
Posted by azareus (31 comments )
Ultimately, those in power will be the puppet masters
Question: Ultimately, who will fund to have these machines programmed/built? concerned citizens? no. Christians? no. Islamic terrorists? no. the poor? no. Who then? Answer: Corporations, Defense contractors, International investment firms and those vague Foundations that you see at the end of PBS programs.-- The parallel processing power of the human brain might eventually be attained but self awareness will never be, if that's how you define A.I. or singularity.
Posted by davewithipod (3 comments )
Not quite.
Self-awareness in machines will become technically possible, but it won't be useful to anyone. Why design a creature free to think for itself when you can design an unconscious zombie with superhuman intelligence? Basically you're correct though - power blocs in government and industry will use soft AI to predict and thereby control both individuals and entire societies. It's like Frank Herbert said in his Dune series: People believed they would be freed by machines, but just ended up being enslaved by other people who had machines.

But I think some people will have minds capable of overcoming that control, and such people probably already exist--they'll be ignored as "outliers" in the computations until their cascade effects on others become a problem. In fact, some people will learn how to act such that the computations are distorted to obscure themselves or others, and to manipulate the actions of the power blocs that control the machines. The controller becomes the controlled.
Posted by ToasterToad (22 comments )
Fear Mongering for bylines
Anyone familiar with the current state of the art in AI knows it is years (ten or more, IMHO) before we have to start worrying about machines with any real intelligence. And by that I mean machines able to make cognitive decisions better than a year-old baby. Stop the fear mongering just to get a byline, and report on reality.
Posted by chrisw63 (26 comments )