AI is a Threat Despite Calming Voices

Bill Hibbard  20 August 2010


     Movies like The Terminator and The Matrix present the public with frightening visions of artificial intelligence. In response, AI experts have been working to calm the public's fears. In its August 2009 Interim Report, the AAAI (Association for the Advancement of Artificial Intelligence) Presidential Panel on Long-Term AI Futures wrote, "The group suggested outreach and communication to people and organizations about the low likelihood of the radical outcomes." Ray Kurzweil is the leading public voice about AI. In his 2005 book, The Singularity Is Near, trying to calm fears of AI, he wrote that it will be "intimately embedded in our bodies and brains" and hence "it will reflect our values because it will be us." Jaron Lanier, in a recent New York Times Op-Ed about AI, wrote, "Technology is essentially a form of service. We work to make the world better."

     The problem with these calming voices is that, while most AI movies are not realistic depictions of the future, machines much more intelligent than humans will likely exist within the lifetimes of children already born, and they will pose threats to society that need to be addressed. The AAAI Panel, Kurzweil and Lanier are right that intelligent machines will serve humans, but not necessarily all humans. We may not have to fear humans losing control of machines, but given the history of misery caused by humans, that leaves plenty of other worries.

     How can we be so confident that we will build super-intelligent machines? Because the progress of neuroscience makes it clear that our wonderful minds have a physical basis, and we should have learned by now that our technology can do anything that's physically possible. IBM's Watson, playing Jeopardy as skillfully as human champions, is a significant milestone that illustrates the progress of machine language processing. Watson learned language by statistical analysis of the huge amounts of text available online. When machines become powerful enough to extend that statistical analysis to correlate language with sensory data, you will lose a debate with them if you argue that they don't understand language.

     Lanier wrote "These algorithms do not represent emotion or meaning, only statistics and correlations." But if the emotions and meanings in our minds are based in our physical brains and governed by the mathematical laws of physics, then they are nothing more than sufficiently complex statistics and correlations. Skepticism of that idea may feel good now, but it serves our future poorly if it stops us from preparing for machines with minds better than ours.

     By providing services their owners want, intelligent machines will threaten mass unemployment (as Marshall Brain describes in his Robotic Nation essays), invasion of privacy, unassailable robot armies supporting dictators, and unstable financial markets. These problems need to be addressed by regulations, treaties and support for the unemployed.

     However, AI also poses more fundamental threats. The variation in intelligence among humans is small compared to the variation that will exist once we can build intelligent machines and perhaps enhance human brains. Nature creates all humans roughly equal, but artificial systems are generally made in a wide range of sizes and capacities. As a result, the best minds will speak languages that ordinary humans cannot speak or even learn. The precedent of the legal status of children and animals suggests that this may lead to different legal rights depending on intelligence. Furthermore, if your intelligence depends on how large a brain you can afford and your wealth depends on your intelligence, class mobility will be impossible. Because of the profound effects of true AI, we must contemplate profound changes in our social contract to preserve human dignity.

     There should be a public discussion and debate about AI, based neither on fantastic movie plots nor on assurances that humans are in control so there is nothing to worry about. We need to seriously consider the social effects of machines with greater intelligence than humans. We need to negotiate about what kind of future we want rather than simply stumbling into one that may be horrible.