Review of The Singularity is Near by Ray Kurzweil
A good book, but it fails to adequately address the dangers of AI
Bill Hibbard     October 2005

In "The Singularity is Near" Ray Kurzweil continues his role as the primary advocate and educator for the coming technological revolution in intelligence. As he describes, this and other related new technologies promise enormous benefits to humans, such as indefinite life span and greatly increased intelligence. However, artificial intelligence (AI) also poses serious threats, and Kurzweil does not adequately address those threats and the possible ways to defend against them. This is in contrast to his detailed descriptions of threats from genetic engineering and nanotechnology, and possible defenses against them.

On page 374 Kurzweil writes that one of his fundamental principles is "respect for human consciousness". But if AI develops without any regulation it will simply extend human military and economic competition. This will amplify the gap between winners and losers, whether AI is used to improve human brains, as in Kurzweil's vision, or consists merely of machines that serve the interests of their human owners. Without any regulation of AI, human society will evolve to a state in which the intelligence gaps between humans are greater than the gaps between current humans and their pets, and in which humans retain all their competitive instincts. Such a society will show different levels of respect to humans based on their intelligence, similar to the differences in the levels of respect currently shown to humans and their pets. Before such a situation develops, the public must be informed and given a choice about whether this is what they really want. As the primary educator on intelligence technology, Kurzweil could be very helpful in explaining this. On page 470 he quotes Leon Fuerth, former National Security Advisor to Vice President Gore, as saying that "The majority of Americans will not simply sit still while some elite strips off their personalities and uploads themselves into their cyberspace paradise." It is encouraging that people in such powerful positions are aware of the issues.

In his section on "... and Dangers", pages 397-400, Kurzweil discusses dangerous scenarios for genetic engineering and nanobots but not for AI, which is puzzling.

He does briefly address the issue for AI later, on page 420. He writes that AI will be "intimately embedded in our bodies and brains" and hence "it will reflect our values because it will be us." But which of us? Initially the wealthy and powerful, and the rest of us later if they allow it. One could make the same argument against regulation of nuclear weapons: they are built and controlled by "us", and hence control over them reflects our values. Kurzweil also writes that an "open free-market system" is the best way for AI to embody human values, and argues against "secretive government programs" for controlling AI. But surely he cannot believe that an open market would have been preferable to the secretive government programs used to control nuclear weapons. In fact, in a New York Times op-ed on 17 October 2005, he and Bill Joy argue against publishing the genome of the 1918 influenza virus. For viruses, they prefer secretive government programs. Kurzweil must know that AI poses much greater dangers than either nuclear weapons or viruses. Such dangerous technologies are managed according to collectively agreed regulations, at least in democratic countries, and with some effort to extend those regulations internationally. It is imperative that AI be managed with the same safeguards.

On page 424, regarding efforts "to deal with the danger from pathological R (strong AI)", Kurzweil writes: "But there is no purely technical strategy that is workable in this area, because greater intelligence will always find a way to circumvent measures that are the product of a lesser intelligence." This is not true if our strategy is to design greater intelligence so that it does not want to circumvent protective measures. I discuss this at length at:

http://www.ssec.wisc.edu/~billh/g/mi.html

There will be those who design machines with values that do not comply with regulation, but this threat is best met by putting resources into the development of compliant AIs that can help detect and eliminate non-compliant AIs. This is very similar to Kurzweil's own prescription for accelerating the development of defensive technologies against genetic engineering and nanotechnology. He makes clear that such defenses are very difficult to build, but that the problem must be solved to avoid a catastrophe. The same logic applies to defenses against pathological AI: very difficult, but necessary.

In his "Response to Critics" chapter, Kurzweil addresses the issue of government regulation, but only whether it will slow down or stop technological progress. He does not address here the question of whether AI should be regulated.

Kurzweil ends (except for the notes and other back matter) on a very good note in "Human Centrality". He rebuts Stephen Jay Gould's claim that all scientific revolutions reduce the stature of humans in the universe by asserting that human brains, and the successors they create, are the main drivers of the universe.