Date: Wed, 10 May 2006 07:16:41 -0500 (CDT)
From: Bill Hibbard <email@example.com>
To: firstname.lastname@example.org, email@example.com, firstname.lastname@example.org, email@example.com
Subject: the Singularity Summit and regulation of AI
I am concerned that the Singularity Summit will not include any speaker advocating government regulation of intelligent machines. The purpose of this message is not to convince you of the need for such regulation, but just to say that the Summit should include someone speaking in favor of it. Note that, to be effective, regulation should be linked to a widespread public movement like the environmental and consumer safety movements. Intelligent weapons could be regulated by treaties similar to those for nuclear, chemical and biological weapons.
The obvious choice to advocate this position would be James Hughes, and it is puzzling that he is not included among the speakers. Can anyone explain why he is not included?
Nick Bostrom is a speaker, and it is possible that he will advocate such regulation. However, while he has written about the regulation of nanotechnology and biotechnology, I am not aware of anything he has written advocating regulation of intelligent machines. He has been very clear about the need to avoid existential threats from new technologies including artificial intelligence, and presumably he feels that regulation is needed to avoid these threats. I hope he will address this issue explicitly. Machine intelligence poses other threats to human happiness that are not existential but should be addressed by regulation.
Ray Kurzweil has advocated regulation of biotechnology and nanotechnology, but appears to be pessimistic about regulation of AI. In The Singularity is Near, he writes "But there is no purely technical strategy that is workable in this area, because greater intelligence will always find a way to circumvent measures that are the product of a lesser intelligence." I think the answer is to design AI to not want to harm humans (I think SIAI agrees with this, although we disagree on the details). Kurzweil also writes that AI will be "intimately embedded in our bodies and brains" and hence "it will reflect our values because it will be us." But the values of some humans have led to much misery for other humans. If some humans are radically more intelligent than others and retain all their human competitive instincts, this could create a society that the vast majority will not want, if they are given a choice. Meetings like the Singularity Summit should help educate the public about the ethical choices they face with new technologies.
Eliezer Yudkowsky is very clear about the dangers from artificial intelligence but is equally clear about his contempt for any regulation. Rather, it appears that his SIAI organization intends to be the first to create AI, which will be friendly and take over the world before governments have time to react. I think this scenario is very unlikely.
Bill McKibben wants a total prohibition of all the radical new technologies. Used correctly, these technologies can give all humans much better lives, and it would be shameful to ban them completely. It would also be politically impossible to convince all governments to ban them. Rather than preserving the world exactly as it is, we need to be more specific about the values we want to preserve and find ways to enjoy the benefits of new technologies while preserving those values.
I have read the statements of the other speakers, included on the Singularity Summit web site, and none of them suggest that they will advocate regulation of intelligent machines.
Tenzin Gyatso, the 14th Dalai Lama, would be an interesting speaker for the Singularity Summit, not as a religious leader but as an ethical leader. He is very interested in new technologies and spoke to the Society for Neuroscience on 12 November 2005. On that same day he wrote, in an op-ed in the New York Times:
If you are uneasy about listening to a religious leader, consider that in the same op-ed he also wrote:
The Singularity Summit should include all points of view, including advocates for regulation of intelligent machines. It will weaken the Summit to exclude this point of view.
A copy of this message and other writings about the singularity are available at: http://www.ssec.wisc.edu/~billh/g/Singularity_Notes.html
Jeff Medina responded to the email above by suggesting that I submit questions to the Singularity Summit. I have done that with two messages: