The Reductionist Apocalypse

Bill Hibbard


Comment on Searle's review of Bostrom's book Superintelligence. 20 September 2014.

Self-Modeling Agents Evolving in Our Finite Universe, paper at The Seventh Conference on Artificial General Intelligence (AGI-14). August 2014.

[Message Contains No Recognizable Symbols]: Escape. A story about a technological singularity. Part 6 of [MCNRS]. April 2014.

A story about a technological singularity. Part 5 of [MCNRS]. May 2013.

Wireheading, the Delusion Box and Model-Based Utility Functions. 18 April 2013.

Video of a North Atlantic storm taken on the way home from AGI-12. 19 December 2012.

My paper Avoiding Unintended AI Behaviors won the Singularity Institute's Turing Prize for the Best AGI Safety Paper at AGI-12 and AGI Impacts! Thank you. Note that SI has since become MIRI. 11 December 2012.

I became a Research Associate at the Machine Intelligence Research Institute. December 2012.

The Relation Between Dewey's "A Representation Theorem for Decisions about Causal Models" and the von Neumann-Morgenstern Theorem. This is about Dewey's paper at The Fifth Conference on Artificial General Intelligence (AGI-12). 22 October 2012.

Avoiding Unintended AI Behaviors, paper at The Fifth Conference on Artificial General Intelligence (AGI-12). This paper provides a functional definition of an AI agent and argues that it avoids known types of risks to humans. September 2012.

Decision Support for Safe AI Design, paper at The Fifth Conference on Artificial General Intelligence (AGI-12). September 2012.

The Error in My 2001 VisFiles Column. September 2012.

The Reductionist Apocalypse. When people ask me what the singularity is, I explain that it is the time when human technology will include everything that is physically possible. In short, the Reductionist Apocalypse. To clarify, I consider reductionism to include emergence, though via very complex mathematics. 22 August 2012.

Muehlhauser-Hibbard Dialogue on AGI, a dialogue with Luke Muehlhauser about the dangers of AI. July 2012.

Model-based Utility Functions, in the Journal of Artificial General Intelligence, Volume 3. This paper solves the "wireheading" problem. Intentional self-delusion, or wireheading, is an action available to the agent. The paper shows that a properly formulated utility function, defined in terms of a (possibly learned) environment model, assigns a low value to the consequences of self-delusion, so the agent will not choose to self-delude. May 2012.
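
A minimal sketch of the idea in Python (not the paper's formal construction; the actions and numeric values below are invented for illustration). It contrasts a reward read from possibly corrupted sensors with a utility computed on the modeled world state:

    # Toy sketch: all action names, state fields, and numbers are
    # illustrative assumptions, not from the paper.

    # The agent's environment model predicts the world state that
    # follows each action, including the state of its own sensors.
    MODEL = {
        "work":        {"world_value": 0.9, "sensors_honest": True},
        "self_delude": {"world_value": 0.1, "sensors_honest": False},
    }

    def observed_reward(state):
        # What the sensors report: a deluded sensor reports maximal reward.
        return 1.0 if not state["sensors_honest"] else state["world_value"]

    def model_based_utility(state):
        # Utility defined on the modeled world state itself,
        # so sensor delusion cannot inflate it.
        return state["world_value"]

    for action, state in MODEL.items():
        print(action,
              "observed reward:", observed_reward(state),
              "| model-based utility:", model_based_utility(state))

An agent maximizing observed reward prefers "self_delude" (reward 1.0), while the model-based agent prefers "work" (utility 0.9), which is the point of the construction.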

The Singularity Will Not Eliminate Money, a response to Ben Goertzel's article Will corporations prevent the Singularity? March 2012.

The End of Rough Equality of Intelligence, talk at the Future of AGI Workshop at The Fourth Conference on Artificial General Intelligence (AGI-11). Here's a video of my talk (my talk starts 25:25 into the video and ends at 31:10 - I was given 10 minutes but always speak well within my time limit). August 2011.

Measuring Agent Intelligence via Hierarchies of Environments, paper at The Fourth Conference on Artificial General Intelligence (AGI-11). August 2011.

Societies of Intelligent Agents, paper at The Fourth Conference on Artificial General Intelligence (AGI-11). August 2011.

My interview on What Now with Ken Rose, about AI and its social effects, 18 April 2011.

What the Wisconsin Demonstrations Can Teach Transhumanists, 17 March 2011.

When Future Watsons Play Politics, about the significance of IBM's Watson, 3 March 2011.

The Machine vs. the 'Jeopardy' Champs, letter to the editor of the New York Times, 14 February 2011.

Matching Pennies is Tim Tyler's web site for a tournament of algorithms competing at the game of "matching pennies." I recommended such an algorithm competition in my AGI-08 paper Adversarial Sequence Prediction; adversarial sequence prediction is equivalent to matching pennies. February 2011.
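
A minimal sketch of the matching pennies setting in Python (the two strategies below are invented for illustration, not actual tournament entries): a frequency-counting predictor plays as the "matcher," which scores when the bits agree, against a uniformly random opponent:

    import random

    def frequency_predictor(opponent_history):
        # Predict the opponent's next bit from its frequency of 1s so far.
        if not opponent_history:
            return random.randint(0, 1)
        return 1 if 2 * sum(opponent_history) >= len(opponent_history) else 0

    def random_player(opponent_history):
        # Plays 0 or 1 uniformly at random, ignoring all history.
        return random.randint(0, 1)

    def tournament(matcher, mismatcher, rounds=10000):
        # Matcher scores +1 when the bits match, -1 when they differ.
        matcher_hist, mismatcher_hist, score = [], [], 0
        for _ in range(rounds):
            a = matcher(mismatcher_hist)
            b = mismatcher(matcher_hist)
            score += 1 if a == b else -1
            matcher_hist.append(a)
            mismatcher_hist.append(b)
        return score

    print(tournament(frequency_predictor, random_player))

Against a truly random opponent the expected score is zero; any statistical bias in the opponent is exploited by the predictor, which is the core dynamic of adversarial sequence prediction.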

AI is a Threat Despite Calming Voices, submitted as an op-ed to the New York Times, 20 August 2010.

[Message Contains No Recognizable Symbols]: Reality. A story about a technological singularity. Part 4 of [MCNRS]. March 2010.

Nietzsche's Overhuman is an Ideal Whereas Posthumans Will be Real, in the Journal of Evolution & Technology. January 2010.

Interview with Josh Hall, Paul Davies, and me wins an Emmy. Part of EBRU TV's Matter and Beyond series. October 2009.

Bias and No Free Lunch in Formal Measures of Intelligence, in the Journal of Artificial General Intelligence, Volume 1. September 2009.

[Message Contains No Recognizable Symbols]: the Skeptic's Tale. A story about a technological singularity. Part 3 of [MCNRS]. September 2009.

Ambitious human space travel should wait for smart robots, letter to the editor of the New York Times, 21 July 2009 (5th of 8 letters).

I highly recommend Darren Aronofsky's 1998 film π, which roughly equates the true name of god with a number essential for predicting the stock market. One theme of the singularity is that a program for super-human AI is a sort of name for god. The film is a nice depiction of the obsessive genius and madness of people seeking to create super-human AI. May 2009.

Temptation, a paper rejected by the AGI-09 Workshop on The Future of AI.

Marcus Hutter, Tsvi Achler, me, and Pei Wang on an AGI-09 panel (very happy to be in such company). March 2009.

Distribution of Environments in Formal Measures of Intelligence, paper at The Second Conference on Artificial General Intelligence (AGI-09) (here's a video of my talk). March 2009. (Extended version of paper.)

Superintelligent machines and the financial crisis, letter to the editor of the New York Times, 19 Oct 2008 (1st of 6 letters; I was visiting New York, hence the New York dateline).

Superintelligent machines - partner or master?, letter to the editor of the New York Times, Science Times, 2 Sept 2008.

The Middle Class Squeeze, submitted as an op-ed to the New York Times, 28 August 2008.

The Skills to Survive Globalization, letter to the editor of the New York Times (1st of 6 letters), 5 May 2008.

The Need for Regulation to Prevent Future Financial Crises, submitted as an op-ed to the New York Times, 26 March 2008.

[Message Contains No Recognizable Symbols]: the Movie. A story about a technological singularity. Part 2 of [MCNRS]. March 2008.

AI Politics, talk at the AGI-08 Workshop on the Sociocultural, Ethical and Futurological Implications of Artificial General Intelligence (here's a video of my talk). March 2008.

Ben Goertzel's opening talk at AGI-08 included this funny pair of slides (March 2008): What we expected in 2001, and What we got.

Adversarial Sequence Prediction, paper at The First Conference on Artificial General Intelligence (AGI-08) (here's a video of my talk, and a video of the Q&A panel for my session). March 2008.

Open Source AI, paper at the AGI-08 Workshop on the Sociocultural, Ethical and Futurological Implications of Artificial General Intelligence. March 2008.

"During the Q & A I asked Venter why he spends so much of his time speaking in public, 150 talks a year. He said he sees that as part of his scientific work, to prepare the public for the big changes coming. He wants to avoid repeating the mistakes made with genetically modified crops (GMOs), where there was insufficient transparency and regulation, and irrational opposition by environmentalists, which crippled a crucial field. The public should feel it is included in every stage of genetic science and emerging biotechnology." -- Stewart Brand, commenting on Craig Venter's talk at the Long Now Foundation on 25 Feb 2008. The same logic applies to artificial intelligence - the public must be educated about and exercise collective, democratic control over this technology.

The Technology of Mind and a New Social Contract, in the Journal of Evolution & Technology, and presented in May 2007 at Human Rights for the 21st Century (here's an MP3 audio file of my talk). January 2008.

New Technologies and Economic Growth, letter to the editor of the Sunday Business section of the New York Times, 26 August 2007.

The Simulation Hypothesis, my letter to the editor of the Science Times section of the New York Times (not published), about the hypothesis that our universe is a simulation. August 2007.

I highly recommend Artificial General Intelligence: A Gentle Introduction and Suggested Education for Future AGI Researchers by Pei Wang. June 2007.

Free Will and Morality, letter to the editor of the New York Times (6th of 7 letters), 22 April 2007.

[Message Contains No Recognizable Symbols]. A story about a technological singularity subject to the constraint that natural human authors are unable to depict the actions and dialog of super-intelligent minds. In particular, the languages of super-intelligent minds will be unintelligible to natural humans. Part 1 of [MCNRS]. April 2007.

I highly recommend Democratic Transhumanism 2.0 and Global Technology Regulation and Potentially Apocalyptic Technological Threats by James Hughes. April 2007.

Comment to the National Academy of Engineering Grand Challenges for Engineering. January 2007.

The Next Miracle. Submitted to Vanity Fair's 2006 Essay Contest. September 2006.

Reply to AIRisk, a response to Eliezer Yudkowsky's article on AI risks. June 2006.

Comment on the 2006 Singularity Summit. The Summit should include an advocate of regulating, but not banning, AI. May 2006.

Critique of the 2005 AAAI Fall Symposium on Machine Ethics. Problems with the review process and the content of this symposium. December 2005.

Critique of the SIAI Collective Volition Theory. December 2005.

Voluntary Versus Mandatory Privacy Protection for Web Search. December 2005.

"It is all too evident that our moral thinking simply has not been able to keep pace with the speed of scientific advancement. Yet the ramifications of this progress are such that it is no longer adequate to say that the choice of what to do with this knowledge should be left in the hands of individuals." - Tenzin Gyatso, the 14th Dalai Lama, in the New York Times on 12 November 2005, the day he spoke to the annual meeting of the Society for Neuroscience. The technology of mind will have profound consequences for humanity, and humanity must be educated about and exercise collective, democratic control over this technology.

A Review of Ray Kurzweil's The Singularity is Near. A good book, but it fails to adequately address the dangers of AI. October 2005.

A Manned Mission to Mars is a Bad Idea At This Time, letter to the editor of the New York Times, 30 July 2005.

The Ethics and Politics of Super-Intelligent Machines. My ideas about the ethics and politics of AI. July 2005.

Social Security and the coming Productivity Explosion, submitted as an op-ed to the New York Times, 7 November 2004.

Consciousness and Souls, letter to the editor of the New York Times, 12 September 2004.

Reinforcement Learning as a Context for Integrating AI Research. My ideas about how intelligence works, presented at the 2004 AAAI Fall Symposium on Achieving Human-Level Intelligence through Integrated Systems and Research. July 2004.

Critique of the SIAI Guidelines on Friendly AI. May 2003.

Should Standard Oil Own the Roads? Thoughts on current social issues of information technology. February 2003.

Consciousness is a Simulator with a Memory for Solving the Temporal Credit Assignment Problem in Reinforcement Learning. My ideas about consciousness, presented at Towards a Science of Consciousness. April 2002.

The Introductory Chapter of Götterdämmerung. My attempt to communicate the drama of the singularity. Summer 2001.

Super-intelligent Machines. My first publication about intelligent machines, this defines my basic position. February 2001.


------

For more information, see the SSEC Machine Intelligence Project.
