The Reductionist Apocalypse

Bill Hibbard


AI On the Internet Should Be Transparent, article rejected by The New York Times and The Guardian. 13 May 2023.

Interview on Bartlomiej Pohorecki's AI Tomorrow Podcast. November 2021.

The Geometry of Human Information Flows Coloring Book, my whimsical take on artificial intelligence. 12 October 2021.

NY Times Hit Piece on Google CEO Sundar Pichai and my rejected comment that the story was likely instigated by someone who wants Sundar's job. 23 June 2021.

An Information Process Perspective on Our Times. April 2021.

The Coup We Are Not Talking About, a great op-ed by Shoshana Zuboff. 29 Jan 2021.

Measuring Intelligence and Growth Rate: Variations on Hibbard's Intelligence Measure, by Samuel Alexander and Bill Hibbard in the Journal of Artificial General Intelligence. Sam wrote this interesting paper and generously invited me to be co-author. 22 Jan 2021.

The Guardian view on DeepMind's brain: the shape of things to come, a thought-provoking article in The Guardian about AI and open source (note my 2008 paper Open Source AI). 6 Dec 2020.

Interview by Johnny Boston on The Need for Open-Source Artificial Intelligence. This is part of his series 2030 Beyond the Film. September 2020.

AI Safety and Ethics, my invited talk to the 2020 Discovery Conference of the Wisconsin Society of Professional Engineers. 13 June 2020.

Preserving Human Freedom in the Information Age, my proposal for policies governing human use of information processing equipment (IPE), to preserve human freedom as the intelligence of IPE increases. Note this great video argument about the danger of political speech being controlled by a "Technocratic Oligarchy." May 2020.

You Are Now Remotely Controlled, a great op-ed by Shoshana Zuboff, based on her book, "The Age of Surveillance Capitalism". 24 Jan 2020.

"Human Compatible" and "Avoiding Unintended AI Behaviors", my review of Stuart Russell's new Book, Human Compatible. I really like this book, in part because it vindicates ideas in my AGI-12 paper, Avoiding Unintended AI Behaviors. 17 October 2019.

We Should Be Paid To Not Interact With the Internet, a letter submitted to the New York Times (not published) in response to Jaron Lanier's proposal that we should be paid for our data. 25 September 2019.

We Need AI Transparency, an op-ed submitted to the New York Times (not published). 12 August 2019.

Good for Google, Bad for America, a great op-ed by Peter Thiel. 1 August 2019.

IT'S SENTIENT - Meet the classified artificial brain being developed by US intelligence programs, an interesting article about AI for national intelligence. 31 July 2019.

Coming of Age With AI, my talk to AI@UW. 13 Mar 2019.

Don't believe the hype: the media are unwittingly selling us an AI fantasy, a great article in The Guardian. 13 Jan 2019.

How (Not) to Think About Artificial Intelligence, my talk invited by the Wisconsin Institute for Discovery as part of their Crossroads of Ideas series. Here's my talk as edited for public television. 4 Dec 2018.

Artificial General Intelligence (AGI), my talk to the Chaos and Complex Systems Seminar. 13 Nov 2018.

The Hacking of America, a great New York Times op-ed by Jill Lepore. Insightful and poetic. Sept 2018.

How the Enlightenment Ends, by Henry Kissinger, has great insight about the impact of AI. However, he goes wrong in objecting to "treating a mathematical process as if it were a thought process": neuroscience demonstrates that human thought processes have a purely physical basis and hence can be created by mathematical processes. June 2018.

How to Trust a Robot, an article in On Wisconsin, the UW Alumni magazine, features my work on AI ethics and transparency. June 2018.

My letter to the editor of the New York Times, not published, responding to an op-ed about the danger of A.I. in Social Media Advertising. March 2018.

When Our Thoughts Are No Longer Our Own, a New York Times op-ed by Hari Kunzru. I wish I had written it. And here's a great article in the Times about AI voice surveillance in China. Coming soon to a country near you! Dec 2017.

If you're curious about the real problem with artificial intelligence, here it is in a nutshell: The Twentieth Century was largely about human minds creating mathematical models of things to understand and control them. The Twenty First Century will largely be about things creating mathematical models of human minds to understand and control them. Oct 2017.

Good News on AI. Commentators as diverse as Maureen Dowd and Steve Bannon have focused on the dangers of AI developed and controlled by elites. When voices that different agree on the danger, that is good news indeed. Oct 2017.

My letter to the editor of the New York Times, not published, responding to an op-ed about Co-Parenting With Alexa. Oct 2017.

How to Regulate Artificial Intelligence, my letter to the editor of the New York Times, not published, responding to an op-ed about How to Regulate AI. Sept 2017.

In his New York Times op-ed, Why Are American Liberals So Afraid of Russia, Ivan Krastev wrote, "It may take a while before working-class Americans start to realize that while the American economy is dramatically different from that of Russia, the technological revolution led by Silicon Valley could in time tilt Western societies toward authoritarian politics in the same way that an abundance of natural resources has made Mr. Putin's regime possible." 17 August 2017.

The Robots Are Here And They're Not Friendly, letter to the editor of the New York Times, 4 August 2017 (2nd of 3 letters).

The Real Threat of AI, my letter to the editor of the New York Times, not published, responding to an op-ed about the Real Threat of AI. June 2017.

A short video produced at the request of The Future Society at The Harvard Kennedy School. This is to introduce myself and describe a possible agenda item for The AI Initiative at The Future Society. 23 April 2017.

Here are two excellent articles about what I fear most from AI: The Rise of the Weaponized AI Propaganda Machine and Robert Mercer: the big data billionaire waging war on mainstream media. While these stories are about the use of AI by the political right, the left uses the same techniques: better than the right in 2008 and 2012, but less well in 2016. 26 Feb 2017.

The Asilomar AI Principles Should Include Transparency About the Purpose and Means of Advanced AI Systems, my response to the Asilomar AI Principles. Other responses to the Asilomar AI Principles include Governing the rise of AI... and other transformative emerging technologies from Nicolas Miailhe and It's time for some messy, democratic discussions about the future of AI from The Guardian. 2 February 2017.

"Data is the defensible barrier, not algorithms," Andrew Ng of Baidu said, speaking about the race to AI. He said that better algorithms can only put a competitor ahead by a year or so. 8 January 2017.

White House OSTP Report Underestimates the Impact of AI. A response to the White House Office of Science and Technology Policy (OSTP) report, Preparing for the Future of Artificial Intelligence. 27 November 2016.

Transparency in Artificial Intelligence, my response (scroll down to Respondent 96 or find 'Hibbard') to the White House OSTP Request for Information (RFI) on Artificial Intelligence. This RFI is part of a White House initiative on AI that also includes a report Preparing for the Future of Artificial Intelligence. 18 October 2016.

I was a member of a panel on Ethics, Ethologies and Ecologies of the Emerging Global Brain at the Future of Mind Symposium held at The New School in New York City. My talk starts at 18:45 in this video. 20 July 2016.

Donald Trump's political success is a symptom of social disruption caused by technological change. 7 May 2016.

White House initiative on Preparing for the Future of Artificial Intelligence. I have long thought that the social and political dangers of AI are more serious than the technical dangers, so am glad to see this White House initiative. 3 May 2016.

Tim Tyler's review of my book Ethical Artificial Intelligence, focusing on Chapter 6. Thanks Tim for a fair review. 4 April 2016.

My response to comments about our article, Humans for Transparency in Artificial Intelligence, claiming that open source AI will tell people with bad or careless intentions how to construct dangerous AI. 16 March 2016.

Humans for Transparency in Artificial Intelligence published in Humanity+, and this Petition for Transparent AI. 11 March 2016.

I join the AI Initiative of The Future Society at Harvard Kennedy School as a Senior Advisor. 27 February 2016.

I was invited to speak on a panel on The Long-Term Future of AI at the 2016 MIT Tech Conference, The Rise of Artificial Intelligence. 20 February 2016.

The History Channel program Nostradamus: 21st Century Prophecies Revealed includes an interview with me about artificial intelligence. 28 December 2015.

AI Development Should Be More Transparent. An article calling for greater transparency in AI development. 14 October 2015.

I was interviewed about Artificial Intelligence by Ray Suarez on his Inside Story program on Al Jazeera America. 28 May 2015.

I was invited to speak at a DARPA Workshop on Robust Autonomy. 9-10 April 2015.

Interview with Luke Muehlhauser about my book Ethical Artificial Intelligence, 9 March 2015.

Self-Modeling Agents and Motivated Value Selection. The self-modeling agent framework, as described in my book, Ethical Artificial Intelligence, avoids the problem of "inconsistency between the agent's utility function and its definition." One case of this problem is what Stuart Armstrong calls "motivated value selection" and describes as "a conflict between agents learning their future values and following their current values" in his paper in the AAAI-15 workshop on AI and Ethics. 7 February 2015.
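
For concreteness, here is a toy Python sketch of motivated value selection (my own illustration, not code from either paper; the actions, states and numbers are invented assumptions): an agent that scores every action, including the action of adopting new values, with its current utility function applied to modeled outcomes will resist a value change unless that change scores well under the values it already has.

    # Toy illustration of motivated value selection (all names and numbers
    # are invented assumptions).
    def predicted_outcome(action):
        # Toy environment model mapping actions to modeled world states.
        return {"keep_values": "status_quo",
                "adopt_new_values": "pursuing_changed_goals"}[action]

    def current_utility(state):
        # The agent's current values, defined over modeled states.
        return {"status_quo": 1.0, "pursuing_changed_goals": 0.2}[state]

    def choose(actions):
        # Self-modifications get no special treatment: they are scored by
        # the current utility function, like any other action.
        return max(actions, key=lambda a: current_utility(predicted_outcome(a)))

    print(choose(["keep_values", "adopt_new_values"]))  # -> keep_values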

This open letter, organized by the Future of Life Institute, advocates research on making AI robust and beneficial, in addition to the traditional focus on AI capability. The letter references this research priorities document and was associated with a conference on The Future of AI: Opportunities and Challenges. January 2015.

Self-Modeling Agents and Reward Generator Corruption, paper at the AAAI-15 Workshop on AI and Ethics. January 2015.

Ethical Artificial Intelligence, a book covering my work on AI. 5 November 2014.

Exploratory Engineering in AI, article with Luke Muehlhauser in the Communications of the ACM. September 2014.

Comment on Searle's review of Bostrom's book on Superintelligence. 20 September 2014.

Self-Modeling Agents Evolving in Our Finite Universe, paper at The Seventh Conference on Artificial General Intelligence (AGI-14). Here's video of my talk. August 2014.

[Message Contains No Recognizable Symbols]: Escape. A story about a technological singularity. Part 6 of [MCNRS]. April 2014.

A story about a technological singularity. Part 5 of [MCNRS]. May 2013.

Wireheading, the Delusion Box and Model-Based Utility Functions. 18 April 2013.

Video of a North Atlantic storm taken on the way home from AGI-12. 19 December 2012.

My paper Avoiding Unintended AI Behaviors won the Singularity Institute's Turing Prize for the Best AGI Safety Paper at AGI-12 and AGI Impacts! Thank you. Note that SI has since become MIRI. 11 December 2012.

I become a Research Associate at the Machine Intelligence Research Institute. December 2012.

The Relation Between Dewey's "A Representation Theorem for Decisions about Causal Models" and the von Neumann-Morgenstern Theorem. This is about Dewey's paper at The Fifth Conference on Artificial General Intelligence (AGI-12). 22 October 2012.

Avoiding Unintended AI Behaviors, paper at The Fifth Conference on Artificial General Intelligence (AGI-12). This paper provides a functional definition of an AI agent and argues that it avoids known types of risks to humans. September 2012.

Decision Support for Safe AI Design, paper at The Fifth Conference on Artificial General Intelligence (AGI-12). September 2012.

The Error in My 2001 VisFiles Column. September 2012.

The Reductionist Apocalypse. When people ask me what the singularity is, I explain it is the time when human technology will include everything that is physically possible. In short, the Reductionist Apocalypse. Just to clarify, I consider that reductionism includes emergence but with very complex mathematics. 22 August 2012.

Muehlhauser-Hibbard Dialogue on AGI, dialogue with Luke Muehlhauser about the dangers of AI, July 2012.

Model-based Utility Functions, in the Journal of Artificial General Intelligence, Volume 3. This paper solves the "wireheading" problem. Intentional self-delusion or wireheading is an action of the agent. This paper shows that a properly formulated utility function defined in terms of a (possibly learned) environment model will assign a low value to the consequences of that action of self-delusion, so the agent will not choose to self-delude. May 2012.
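
To make the mechanism concrete, here is a toy Python sketch (my own illustration, not code from the paper; the actions, states and numbers are invented assumptions): an agent that maximizes its observed reward signal prefers the self-delusion action, while an agent whose utility is computed from its environment model does not, because the model says self-delusion leaves the world unimproved.

    # Toy contrast between reward maximization and a model-based utility
    # function (all names and numbers are invented).
    ACTIONS = ["work", "self_delude"]

    def world_after(action):
        # Toy environment model: self-delusion corrupts the agent's inputs
        # but leaves the external world unimproved.
        return {"work": "world_improved", "self_delude": "world_neglected"}[action]

    def observed_reward(action):
        # The raw reward signal, which self-delusion can max out.
        return {"work": 0.7, "self_delude": 1.0}[action]

    def model_based_utility(state):
        # Utility is a function of the modeled world state, not of the signal.
        return {"world_improved": 1.0, "world_neglected": 0.1}[state]

    reward_agent_choice = max(ACTIONS, key=observed_reward)
    model_agent_choice = max(ACTIONS,
                             key=lambda a: model_based_utility(world_after(a)))
    print(reward_agent_choice)  # -> self_delude
    print(model_agent_choice)   # -> work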

The Singularity Will Not Eliminate Money, a response to Ben Goertzel's article Will corporations prevent the Singularity?. March 2012.

The End of Rough Equality of Intelligence, talk at the Future of AGI Workshop at The Fourth Conference on Artificial General Intelligence (AGI-11). Here's a video of my talk (my talk starts 25:25 into the video and ends at 31:10 - I was given 10 minutes but always speak well within my time limit). August 2011.

Measuring Agent Intelligence via Hierarchies of Environments, paper at The Fourth Conference on Artificial General Intelligence (AGI-11). August 2011.

Societies of Intelligent Agents, paper at The Fourth Conference on Artificial General Intelligence (AGI-11). August 2011.

My interview about AI and its social effects on What Now with Ken Rose, 18 April 2011.

What the Wisconsin Demonstrations Can Teach Transhumanists, 17 March 2011.

When Future Watsons Play Politics, about the significance of IBM's Watson, 3 March 2011.

The Machine vs. the 'Jeopardy' Champs, letter to the editor of the New York Times, 14 February 2011.

Matching Pennies is Tim Tyler's web site for a tournament of algorithms competing at the game of "matching pennies." I recommended such an algorithm competition in my AGI-08 paper Adversarial Sequence Prediction, which is equivalent to matching pennies. February 2011.
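
For readers unfamiliar with the game, here is a minimal Python sketch of a matching pennies match (my own illustration with invented strategies, not Tim Tyler's tournament code): each player outputs a bit per turn, the matcher wins a point when the bits agree, and the mismatcher wins when they differ, so each side profits exactly to the extent it predicts the other.

    # Toy matching pennies match between two predictors (invented strategies).
    import random

    def frequency_predictor(opponent_history):
        # Predict the opponent's next bit as its most frequent past bit.
        if not opponent_history:
            return random.randint(0, 1)
        return int(2 * sum(opponent_history) >= len(opponent_history))

    def random_predictor(opponent_history):
        return random.randint(0, 1)

    def play(matcher, mismatcher, rounds=1000):
        matcher_bits, mismatcher_bits, score = [], [], 0
        for _ in range(rounds):
            m = matcher(mismatcher_bits)       # matcher plays its prediction
            mm = 1 - mismatcher(matcher_bits)  # mismatcher plays the opposite
            score += 1 if m == mm else -1
            matcher_bits.append(m)
            mismatcher_bits.append(mm)
        return score  # positive favors the matcher

    print(play(frequency_predictor, random_predictor))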

AI is a Threat Despite Calming Voices, submitted as an op-ed to the New York Times, 20 August 2010.

[Message Contains No Recognizable Symbols]: Reality. A story about a technological singularity. Part 4 of [MCNRS]. March 2010.

Nietzsche's Overhuman is an Ideal Whereas Posthumans Will be Real, in the Journal of Evolution & Technology. January 2010.

Interview with Josh Hall, Paul Davies and me wins an Emmy. Part of EBRU TV's Matter and Beyond series. October 2009.

Bias and No Free Lunch in Formal Measures of Intelligence, in the Journal of Artificial General Intelligence, Volume 1. September 2009.

[Message Contains No Recognizable Symbols]: the Skeptic's Tale. A story about a technological singularity. Part 3 of [MCNRS]. September 2009.

Ambitious human space travel should wait for smart robots, letter to the editor of the New York Times, 21 July 2009 (5th of 8 letters).

I highly recommend Darren Aronofsky's 1998 film π, which roughly equates the true name of god with a number essential for predicting the stock market. One theme of the singularity is that a program for super-human AI is a sort of name for god. The film is a nice depiction of the obsessive genius and madness of people seeking to create super-human AI. May 2009.

Temptation, a paper rejected by the AGI-09 Workshop on The Future of AI.

Marcus Hutter, Tsvi Achler, me and Pei Wang on an AGI-09 panel (very happy to be in such company). March 2009.

Distribution of Environments in Formal Measures of Intelligence, paper at The Second Conference on Artificial General Intelligence (AGI-09) (here's a video of my talk). March 2009. (Extended version of paper.)

Superintelligent machines and the financial crisis, letter to the editor of the New York Times, 19 Oct 2008 (1st of 6 letters - I was visiting New York hence the New York dateline).

Superintelligent machines - partner or master?, letter to the editor of the New York Times, Science Times, 2 Sept 2008.

The Middle Class Squeeze, submitted as an op-ed to the New York Times, 28 August 2008.

The Skills to Survive Globalization, letter to the editor of the New York Times (1st of 6 letters), 5 May 2008.

The Need for Regulation to Prevent Future Financial Crises, submitted as an op-ed to the New York Times, 26 March 2008.

[Message Contains No Recognizable Symbols]: the Movie. A story about a technological singularity. Part 2 of [MCNRS]. March 2008.

AI Politics, talk at the AGI-08 Workshop on the Sociocultural, Ethical and Futurological Implications of Artificial General Intelligence (here's a video of my talk). March 2008.

Ben Goertzel's opening talk at AGI-08 included this funny pair of slides (March 2008): What we expected in 2001, and What we got.

Adversarial Sequence Prediction, paper at The First Conference on Artificial General Intelligence (AGI-08) (here's a video of my talk, and a video of the Q&A panel for my session). March 2008.

Open Source AI, paper at the AGI-08 Workshop on the Sociocultural, Ethical and Futurological Implications of Artificial General Intelligence. March 2008.

"During the Q & A I asked Venter why he spends so much of his time speaking in public, 150 talks a year. He said he sees that as part of his scientific work, to prepare the public for the big changes coming. He wants to avoid repeating the mistakes made with genetically modified crops (GMOs), where there was insufficient transparency and regulation, and irrational opposition by environmentalists, which crippled a crucial field. The public should feel it is included in every stage of genetic science and emerging biotechnology." -- Stewart Brand, commenting on Craig Venter's talk at the Long Now Foundation on 25 Feb 2008. The same logic applies to artificial intelligence - the public must be educated about and exercise collective, democratic control over this technology.

The Technology of Mind and a New Social Contract, in the Journal of Evolution & Technology, and presented in May 2007 at Human Rights for the 21st Century (here's an MP3 audio file of my talk). January 2008.

New Technologies and Economic Growth, letter to the editor of the Sunday Business section of the New York Times, 26 August 2007.

The Simulation Hypothesis, my letter to the editor of the Science Times section of the New York Times, not published, about the hypothesis that our universe is a simulation. August 2007.

I highly recommend Artificial General Intelligence: A Gentle Introduction and Suggested Education for Future AGI Researchers by Pei Wang. June 2007.

Free Will and Morality, letter to the editor of the New York Times (6th of 7 letters), 22 April 2007.

[Message Contains No Recognizable Symbols]. A story about a technological singularity subject to the constraint that natural human authors are unable to depict the actions and dialog of super-intelligent minds. In particular, the languages of super-intelligent minds will be unintelligible to natural humans. Part 1 of [MCNRS]. April 2007.

I highly recommend Democratic Transhumanism 2.0 and Global Technology Regulation and Potentially Apocalyptic Technological Threats by James Hughes. April 2007.

Comment to the National Academy of Engineering Grand Challenges for Engineering. January 2007.

The Next Miracle. Submitted to Vanity Fair's 2006 Essay Contest. September 2006.

Reply to AIRisk, reply to Eliezer Yudkowsky's article on AI risks, June 2006.

Comment on the 2006 Singularity Summit. The Summit should include an advocate of regulating, but not banning, AI. May 2006.

Critique of the 2005 AAAI Fall Symposium on Machine Ethics. Problems with the review process and the content of this symposium. December 2005.

Critique of the SIAI Collective Volition Theory. December 2005.

Voluntary Versus Mandatory Privacy Protection for Web Search. December 2005.

"It is all too evident that our moral thinking simply has not been able to keep pace with the speed of scientific advancement. Yet the ramifications of this progress are such that it is no longer adequate to say that the choice of what to do with this knowledge should be left in the hands of individuals." - Tenzin Gyatso, the 14th Dalai Lama, in the New York Times on 12 November 2005, the day he spoke to the annual meeting of the Society for Neuroscience. The technology of mind will have profound consequences for humanity, and humanity must be educated about and exercise collective, democratic control over this technology.

A Review of Ray Kurzweil's The Singularity is Near. A good book, but it fails to adequately address the dangers of AI. October 2005.

A Manned Mission to Mars is a Bad Idea At This Time, letter to the editor of the New York Times, 30 July 2005.

The Ethics and Politics of Super-Intelligent Machines. My ideas about the ethics and politics of AI. July 2005.

Social Security and the coming Productivity Explosion, submitted as an op-ed to the New York Times, 7 November 2004.

Consciousness and Souls, letter to the editor of the New York Times, 12 September 2004.

Reinforcement Learning as a Context for Integrating AI Research. My ideas about how intelligence works, presented at the 2004 AAAI Fall Symposium on Achieving Human-Level Intelligence through Integrated Systems and Research. July 2004.

Critique of the SIAI Guidelines on Friendly AI. May 2003.

Should Standard Oil Own the Roads?. Thoughts on current social issues of information technology. February 2003.

Consciousness is a Simulator with a Memory for Solving the Temporal Credit Assignment Problem in Reinforcement Learning. My ideas about consciousness, presented at Towards a Science of Consciousness. April 2002.

The Introductory Chapter of Götterdämmerung. My attempt to communicate the drama of the singularity. Summer 2001.

Super-intelligent Machines. My first publication about intelligent machines, this defines my basic position. February 2001.


------

For more information, see the SSEC Machine Intelligence Project.
