A Defense of Humans for Transparency in Artificial Intelligence
16 March 2016
The article "Humans for Transparency in Artificial Intelligence" and its associated petition argue for transparency about what AI is used for and how it works, including a call for open source AI. Some object that open source AI will teach people with bad or careless intentions how to construct dangerous AI, as argued in the article "Should AI Be Open?" My goal with the article and petition is to create a world in which there is overwhelming public opinion for AI transparency and against secret, large-scale AI projects.
The article "Should AI Be Open?" raises the question of fast takeoff versus slow takeoff. I believe the first appearance of human-level AI will require a scale of computing resources available to only a small number of institutions, and that the race to human-level AI will be won by a system that uses those resources fairly efficiently, so further explosive increases in resources and efficiency are unlikely. Furthermore, as Matt Mahoney has argued, intelligence is a function of both resources and knowledge, and there are limits to how fast knowledge can increase, since increasing knowledge requires interaction with the world. Thus a fast takeoff is unlikely, and early human-level AI systems will be difficult to hide because of their resource use and their interactions with the world. While open source may help "Dr. Evil" or "Dr. Amoral" (to use terms from "Should AI Be Open?") develop their secret AI systems, such systems can be detected and stopped by governments responding to overwhelming public opinion for AI transparency (AI itself will be good at detecting secret AIs).
The speed of AI takeoff depends on the scalability of intelligence algorithms. Across four orders of magnitude of brain volume, network diameter in mammal brains remains roughly constant at approximately 2.6; to maintain that constant diameter, the number of connections per neuron increases with brain volume. If mammal brains are models for how intelligence works, then explosive growth in the number of neurons in AI brains may be difficult, because the number of connections per neuron, and hence the cost of each neuron, must increase as well.
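A rough way to see why a constant diameter forces connectivity to grow: in a random network of N nodes with average degree k, typical path lengths scale as roughly log(N)/log(k), so holding the diameter fixed requires k to grow as N raised to the power 1/diameter. The sketch below is a back-of-the-envelope illustration of that scaling, not a reconstruction of the mammal-brain data; the function name and the neuron counts are my own illustrative assumptions, with only the 2.6 diameter taken from the text above.

```python
def degree_for_constant_diameter(n_neurons, diameter=2.6):
    """In a random network, typical path length ~ log(N) / log(k).
    Holding the diameter fixed therefore forces the average degree
    to grow as k ~ N ** (1 / diameter)."""
    return n_neurons ** (1.0 / diameter)

# Required connections per neuron grow polynomially with network size:
for n in (1e6, 1e8, 1e10):
    k = degree_for_constant_diameter(n)
    print(f"N = {n:.0e}: connections per neuron ~ {k:,.0f}")

# Total wiring then grows as N * k = N ** (1 + 1/diameter), i.e. faster
# than linearly in the number of neurons -- the scaling obstacle the
# paragraph above points to.
```

Under this toy model, multiplying the neuron count by 100 multiplies the required per-neuron connectivity by about 100**(1/2.6), roughly a factor of six, so the cost of adding neurons keeps rising.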
The article "Should AI Be Open?" discusses claims that a slow AI takeoff will create a world of many roughly equal-powered AIs. I don't think so. Initial resource requirements will be large, limiting the number of initial human-level AIs. And the initial AI systems will benefit from a positive feedback loop: more intelligence => more money => more intelligence. This will increase their advantage over late and weak entrants to the AI race. Thus we (and especially our children) will likely live in a world shaped by a few large AIs. Overwhelming public opinion for transparency is necessary to create proper oversight of these systems and proper ethics within the organizations that build them.
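A toy compound-growth model illustrates how that feedback loop widens an early entrant's lead. The 10% per-period growth rate and the head starts below are arbitrary assumptions chosen for illustration, not estimates.

```python
def capability_after(periods, start=1.0, growth=0.10):
    """Compound growth: each period, capability is multiplied by
    (1 + growth), standing in for intelligence => money => intelligence."""
    return start * (1.0 + growth) ** periods

# The absolute gap between an early entrant and one arriving
# 10 periods later keeps widening as both compound:
for t in (10, 20, 30):
    gap = capability_after(t) - capability_after(t - 10)
    print(f"period {t:2d}: early entrant leads by {gap:.2f}")
```

Even though both systems grow at the same rate, compounding turns a fixed head start into an ever-larger absolute advantage, which is why late and weak entrants fall further behind rather than catching up.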
Consider that many current Internet services are free to consumers and paid for by clients who use the services to persuade consumers to buy products and to support political candidates and positions. Future AI will be able to subtly embed persuasion in its natural language conversations with consumers and to persuade by creating subtle peer pressure. This will ultimately enable it to control society. If the public demands transparency about what AI is used for, then such social manipulation becomes visible and can be resisted. A transparent environment will likely discourage any desire to create manipulative AI in the first place. Social manipulation is the AI outcome I fear most because it is such a natural progression from current practice.
The article "Should AI Be Open?" says, "If someone tries to use AI to exploit others, the government can pass a complicated regulation against that." But the complexity of AI makes it an ideal candidate for regulatory capture. Transparency about what AI is used for can create an informed and engaged public to prevent regulatory capture and hence resist social control by AI.
To understand the role of open source in transparency about what AI is used for, consider the VW emissions-control software scandal. Trade secret protection enabled VW to claim the software did one thing when in fact it did something quite different. The outrage of VW owners is a good model for what I want in AI transparency: public outrage at secret, large-scale AI projects. I want a world in which the best AI developers do not want to work on secret projects, and in fact we already see the Google DeepMind team making their amazing work open source!
The article "Should AI Be Open?" contrasts "Dr. Evil" and "Dr. Amoral" with "Dr. Good." But how do we know that "Dr. Good" really is good? Let's have overwhelming public opinion for AI transparency to encourage all the Dr. Goods to actually be good. Let them have glory and get rich, as long as the world gets transparent AI that really is good. I don't think humanity should entrust its fate to any organization creating AI in secret.