We Need AI Transparency

An Op-ed submitted to the New York Times on 12 August 2019

Bill Hibbard

We humans are creating an electronic infrastructure of billions of devices with artificial eyes, ears and voices, connected via the Internet to "big data" servers with artificial intelligence. This infrastructure gives those who control the servers the power for social surveillance and control. As server intelligence increases, that power will become absolute, as will the consequent corruption. This is most evident in China, where central control by a political elite is the governing ideology. China has passed the tipping point toward a future in which every citizen will agree with the government on every issue: total thought control. The Hong Kong protests are like the scene in Jaws where Quint furiously kicks to stay out of the shark's mouth.

The New York Times wrote in an editorial that the Internet is dividing into three: China, the EU and the US. This is a partition of the world into AI data domains, with AI servers in one country prohibited from connecting to artificial eyes, ears and voices in another. China's Great Firewall keeps foreign data out and, more significantly, keeps Chinese data in. In the US we want to keep Russian "bots" out of our politics, Russia has plans to disconnect from the Internet, and India wants to stop "colonization" by tech giants.

While national governments seek to deny foreign AI servers the power for surveillance and control of their populations, citizens need a way to deny this power to domestic AI servers as well. The starting point is algorithmic transparency: the right of everyone to know the algorithms of AI servers. The EU's General Data Protection Regulation takes a large step toward transparency. However, the AI industry is lobbying intensely to avoid giving away its trade secrets.

Much recent discussion of privacy and artificial intelligence misses a critical issue: this technology is advancing rapidly, and our current problems are a mild foretaste of what is coming. When AI can talk with us in our own languages, it will see deeply inside our minds. Its intimate knowledge of billions of people will make it expert at creating peer pressure to control us. And we will not be able to trust video or any electronic communication, which can be faked or hacked using AI. The future possibilities will be too complex and dynamic to be anticipated by regulation. The only hope for democracy to keep pace with technology is complete transparency about how AI works and what it is being used for.

It is not only elites who are corrupted by the power of our electronic infrastructure. In the intensity of our political divide, ordinary citizens want to use this power to control the speech and thoughts of those who oppose them. The struggles of left versus right are like two fingers in the finger trap of electronic surveillance and control.

Some object that AI transparency will tell bad guys how to build evil AI. But the VW emissions scandal demonstrates that even apparent good guys may hide wrongdoing in trade-secret software. The future of human freedom is at stake, and we cannot simply trust AI developers. We may need an exception to complete AI transparency where national security is involved, but this cannot apply to any AI system that interacts with the public. International treaties on biological weapons have been largely successful and are a good model for treaties on AI (think computer viruses).

There is also the objection that AI transparency will help bad guys find weaknesses in AI-based cyber-defenses. This is true and must be addressed, but it does not outweigh the danger of losing human freedom to AI manipulation. Transparency can be enforced because AI systems must interact with the world to exercise social surveillance and control, making their presence known. Once known, AI systems can be examined by experts to verify their transparency.

AI will fundamentally alter our social contract. AI transparency is simply the right of every person to know how that contract is changing.