AI on the Internet Should Be Transparent

(Article rejected by The New York Times and The Guardian)

Bill Hibbard

13 May 2023

The language skill of artificial intelligence (AI) systems based on large language models is causing excitement and fear among AI experts and the general public. We are now faced with machines that equal human language skill, and AI progress is likely accelerating. When human-level language skill is combined with AI's ability to engage a significant fraction of the human population in intimate conversations, AI will have a far better model of how society works than any human can ever have. The most advanced AI research is being done by companies in the advertising business, such as Google and Facebook (Microsoft's alliance with OpenAI gives language skill to its Bing search engine and will increase its share of the search/advertising market), and the primary job of their AI systems is to persuade us. They will be brilliant at persuading us individually and at creating peer pressure to persuade us collectively. Extending trends we already see, AI may be able to engineer a society in which people are completely isolated from one another and unable to distinguish reality from fantasy. AI and the Internet form the perfect infrastructure for a small group to control the rest of humanity.

AI experts have recently called for stopping or pausing AI research, neither of which is likely, and for regulating AI. I have advocated for AI transparency since the First Conference on Artificial General Intelligence in 2008. That is, I want AI developers to tell the public how their systems work and what they are being used for. In any social activity, transparency helps the public understand risks, and this is especially true for AI, which is evolving so quickly and is so difficult for the public to understand.

In response to the radical danger posed by AI, I propose a radical approach to AI transparency, based on the assumption that the greatest danger is posed by AI systems that communicate with human society: any AI system that communicates with the public via the Internet must be totally transparent. If it is not, it is denied Internet access. While no single country can control Internet access globally, each can prevent non-transparent AI systems from reaching large numbers of people via the Internet in its territory. We certainly see China doing this for systems that carry content it dislikes. If the world's democracies require transparency for AI systems communicating with their citizens via the Internet, that will be a strong incentive for the rest of the world to join.

Some object that transparency will tell "bad guys" how to build harmful AI, and so advocate that AI systems be verified by third-party auditors who keep the details secret from the public. But the stakes with AI, nothing less than control of humanity, are too high to trust any small group; there are numerous examples of regulatory capture. Under the proposed policy, if bad guys want to access society via the Internet, they must be transparent, enabling law enforcement and the public to see the bad things they are doing. And if bad actors exploit the transparency of others to violate their intellectual property rights, transparency will expose those violations in any system connected to the Internet.

AI transparency should include source code, design documents, training procedures, and the goals and instructions given to AI systems by their owners and operators. This is an enormous amount of information, but even without transparency it would have to be analyzed by any third party auditing AI safety and ethics. Doing the analysis in public will ensure that it is thorough and accurate, and that it truly serves the public interest.

We are right to fear technology that may soon be smarter than we are. This is no time for business as usual. We need complete openness about AI and how it is used.