"Human Compatible" and "Avoiding Unintended AI Behaviors"

Bill Hibbard

17 October 2019

Stuart Russell's new book, Human Compatible: Artificial Intelligence and the Problem of Control (HC2019), is great and everyone should read it. And I am proud that the ideas in my AGI-12 paper, Avoiding Unintended AI Behaviors (AGI2012), are very similar to ideas in HC2019. AGI2012 had its moment of glory, winning the Turing Prize for the Best AGI Safety Paper at AGI-12 from the Singularity Institute (now called MIRI), but it has since been largely forgotten. I see agreement with Stuart Russell as a form of vindication for my ideas. This article explores the relation between HC2019 and AGI2012.

Chapters 7-10 of HC2019 "suggest a new way to think about AI and to ensure that machines remain beneficial to humans, forever." Chapter 7 opens with three principles for beneficial machines, which are elaborated over Chapters 7-10:

  1. The machine's only objective is to maximize the realization of human preferences.
  2. The machine is initially uncertain about what those preferences are.
  3. The ultimate source of information about human preferences is human behavior.

AGI2012 defines an AI agent that is similar to Marcus Hutter's Universal AI (UAI2004). However, whereas the UAI2004 agent learns a model of its environment as a distribution of programs for a universal Turing machine, the AGI2012 agent learns a model of its environment as a single stochastic, finite-state program. The AGI2012 agent is finitely computable (assuming a finite time horizon for possible futures), although not practically computable. The ideas of AGI2012 correspond quite closely with the HC2019 principles:

  1. The objective of the AGI2012 agent is to maximize the realization of human preferences, expressed as a sum of modeled utility values, one for each human (utility functions are a way to express preferences, as long as the set of preferences is complete and transitive). These modeled utility values are not static. Rather, the AGI2012 agent relearns its environment model and its models for human utility values periodically, perhaps at each time step.
  2. The AGI2012 agent knows nothing about human preferences until it learns an environment model, so AGI2012 proposes a "two-stage agent architecture." The first stage agent learns an environment model but does not act in the world. The second stage agent, which acts in the world, takes over only after the first stage agent has learned a model for the preferences of each human.
  3. The AGI2012 agent learns its environment model, including its models for human preferences, from its interactions with its environment, which include its interactions with humans.
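The two-stage architecture and the summed-utility objective above can be sketched in code. This is a minimal illustration, not the AGI2012 formalism: the class and function names (PreferenceModel, stage_one, stage_two_action), the running-average update, and the toy state/action representation are all my assumptions for the sketch.

```python
class PreferenceModel:
    """Model of one human's utility over world states, learned from observation.

    Hypothetical stand-in for the per-human utility models AGI2012 describes;
    the update rule here is a simple running average, chosen for illustration.
    """
    def __init__(self):
        self.utilities = {}  # state -> estimated utility in [0.0, 1.0]

    def update(self, state, observed_utility):
        # Move the estimate a small step toward the observed preference signal.
        prev = self.utilities.get(state, 0.5)
        self.utilities[state] = 0.9 * prev + 0.1 * observed_utility

    def utility(self, state):
        return self.utilities.get(state, 0.5)  # uninformed default

def stage_one(observations, humans):
    """First stage: observe only (no actions), learning one model per human."""
    models = {h: PreferenceModel() for h in humans}
    for human, state, signal in observations:
        models[human].update(state, signal)
    return models

def stage_two_action(models, candidate_actions, predict_state):
    """Second stage: choose the action maximizing the sum of modeled utilities."""
    def total_utility(action):
        state = predict_state(action)
        return sum(m.utility(state) for m in models.values())
    return max(candidate_actions, key=total_utility)
```

In this sketch the second stage simply re-reads the models at each decision, which mirrors the idea that the modeled utility values are relearned rather than fixed.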

Because of the length limits for AGI-12 papers, AGI2012 is terse. My online book, Ethical Artificial Intelligence (EAI2014), combines some of my papers into a (hopefully) coherent and expanded narrative. Chapter 7 of EAI2014 provides an expanded narrative for AGI2012.

On page 178, HC2019 says, "In principle, the machine can learn billions of different predictive preference models, one for each of the billions of people on Earth." The AGI2012 agent does this, in principle.

On pages 26, 173 and 237, HC2019 suggests that humans could watch movies of possible future lives and express their preferences. The AGI2012 agent connects models of current humans to interactive visualizations of possible futures (see Figure 7.4 in EAI2014) and asks the modeled humans to assign utility values to those futures (a weakness of AGI2012 is that it did not reference research on inverse reinforcement learning algorithms). As an author of Interactivity is the Key (VIS1989), I prefer interactive visualizations to movies.

As HC2019 and AGI2012 both acknowledge, there are difficult issues in expressing human preferences as utility values and in combining utility values across different humans. AGI2012 argues that constraining utility values to the fixed range [0.0, 1.0] provides a sort of normalization. Regarding the issues of the tyranny of the majority and evil human intentions, AGI2012 proposes applying a function with positive first derivative and negative second derivative (that is, increasing and concave) to utility values, so that the AI agent gains greater total utility from actions that help more dissatisfied humans (justified in Section 7.5 of EAI2014 on the basis of Rawls's A Theory of Justice). This is a hack, but there seem to be no good theoretical answers for human utility values. HC2019 and AGI2012 both address the issue of the agent changing the size of the human population.
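The effect of an increasing, concave transform can be shown with a few lines of code. The square root is one such function (positive first derivative, negative second derivative on (0, 1]); it is my choice for illustration, since AGI2012's argument does not depend on any particular function of that shape.

```python
import math

def aggregate(utilities):
    """Sum of transformed per-human utilities, each assumed to lie in [0.0, 1.0].

    sqrt is an illustrative concave, increasing transform; any function with
    positive first and negative second derivative has the same qualitative effect.
    """
    return sum(math.sqrt(u) for u in utilities)

# Raising a dissatisfied human's utility (0.1 -> 0.2) gains more aggregate
# utility than the same-sized gain for a satisfied human (0.8 -> 0.9),
# so the agent prefers actions that help the worse-off.
gain_low  = aggregate([0.2, 0.8]) - aggregate([0.1, 0.8])
gain_high = aggregate([0.1, 0.9]) - aggregate([0.1, 0.8])
```

Because the transform is concave, equal increments of raw utility are worth more at the bottom of the [0.0, 1.0] range than at the top, which is the Rawlsian tilt toward the least well-off that the paragraph above describes.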

On page 201, HC2019 says, "Always allocate some probability, however small, to preferences that are logically possible." The AGI2012 agent does this using Bayesian logic.
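One simple way to guarantee that no logically possible hypothesis about preferences ever receives zero probability is additive (Laplace) smoothing over a hypothesis space. This sketch is my illustration of that idea, not a mechanism from AGI2012 or HC2019; the hypothesis names and observations are toy placeholders.

```python
from collections import Counter

def smoothed_posterior(observations, hypothesis_space, alpha=1.0):
    """Probability weights over hypotheses with add-alpha smoothing.

    Every hypothesis in the space keeps probability at least
    alpha / (len(observations) + alpha * len(hypothesis_space)),
    so no logically possible hypothesis is ever ruled out entirely.
    """
    counts = Counter(observations)
    total = len(observations) + alpha * len(hypothesis_space)
    return {h: (counts[h] + alpha) / total for h in hypothesis_space}

# Nine observations consistent with one hypothesis still leave the
# alternative with small but nonzero probability.
probs = smoothed_posterior(["likes_tea"] * 9, ["likes_tea", "likes_coffee"])
```

A genuinely Bayesian agent would get the same guarantee from a prior with full support over the hypothesis space; smoothing is just the most compact way to show the "however small, never zero" property.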

On page 245, HC2019 warns against the temptation to use the power of AI to engineer the preferences of humans. I wholeheartedly agree, as reflected in my recent writings and talks. Given an AI agent that acts to create futures valued by (models of) current humans, it is an interesting question how current humans would value futures in which their values are changed.

On pages 254-256, HC2019 warns of possible futures in which humans are so reliant on AI that they become enfeebled. Again, it is an interesting question how current humans would value futures in which they must overcome challenges versus futures in which they face no challenges.

On page 252, HC2019 says, "Regulation of any kind is strenuously opposed in the [Silicon] Valley," and on page 249 it says that "three hundred separate efforts to develop ethical principles for AI" have been identified. I believe one goal of these AI ethics efforts is to substitute voluntary standards for mandatory ones. Humanity needs mandatory standards. Most importantly, humanity needs developers to be transparent about how their AI systems work and what they are used for.