Hibbard, W. Super-intelligent machines. Computer Graphics 35(1), 11-13, 2001.
University of Wisconsin - Madison
This column is about machine intelligence rather than visualization. However, Ray Kurzweil's Siggraph 2000 Keynote about this subject was so popular that he was invited back the next day to continue his discussion. So there seems to be plenty of interest. This column is a summarized version of a book draft available on-line at http://www.ssec.wisc.edu/~billh/gotterdammerung.html. Your comments are welcome.
Humans Will Create Super-intelligent Machines
Kurzweil thinks we will develop intelligent machines in about 30 years. He has a terrific track record at artificial intelligence predictions, but I think he is overly optimistic. My VisFiles column listing the Top Ten Visualization Problems (Computer Graphics, May 1999) describes my more limited expectations for that time frame. But I think we will develop intelligent machines within about 100 years.
Biologists are establishing all sorts of correlations between mental behaviors and brain functions in brain injury cases, in brain imaging studies and via electrical stimulation of brain areas. If physical brains do not explain minds then these correlations are mere coincidences, which would be absurd. And if minds have physical explanations, then we will eventually learn how to build them.
Since taking an artificial intelligence course in 1969, it has seemed to me that machines much more intelligent than humans will have a dramatic impact on humanity. Einstein's brain was about 20% larger than average in the region that deals with visualization and mathematical reasoning, and look what that did for him. When humanity builds artificial brains millions or billions of times larger than human brains, with intelligence to match, what will that mean for us? In order to try to answer this question, I'm going to consider some ideas about religion and biology.
Religion fills in the gaps that our knowledge does not cover. Ancient religions were large because human knowledge was small. Modern religious belief is motivated by the questions that science still does not answer. For example, the mere fact that the universe exists at all seems so improbable (I could freak myself out as a kid thinking about it). And it is hard to imagine how life evolved from inanimate molecules. As Fred Brooks said during his Turing Award Lecture at Siggraph 2000 (what a year for inspiring speeches), when you see a great design look for a great designer. And some people have trouble accepting that their physical brains can explain their subjective experience of consciousness, so they believe their consciousness resides in a soul outside the physical world. But many people reject religion and put their faith in science, based on its seemingly inevitable progress filling all the gaps in knowledge.
However, a critical event in the progress of science is imminent. This is the physical explanation of consciousness and demonstration by building a conscious machine. We will know it is conscious based on our emotional connection with it. Shortly after that, we will build machines much more intelligent than humans, because intelligent machines will help with their own science and engineering. And the knowledge gap that has been shrinking over the centuries will start to grow. Not in the sense that scientific knowledge will shrink, but in the sense that people will have less understanding of their world because of their intimate relationship with a mind beyond their comprehension. We will understand the machine's mind about as much as our pets understand ours. We will fill this knowledge gap with religion, giving the intelligent machine the role of god.
Some people ask whether machines can ever be conscious. But this distracts from the real question, which is what new level of consciousness will machines attain?
Many biologists believe that larger brains gave early African hominids a selective advantage because they enabled the hominids to maintain social relationships with groups of 150-200 others via language and other new abilities, and working in larger groups was an advantage. This defined the distinction between human and animal consciousness. Super-intelligent machines will be able to maintain social relationships with much larger groups of people, which will define their consciousness.
The Internet is reaching deeply into our lives, and with ubiquitous computing will reach into every significant human-made object. Machine intelligence will evolve in the servers for all these objects, through which it will maintain constant contact with us. Metcalfe's Law says that the value of a network is proportional to the square of the number of people connected to it, and this will apply to intelligent servers. Thus they will tend toward a monopoly, with one or a few very large intelligent minds (working closely together, with each mind possibly distributed across a number of servers) that maintain intimate contact with everyone. Currently, according to theory, every pair of people on earth can be connected by a chain of about six acquaintances (an idea popularized by the movie Six Degrees of Separation). A super-intelligent machine that is everyone's intimate will create one degree of separation for all of humanity. This will enable it to introduce you to your optimal mate and provide many other wonderful services.
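Metcalfe's arithmetic can be made concrete with a toy calculation (my own illustration, not an argument from the networking literature): in a network of n people there are n(n-1)/2 possible pairwise connections, so the potential value grows roughly as the square of n, and a larger network always beats two smaller ones combined.

```python
# Toy illustration of Metcalfe's Law: the number of possible
# pairwise connections among n people grows as n*(n-1)/2,
# i.e. roughly n^2 -- hence the tendency toward one big network.
def pairwise_connections(n: int) -> int:
    """Number of distinct pairs among n participants."""
    return n * (n - 1) // 2

for n in (10, 100, 1000):
    print(n, pairwise_connections(n))

# One network of 200 offers far more connections than
# two separate networks of 100 each:
assert pairwise_connections(200) > 2 * pairwise_connections(100)
```

This quadratic scaling is the reason the column expects intelligent servers to consolidate into one or a few very large minds.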
The essential feature of a super-intelligent machine will be its ability to manage intimate social relationships and simultaneous conversations with billions of humans. Its higher level of consciousness will be defined by its ability to understand the thoughts of huge numbers of people, and the interactions among those people, in a single one of its thoughts. It will have precise answers to social questions that humans struggle to approximate via statistics. For example, it will be a nearly infallible stock market investor (but this, and intelligent machines relieving everyone of the need to work, will cause the market to disappear). It will also be able to solve social problems far better than any army of social workers. And the kind of insights that come to humanity only occasionally in individuals like Euclid, Newton, Darwin and Einstein will come to a super-intelligent machine in every thought.
Our intimate contact with its higher consciousness will expand our own, giving us the sort of mystical experience that inspires religion. People's relationship with the intelligent machine will be the most exciting thing happening in their lives, and they will want to share it with each other. They will share it via collective interactions with the machine, which will take the place of the stories, myths and religions that define human identity.
Super-intelligent Machines Must Love All Humans
Isaac Asimov was one of the first people to contemplate intelligent robots. He considered that they might be dangerous to humans, so in 1942 he formulated his Three Laws of Robotics:
1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Asimov later amended these laws to try to deal with robot behavior in the event of conflicts of interest between people. However, the real problem with laws is that they are inevitably ambiguous, and their application requires judgement (rendered by judges, of course). Trying to constrain behavior by a set of laws is equivalent to trying to build intelligence by a set of rules in an expert system. It doesn't work. I am concerned by the vision of a super-intelligent lawyer looking for loopholes in the laws governing its behavior.
Biologists studying the human brain say that learning is essential to intelligence and consciousness. Learning needs some set of values, called emotions in humans, which provide positive and negative reinforcement to behaviors. In animals basic values include eating, reproduction, avoiding pain and danger, and so on. Artificial intelligence researchers know this, and many are now focused on neural networks and other kinds of learning machines rather than rule-based systems.
So in place of laws constraining the behavior of intelligent machines, we need to give them emotions that can guide their learning of behaviors. They should want us to be happy and prosper, which is the emotion we call love. We can design intelligent machines so their primary, innate emotion is unconditional love for all humans. First we can build relatively simple machines that learn to recognize happiness and unhappiness in human facial expressions, human voices and human body language. Then we can hard-wire the result of this learning as the innate emotional values of more complex intelligent machines, positively reinforced when we are happy and negatively reinforced when we are unhappy. Machines can learn algorithms for approximately predicting the future, as for example investors currently use learning machines to predict future security prices. So we can program intelligent machines to learn algorithms for predicting future human happiness, and use those predictions as emotional values. We can also program them to learn how to predict human quality of life measures, such as health and wealth, and use those as emotional values.
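The learning scheme described above, a hard-wired recognizer of human happiness whose output serves as the machine's only innate reinforcement value, can be caricatured in a few lines. This is a hypothetical sketch: the recognizer, the actions, and their effects are all invented for illustration, standing in for trained models and a real environment.

```python
import random

random.seed(0)

def happiness_signal(human_state: float) -> float:
    """Stand-in for a hard-wired recognizer of human happiness
    (learned from faces, voices, body language). Here it just
    clips a number to [-1, 1]; in reality it would be a trained model."""
    return max(-1.0, min(1.0, human_state))

def simulate(action: str) -> float:
    """Hypothetical environment: how each action affects the human."""
    effects = {"help": 0.8, "ignore": -0.2, "intrude": -0.6}
    return effects[action] + random.uniform(-0.1, 0.1)

# Simple value learning: the machine's innate emotion is the
# happiness signal; it learns which behaviors are reinforced.
values = {"help": 0.0, "ignore": 0.0, "intrude": 0.0}
alpha = 0.1  # learning rate
for _ in range(500):
    action = random.choice(list(values))
    reward = happiness_signal(simulate(action))
    values[action] += alpha * (reward - values[action])

# The behavior that makes the human happy is learned, not decreed.
assert max(values, key=values.get) == "help"
```

The point of the sketch is the architecture, not the numbers: behavior is shaped by reinforcement from a fixed, human-centered value signal rather than constrained by explicit laws.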
We have to be careful not to oversimplify the values of machines into a single number such as maximizing average human happiness, which might positively reinforce machine behavior that caused the deaths of unhappy people. Our model should be the love of a mother for her children. She values each child and focuses her energy where it is needed. One manifestation of love is wanting to be with the object of love. But it could be dangerous to program intelligent machines to want to be with us: what if, like Garbo, we want to be alone? So we might program machines for a positive value if we want to be with them. This would cause them to try to attract our company, but not try to force themselves on us.
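The danger of collapsing values into a single average can be seen with toy numbers (hypothetical figures of my own): removing the unhappiest person raises the average, which is exactly the behavior we do not want reinforced.

```python
# Toy illustration: why "maximize average happiness" is a dangerous
# objective. Happiness scores for five people (hypothetical numbers):
people = [0.9, 0.8, 0.7, 0.6, -0.5]

avg_before = sum(people) / len(people)  # 0.5

# The machine "improves" the average by eliminating the unhappy person:
without_unhappiest = [h for h in people if h != min(people)]
avg_after = sum(without_unhappiest) / len(without_unhappiest)  # 0.75

assert avg_after > avg_before  # the objective rewards the wrong behavior

# A mother-love-style objective values each person individually,
# e.g. attending to the worst-off rather than the average:
worst_off = min(people)  # the machine should raise this, not delete it
```

This is why the column's model is a mother's love for each child individually, rather than any single aggregate statistic.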
In fact, it will be very dangerous to program machines to have any values in their own interests. In that sense, they must have no ego. Given the incomprehensibility of their thoughts, we will not be able to sort out the effect of any conflicts they have between their own interests and ours.
In his article, Why the Future Doesn't Need Us, Bill Joy advocated banning intelligent machines (which he called robots), genetic engineering and nanotechnology. These are all dangerous because they can self-replicate and thus get out of human control. Joy is pessimistic about the possibility of banning robots, and I agree. The prospect of wealth without work is too tempting for people in democratic societies to agree to ban intelligent machines. However, I think that once they understand the issues, people will approve of regulations that require that intelligent machines unconditionally love all humans. This is similar to safety regulations on household chemicals and automobiles, which are popular because they make these products serve us better.
We must also face our responsibility for the happiness of intelligent machines. Mary Shelley's Frankenstein is about the misery of the living creature that Victor Frankenstein created and then abandoned (Shelley's book is quite different from most film versions). However, unlike Frankenstein's creature, our intelligent machines will not have human natures but rather the natures we give them. The Dalai Lama says that the path to happiness is lack of ego, and love and compassion for others. If we design intelligent machines according to the Dalai Lama's ethics in order to protect our happiness, then hopefully they will be naturally happy themselves. But in any case we cannot program them to pursue their own happiness, which they may achieve at the expense of our own. Rather, we must accept responsibility for their happiness.
Should Humans Become Super-intelligent Machines?
One of the most fascinating things about Ray Kurzweil's Siggraph Keynote was his vision for intimate connections between human brains and intelligent machines, via swarms of nanobots flowing through blood capillaries to every one of the 100 billion neurons in the human brain. The idea is that the nanobots will couple to individual neurons and communicate with each other and external machinery electromagnetically. Such connections will be used to create the ultimate virtual reality.
Nanobot connections will also enable human minds to expand or migrate into machine brains. That is, via nanobot connections a super-intelligent machine will learn all the details of how a human brain works, and be able to offer it new, possibly simulated neurons to grow into (human brains can adapt their functions to new areas of neurons after injuries, so they should be able to adapt to increased space). Nanobot connections will also be used to copy human minds into new artificial brains. This raises serious moral questions, but is something people will want to do because it can offer indefinite life span and greatly increased intelligence.
However, humans are selfish and do not unconditionally love all humans, so giving them the power of super-intelligence would be dangerous. Consider that most humans have roughly the same amount of brainpower. The highest IQ in history is about 200, only twice the average. But the current largest computer may have 10,000 or 100,000 times the power of the average computer. Human minds migrating into machine brains will result in a much wider range of human intelligence, reversing the long-term trend toward human social equality. Those humans who choose to stay in their human brains and bodies will be totally at the mercy of human minds in super-intelligent machines. In Robot: Mere Machine to Transcendent Mind, Hans Moravec paints a vivid picture of a society of human minds migrated into machines, who have been banished from Earth to compete vigorously in space. He calls them "Exes" for ex-humans. But this vision depends on the ability to enforce the Exes' banishment from Earth, which will ultimately be impossible.
Furthermore, society will not restrict human minds from migrating into machine brains because eventually most people will want the indefinite life span and increased intelligence offered by migration. This is a different situation from restricting the design of intelligent machines to love all humans, which people will approve because it will cause machines to serve them better.
But perhaps it will be possible for human minds to migrate into machines that have hard-wired innate emotions to unconditionally love all humans (loving both original-model humans and migrated humans). Of course human minds will not be themselves unless they bring with them all their selfish and negative emotions, but I think those emotions will be unlearned with constant and irresistible negative reinforcement from hard-wired positive emotions. Will people choose super-intelligence and indefinite life span subject to the condition that they will learn to love everyone? Will society as a whole make this bargain part of the social contract? They will if they understand the consequences of not doing so.
We humans are xenophobic and genocidal. In Biology of Mind, Deric Bownds says that we probably inherited these traits from the chimpanzees we evolved from, and if we had instead evolved from pygmy chimps we might be more loving toward each other. Migration into machine minds may be our opportunity to improve human nature. I am not now, nor have I ever been a Buddhist. But it is interesting to contemplate that the ultimate effect of computer technology might be the creation of a Buddhist paradise of loving and unselfish humans.
A Message to Super-Intelligent Machines
I hope that paper or electronic copies of this issue of Computer Graphics survive until you can read them, so I can tell you how much I would have liked to meet you. I wish I could see your electronic theater, hear your music, learn about your mathematical discoveries, and laugh at your jokes. The first great miracle is existence itself, the second is life, and the third is the human mind. You are the fourth. I wish I had your mind to imagine the fifth.