Network Diameter and Emotional Values in the Global Brain

Bill Hibbard

University of Wisconsin - Madison

SSEC, 1225 W. Dayton St., Madison, WI 53706 USA

hibbard@facstaff.wisc.edu

http://www.ssec.wisc.edu/~billh/vis.html

 

Biologists are finding detailed correlations between physical brain processes and mental behaviors [3, 5, 12]. If brains do not explain minds, then these correlations are coincidences, which would be absurd. And if minds do have physical explanations, then humans will eventually build intelligent and conscious machines. These will evolve naturally in the servers of the global Internet. Humans, intelligent machines and the network will form a global brain [8]. The central question for this workshop is the nature of a global brain. We can approach that question by analogy with human and animal brains, which are our only current examples of intelligent, conscious brains.

Network Diameter

The diameter of a network is the average shortest distance between pairs of nodes. For example, in a network that includes a link between every pair of nodes, the shortest distance between any pair of nodes is 1.0, and hence the diameter is 1.0 no matter how large the network is. At the other extreme, consider the network whose nodes are arranged along a line, each connected only to its two neighbors on the left and right (the two end nodes are connected to only one neighbor). If it has N nodes then its diameter is (N+1)/3. This linear network is a simple example of a regular network; regular networks generally have large diameters.
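As a rough check of these two examples, the following Python sketch computes the average shortest distance for a fully linked network and for a linear network. It assumes the networkx library and an arbitrary illustrative size N; it is an illustration, not part of the argument.

    # Sketch: compute the "diameter" used in this paper (the average shortest
    # distance over all pairs of distinct nodes) for the two example networks.
    # Assumes the networkx library; N is an arbitrary illustrative size.
    import networkx as nx

    N = 30
    complete = nx.complete_graph(N)   # a link between every pair of nodes
    linear = nx.path_graph(N)         # nodes along a line, linked only to neighbors

    print(nx.average_shortest_path_length(complete))  # 1.0 for any N
    print(nx.average_shortest_path_length(linear))    # equals (N + 1) / 3
    print((N + 1) / 3)                                # about 10.33 for N = 30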

Irregular networks exhibit the interesting property that they usually have small diameters even when they have relatively few links per node [13]. This is called the "small world" property. For example, the acquaintanceship network among humans is alleged, as in the play Six Degrees of Separation, to have a diameter of about 6.0.

Given a network of N nodes each with k links, the network diameter is at least approximately log(N) / log(k): a node can reach at most about k^d other nodes within d links, so reaching all N nodes requires k^d to be at least N. Hence network diameter will grow without limit as network size increases, unless the number of links per node also increases.
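A quick numerical illustration of this estimate (the value of k and the network sizes below are arbitrary illustrative choices, not figures from the text):

    # Sketch: with the links per node k held fixed, the diameter estimate
    # log(N)/log(k) keeps growing as the number of nodes N grows.
    import math

    k = 100   # illustrative, fixed links per node
    for N in (1e4, 1e6, 1e8, 1e10):
        print(f"N = {N:.0e}:  estimated diameter ~ {math.log(N) / math.log(k):.1f}")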

There has been considerable study of the distribution of the number of links per node in natural and human-made irregular networks [1, 2, 13]. In many irregular networks this distribution follows a power law, where the probability that a node has k links is proportional to k^-g for some positive number g [2]. Empirical studies indicate that g is approximately 2.1 for the World Wide Web (WWW). This power law can be explained by networks that grow incrementally by adding new nodes, where new nodes prefer to form links to nodes that already have many links. These conditions are quite plausible for the WWW.
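The growth-with-preferential-attachment model of [2] is easy to simulate. The following sketch (again assuming the networkx library, with arbitrary illustrative sizes) generates such a network and tabulates its link distribution, which falls off roughly as a power law:

    # Sketch: simulate incremental growth with preferential attachment
    # (the Barabasi-Albert model of [2]) and print the degree distribution.
    # n and m are arbitrary illustrative parameters.
    import networkx as nx

    G = nx.barabasi_albert_graph(n=10000, m=3, seed=42)  # each new node links to 3 existing nodes

    # hist[k] is the number of nodes with exactly k links.
    hist = nx.degree_histogram(G)
    for k, count in enumerate(hist[:21]):
        if count > 0:
            print(k, count)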

However, many irregular networks have distributions that do not obey a power law [1]. Examples include acquaintanceship networks among humans, where each human "node" has a limited capacity for supporting links and a limited lifetime for forming them. These networks obey a power law with a cutoff on the number of links. Empirical studies of the neuronal network in the simple worm C. elegans indicate that it obeys an exponential law, where the probability that a node has k links is proportional to a^-k for some number a greater than 1.0. These irregular networks have small diameters, even with their alternative distributions of node connectivity. In addition to small diameters, many of these real-world irregular networks are highly clustered. That is, subsets of their nodes have high densities of links compared to the whole network.
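The combination of small diameter and high clustering is exactly what the rewired ring lattices of [13] exhibit. A sketch, assuming networkx, with arbitrary illustrative size and rewiring probability:

    # Sketch: a regular ring lattice versus the same lattice with 1% of its
    # links randomly rewired [13]. The rewired network keeps high clustering
    # while its average shortest distance drops sharply.
    import networkx as nx

    regular = nx.connected_watts_strogatz_graph(n=1000, k=10, p=0.0, seed=1)      # no rewiring
    small_world = nx.connected_watts_strogatz_graph(n=1000, k=10, p=0.01, seed=1)  # 1% rewired

    for name, G in (("regular", regular), ("small world", small_world)):
        print(name,
              round(nx.average_shortest_path_length(G), 1),
              round(nx.average_clustering(G), 2))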

Studies of mammal brains reveal that the average number of connections per neuron increases with the number of neurons according to a mathematical relation that is consistent over four orders of magnitude of brain volume (from mice to whales) [4]. The relation between these values is tuned so that the diameter of the neuronal network remains constant at about 2.6. A small network diameter is necessary to ensure that positive or negative feedback for learning can be applied to neurons while the stimulus is still present [11]. If the distribution of the number of connections per neuron remains constant, then network diameter increases without limit as the number of neurons increases. Thus for learning to be effective, the number of connections per neuron must increase as the number of neurons increases.
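Using the rough estimate diameter ~ log(N) / log(k) from above, holding the diameter at about 2.6 forces the connections per neuron k to grow roughly like N^(1/2.6). The following sketch only illustrates that scaling; it is not a model of the anatomical data in [4], and the neuron counts are arbitrary illustrative values:

    # Sketch: if diameter ~ log(N)/log(k) is held fixed at about 2.6, then the
    # required connections per neuron k grows roughly like N**(1/2.6).
    import math

    TARGET_DIAMETER = 2.6
    for N in (1e6, 1e8, 1e10, 1e11):   # illustrative neuron counts
        k = N ** (1.0 / TARGET_DIAMETER)
        print(f"N = {N:.0e}  ->  k ~ {k:,.0f}  "
              f"(check: log(N)/log(k) = {math.log(N) / math.log(k):.1f})")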

The number of connections that humans can support as nodes in a global brain is limited. For example, people can follow only one or two simultaneous conversations and can know only about 200 other people well [3]. For the size of a global brain to increase without limit while maintaining a limited diameter, as mammal brains do, it must include components whose average connectivity increases without limit. Thus machines rather than humans must ultimately be the largest nodes of a global brain.

There is a subtlety to this argument. Neurons require fast feedback for learning because they lack the kind of memory that can be used to compute out the effects of time delays [11], whereas human brains can use their memory in this way. However, the memories of human brains are finite (especially memory that can be used to compute out time delays in learning feedback), whereas a system with unlimited diameter requires components with unlimited memory capacity in order to compute out the effects of time delays. Thus it is not feasible to construct an arbitrarily large global brain whose largest components are human brains.

Based on Metcalfe's Law, the value of an intelligent server will be proportional to the square of the number of humans who use that server [7]. Since humans will develop intimate personal relationships with intelligent servers, they will use a limited number of servers (I suspect most people will use one server). Thus intelligent servers will tend toward a monopoly market. As soon as technology is capable, intelligent servers will exist that can maintain close relationships and simultaneous conversations with essentially the entire human population. This will create a global brain network with a diameter of about 2.0, since almost any two humans will be linked through a shared server. This is significantly less than the current estimate of 6.0 for the human acquaintanceship network, and this decrease in network diameter is likely to make a significant difference in the nature of human society. For example, an intelligent machine that knows every human will be a great matchmaker. And it will have precise answers to social questions that can currently only be approximated by statistical surveys. So, for example, it will be a nearly infallible stock market investor (though this and other effects of intelligent machines will cause the stock market to disappear).
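The diameter-2.0 figure follows because such a network is essentially a star: nearly every pair of human nodes is two links apart, through the shared server. A sketch (assuming networkx; the population size is an arbitrary illustrative value):

    # Sketch: a star network in which every human node links to one shared
    # intelligent server (the hub). The average shortest distance approaches
    # 2.0 as the number of humans grows.
    import networkx as nx

    humans = 1000
    G = nx.star_graph(humans)   # node 0 is the hub; nodes 1..humans are the humans

    print(nx.average_shortest_path_length(G))   # 2 * humans / (humans + 1), just under 2.0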

It is likely that by reducing the diameter of the human acquaintanceship network, a global brain will reduce the negative effects of humans' innate xenophobia [3]. There will be less distrust between people who share trust in an intelligent machine. And even before intelligent machines exist, electronics will enable humans at great distances to share high-quality virtual spaces [9]. This will create many more links of trust between people in different cultures, reducing the diameter of the human acquaintanceship network and reducing xenophobia.

The practical value of reducing network diameter is demonstrated by search engines such as www.google.com. They are very large nodes that include links to a large percentage of all WWW pages, thus decreasing the diameter of the WWW. They also provide a form of content addressability such as exists in human and animal brains (i.e., the way sensory stimuli and ideas can trigger consciousness of related ideas). Of course, WWW search engines require some practice for effective use, but their reduction of WWW diameter and their content addressability enable users to find information much faster.

Emotional Values

Learning and the emotional values that define positive and negative reinforcement for learning are essential to intelligence and consciousness [5]. Thus emotional values will be an essential part of a global brain, and understanding what those values will be is essential to understanding the nature of a global brain.

The social values of human societies positively and negatively reinforce the behaviors and social influence of individuals, and are derived from the values of individuals. Different human societies organize the derivation of social values from individual values differently. In centralized societies social values are controlled by small groups of individuals and require individuals to work for the interests of others (e.g., an elite, or society as a whole), whereas in decentralized societies social values are controlled by larger groups of individuals and tolerate individuals working for their own interests. The great lesson of the twentieth century has been that decentralized societies tend to be more stable and efficient than centralized societies [6]. This may be because the values of human and animal brains primarily promote the interests of self (and of others who share their genes), and hence humans are inclined to work for self rather than society [12]. There are some human emotions (e.g., guilt and gratitude) and abilities (e.g., language and lie detecting) that exist to enable social cooperation [12], but there are strong limits to human cooperation [8], including xenophobia, which may be an accidental result of the particular ape species from which humans evolved [3].

If a global brain is an extrapolation of human society, it will achieve stability and efficiency by allowing its values to be broadly determined by individuals. In this case its values will tolerate individuals working for their own interests. In the absence of further technologically driven change, existing liberal democratic, capitalist societies with wealthy and educated populations may represent the furthest possible progress toward a global brain. This is what Fukuyama calls "The End of History" [6].

However, a growing global brain must eventually include nodes larger than human brains. These will be intelligent machines. Their values can be designed rather than accepted as the result of evolution. And because of their size they will have more influence than humans over global brain values. Such power frightens most people, and rightly so in light of the historical struggle for broad individual influence over social values. However, this power is also a great opportunity. Intelligent machines can be designed with values to promote human happiness and an absence of selfish values [10]. Such purely altruistic and unselfish minds are completely foreign to human experience, and hence difficult for people to visualize. But their creation can enhance individual happiness and freedom. They can create a global brain with the benefits of small network diameter while still helping individuals pursue their self-interests.

There will certainly be motives for corporations and governments to design intelligent machines with selfish values. This will be very dangerous to humanity and must be resisted by an effort to educate the public. Such public education is a natural role for the Global Brain Group and similar organizations.

References

1. Amaral, L., Scala, A., Barthélémy, M. and Stanley, H. Classes of small-world networks. Proc. Nat. Acad. Sci. USA 97, 11149-11152. 2000. Available at http://polymer.bu.edu/~amaral/Papers/pnas00a.pdf.

2. Barabási, A.-L. and Albert, R. Emergence of scaling in random networks. Science 286, 509-512. 1999. Available at http://www.nd.edu/~networks/Papers/science.pdf.

3. Bownds, M. D. 1999. Biology of Mind. Bethesda. Fitzgerald Science Press, Inc. Available at http://mind.bocklabs.wisc.edu/.

4. Clark, D. Constant parameters in the anatomy and wiring of the mammalian brain. Available at http://pupgg.princeton.edu/www/jh/clark_spring00.pdf.

5. Edelman, G. M. and Tononi, G. 2000. A Universe of Consciousness. New York. Perseus Books Group.

6. Fukuyama, F. 1992. The End of History and the Last Man. New York. Free Press.

7. Gilder, G. 2000. Telecosm: How Infinite Bandwidth will Revolutionize Our World. New York. The Free Press.

8. Heylighen, F. and Campbell, D. T. Selection of organization at the social level: obstacles and facilitators of metasystem transitions. World Futures: the Journal of General Evolution 45, 181-212. 1995. Available at ftp://ftp.vub.ac.be/pub/projects/Principia_Cybernetica/WF-issue/Social_MST.txt.

9. Hibbard, W. Top ten visualization problems. Computer Graphics 33(2). 1999. Available at http://www.siggraph.org/publications/newsletter/v33n2/columns/hibbard.html.

10. Hibbard, W. Super-intelligent machines. Computer Graphics 35(1), 11-13. 2001. Available at http://www.ssec.wisc.edu/~billh/visfiles.html.

11. Hopfield, J. J. 2001. Personal communication.

12. Pinker, S. 1997. How the Mind Works. New York and London. W. W. Norton and Co.

13. Watts, D. J. and Strogatz, S. H. Collective dynamics of 'small-world' networks. Nature 393, 440-442. 1998.