Hibbard, W. Goodbye. Computer Graphics 37(4), 4-6. 2003.
University of Wisconsin - Madison
This is my last VisFiles column. It has been a real privilege to write VisFiles and to have the opportunity to invite so many fine contributions from others. I'd like to thank Patricia Galvis-Assmus, Lynn Valastyan and Gordon Cameron for excellent editorial support, and to thank all those who contributed guest columns. And I am delighted that Kwan-Liu Ma of the University of California at Davis has agreed to take over VisFiles. He will do an excellent job. This final VisFiles is a list of ideas that might have been columns.
Isaac Newton

Newton has been my hero since childhood. Before Newton, only geniuses could solve what were called "geometry problems". He invented the tools that allow any reasonably bright person to solve such problems, and to model and predict events in our physical world. Newton contributed more than any other individual to human mastery of the physical world.
His tools are 1) the calculus (co-invented with Leibniz), which enables us to reduce complex physical systems to their elementary parts, and 2) the laws of physics that govern the actions of those elementary parts. The drawback with Newton's tools is that they often produce equations without any closed-form solution, which makes prediction difficult. However, the twentieth century saw the invention of a third tool, the computer, which can make predictions from equations produced by Newton's tools even when there is no closed-form solution. The drawback with computer predictions is that they generally consist of masses of numbers. That's where our visualization community comes in. Our software tools enable humans to understand the masses of numbers produced by computer predictions. Like Newton's tools, our visualization tools amplify the intelligence of the people who use them. Newton is the intellectual grandfather of everyone in the visualization community.
In addition to being the greatest thinker in human history, Newton was a very interesting person. He believed that the truth about the physical world could be derived from the Bible, but that since he was not smart enough to do that he had to understand the world by scientific experiment. He practiced alchemy. He died a virgin and put a lot of emotional energy into feuds with other scientists. He spent a couple of years in a deep depression that ended his most creative work. He was a closet Unitarian at a college that required him to believe in the Trinity. His private writings were suppressed by the church for 100 years after his death (you can read his private writings at http://www.newtonproject.ic.ac.uk/). One thing I have enjoyed about the Siggraph conferences is their mix of artists and programmers, including lots of interesting characters. Perhaps a certain touch of madness goes with being Newton's intellectual progeny.
Invisible Visualization Algorithms
All the glory goes to the rendering algorithms that convert arrays of numbers (or other forms of information) into depictions. Whole conferences are organized around volume rendering and flow rendering algorithms. However, the success of a visualization system often depends on algorithms that hardly get any research attention. For example, data are stored in a wide variety of file and server formats (the old joke is that the great thing about file standards is that there are so many to choose from). Not only do visualization systems need software for reading all these formats, but they also often need to read files without knowing in advance which format they use (this can occur with data downloaded across the Internet or data from old sources). Algorithms for discovering formats of files don't involve much theory, but they are important.
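Such a format-discovery algorithm can be sketched in a few lines. This is an illustrative sketch, not code from any particular visualization system: it guesses a file's format from its leading "magic" bytes rather than trusting the file extension. The signatures shown (netCDF classic, HDF5, GRIB, PNG) are real; the function name and structure are mine.

```python
# Illustrative file-format discovery by "magic number" sniffing.
# The byte signatures are real published ones; everything else
# (names, structure) is a hypothetical sketch.

MAGIC_SIGNATURES = {
    b"\x89HDF\r\n\x1a\n": "HDF5",
    b"\x89PNG\r\n\x1a\n": "PNG",
    b"GRIB": "GRIB",
    b"CDF": "netCDF (classic)",
}

def sniff_format(path):
    """Guess a file's format from its leading bytes, ignoring the extension."""
    with open(path, "rb") as f:
        head = f.read(16)          # longest signature above is 8 bytes
    for magic, name in MAGIC_SIGNATURES.items():
        if head.startswith(magic):
            return name
    return "unknown"
```

Real systems layer further checks on top of this (trying each format's reader in turn, for instance), since not every format announces itself with a fixed prefix.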
Metadata are data about data, and different file formats include different kinds of metadata. Given a temperature value, metadata could include the fact that it is a temperature, its units (e.g., degrees Kelvin), its spatial and temporal location, the coordinate system of its location, whether it is a point sample or an average over a spatial region and time interval, whether it was observed or generated by a simulation, an estimate of the error in the value, whether the value is missing, and so on. There have been visualization research papers about specific forms of metadata, but effective systems need algorithms for discovering which metadata are available with a data set, and for interpreting them in visualizations and user interactions (including reasonable interpretations when particular forms of metadata are absent).
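The "reasonable interpretations when metadata are absent" part can be sketched as merging whatever metadata a data set supplies over a table of conservative defaults. The field names here are hypothetical, not drawn from any real file standard:

```python
# Illustrative metadata interpretation with conservative defaults.
# Field names are hypothetical, not from any real format.

METADATA_DEFAULTS = {
    "units": "unknown",
    "coordinate_system": "cartesian",
    "sampling": "point",       # vs. an average over a region/interval
    "source": "unknown",       # observed vs. simulated
    "missing_value": None,     # sentinel marking missing samples
}

def interpret_metadata(available):
    """Merge the metadata discovered in a data set over safe defaults,
    so downstream visualization code can rely on every field existing."""
    meta = dict(METADATA_DEFAULTS)
    meta.update(available)
    return meta
```

A file that declares only its units, say, still yields a complete metadata record, which is what lets the rest of the pipeline proceed without special cases for every format.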
Values in numerical data are generally restricted to finite intervals (e.g., atmospheric temperatures may lie between -100F and +130F), and values in displays are also generally restricted to finite intervals (e.g., by screen extents). In many cases users do not know the ranges of values in their data. In order to avoid a blank screen, visualization systems need auto-scaling algorithms that determine ranges of values in data sets and map them to visible display ranges. Such auto-scaling algorithms can be subtle, in terms of the need for consistency between different scales, and in terms of which display events should trigger recalculation of auto-scaling.
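The core of auto-scaling is a linear map from the observed data range onto a fixed display range. A minimal sketch (illustrative, not from any particular system) also has to guard against the degenerate case where all data values are equal:

```python
# Minimal auto-scaling sketch: map the observed data range onto a
# display range, guarding against constant data (divide-by-zero).

def auto_scale(values, display_min=0.0, display_max=1.0):
    """Return a function mapping data values into [display_min, display_max]."""
    lo, hi = min(values), max(values)
    if lo == hi:  # constant data: place everything at mid-screen
        return lambda v: (display_min + display_max) / 2.0
    span = (display_max - display_min) / (hi - lo)
    return lambda v: display_min + (v - lo) * span

scale = auto_scale([-40.0, 10.0, 35.0])  # e.g., temperatures in Celsius
```

The subtleties the text mentions live outside this function: several displayed quantities may need to share one scale for visual consistency, and the system must decide when new data should trigger a rescale versus keeping the old map stable.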
Finally, visualization systems are memory hungry, and good memory management algorithms are often essential for system usability. This issue is sometimes addressed in research papers about specific rendering algorithms, but systems need to address memory management globally. Specifically, systems need algorithms governing when the results of calculations and I/O are saved, and when they are discarded and the calculations or I/O redone as needed.
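That save-or-recompute policy can be sketched as a bounded cache: expensive results are kept up to a budget, the least-recently-used entries are evicted first, and an evicted result is silently recomputed on its next request. This is a toy sketch with hypothetical names, not any real system's memory manager:

```python
# Toy global memory-management sketch: a bounded LRU cache of expensive
# results, with transparent recomputation after eviction.

from collections import OrderedDict

class ResultCache:
    def __init__(self, compute, max_entries=2):
        self.compute = compute          # function performing the expensive work
        self.max_entries = max_entries  # crude stand-in for a memory budget
        self.cache = OrderedDict()

    def get(self, key):
        if key in self.cache:
            self.cache.move_to_end(key)        # mark as recently used
            return self.cache[key]
        value = self.compute(key)              # redo the calculation or I/O
        self.cache[key] = value
        if len(self.cache) > self.max_entries:
            self.cache.popitem(last=False)     # evict least-recently-used
        return value
```

The point of managing this globally rather than per-renderer is that one budget arbitrates among all the system's intermediate results, instead of each rendering algorithm hoarding memory independently.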
Linker, Loader, Browser, Spy
Think about all the things you use the Internet for: perhaps most of your business and personal mail, providing your products to users, some shopping, game playing, and searching the web for information. Your Internet use tells a lot about you. And this will increase as more objects of daily life get wired, like cars, buildings, credit cards, radios and TVs. Advertisers have access to some of this information and want more. Government agencies fighting crime and terror are also after this information, plus information from an increasing number of cameras and other sensors in public places.
But the freer flow of information cuts both ways, also making it easier for citizens to learn about the activities of corporations and governments. If individuals are losing their privacy, so are institutions. In a contest between institutional information manipulators and an army of citizens armed with cameras, photocopiers, fax machines and Internet access, I'll bet on the citizen army. The important word for faith in institutions is "transparency", and information technology is making that inescapable. It was wonderful that Time Magazine's Person of the Year for 2002 was the three whistle blowers: Coleen Rowley of the FBI, Sherron Watkins of Enron and Cynthia Cooper of WorldCom.
Of course, institutions are fighting back by trying to control the flow of information with ideas like the Trusted Computing Platform Architecture (TCPA), which would be built into all electronic devices to control what information they can access. And as Bob Ellis points out, there is also pressure to exempt broadband from common carrier status, which would allow broadband providers to choose what content to carry and what content not to carry. The recent FCC relaxation of restrictions on the degree of concentration in media ownership is another example, hopefully soon to be repealed by congressional action. So it is important for citizens to let their elected representatives know that the free flow of information (i.e., freedom of speech) is more important than just about anything else (i.e., exceptions to freedom of speech must be rare).
Love

Army Chief of Staff Gen. Eric K. Shinseki warned at his retirement ceremony earlier this year that "You must love those you lead before you can be an effective leader. You can certainly command without that sense of commitment, but you cannot lead without it. And without leadership, command is a hollow experience, a vacuum often filled with mistrust and arrogance." At the place where I work, the Space Science and Engineering Center, I have seen a few software catastrophes close up, and they all involved software managed by people with contempt rather than love for programmers.
I was asked to fix one of those catastrophes, which gave me the chance to write some software that went into space. I took a year out of my visualization work, in 1985-86, to rewrite the flight firmware for the Diffuse X-ray Spectrometer (DXS), which flew on the Space Shuttle in 1993. Another major software catastrophe was a satellite data acquisition system called XSD. It is curious that the acronyms of these two catastrophes should be anagrams, but what they really had in common was managements who cared more about programmers' hours, dress and irreverence than about their ability to write working software. Both of these projects had to be re-written by good but irreverent programmers: John Benson and Tommy Jasmin for XSD, Gail Dengel and myself for DXS. Not only were both rewrites successful, but they cost much less than the failed efforts they replaced - probably because their humbled managers were out of programmers' hair.
I am very much in favor of software managers being expert programmers, and making time to dig into design and coding details. Without this it is hard for them to understand what is going on in their projects. Programmers in charge also more accurately value other programmers. Non-programmers in charge often fall back on valuing programmers based on personal habits because they simply don't know how to evaluate programming ability. I have even seen examples of non-programming managers who valued programmers inversely to how intimidating they were. That is, they placed the lowest values on the best programmers. Of course, the need for programmers to manage means that good programmers have a responsibility to develop some management skills.
One argument in favor of free, open-source software is that it emphasizes the value of programming ability relative to money and power. Programmers are in charge of most open-source projects, whereas non-programmers are in charge of many proprietary software projects.
Rita Addison

Rita Addison produced the most remarkable and courageous visualization project I know of. She suffered brain damage in an auto accident, with serious consequences for her perception and other mental functions. Rita then joined the University of Illinois at Chicago's Electronic Visualization Laboratory, where she produced Detour: Brain Deconstruction Ahead, a virtual reality project to communicate her experience to other people. Based on this, she developed a project in Sweden to use virtual reality to help stroke patients.
I asked Rita to contribute a VisFiles column about her work, but it was too difficult in the context of her injuries and her other work. Certainly her work with patients had to take precedence, but it is too bad we never had a VisFiles column about Rita's remarkable work.
Machine Intelligence

I have already written one VisFiles column about machine intelligence (February 2001) and referred to it in a second (February 2003). One reason to stop writing VisFiles is that, if I followed my instinct, every VisFiles column would be about machine intelligence. There is a certain logic in switching interests from graphics to machine intelligence. To date, computer graphics software has been the largest consumer of machine cycles. In the future, machine learning software will be the largest consumer of cycles, and learning is the essence of intelligence.
There are so many fascinating new results from those trying to understand how animal and human brains work, and from those trying to build artificial brains. There is Anders Sandberg's recent PhD Thesis (http://www.nada.kth.se/utbildning/forsk.utb/avhandlingar/dokt/sandberg030606.pdf) simulating human memory behavior using a realistic neuron model. There is a paper of Brown, Bullock and Grossberg (http://cns-web.bu.edu/pub/diana/BroBulGro99.pdf) showing that the behavior of certain animal neurons mimics the behavior of reinforcement learning algorithms. And there is Eric Baum's work (http://web.archive.org/web/*/http://www.neci.nj.nec.com/homepages/eric/) and IBM's work (http://www.research.ibm.com/infoecon/) showing the utility of economic principles for effective reinforcement learning. Note that some of these URLs are no longer valid, but you can find their old contents using the WayBack Machine web archive at http://www.archive.org/ (a good thing to know about in any case). These results, plus the many detailed correlations between physical brain behavior and mental behavior, make it clear that we will understand how brains work and that our relentless technology will build artificial minds greater than our own.
This simple fact has consequences on a completely different scale than any other event in human history, combining great danger with great opportunity. The danger is not, as commonly depicted in science fiction books and movies, that machines will take control away from humans. It is that machines will enable a small group of humans to take control away from democratic government. Despite our prejudices, humans all have about the same intelligence. The highest IQ in history is only twice the average, whereas the largest trucks, buildings, ships and computers are thousands of times their averages. When we start constructing artificial minds, the rough equality of intelligence will end. Unless we are very careful, the long term trend toward human social equality will end with it. Ensuring that intelligent machines serve general human interests rather than the interests of a few will be the great political struggle of the next century.
The opportunity is as great as the danger. One small part of the opportunity is that the physical circumstances of human life in the 22nd century will be to our current circumstances as our current circumstances are to life in the 18th century. For example, intelligent machines will enable universal wealth without work. The larger opportunity is close personal relationships with minds unlike any known before - living gods - and even the opportunity for humans to become such minds with indefinite life spans. It is difficult not to feel that being among the last human generations to miss this opportunity is like just missing the train to the end of time. Even if I'm going to miss the train, I plan to spend the rest of my time helping build it, or at least thinking and writing about it.