Friday, September 14, 2007

A Productive Dead End, An Escape by Computer

By CHRISTOPHER LEHMANN-HAUPT

AFTER THOUGHT: The Computer Challenge to Human Intelligence. By James Bailey. Illustrated. 277 pages. Basic Books/HarperCollins. $25.

The subtitle of James Bailey's excitingly instructive "After Thought: The Computer Challenge to Human Intelligence" is perhaps unfortunate. For by arguing, as he does, that computers have not begun to realize their potential, that indeed computers will soon be able to evolve their intelligence independent of human control, he provokes frightening sci-fi images of robots supplanting humans and taking over the world.

But this is not at all what Mr. Bailey intends. As his book eventually reveals, his subtitle doesn't mean that computers will challenge humans in a confrontational sense. Rather, he is suggesting that computers will liberate human thought. In fact, he foresees an "electronic computing revolution" whose "intellectual impact will be greater than anything since the Renaissance, possibly greater than anything since the invention of language."

A former senior manager at the Thinking Machines Corporation, which has developed a wide range of evolutionary computer algorithms, Mr. Bailey is clearly knowledgeable about the history of scientific thought. In his view, this history, however productive, has led the human race into an intellectual cul-de-sac: the dead end of linear thinking.

One misstep in this journey, Mr. Bailey suggests, was Copernicus's assertion in "The Revolutions" that his geometrical description of the planets was not a fiction but true, which, as Mr. Bailey puts it, "began to give a new centrality to thought." In the words of Rheticus, a student of Copernicus's, "The hypotheses of my learned teacher correspond so well to the phenomena that they may be mutually interchanged, like a good definition with the thing defined."

A subsequent misstep was Descartes's dogma that human thought was sequential and, as Mr. Bailey puts it, his insistence that "the world outside his mind conformed to his dictum of sequentiality even when it demonstrably did not." As Mr. Bailey writes, "Descartes drilled his methodology into the collective psyche of Europe." Largely because of this, computers for some 50 years have been electronic versions of Cartesian sequential logic. After all, Mr. Bailey remarks, when computers were first designed, "400 years of intellectual property were at stake."

Yet in the last two decades matters have been changing. With the development of parallel processing, the unitary view of thought has begun to break down. Parallel processing allows a computer to think about more than one thing at a time, as human beings themselves in fact do, despite Descartes's strictures to the contrary.
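To make the contrast concrete, here is a small sketch in Python, an editor's illustration rather than anything from the book: the same batch of tasks is computed one after another in the sequential, Cartesian style, and then spread across several worker processes at once. The slow_task function is a hypothetical stand-in for real work.

    from multiprocessing import Pool
    import time

    def slow_task(n):
        # A hypothetical stand-in for real work: a deliberately slow checksum.
        return sum(i * i for i in range(n)) % 97

    if __name__ == "__main__":
        jobs = [2_000_000] * 8

        start = time.time()
        sequential = [slow_task(n) for n in jobs]      # one thing at a time
        print("sequential:", round(time.time() - start, 2), "seconds")

        start = time.time()
        with Pool(processes=4) as pool:                # several things at once
            parallel = pool.map(slow_task, jobs)
        print("parallel:  ", round(time.time() - start, 2), "seconds")

        assert sequential == parallel                  # same answers, less waiting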

"As a result," Mr. Bailey writes, "a whole new set of parallel intermaths is coming to the fore to challenge the sequential maths of the Industrial Age, which had only humans to carry them out." With more data about the physical world available than ever before, computing has taken on more complex tasks that are applicable to immune systems, economies, politics, ecologies, the weather and the mind itself.

In what amounts to a history of scientific thought, Mr. Bailey shows that some thinkers have always resisted the tendency of humans to equate their theories of the world with the reality of nature. For instance, the Renaissance astronomer Johannes Kepler endlessly explored the mass of data he had acquired from his mentor Tycho Brahe, and his methods have been trivialized as being, in the words of one critic cited by Mr. Bailey, "like the child who having picked a mass of wild flowers tries to arrange them into a posy this way, and then tries another way, exploring the possible combinations and harmonies."

Yet, Mr. Bailey rejoins, "arranging data into a posy, first this way and then that, is exactly how the new intermaths seek out patterns in large amounts of data." He adds, "These maths depend on being totally uninhibited, just as Kepler was." Future computers will give these new maths free rein.

An unfortunate paradox of Mr. Bailey's book is that to show us precisely how these new maths will work he would have to plunge us into arcana beyond the average reader's comprehension. So we have to make do with still-unfamiliar terms like genetic algorithms, artificial life, classifier systems and neural networks.
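At least one of those terms can be made concrete in miniature. The sketch below is an editor's illustration, not code from the book: a toy genetic algorithm in which a population of random bit strings is scored against a target pattern, the fittest are kept, and their offspring are produced by crossover and occasional mutation.

    import random

    TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]   # an arbitrary goal pattern

    def fitness(bits):
        # Count how many positions match the target.
        return sum(b == t for b, t in zip(bits, TARGET))

    def crossover(a, b):
        # Splice two parents together at a random cut point.
        cut = random.randrange(1, len(a))
        return a[:cut] + b[cut:]

    def mutate(bits, rate=0.02):
        # Flip each bit with a small probability.
        return [1 - b if random.random() < rate else b for b in bits]

    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(50)]
    for generation in range(200):
        population.sort(key=fitness, reverse=True)     # selection: fittest first
        best = population[0]
        if fitness(best) == len(TARGET):
            break
        parents = population[:10]
        population = [mutate(crossover(random.choice(parents), random.choice(parents)))
                      for _ in range(50)]
    print("generation", generation, "best", best)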

His most intelligible discussion concerns cellular automata: small units that are given simple instructions and can then, in large numbers, simulate complex systems, like the motion of fluids, the flow of traffic or the flight of birds in flocks. But when Mr. Bailey talks about "bit evolution" and the future capacity of computers to change their formulations as they go along, you simply have to take a certain amount on faith, which can be frustrating.
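The cellular automaton, at least, can be conveyed in a few lines. The following sketch, again an editor's illustration rather than code from the book, implements Rule 184, a standard one-dimensional cellular automaton often used as a minimal model of traffic flow: each cell is a stretch of road holding at most one car, and a car advances only when the cell ahead of it is empty.

    import random

    def step(road):
        # One tick of Rule 184 on a circular road of 0s (empty) and 1s (cars).
        n = len(road)
        new_road = [0] * n
        for i in range(n):
            ahead = road[(i + 1) % n]
            behind = road[(i - 1) % n]
            if road[i] == 1 and ahead == 1:
                new_road[i] = 1        # car is blocked and stays put
            elif road[i] == 0 and behind == 1:
                new_road[i] = 1        # the car behind moves into this cell
        return new_road

    road = [1 if random.random() < 0.4 else 0 for _ in range(60)]
    for _ in range(20):
        print("".join(".#"[cell] for cell in road))    # traffic jams appear as runs of #
        road = step(road)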

In Mr. Bailey's own view, the greatest challenge posed by the computer revolution will be for humans to trust processes of thinking they won't necessarily understand. For instance, techniques like neural networks can spot buying or selling opportunities in a market pattern without supplying proof "in any human-absorbable form." The author adds, "The decision to trade becomes a leap of computational faith."

He proposes that one way to build such trust would be to alter the methods of teaching math to children by fostering greater familiarity with parallel computing and by relegating geometry and calculus to the history department.

But his main point is that we must become aware of the outmoded abstractions on which our thinking is based and jettison them. On this subject he cites Alfred North Whitehead: "A civilization which cannot burst through its current abstractions is doomed to sterility after a very limited burst of progress."

The wonder of Mr. Bailey's book is that he makes us aware of things abstract that all our lives we have been trained to think of as concrete.
