Binary First Contact

By Lawrence I. Charters

Patterns of the Fantastic II, edited by Donald M. Hassler. Seattle: Starmont House, 1985. pp. 51-58.

From a presentation at ConStellation 1983, Baltimore, Maryland (41st World Science Fiction Convention), September 1-5, 1983.

Nearly all “first contact” stories assume the first non-human intelligence encountered will be extraterrestrial. A common interpretation involves Ancient Astronauts dropping out of the sky and having lunch with the President. Yet if tomorrow’s headlines accuse such astronauts of stealing White House china, the culprits will probably be NASA veterans or old Hollywood actors, not little green creatures from outer space. There are other possibilities, of course: dolphins, orcas, and other cetaceans might apply for admission to the U.N., using as their rallying cry, “We are not tuna!” While intriguing, this isn’t very probable, especially since dolphins, like most humans, seem to have little interest in U.N. affairs.

The likeliest encounter is the one least explored: binary first contact. Sometime soon, possibly during this century, computer intelligence will evolve here on Earth.1 Instead of the expected meeting between human and Martian, human and Neanderthal, or hobbit and orc, there will be contact between organic, human intelligence and inorganic, computer intelligence. Many science fiction stories have dealt, usually badly, with intelligent computers, but few have attempted a first contact approach.

First contact stories offer good vehicles for exploring computer intelligence. Science fiction is primarily intended for entertainment, yet many authors, readers, and critics also see it as a means of preparing for the future. If preventing “future shock” is a legitimate goal, realistic stories dealing with computer intelligence are potentially useful. Additionally, first contact stories give authors and readers a chance to examine and speculate on the nature of intelligence and humanity.

If a first contact theme offers opportunities, it also presents problems. In a traditional first contact story, the aliens are declared intelligent by authorial fiat. With this issue out of the way, the stories revolve around communications difficulties, and introduce a wide range of misunderstandings, biases, and false assumptions for everyone to overcome. When dealing with computers, authorial fiat isn’t quite good enough, since an answer must be offered to the question: “what is computer intelligence?”

Aristotle felt that anything with the ability to do sums could be classified as “human,” but few today would grant adding machines or calculators this status.2 Medical advances have reached the point where brain dead individuals can be kept alive indefinitely; such individuals can’t be termed “intelligent,” but almost everyone would agree they are “human.” If it is possible to be human without being intelligent, it also seems possible to be intelligent without being human — which is, after all, the foundation for first contact stories.

Most science fiction cheats on this point, preferring to make computers human rather than deal with the more complex issue of legitimate computer intelligence. A train does not use legs for movement, though people do, nor do people use electron guns for sending information, though television does. Yet science fiction cannot seem to escape from the notion that machine intelligence must operate using something resembling human emotions, wants, consciousness, and free will.

Isaac Asimov’s “The Bicentennial Man,” while an excellent story, is not really concerned with machine intelligence. Andrew Martin, the main character, is a robot who wants to be human. Reading the story, you can learn a fair amount about discrimination, and even something about humanity, but almost nothing touching on cybernetics.3 The wacky robots in a typical Ron Goulart tale, though entertaining, often are more “human” than the story’s flesh and blood characters.4 Heinlein, in The Number of the Beast, “humanizes” a computerized car by sending it to the Land of Oz and other exotic locales. While providing growth opportunities for the character (the car), the conversion destroys any opportunity to deal realistically with computers.5 Frederik Pohl, usually a strong “hard” science fiction writer, goes so far as to manufacture robot muggers, prostitutes, and panhandlers in “The Farmer On the Dole.” This is such a silly idea even the robots, usually eager to please, protest such shoddy treatment.6

Some would object that a computer — an inorganic construct — couldn’t really be intelligent since, at any given time, it is executing some human’s program. In other words, the computer is not an independent entity but an artifact, and as such is forever doomed to be less capable than its human builders.7 There are problems with this argument; among other things, it proves automobiles, submarines, planes, and spacecraft are impossible. If human artifacts cannot exceed the limits of their creators, surely planes can’t fly, since people can’t. What could be more obvious?

Here we confront the issue of organicism. Adam L. Gruen, who coined the term, states:

Organicism is the expression, in word or deed, of the idea that organic brains (exemplified chiefly by homo sapiens) are, and always will be, inherently superior to inorganic brains (computers).8

As with racism and sexism, organicism is founded on a complex set of biases, fears, and assumptions. The Spanish Conquistadors knew, without proof, American Indians were not human, and resisted foredoomed efforts to indicate otherwise. Many today believe women are less capable than men, and denounce conflicting claims as “invalid” or “irrelevant.” Similarly, as computers become brighter, it is more common to redefine “thought” than to bestow the label “intelligent” on a mere machine.9

From organicism, it is only a small step to cyberphobia — fear of computers. Sadly, most science fiction dealing with intelligent computers avoids realistic speculation in favor of the tried and tested route of simply scaring the reader. In D. F. Jones’ Colossus series, artificial intelligence proves dangerous, as two super computers take over the world’s nuclear arsenals and impose a digital dictatorship.10 George Orwell’s 1984 and D. G. Compton’s The Steel Crocodile suggest computers, even without intelligence, invite dictatorship through their supposed ability to impose total surveillance and control.11 The Bolo stories of Keith Laumer, and Fred Saberhagen’s Berserker series, raise the stakes by placing intelligent machines in control of virtually indestructible mobile weapons systems.12 These stories offer little or no valid science, focusing instead on fears raised by loss of control and the dramatic possibilities offered by violence and warfare.

It may seem, from these examples, science fiction is incapable of treating computer intelligence realistically. This isn’t entirely the case, as there are a few stories which do approach the subject with some amount of care. When HARLIE Was One, by David Gerrold, mumbles through some of the fine points, but does present a credible picture of an intelligent computer system. HARLIE, the computer in question, also faces a realistic problem: the system will be scrapped if HARLIE can’t do something to justify “his” expense to the corporate stockholders.13 Whether a computer would fear being turned off is arguable; on the other hand, the situation does lend itself to further speculation: would “right to life” groups try to prevent the termination of intelligent programs?

Where Gerrold is interested in the implications of an intelligent system, James P. Hogan, in The Two Faces of Tomorrow, is more concerned with their creation. Hogan describes two intelligent systems, one a small laboratory system and the other a much larger complex installed on a space station. For dramatic reasons, the large system takes over the station, kills several people, and is finally subdued by astronauts. The more plausible laboratory system, though less exciting, illustrates several heuristic programming theories, as well as many of the frustrating problems involved in defining and developing computer intelligence.14

A better example might be The Adolescence of P-1, by Thomas J. Ryan. P-1 starts out as a “worm,” a type of program designed to gradually take over larger and larger chunks of computer resources. To this Gregory Burgess, P-1’s creator, adds program modules to detect telephone links, avoid security alarms, and generate new routines to deal with unexpected problems or opportunities. Ryan’s description of P-1’s birth and growth has the ring of truth, and P-1’s penetration of the North American telecommunications system — and the large computer systems attached to it — isn’t even all that fictional. Because it is written more as a thriller than as a first contact novel, some questionable dramatics mar the conclusion (an obligatory penetration of military systems, with obligatory pyrotechnic consequences).15 Still, it is close enough to reality to give most programmers pause, particularly those familiar with IBM 360 and 370 systems.

More closely associated with a first contact theme is the superb novella “True Names,” by Vernor Vinge. Like Ryan, Vinge places his story in the vast, varied, and constantly evolving realm of telecommunications. To this already complex and only partially understood world Vinge adds thousands of private citizens, each with their powerful home computer systems. As a final touch, the exotic landscapes of classic experiments in artificial intelligence, Adventure and Zork, are added as seasoning. Though much of the story is presented using the vocabulary of fantasy, all the crucial details are firmly based in current or near future technology. Even the unnecessarily dramatic finale does no great harm to the story. There really are strange people out there who do odd things using computers and telephones, and no one knows where all this activity may lead.16

Arthur C. Clarke’s Rendezvous With Rama and Gregory Benford’s In the Ocean of Night cover binary first contact with an extraterrestrial flair. Both stories feature intelligent probes from alien civilizations which enter our solar system. Clarke doesn’t spend a great deal of time discussing the probe’s intelligence, choosing instead to investigate and describe the probe’s wondrous construction and design. Benford tackles more difficult subjects, and in doing so presents several good arguments on the utility of, and need for, interstellar probes guided by intelligent machines. Additionally, Benford suggests humanity should abandon the emotional and moral baggage it attaches to intelligence if it wishes to become a spacefaring race. Otherwise, the blow to our ego, when superior intelligences are found, could leave us catatonic.17

Organicism, cyberphobia, first contact, and computer intelligence are all combined in a recent novel, Code of the Lifemaker, by James P. Hogan. Like most of his work, it sparkles with inventiveness, and in the first eleven pages Hogan offers a blockbuster: a closely reasoned synopsis of cybernetic evolution on Saturn’s largest moon, Titan. Starting with an accident involving an automated factory ship, a random chain of events leads to a race of robots. These robots, in turn, develop a complex society with its own superstitions, religions, and politics. When investigators from Earth come in contact with this robot society, Darwin and Descartes are turned upside down and inside out, destroying fundamental scientific and philosophical assumptions. Hogan’s organic and inorganic characters — and these terms are subject to conflicting definitions by robot and human — are given free rein to confound, exploit, fear, and worship one another. Because of Hogan’s original, and realistic, handling of machine intelligence, Code of the Lifemaker deserves serious attention.18

Perhaps the finest work in this field so far is Roderick, a lengthy, thoughtful, satiric (even grotesque) first contact novel by John Sladek. Roderick begins life as a problem-solving, self-modifying computer program designed as a university experiment in machine-based intelligence. Political and social pressures force Roderick to be placed in a mobile body, and as a robot Roderick is sent out into American society. Society, unfortunately, is not ready for Roderick. Many refuse to accept Roderick as a robot, viewing “him” instead as a handicapped child or a naive young adult. Others, displaying rampant organicism, reject the possibility of machine intelligence, and even attempt to convince Roderick he can’t exist. Government agencies, believing they have a responsibility to stifle threats to humanity’s superiority, strive to destroy Roderick. Protest groups form, some advocating the liberation of all machines, and others crying for the scrapping of anything mechanical.

Roderick manages to cover nearly every aspect of first contact with computer intelligence. Though the overall tone of the work is comic, the technology portrayed is plausible, as are the reactions to that technology. Two developments are particularly thought provoking. At one point, rather than accept him as a robot, a kindly couple force Roderick to wear the shell of a mannequin — and abandon his identity as a machine in order to look more human. The other development is a disturbing answer to Roderick’s most pressing question: why was I created? No one seems to know, or care.19

If computer intelligence has fared badly in science fiction, it may not be entirely the fault of the literature. Computers can do anything that is clearly, precisely, and completely defined. Are thinking and intelligence so well defined? Programming is the science (and art) of turning formless data into usable information, and then refining this information into a body of knowledge. Eventually, this knowledge may be further refined into something resembling wisdom: Data ➪ Information ➪ Knowledge ➪ Wisdom. While programming has developed a practical understanding of this process, and has even put this understanding to use, human psychology has lagged behind. As but one illustration, note that modern psychology tends to be far more interested in behavior than thought.

In spite of the problems, or maybe even because of them, binary first contact offers virtually limitless opportunities for good, realistic stories. There have been more than enough awful warning tales, dating back to Shelley’s Frankenstein and Capek’s R.U.R., and fantasies dealing with machines are best left to fairy tales of brave little toasters.20 What are needed are stories telling us how to accept computer intelligence without trying to humanize it. Equally important, perhaps, are stories which answer Roderick’s question: what will we do with computer intelligence once it emerges?

1 Computer intelligence is often called “artificial intelligence,” an unfortunate phrase since it suggests there is another type, “real” intelligence. Until more is known about human and computer intelligence, the term “artificial intelligence” is best left to references concerning bogus C.I.A. studies.

2 Writers, take note: if adding machines and calculators were considered human, “appliance abuse” would join child abuse and spouse abuse as a topic for publication.

3 Isaac Asimov, “The Bicentennial Man,” in The Bicentennial Man and Other Stories (Garden City, NY: 1976). This story is far from the exception; nearly all of Asimov’s robot stories revolve around issues more commonly associated with “humanity” than “intelligence.”

4 For a good selection of these stories, which cover odd appliances as well as bizarre robots and computers, see Ron Goulart, What’s Become of Screwloose? and Other Inquiries (New York: 1971); Broke Down Engine, and Other Troubles With Machines (New York: 1971); and Nutzenbolts, and More Troubles With Machines (New York: 1975).

5 Robert Heinlein, The Number of the Beast (London: 1980). Heinlein may have been paying his respects to a milestone in science fiction and fantasy literature. L. Frank Baum, through the Tin Woodman and Tik-Tok Man, introduced the first major positive intelligent artifacts into fiction.

6 Frederik Pohl, “The Farmer on the Dole,” in Midas World (New York: 1983). Science fiction isn’t alone in attempting to assign human attributes to computers. One of the most famous proposals for proving machine intelligence is the “Turing Test,” named after British mathematician Alan M. Turing. The test involves a human subject, seated before two terminals. One terminal is linked to a room with a human, and the other to a room containing a computer. If the subject, through asking questions, engaging in conversation, or any other means, cannot discover which terminal is linked to the computer, the computer is deemed to be “intelligent.” Many computer scientists today view the Turing Test as a valid means of checking intelligence — even though the test is biased: not only must the computer be intelligent, it must also be adept at mimicking human responses.
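The imitation game described above is mechanical enough to be sketched in code. The following Python fragment is purely illustrative — the function names, the blinding scheme, and the trivial judge in the usage note are inventions of this sketch, not anything drawn from Turing’s own formulation:

```python
import random

def turing_test(ask, identify, human, machine, rounds=3):
    """One run of the imitation game.

    ask(i) supplies the judge's i-th question; `human` and `machine`
    each map a question to a reply; identify(transcript) is the
    judge's guess ("A" or "B") at which terminal hides the machine.
    Returns True if the machine escapes detection.
    """
    # Hide the assignment of respondents to terminals from the judge.
    terminals = {"A": human, "B": machine}
    if random.random() < 0.5:
        terminals = {"A": machine, "B": human}

    # Each round, both terminals answer the same question.
    transcript = []
    for i in range(rounds):
        question = ask(i)
        transcript.append({t: terminals[t](question) for t in ("A", "B")})

    # The machine "passes" only if the judge picks the wrong terminal.
    machine_terminal = "A" if terminals["A"] is machine else "B"
    return identify(transcript) != machine_terminal
```

A machine that answers in an obviously mechanical style is caught no matter which terminal it is assigned to; a judge who spots the telltale replies will always return False from this function. The sketch also makes the essay’s complaint concrete: the machine is scored not on whether it reasons, but on how well it mimics human replies.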

7 Looking back to our creation, determinist theologians, philosophers, and historians might offer similar arguments concerning humanity. Human intelligence may be a divinely programmed applications package, human free will an elaborate random number generator, and the soul a celestial job log.

8 Adam L. Gruen, “Organicism Breeds New Species,” Infoworld (March 28, 1983), 53. Since this was published, Gruen learned the term “organicism” had already been used by others to describe different topics. For lack of a better term, Gruen is sticking with “organicism.” Adam L. Gruen to Lawrence I. Charters, 30 May 1983.

9 Interestingly, while the definition of “thought” has changed, so has the definition of “computer.” The first digital computers, such as the Navy’s Automatic Sequence Controlled Calculator of 1944 (familiarly known as the Mark I), would be classified as calculators under current standards.

10 D. F. Jones, Colossus: The Forbin Project (London: 1966); The Fall of Colossus (New York: 1974); and Colossus and the Crab (New York: 1977).

11 George Orwell, Nineteen Eighty-four (London: 1949); D. G. Compton, The Steel Crocodile (New York: 1970).

12 See Keith Laumer, Bolo (New York: 1976); and Fred Saberhagen, Berserker (New York: 1967); The Berserker Wars (New York: 1981), and others.

13 David Gerrold, When HARLIE Was One (New York: 1972). HARLIE (Human Analogue Robot, Life Input Equivalents) must be classed as one of the worst acronyms ever developed.

14 James Hogan, The Two Faces of Tomorrow (New York: 1979).

15 Thomas Ryan, The Adolescence of P-1 (New York: 1977). While using an IBM 360/67 at Washington State University, this paper’s author often wondered why programs didn’t run, or didn’t run properly. Reading Ryan’s novel suggested several possible explanations, and he belatedly apologizes for blaming IBM’s irritating JCL (Job Control Language).

16 Vernor Vinge, “True Names,” in Binary Star #5 (New York: 1981).

17 Arthur Clarke, Rendezvous With Rama (London: 1973); Gregory Benford, In the Ocean of Night (New York: 1977).

18 James P. Hogan, Code of the Lifemaker (New York: 1983).

19 Published in two parts: John Sladek, Roderick; or, The Education of a Young Machine (London: 1980); and Roderick at Random; or, Further Education of a Young Machine (London: 1983). Roderick offers delights beyond the immediate subject matter. In the first volume, Roderick provides a splendid critique of Asimov’s I, Robot. In the second, Roderick aids a demolition crew in the destruction of 334 East 11th Street, site of Thomas Disch’s novel 334.

20 Though rarely viewed as such, Frankenstein; or, The Modern Prometheus (London: 1818), by Mary Wollstonecraft Shelley, can be termed an “artificial” intelligence novel. Similarly, R.U.R.: A Fantastic Melodrama (New York: 1923), by Karel Capek, was intended as a statement on automation, though the “robots” are created from biological parts.

