Mechanical men, or, to use Čapek's now universally accepted term, robots, are a subject to which the modern science-fiction writer has turned again and again. There is no uninvented invention, with the possible exception of the spaceship, that is so clearly pictured in the minds of so many: a sinister form, large, metallic, vaguely human, moving like a machine and speaking with no emotion.

The key word in the description is "sinister" and therein lies a tragedy, for no science-fiction theme wore out its welcome as quickly as did the robot. Only one robot-plot seemed available to the average author: the mechanical man that proved a menace, the creature that turned against its creator, the robot that became a threat to humanity. And almost all stories of this sort were heavily surcharged, either explicitly or implicitly, with the weary moral that "there are some things mankind must never seek to learn."


This sad situation has, since 1940, been largely ameliorated. Stories about robots abound; a newer viewpoint, more mechanistic and less moralistic, has developed. For this development, some people (notably Mr. Groff Conklin in the introduction to his science-fiction anthology entitled "Science-Fiction Thinking Machines," published in 1954) have seen fit to attach at least partial credit to a series of robot stories I wrote beginning in 1940. Since there is probably no one on Earth less given to false modesty than myself, I accept said partial credit with equanimity and ease, modifying it only to include Mr. John W. Campbell, Jr., editor of "Astounding Science-Fiction," with whom I had many fruitful discussions on robot stories.

My own viewpoint was that robots were story material, not as blasphemous imitations of life, but merely as advanced machines. A machine does not "turn against its creator" if it is properly designed. When a machine, such as a power-saw, seems to do so by occasionally lopping off a limb, this regrettable tendency towards evil is combated by the installation of safety devices. Analogous safety devices would, it seemed obvious, be developed in the case of robots. And the most logical place for such safety devices would seem to be in the circuit-patterns of the robotic "brain."

Let me pause to explain that in science-fiction, we do not quarrel intensively concerning the actual engineering of the robotic "brain." Some mechanical device is assumed which, in a volume approximating that of the human brain, must contain all the circuits necessary to allow the robot a range of perception-and-response reasonably equivalent to that of a human being. How that can be done without the use of mechanical units the size of a protein molecule or, at the very least, the size of a brain cell, is not explained. Some authors may talk about transistors and printed circuits. Most say nothing at all. My own pet trick is to refer, somewhat mystically, to "positronic brains," leaving it to the ingenuity of the reader to decide what positrons have to do with it and to his good-will to continue reading after having failed to reach a decision.

In any case, as I wrote my series of robot stories, the safety devices gradually crystallized in my mind as "The Three Laws of Robotics." These three laws were first explicitly stated in "Runaround." As finally perfected, the Three Laws read as follows:

First Law: A robot may not injure a human being, or, through inaction, allow a human being to come to harm.

Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
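Stated baldly, the Laws amount to a strict order of precedence. Purely as a latter-day illustration, and not as anything that appears in the stories themselves, one might sketch that precedence as a rule for choosing among candidate actions; the fields, the names and the little situation below are all inventions for the purpose:

```python
from dataclasses import dataclass

@dataclass
class Action:
    description: str
    injures_human: bool      # carrying it out would injure a human being
    prevents_harm: bool      # carrying it out keeps a human from coming to harm
    obeys_order: bool        # carrying it out fulfils a standing human order
    preserves_robot: bool    # carrying it out protects the robot's own existence

def choose(actions):
    """Pick the action the Laws favour: First over Second over Third."""
    # The First Law's first clause is an outright prohibition, not a preference.
    candidates = [a for a in actions if not a.injures_human]
    # Among what remains, rank lexicographically: preventing harm (First Law,
    # second clause) outranks obedience (Second Law), which outranks
    # self-preservation (Third Law).
    return max(
        candidates,
        key=lambda a: (a.prevents_harm, a.obeys_order, a.preserves_robot),
    )

# A contrived situation: an endangered human outranks both the standing order
# and the robot's own safety.
options = [
    Action("stand idle and stay intact", False, False, False, True),
    Action("fetch the selenium as ordered", False, False, True, False),
    Action("rescue the stranded human", False, True, False, False),
]
print(choose(options).description)   # -> rescue the stranded human
```

In the stories the Laws behave less like this rigid ranking and more like competing potentials of adjustable strength, and it is exactly that adjustability which "Runaround" exploits.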

These laws are firmly built into the robotic brain, or at least the circuit equivalents are. Naturally, I don't describe the circuit equivalents. In fact, I never discuss the engineering of the robots for the very good reason that I am colossally ignorant of the practical aspects of robotics.

The First Law, as you can readily see, immediately eliminates that old, tired plot which I will not offend you by referring to any further.


Although, at first flush, it may appear that to set up such restrictive rules must hamper the creative imagination, it has turned out that the Laws of Robotics have served as a rich source of plot material. They have proved anything but a mental road-block.

An example would be the story "Runaround" to which I have already referred. The robot in that story, an expensive and experimental model, is designed for operation on the sunside of the planet Mercury. The Third Law has been built into him more strongly than usual for obvious economic reasons. He has been sent out by his human employers, as the story begins, to obtain some liquid selenium for some vital and necessary repairs. (Liquid selenium lies about in puddles in the heat of Mercury's sunward side, I will ask you to believe.)

Unfortunately, the robot was given his order casually, so that the Second Law potential set up within him was weaker than usual. Still more unfortunately, the selenium pool to which the robot was sent was near a site of volcanic activity, as a result of which there were sizable concentrations of carbon monoxide in the area. At the temperature of Mercury's sunside, I surmised that carbon monoxide would react fairly quickly with iron to form volatile iron carbonyls so that the robot's more delicate joints might be badly damaged. The further the robot penetrates into this area, the greater the danger to his existence and the more intense is the Third Law effect driving him away. The Second Law, however, ordinarily the superior, drives him onward. At a certain point, the unusually weak Second Law potential and the unusually strong Third Law potential reach a balance and the robot can neither advance nor retreat. He can only circle the selenium pool on the equipotential locus that makes a rough circle about the site.
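To put rough numbers on that deadlock (numbers entirely of my own invention; nothing so tidy appears in the story), one might model the weakened Second Law as a constant pull toward the pool and the strengthened Third Law as a repulsion that grows inside the danger zone, then look for the distance at which the two cancel:

```python
# Toy model of the "Runaround" stalemate; every figure here is invented.
SECOND_LAW_PULL = 1.0        # a weak, casually given order
THIRD_LAW_STRENGTH = 3.0     # deliberately reinforced self-preservation
DANGER_RADIUS = 10.0         # distance (arbitrary units) at which damage begins

def third_law_push(distance):
    """Repulsion away from the pool; zero outside the danger zone."""
    if distance >= DANGER_RADIUS:
        return 0.0
    return THIRD_LAW_STRENGTH * (1.0 - distance / DANGER_RADIUS)

def net_drive(distance):
    """Positive values urge the robot inward, negative values drive him back."""
    return SECOND_LAW_PULL - third_law_push(distance)

# The robot settles where the two drives cancel; every point at that distance
# is equivalent, which is why he traces a rough circle about the pool.
equilibrium = min((d / 10.0 for d in range(0, 101)),
                  key=lambda d: abs(net_drive(d)))
print(f"The drives balance about {equilibrium:.1f} units from the pool")
```

Strengthen the pull, or weaken the repulsion, and the balance point moves inward until it vanishes; invoking the First Law, as the rescue at the end of the story does, sweeps the balance aside entirely.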

Meanwhile, our heroes must have the selenium. They chase after the robot in special suits, discover the problem and wonder how to correct it. After several failures, the correct answer is hit upon. One of the men deliberately exposes himself to Mercury's sun in such a way that unless the robot rescues him, he will surely die. That brings the First Law into operation, which, being superior to both the Second and the Third, pulls the robot out of his useless orbit and brings on the necessary happy ending.

It is in the story "Runaround," by the way, that I believe I first made use of the term "robotics" (implicitly defined as the science of robot design, construction, maintenance, etc.). Years later, I was told that I had invented the term and that it had never seen publication before. I do not know whether this is true. If it is true, I am happy, because I think it is a logical and useful word, and I hereby donate it to real workers in the field with all good will.

None of my other robot stories spring so immediately out of the Three Laws as does "Runaround" but all are born of the Laws in some way. There is the story, for instance, of the mind-reading robot who was forced to lie because he was unable to tell any human being anything other than that which the human in question wished to hear. The truth, you see, would almost invariably cause "harm" to the human being in the form of disappointment, disillusion, embarrassment, chagrin and other similar emotions, all of which were but too plainly visible to the robot.

Then there was the puzzle of the man who was suspected of being a robot, that is, of having a quasi-protoplasmic body and a robot's "positronic brain." One way of proving his humanity would be for him to break the First Law in public, so he obliges by deliberately striking a man. But the story ends in doubt because there is still the suspicion that the other "man" might also be a robot and there is nothing in the Three Laws that would prevent a robot from hitting another robot.

And then we have the ultimate robots, models so advanced that they are used to precalculate such things as weather, crop harvests, industrial production figures, political developments and so on. This is done in order that world economy may be less subject to the whims of those factors which are now beyond man's control. But these ultimate robots, it seems, are still subject to the First Law. They cannot through inaction allow human beings to come to harm, so they deliberately give answers which are not necessarily truthful and which cause localized economic upsets so designed as to maneuver mankind along the road that leads to peace and prosperity. So the robots finally win the mastery after all, but only for the good of man.

The interrelationship of man and robot is not to be neglected. Mankind may know of the existence of the Three Laws on an intellectual level and yet have an ineradicable fear and distrust for robots on an emotional level. If you wanted to invent a term, you might call it a "Frankenstein complex." There is also the more practical matter of the opposition of labor unions, for instance, to the possible replacement of human labor by robot labor.

This, too, can give rise to stories. My first robot story concerned a robot nursemaid and a child. The child adored its robot as might be expected, but the mother feared it, as might also be expected. The nub of the story lay in the mother's attempt to get rid of it and in the child's reaction to that.

My first full-length robot novel, "The Caves of Steel" (1954), peers further into the future, and is laid in a time when other planets, populated by emigrating Earthmen, have adopted a thoroughly robotized economy, but where Earth itself, for economic and emotional reasons, still objects to the introduction of the metal creatures. A murder is committed, with robot-hatred as the motive. It is solved by a pair of detectives, one a man, one a robot, with a great portion of the deductive reasoning (to which detective stories are prone) revolving about the Three Laws and their implications.

I have managed to convince myself that the Three Laws are both necessary and sufficient for human safety in regard to robots. It is my sincere belief that some day when advanced human-like robots are indeed built, something very like the Three Laws will be built into them. I would enjoy being a prophet in this respect, and I regret only the fact that the matter probably cannot be arranged in my lifetime.

This essay was written in 1956. In the years since, "robotics" has indeed entered the English language and is universally used, and I have lived to see roboticists taking the Three Laws very seriously.
