
Man in Loop Can Only Be as Flawless as Computers

Peter D. Zimmerman, a physicist, is a senior associate at the Carnegie Endowment for International Peace and director of its Project on SDI Technology and Policy.

The captain of the U.S. warship Vincennes may have been misled by the computer system that operates the radar aboard his cruiser when he mistakenly gave the order to fire, downing an Iranian Airbus with 290 people aboard.

The computers aboard ship use artificial intelligence programs to unscramble the torrent of information pouring from the phased array radars. These computers decided that the incoming Airbus was most probably a hostile aircraft, told the skipper, and he ordered his defenses to blast the bogey (target) out of the sky. The machine did what it was supposed to, given the programs in its memory. The captain simply accepted the machine’s judgment, and acted on it. There was a “man in the loop”--a human commander to sort the data and make the final decision--yet tragedy still occurred.

The battle management computers for any kind of space-based strategic defense system will also rely mostly on data from giant versions of the same kind of phased array radars as those aboard an Aegis-equipped cruiser. The programs will use the same kind of artificial intelligence to evaluate the data stream and will present their judgments, camouflaged as recommendations or analyzed information, to a man in the loop--one whose role has been mandated by Congress.


Although the Aegis system has been exhaustively tested at the RCA lab in New Jersey and has been at sea for years, it still failed to make the right decision the first time an occasion to fire a live round arose. A similar failure in a “Star Wars” situation could lead to the destruction of much of the civilized world.

It’s not hard to see how that could happen. A simple mistake by the computer as a result of ambiguous or conflicting signals could lead to the belief that a radio transmission from a Soviet satellite was the first shot against the U.S. space defenses. In another scenario, the computer could decide that a Soviet missile test was an accidental launch of a live missile with nuclear warheads. The man in the loop would have little more to go on than the flickering displays of his monitors and radar screens. He would no more have eye contact with the “hostile” target than did Capt. Will C. Rogers III of the Vincennes. His function in the loop would hardly be to second-guess the computers; he would have no information on which to do so. The man in the loop would be there primarily to tell his computers to open fire.

Once the superpowers have “effective” defenses in place, the first shot against the United States in a nuclear exchange must be to neutralize American defenses. U.S. computers will almost surely be programmed to accept as hostile any signals that indicate that the Soviets have placed our defenses at risk. In those circumstances, our computers might well be instructed to consult their artificial intelligence to decide whether the suspicious actions are from a “threat” or from some kind of exercise. The data in the system are guaranteed to be incomplete and inconsistent; that is the nature of combat. The computers will almost surely be instructed to err on the side of protection of the United States, just as those in the Persian Gulf chose to accept conflicting data indicating an F-14 rather than an Airbus.
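The decision rule described here can be made concrete. What follows is a minimal sketch in Python, invented purely for illustration: the names (SensorReport, classify) and the rule itself come from no real Aegis or SDI software. It shows how a classifier instructed to "err on the side of protection" resolves any conflicting or missing evidence toward "hostile"--the behavior that turns an ambiguous Airbus track into an F-14.

```python
# Hypothetical sketch of "err on the side of protection" logic.
# None of these names come from any real system; the point is only
# that conflicting evidence is resolved toward the threat label.

from dataclasses import dataclass

@dataclass
class SensorReport:
    source: str          # e.g. "radar", "IFF", "flight-profile"
    assessment: str      # "hostile", "friendly", or "unknown"

def classify(reports: list[SensorReport]) -> str:
    """Return 'hostile' unless every report agrees the track is friendly.

    This deliberately biased rule mirrors the article's claim: with
    incomplete or inconsistent data, the machine is told to protect
    the ship (or the defense system) first.
    """
    assessments = {r.assessment for r in reports}
    if assessments == {"friendly"}:
        return "friendly"
    # Any conflict, ambiguity, or missing data resolves to "hostile".
    return "hostile"

# A track with conflicting reports -- a radar return read as an F-14,
# a civilian flight profile -- comes back classified as hostile.
reports = [
    SensorReport("radar", "hostile"),
    SensorReport("flight-profile", "friendly"),
]
print(classify(reports))  # -> "hostile"
```

Note that the bias is not a bug in such a program; it is the specification--which is precisely the problem the article describes.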


When the man in the loop activates U.S. weapons based on computer indications, our system will start shooting at Soviet satellites to make sure that our own retaliatory missiles can get through; we will not be able to make any other choice. The only conclusion the Soviet computers can offer their masters is that the U.S. has begun an attack on the Soviet Union by trying to wipe out its defensive space shield. Then the Soviet man in the loop will be as much a captive of his computers and sensors as the American.

The targets are too far away--the curve of the Earth hides the action from both sides. The probable outcome of such a series of actions is the launch of one side’s strategic ballistic missiles. In the fog of war, nuclear explosions can further disrupt the functioning of the command systems.

The advocates of strategic defense can argue, perhaps plausibly, that we have now learned our lesson. The computers must be more sophisticated, they will say. More simulations must be run and more cases studied so that the artificial intelligence guidelines are still more precise.


But the real lesson from the tragedy in the Persian Gulf is that computers, no matter how smart, are fallible. Sensors, no matter how good, will often transmit conflicting information. The danger is not that we will fail to prepare the machines to cope with expected situations. It is the absolute certainty that crucial events will be ones we have not anticipated.

Congress thought we could prevent a strategic tragedy by insisting that all architectures for strategic defense have the man in the loop. We now know the bitter truth: the man will be captive to the computer, unable to exercise independent judgment because he will have no independent information; he will have to rely upon the recommendations of his computer adviser. It is another reason why strategic defense systems will increase instability, pushing the world closer to holocaust--not further away.
