Jim Hendler — I was sad to hear about the recent death of Stanislav Petrov, 77. Petrov is best known as the man who saved the world back in the 1980s.

Now, I never met Petrov, but I’ve always admired him as a profile in courage — and someone all of us working in artificial intelligence and robotics can learn from.

In September 1983, Petrov was the duty officer at the secret command center where the Soviet Union monitored for US military activity, especially nuclear launches against Soviet territory. A set of warning lights lit up his computer system, indicating the US had launched missiles at the USSR. Petrov’s job was to alert Soviet officials so they could launch a retaliatory attack. Petrov didn’t. He decided the computer had made an error.

As it turned out, Petrov made the right choice: it was a false alarm.

Thanks to that decision, you are alive today to read this article. The other option, of course, would’ve led to an all-out nuclear war.

In an obituary in the New York Times, Petrov was quoted as telling the BBC that:

“There was no rule about how long we were allowed to think before we reported a strike … but we knew that every second of procrastination took away valuable time, that the Soviet Union’s military and political leadership needed to be informed without delay. All I had to do was to reach for the phone; to raise the direct line to our top commanders — but I couldn’t move. I felt like I was sitting on a hot frying pan … when people start a war, they don’t start it with only five missiles …”

The lesson to learn here goes far beyond the issue of AI and war, though.

It speaks to the dangers in replacing humans with AIs in general, and to what we need to understand in deciding when, or if, to do so.

Petrov attributed his judgment to “both his training and his intuition,” as the Times points out, and that is the crucial observation for us to focus on.

Where intuition and training meet

While I don’t know the details of the training given to Russian officers, I did get to observe some of the training exercises that Petrov’s American counterparts went through in more recent years.

Over and over again, usually knowing it was a training exercise but sometimes not, the officers were exposed to scenarios representing potential Russian nuclear attacks. I can’t share the classified details of those exercises, but it exposes no secrets to say that the scenarios were many and varied.

The officers were trained in what to look for and in what might be a false indication, whether caused by computer error or human mistake.

The goal in training these officers was for them to develop a strong sense of what an attack would look like.

In Petrov’s case, however, the situation he faced that day in 1983 was in all likelihood totally different from any training he had received. We can assume he had been trained to recognize an attack, and possibly to recognize an error, but the actual events of that day did not fit neatly into either category.

As he stated later, intuition was critical.

The observation that it simply didn’t make sense to start a war with five missiles didn’t come just from his training.

To make his decision, Petrov had to use knowledge such as what it would mean to start such a war. He also used knowledge about the greater context: what humans do, how computers behave and fail, and so on.

When he saw the situation, his training (report when the machines indicate a real launch) and his intuition (it made no sense for the US to launch only five missiles) were in conflict.

His human understanding of the world won out. That is why he decided, thankfully, not to react as if it were a real US launch against Moscow.

The bigger picture

The key breakthrough technology causing the modern boom in AI is in the area of machine learning —  and particularly the deep learning of a new generation of neural networks.

These systems primarily learn through what is called supervised learning: they are exposed to many example situations, each labeled with the properties that characterize it.

For instance, if you wanted to teach an AI system the difference between a cat and a duck you might show it multiple pictures of each animal so it could learn how to distinguish between them. But you’d have to show it pictures of animals other than cats or ducks, too. Otherwise, it would assume every animal it saw was one or the other.
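To make this concrete, here is a minimal sketch of supervised learning in Python using scikit-learn. The feature names and values are made up for illustration (real systems learn from raw pixels, not two hand-picked numbers):

```python
# A toy supervised-learning example: classify animals as "cat", "duck",
# or "other" from two hypothetical features: [body_size, beak_likeness].
from sklearn.linear_model import LogisticRegression

X_train = [
    [0.8, 0.1], [0.7, 0.2], [0.9, 0.1],   # cats: larger, no beak
    [0.3, 0.9], [0.2, 0.8], [0.4, 0.9],   # ducks: smaller, beaked
    [0.5, 0.5], [0.6, 0.4], [0.1, 0.3],   # other animals
]
y_train = ["cat"] * 3 + ["duck"] * 3 + ["other"] * 3

# "Supervised" means every training example comes paired with its label.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The trained model labels new inputs based only on what it was shown.
print(model.predict([[0.85, 0.15]]))  # expected: ['cat']
print(model.predict([[0.25, 0.85]]))  # expected: ['duck']
```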

To distinguish many kinds of images (or phrases, sounds, or whatever), we just add more and more known examples to the system and train with a lot of computing power and some cool learning tools.

In some cases, such training is sufficient. Driving is a good example. You might think of driving as something very complicated, but it turns out that training on enough examples is most of what it takes to create a good driver. That is one reason self-driving vehicles are projected to eventually be better drivers than people.

Driving, especially highway driving, has surprisingly low variance. Not many unusual situations arise that require novel decisions.

Because AI systems can be equipped with sensors better than our human ones, autonomous vehicles may indeed make great drivers.

But think about this concept of variance in the context of the challenge Petrov faced.

We train our AI systems in a similar way to how Petrov was trained: Lots of examples, lots of situations, but all in the specific context of the problem being solved. 

But we haven’t yet figured out how to get AI systems to look beyond their training when making decisions.

Petrov had to trust his intuition, based on his knowledge of the world. If we are going to replace people with AI, whether in weapons systems or anywhere else, we must learn to give them the ability to somehow go further.
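Today’s systems have only crude stand-ins for that ability. Continuing the toy classifier sketched earlier (the input values are again made up), we can see the problem directly:

```python
# An input far outside anything in the training data -- an "animal"
# several times larger than any example the model has ever seen.
weird = [[6.0, 6.0]]

# The model still produces an answer. It has no built-in way to say
# "this doesn't match anything I know" -- and a linear model like this
# one tends to grow *more* confident, not less, the further an input
# extrapolates beyond its training data.
print(model.predict(weird))
print(model.predict_proba(weird).max())  # a deceptively high score
```

That confident answer to a question the model was never trained on is exactly the failure Petrov’s intuition protected against.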

The critical AI-human combo

Now consider the case of a medical system vs. a doctor. Even if the medical system excels at figuring out what tests to run on a patient, only the human doctor has the critical knowledge of which patients tend toward hypochondria and which ones can mentally tolerate being tested to rule out some rare condition.

We could easily train a computer to recognize the order being made by someone in a fast-food restaurant, but could the AI system notice that a customer was bleeding and that an ambulance should be called?

AIs can exceed human capabilities, especially in low-level, low-variance tasks like driving. We just have to be careful not to take this too far.

On the other hand, we should be careful not to let people rely too much on intuition either. In a lot of cases, people get things wrong, and the literature is full of examples where bias, prejudice, or slow reaction time has caused tremendous harm. When large airliners crash nowadays, the cause is more likely to be human error than machine failure.

The decision to go to war affects more people than many of the choices made during the battles.

Maybe AI systems trained to better predict outcomes will help make these choices more rational.

There’s no question that, currently, certain AI systems respond better to training than humans in some situations. Yet humans are able to use a combination of knowledge and intuition, which is what Petrov did.

This is why, of course, the human-AI combination is so very critical.

The combined decision-making of a trained AI system coupled with an intuitive human is the best solution for years to come.
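One way to picture that coupling in software: let the trained model handle routine, high-confidence cases and escalate everything else to a person. A minimal sketch (the threshold and the ask_human interface are illustrative assumptions, not a prescribed design):

```python
def decide(model, features, ask_human, threshold=0.95):
    """Hybrid decision-making with any scikit-learn-style classifier:
    the model answers routine, high-confidence cases on its own, while
    anything it is less sure of goes to a human, who sees the model's
    guess and confidence as context. The threshold is a hypothetical
    value that would be tuned to the stakes of the decision."""
    probs = model.predict_proba([features])[0]
    confidence = probs.max()
    label = model.classes_[probs.argmax()]
    if confidence >= threshold:
        return label                                # routine case
    return ask_human(features, label, confidence)   # human judgment call
```

The point of such a design is that the machine’s training covers the low-variance bulk of cases, while the rare, out-of-pattern ones, the Petrov cases, still reach a person.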

Looking for ways to augment human decision-making through interactions with AI is a hugely promising direction.

Looking for ways to replace humans is fraught with unanticipated consequences. We should allow that only when there is a mountain of compelling evidence showing that it can work in all cases.

We must all strive to be like Petrov and learn to trust the combination of AI training and human intuition.

That is how he saved the world, after all. We are all well advised to learn from his example.
For RobotRepublic, I’m Jim Hendler.


Cover image: The Los Angeles Times, All Rights Reserved