RobotRepublic — Ever heard of Stanislav Petrov? If not, you need to get up to speed. Petrov, you see, is the Russian who saved the world. He saved us on Sept. 26, 1983, just a few minutes after midnight in Moscow.

The Soviet lieutenant colonel was the officer responsible for watching radar and alerting Soviet leadership in the event of a US nuclear missile strike. His alert would of course come too late to save the USSR, but it would at least enable it to retaliate with a total, all-out nuclear counterattack against the US.

That fall night in 1983, it happened: His computer alarms went off. A single American nuclear missile was on its way, the radar said. But Petrov hung back and disobeyed his standing orders to inform his superiors. It must be a computer error, he thought. The odds of the US sending just one missile seemed pretty slim.

But then the alarms went off again — and again — wailing louder and louder as his screen informed him that a second missile was on its way — and a third, fourth and fifth, too. His monitor flashed the Russian word for LAUNCH in tall, bright letters. The software was urgently telling Petrov that the USSR should quickly launch its massive counterstrike.

The alarms grew deafening, but Petrov still sat there, unmoving. Should he follow orders and call Soviet leadership, as protocol demanded? Or should he trust his gut? The stakes were huge, he knew. If he were wrong, US missiles would wreak destruction on the USSR and everything he held dear, without any counterstrike at all. But what if he were right?

Petrov did nothing. And a few minutes later, when the sky above was still quiet, clear and nuke-free, he knew he'd made the right decision. He'd bucked military orders, and he feared he would have to answer for that, Petrov recalled to The Moscow News years later. But he'd saved the world from nuclear war.
Through gut and intuition — and a willingness to question the technology in front of him — Petrov became the Russian who saved the world.
Could Petrov save the world in the age of AI?
The story of Petrov's distrust of technology may well soothe readers now. But it makes some AI and robotics experts mighty anxious. They fear that today and in the coming years, as humans put more and more trust in increasingly complex AI systems and robotic weapons, such a save becomes ever more unlikely.

"Petrov made his decision by taking into account the complex context around him. He was trained in a specific context — his machine and the lights," says Jim Hendler, a Rensselaer Polytechnic Institute professor of computer, web and cognitive sciences and author, with Alice Mulvehill, of Social Machines: The Coming Collision of Artificial Intelligence, Social Networking and Humanity. "But when things went down, he looked beyond that context and reasoned that it didn't make sense. And he took appropriate action."

The question now, Hendler says, is what happens the next time this happens, and Petrov isn't around. "My bigger worry," explains Hendler, "has to do with AI getting smarter (because) at some point we're going to remove Petrov from the loop." Removing humans from key warfare decisions is already a topic of discussion around drone and cyber warfare, he added. The "issue is having someone, like a human, being somewhere in the loop before the missiles get launched," he said.
Should technology be compliant with the rules of war? Can it be?
Hendler isn't alone in his gnawing concerns here. Noel Sharkey, cofounder of the Foundation for Responsible Robotics and chair of the International Committee for Robot Arms Control, also worries whether a Petrov-style save could happen again.

Couldn't smart machines and robots be programmed with caution, and with the laws of war in mind, I wondered? No, not really, Sharkey said. "I see no way we can guarantee compliance with the laws of war," he added. "This is the real worry for international security and one many people are missing — the fact that we have no idea what will happen."

"Just as we did with the Internet in the 90s, we seem to be sleepwalking into this," Sharkey said. Humans are always reluctant, he said, but "they have got to take responsibility for the technology we create." The problem, he added, is trust: trusting technology too much. Another problem is that we trust the makers of the technology when they tell us not to worry, that everything will be okay.
A matter of trust
Humans often have no clue what the future holds so far as technology is concerned. Yet the makers of new technologies often overpromise. The makers of self-driving cars are especially guilty of this. They constantly repeat the refrain that their products will save lives, Sharkey said. But will they? "We don't know," said Sharkey. "Nobody does. They really need to stop saying that."

Many argue that the risks can be handled by just putting humans in the decision loop — and not letting AI and other emerging tech lead on critical decisions. But Sharkey suggests that humans shouldn't be putting AI, robots or other smart machines in the decision loop at all. Rather, we should use AI-enabled technology only as sensors, not deciders. Only humans, human values and human intuition should be in the driver's seat, Sharkey said.

RPI's Hendler echoed that sentiment. There are three worrisome questions, he said. "The first is: Will AI (systems) work well enough during unplanned and unlikely events (that) they were not trained (for)?" The second is whether humans will even have the ability to properly question the determinations of machines, especially machines endowed with what seems to be a superior set of smarts, he said. "But the third question," said Hendler, "is the one that really scares me. And that is: Will there even be a human there at all?"
Sleepwalking into that good night
Instead of worrying about future sentient computers taking out humanity — an idea thinkers like Stephen Hawking have peddled — we should worry about the very real, near-future threat we face as a result of trusting AI systems too much. The risk is that humans who are eager to offload hard decisions and difficult work to systems they think are smarter will write themselves out of the decision-making process, Hendler and Sharkey both told me.

Yes, HAL 9000 attacked humans in 2001: A Space Odyssey. But really, so what? In 1983, Stanislav Petrov sat in front of his computer at Serpukhov-15 and decided that the state-of-the-art ballistic missile early warning system before him was not to be trusted. He saved the world, and we all lived to tell about it. Now we must do everything possible to ensure that, in the future, conditions are right to allow that to happen again.

For RobotRepublic, I'm Gina Smith. Cover image: Wikimedia Commons, All Rights Reserved. Inset images: Petrow, All Rights Reserved.