By Scott Robbins, Foundation for Responsible Robotics

There’s no shortage of misguided ideas out there about how to create ethical robots.

An article that appeared recently in the journal Frontiers in Robotics and AI is a prime example. In Empowerment as Replacement for the Three Laws of Robotics, Christoph Salge and Daniel Polani propose a set of rules meant to formalize and implement Isaac Asimov’s Three Laws of Robotics.

The point, they say, is to recast them into a shape that makes sense in robot designs today. But that’s just silly.

And the thing is, they go about this argument without any sense of irony. If they gave Asimov even a quick read, after all, they’d know that Asimov’s fictional laws break down easily in the real world. Asimov would’ve been the first to admit that. He showed the fallacy of his own fictional robot laws again and again in his stories throughout his career.

It’s not as if that’s a secret, either.

As roboticist Daniel Wilson famously told The Brookings Institution way back in 2009: “Asimov’s rules are neat, but they are also bullshit.”

And science fiction author Robert J. Sawyer puts it this way:

Asimov’s “Laws” are hardly laws in the sense that physical laws are laws; rather, they’re cute suggestions that made for some interesting puzzle-oriented stories half a century ago. I honestly don’t think they will be applied to future computers or robots. We have lots of computers and robots today and not one of them has even the rudiments of the Three Laws built-in. It’s extraordinarily easy for “equipment failure” to result in human death, after all, in direct violation of the First Law.

Asimov’s Laws assume that we will create intelligent machines full-blown out of nothing, and thus be able to impose across the board a series of constraints. Well, that’s not how it’s happening. Instead, we are getting closer to artificial intelligence by small degrees and, as such, nobody is really implementing fundamental safeguards.

In the real world, machines in general, and robots in particular, harm humans all the time. If they followed Asimov’s first law, for example, robots could never harm humans by taking their jobs. And they certainly couldn’t be part of military research, which drives and funds so much of the field.

And there’s another big problem with the argument the writers make: their use of the very construct of empowerment, which is deeply misleading. A robot that merely strives to stay out of a human’s way hardly empowers that human, even in the loosest sense of the word. The authors appear to be confusing necessary conditions with sufficient conditions.

Of course, not being blocked from moving, or otherwise prevented from living your life at all, is necessary for empowerment. But merely being alive and free to move about isn’t being empowered, not in any meaningful sense of the word.
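For readers who want the technical backdrop: as I read it, the “empowerment” Salge and Polani build on is the information-theoretic quantity introduced by Klyubin, Polani, and Nehaniv. Roughly, it is the channel capacity from an agent’s next few actions to the sensor state it ends up perceiving, maximized over how those actions are chosen. Here is a sketch of that standard definition, in my own notation rather than a quote from the paper:

\[
\mathfrak{E}_n(s_t) \;=\; \max_{p(a_t^n)} \, I\!\left(A_t^n ;\, S_{t+n} \,\middle|\, s_t\right)
\]

where \(A_t^n\) is the agent’s n-step action sequence, \(S_{t+n}\) is the sensor state it observes afterward, and the maximum is taken over distributions of action sequences. In plain terms, it counts how many distinct futures an agent can reliably reach from where it stands, which is exactly why “keeping a human’s options open” and “empowering a human” end up sounding like the same thing in the paper, even though they aren’t.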

That confusion is the reason such arguments are so flawed. In holding that it is even possible to build ethical robots from the ground up, they reduce and simplify ethics to the point that it really isn’t ethics at all.

Read the article for yourself below. I’d love to hear what you think in the comments, or you can email me at Scott@RobotRepublic.org.

For RobotRepublic, I’m Scott Robbins.

Empowerment as Replacement for the Three Laws of Robotics uploaded by RobotRepublic on Scribd

Cover image: Wikia.com, All Rights Reserved.