When President Barack Obama played soccer with Asimo — Honda’s humanoid robot — last month, even he had to admit that walking, talking robots are “a little scary.”

Perhaps the Commander in Chief should hear about his Defense Department’s plan to develop robots that decide what is right and what is wrong.


Secretary of Defense Chuck Hagel met one of the Defense Advanced Research Projects Agency’s robots in April, and now the DoD will spend $7.5 million over the next five years to build robots with a preprogrammed moral code (Image source: Associated Press).

The Office of Naval Research has $7.5 million set aside in grant money over the next five years for university researchers to build a robot with moral reasoning capabilities.

Proponents of the plan argue a “sense of moral consequence” could allow robotic systems to operate as one part of a more efficient — and truly autonomous — defense infrastructure. And some of those advocates think pre-programmed machines would make better decisions than humans, since they could only follow strict rules of engagement and calculate potential outcomes for multiple different scenarios.

And that reasoning makes sense to some, according to Gizmodo. “With drones, missile defenses, autonomous vehicles, etc., the military is rapidly creating systems that will need to make moral decisions,” AI researcher Steven Omohundro told Defense One.

“Human lives and property rest on the outcomes of these decisions and so it is critical that they be made carefully and with full knowledge of the capabilities and limitations of the systems involved. The military has always had to define ‘the rules of war’ and this technology is likely to increase the stakes for that,” he said.

So can a robot be preprogrammed with “operational morality?”

“There’s operational morality, functional morality, and full moral agency,” Wendell Wallach, chair of the Yale Technology and Ethics Study Group, said. “Operational morality is what you already get when the operator can discern all the situations that the robot may come under and program in appropriate responses… Functional morality is where the robot starts to move into situations where the operator can’t always predict what [the robot] will encounter and [the robot] will need to bring some form of ethical reasoning to bear.”
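Wallach’s “operational morality” amounts to a lookup table: the operator enumerates every situation in advance and hard-codes the approved response. A minimal sketch makes the limitation obvious — everything below (the situation names, modes, and responses) is purely hypothetical and invented for illustration, not drawn from any real military system:

```python
# Hypothetical illustration of "operational morality": every situation the
# operator can foresee is mapped to a preprogrammed response. All names and
# rules here are invented for the example.

PREPROGRAMMED_RESPONSES = {
    ("civilian_detected", "weapons_free"): "hold_fire",
    ("civilian_detected", "weapons_hold"): "hold_fire",
    ("operator_link_lost", "any"):         "return_to_base",
}

def respond(situation, mode):
    """Look up the operator-approved response; never improvise."""
    for key in ((situation, mode), (situation, "any")):
        if key in PREPROGRAMMED_RESPONSES:
            return PREPROGRAMMED_RESPONSES[key]
    # An unforeseen situation: operational morality has no answer and can
    # only fall back to a safe default. Wallach's "functional morality"
    # would require actual ethical reasoning at this point.
    return "await_operator"

print(respond("civilian_detected", "weapons_free"))  # hold_fire
print(respond("unknown_vehicle", "weapons_free"))    # await_operator
```

The gap between the two lookups is exactly the gap Wallach describes: the moment the robot meets a situation the operator never anticipated, a table of canned responses runs out, and some form of reasoning has to take over.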

Defense One reports:

“Currently, the United States military prohibits lethal, fully-autonomous robots. And semi-autonomous robots can’t “select and engage individual targets or specific target groups that have not been previously selected by an authorized human operator,” even in the event that contact with the operator is cut off, according to a 2012 Department of Defense policy directive.

“Even if such systems aren’t armed, they may still be forced to make moral decisions,” said Paul Bello of the Office of Naval Research. For instance, in a disaster scenario, a robot may be forced to choose whom to evacuate or treat first — a situation where a bot might need some sense of ethical or moral reasoning. “While the kinds of systems we envision have much broader use in first-response, search-and-rescue and in the medical domain, we can’t take the idea of in-theater robots completely off the table,” Bello said.

On the other hand, dissenting voices merely have to point to any one of the dozens of rather creepy movies like Terminator and I, Robot — or to any classic heroic moment where an off-beat decision by a brazen firefighter or police officer saved more lives than standard regulations would have allowed.

So what do you think? Are autonomous, DoD-controlled robots a good idea? What if they were only used overseas?

(H/T: Gizmodo)

Follow Elizabeth Kreft (@elizabethakreft) on Twitter. 
