Saturday, May 23, 2009
Robot warriors to get a guide to ethics
Thanks to Mike Treder for blogging about this over at the IEET Ethical Technology blog.
According to this MSNBC story:
Ronald Arkin, a professor of computer science at Georgia Tech, is in the first stages of developing an "ethical governor," a package of software and hardware that tells robots when and what to fire. His book on the subject, "Governing Lethal Behavior in Autonomous Robots," comes out this month.
He argues that robots can not only be programmed to behave more ethically on the battlefield, but may actually respond better than human soldiers.
It will be interesting to see what Arkin says in his book. Is anyone sufficiently interested in this subject to get hold of a copy and review it for JET?
Western forces engaged in operations in the Middle East already use robots of various kinds, but always with a human in the decision-making loop. The question that confronts us in the future is what happens when we develop robots that are capable of making their own decisions - and, indeed, whether we should be doing this at all:
No matter where the robots are deployed, however, there is always a human involved in the decision-making, directing where a robot should fly and what munitions the robot should use if it encounters resistance.
Humans aren't expected to be removed any time soon. Arkin's ethical governor is designed for a more traditional war where civilians have evacuated the war zone and anyone pointing a weapon at U.S. troops can be considered a target.
Arkin's challenge is to translate the 150-plus years of codified, written military law into terms that robots can understand and interpret themselves. In many ways, creating an independent war robot is easier than many other types of artificial intelligence because the laws of war have existed for over 150 years and are clearly stated in numerous treaties.
I notice that this involves a relatively low level of sophistication, since Arkin is thinking of situations where there are no civilians in the area to complicate things. Even if his software is effective, it's a long way from software that will enable an "autonomous" fighter bot to discriminate between combatants and non-combatants in the sorts of unconventional wars that are likely to be typical in the twenty-first century.
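The core idea, as the article describes it, is a veto layer: the robot proposes an action, and the "governor" checks it against explicit, machine-readable constraints derived from the laws of war. A purely illustrative sketch of what such a rule check might look like follows; every name and rule here is a hypothetical invented for illustration, not anything from Arkin's actual system.

```python
# Toy illustration of a veto-style "ethical governor": a layer that checks a
# proposed firing action against hand-coded rules of engagement. All names
# (Target, governor_permits_fire, the constraints) are hypothetical.

from dataclasses import dataclass

@dataclass
class Target:
    is_armed: bool            # is the target pointing a weapon at friendly troops?
    is_surrendering: bool     # hors de combat targets are protected
    near_protected_site: bool # e.g. a hospital, protected under treaty law

def governor_permits_fire(target: Target) -> bool:
    """Return True only if every encoded constraint is satisfied.
    The governor can only veto a proposed attack; it never initiates one."""
    constraints = [
        target.is_armed,                 # only engage armed targets
        not target.is_surrendering,      # never fire on the surrendering
        not target.near_protected_site,  # avoid protected sites
    ]
    return all(constraints)

# The robot proposes; the governor disposes.
print(governor_permits_fire(Target(True, False, False)))  # armed combatant -> True
print(governor_permits_fire(Target(True, True, False)))   # surrendering -> False
```

Note how the sketch bakes in exactly the simplifying assumption criticized above: it presumes targets come pre-labeled as armed or surrendering, which is precisely the combatant/non-combatant discrimination problem that real unconventional wars make hard.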