Dissecting the Scientists’ Call to Ban Autonomous Lethal Robots

by Mark R. Waser (originally appeared March 27, 2013 at Transhumanity.Net)

The last six months have seen a rising flood of publicity about “killer robots” and autonomy in weapons systems.  On November 19, 2012, Human Rights Watch (HRW) issued a 50-page report, “Losing Humanity: The Case against Killer Robots,” outlining concerns about “fully autonomous weapons that could select and engage targets without human intervention” and claiming that a “preemptive prohibition on their development and use is needed”.  Two days later, the United States Department of Defense released Directive 3000.09, which “assigns responsibility for the development and use of autonomous and semi-autonomous functions in weapon systems”.  Now, social media is all abuzz because the International Committee for Robot Arms Control (ICRAC) has issued a Scientists’ Call to Ban Autonomous Lethal Robots “in which the decision to apply violent force is made autonomously”.

Arms control is an immediate, critical issue.  Weapons have already been fielded that are disasters just waiting to happen.  The most egregious example described in the HRW report is the Israeli Harpy – a fire-and-forget “loitering attack weapon” designed to autonomously fly to and patrol an assigned area and attack any hostile radar signatures with a high-explosive warhead.  Indeed, HRW invokes our worst fears by quoting Noel Sharkey’s critique that “it cannot distinguish between an anti-aircraft defense system and a radar placed on a school by an enemy force”.  Yet the Harpy has been sold to Chile, India, South Korea, the People’s Republic of China, and Turkey.