The last six months have seen a rising tsunami of interest in “killer robots” and “autonomy” in weapons systems. We argue that most of the “debate” has been derailed by emotionally inflammatory terms, by red herrings facilitated by the overuse of “suitcase” words rather than precisely defined terms, and by entirely spurious arguments. There have been numerous proposals for “robot arms control,” but we contend that such framing is a major part of the problem even when done by responsible parties. Our proposal is to move forward by referring back to the one necessary but simple core concept of responsibility.
Published June 2013 in International Journal of Machine Consciousness 5(1): 59-74
Artificial intelligence, the “science and engineering of intelligent machines,” has yet to create even a simple “Advice Taker” [McCarthy, 1959]. We have previously argued [Waser, 2011] that this is because researchers are focused on problem-solving or the rigorous analysis of intelligence (or arguments about consciousness) rather than on the creation of a “self” that can “learn” to be intelligent. Therefore, following expert advice on the nature of self [Llinas, 2001; Hofstadter, 2007; Damasio, 2010], we embarked upon an effort to design and implement a self-understanding, self-improving loop as the totality of a (seed) AI. As part of that effort, we decided to follow up on Richard Dawkins’ speculation that “perhaps consciousness arises when the brain’s simulation of the world becomes so complete that it must include a model of itself” by defining a number of axioms and following them through to their logical conclusions. The results, combined with an enactive approach, yielded many surprising and useful implications for further understanding consciousness, self, and “free will” that continue to pave the way towards the creation of safe/moral autopoiesis.
PowerPoint presentation, presented May 5, 2013, at the First Annual Conference on Governance of Emerging Technologies: Law, Policy and Ethics (Chandler, Arizona)
In the near future . . . New York SWAT teams are equipped with “smart” rifles that prevent the shooting of unarmed targets. Hostage shootings by SWAT personnel immediately drop dramatically, followed by a steady increase in successful outcomes and a minor rebound in hostage shootings. Studies reveal that the most successful SWAT personnel have adopted a strategy of “shoot everything that moves and let the gun sort it out,” and that it would take better than a ten-fold increase in the rifles’ error rate before this ceased to be the best strategy in terms of outcomes. The “smart” rifle has become the arbiter of who lives and who dies.
In Los Angeles, SWAT teams begin to take advantage of “armed telepresence” using modified DARPA disaster-relief robots. Particularly popular and effective are the “pre-targeting” and “aim-correction” functions, which give even the rawest recruits inhuman speed and accuracy. Unfortunately, hostage shootings by LA SWAT personnel rise as the new assisted-human speed outpaces unassisted human judgment. Using the “smart” rifles would solve that problem but would effectively take the human entirely out of the loop. The “killer robot” will have arrived.