A Myriad of Automation Serving a Unified Reflective Safe/Moral Will

Proceedings PDF for the AAAI Fall 2013 Symposium on "How Should Intelligence Be Abstracted in AI Research?"

We propose a unified closed identity with a pyramid-shaped hierarchy of representation schemes, rising from a myriad of tight world mappings, through a layer comprising a relatively small set of properly integrated data structures and algorithms, to a single safe/moral command-and-control representation of goals, values, and priorities.
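Purely as an illustration of the three-layer shape described above (the abstract itself specifies no implementation), one might sketch the pyramid as a hypothetical data structure: many low-level world mappings feeding a small integration layer, which is in turn governed by a single goal/value representation at the apex. All names and the decision rule below are invented for illustration and do not come from the paper.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class WorldMapping:
    """Bottom layer: one of a myriad of tight sensor-to-feature mappings."""
    name: str
    map_fn: Callable[[dict], dict]  # raw observation -> structured features

@dataclass
class IntegrationLayer:
    """Middle layer: a small set of properly integrated structures/algorithms."""
    mappings: List[WorldMapping]

    def integrate(self, observation: dict) -> dict:
        # Merge the outputs of every low-level mapping into one shared state.
        state: dict = {}
        for m in self.mappings:
            state[m.name] = m.map_fn(observation)
        return state

@dataclass
class SafeMoralWill:
    """Apex: a single representation of goals, values, and priorities."""
    priorities: Dict[str, float]  # goal name -> priority weight

    def choose_action(self, state: dict, actions: List[str]) -> str:
        # Toy decision rule: pick the action whose name matches the
        # highest-priority goal; real arbitration would be far richer.
        ranked = sorted(self.priorities, key=self.priorities.get, reverse=True)
        for goal in ranked:
            for action in actions:
                if goal in action:
                    return action
        return actions[0]  # fall back to the first available action
```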

The Bright Red Line of Responsibility

Presentation PowerPoint * Proceedings PDF, presented July 15, 2013, at IACAP 2013: The Annual Meeting of the International Association for Computing and Philosophy: "Minds, Machines and Morals"

The last six months have seen a rising tsunami of interest in "killer robots" and "autonomy" in weapons systems. We argue that most of the "debate" has been derailed by emotionally inflammatory terms, by red herrings facilitated by the overuse of "suitcase" words rather than precisely defined terms, and by entirely spurious arguments. There have been numerous proposals for "robot arms control," but we contend that such framing is a major part of the problem, even when done by responsible parties. Our proposal is to move forward by referring back to the one necessary but simple core concept of responsibility.

Safe/Moral Autopoiesis & Consciousness (Mark R. Waser)

Published June 2013 in International Journal of Machine Consciousness 5(1): 59-74
Article PDF

Artificial intelligence, the "science and engineering of intelligent machines", has yet to create even a simple "Advice Taker" [McCarthy, 1959]. We have previously argued [Waser, 2011] that this is because researchers are focused on problem-solving or the rigorous analysis of intelligence (or arguments about consciousness) rather than the creation of a "self" that can "learn" to be intelligent. Therefore, following expert advice on the nature of self [Llinas, 2001; Hofstadter, 2007; Damasio, 2010], we embarked upon an effort to design and implement a self-understanding, self-improving loop as the totality of a (seed) AI. As part of that effort, we decided to follow up on Richard Dawkins' [1976] speculation that "perhaps consciousness arises when the brain's simulation of the world becomes so complete that it must include a model of itself" by defining a number of axioms and following them through to their logical conclusions. The results, combined with an enactive approach, yielded many surprising and useful implications for further understanding consciousness, self, and "free will" that continue to pave the way toward the creation of safe/moral autopoiesis.
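As a hedged sketch only, the Dawkins speculation quoted above suggests the following toy structure: a world simulation that, past some completeness threshold, must also contain a model of the simulator itself. The class name, the completeness proxy, and the 0.9 threshold are all invented for illustration; the paper itself gives axioms, not code.

```python
# Hypothetical sketch of a self-including world simulation, loosely
# inspired by the Dawkins speculation quoted above. All names and the
# completeness threshold are invented for illustration.

class WorldModel:
    def __init__(self) -> None:
        self.entities: dict = {}  # modeled objects in the world
        self.self_model: "WorldModel | None" = None

    def completeness(self) -> float:
        # Toy proxy: fraction of a fixed entity budget currently modeled.
        return min(len(self.entities) / 100.0, 1.0)

    def observe(self, name: str, properties: dict) -> None:
        self.entities[name] = properties
        # Once the simulation is "complete enough", it must also include
        # a model of the modeler itself (a coarser snapshot here, to
        # avoid an infinite regress of self-models).
        if self.completeness() > 0.9 and self.self_model is None:
            self.self_model = WorldModel()
            self.self_model.entities = dict(self.entities)
```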