Implementing a Safe “Seed” Self

Proceedings PDF for the AAAI 2014 Spring Symposium on Implementing Selves with Safe Motivational Systems & Self-Improvement (Palo Alto, CA)

An intentional “self” is necessary to answer quandaries ranging from Hume’s is-ought problem to artificial intelligence’s philosophical “frame problem” to questions about meaning and understanding. However, without a good blueprint for that intentionality, the new self could conceivably pose an existential risk to humanity. A critical early design decision is how human-like to make the self, particularly whether to require moral emotions that, like those of humans, cannot be self-modified, in order to ensure safety, stability and sociability. We argue that Haidt’s definition of morality, to suppress or regulate selfishness and make cooperative social life possible, can be reliably implemented via a combination of top-down intentionality to fulfill this requirement and bottom-up emotional reinforcement to support it. We show how a moral utility function can be implemented to quantify and evaluate actions, and propose several additional terms that should help to rein in entities without human restrictions.
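To make the idea concrete, here is a minimal sketch of how a moral utility function in this spirit might be composed: a base task value, a penalty term that suppresses selfishness, and a bonus term for supporting cooperative social life. All names, weights, and terms here are our own illustrative assumptions, not the paper’s actual formulation.

```python
# Hypothetical sketch of a moral utility function in the spirit of Haidt's
# definition: penalize selfishness, reward cooperative social life.
# Every term and weight below is an illustrative assumption.

from dataclasses import dataclass

@dataclass
class ActionAssessment:
    task_value: float        # raw benefit of the action to the agent's own goals
    harm_to_others: float    # estimated cost the action imposes on other agents
    cooperation_gain: float  # estimated contribution to cooperative social life

def moral_utility(a: ActionAssessment,
                  selfishness_weight: float = 2.0,
                  cooperation_weight: float = 1.0) -> float:
    """Score an action; higher is better. Harm to others is weighted more
    heavily than the agent's own benefit, so selfish gains are suppressed,
    while cooperative effects earn an explicit bonus."""
    return (a.task_value
            - selfishness_weight * a.harm_to_others
            + cooperation_weight * a.cooperation_gain)

# A high-value but harmful action scores below a modest cooperative one
# once the penalty term is applied: (10 - 2*6 + 0) < (4 - 0 + 3).
selfish = ActionAssessment(task_value=10.0, harm_to_others=6.0, cooperation_gain=0.0)
cooperative = ActionAssessment(task_value=4.0, harm_to_others=0.0, cooperation_gain=3.0)
assert moral_utility(selfish) < moral_utility(cooperative)
```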

A Myriad of Automation Serving a Unified Reflective Safe/Moral Will

Proceedings PDF for the AAAI 2013 Fall Symposium on “How Should Intelligence Be Abstracted in AI Research?”

We propose a unified closed identity with a pyramid-shaped hierarchy of representation schemes, rising from a myriad of tight world mappings, through a layer with a relatively small set of properly integrated data structures and algorithms, to a single safe/moral command-and-control representation of goals, values and priorities.
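One way to picture the proposed pyramid is as three explicit layers. The sketch below is our own illustrative rendering under that reading; all type and field names are assumptions, not drawn from the paper.

```python
# Illustrative sketch (an assumption, not the paper's implementation) of the
# pyramid: many tight world mappings at the base, a small set of integrated
# data structures in the middle, one moral command-and-control apex.

from dataclasses import dataclass, field

@dataclass
class WorldMapping:
    """Base layer: one of a myriad of tight mappings to a slice of the world."""
    domain: str
    state: dict = field(default_factory=dict)

@dataclass
class IntegratedStructure:
    """Middle layer: an integrated data structure/algorithm fusing many
    world mappings into one usable abstraction."""
    name: str
    sources: list[WorldMapping]

@dataclass
class MoralCommandAndControl:
    """Apex: a single safe/moral representation of goals, values and
    priorities governing everything below it."""
    goals: list[str]
    values: list[str]
    priorities: list[str]
    layers: list[IntegratedStructure]

# A toy instance of the three-layer pyramid.
vision = WorldMapping(domain="vision")
audition = WorldMapping(domain="audition")
scene = IntegratedStructure(name="scene model", sources=[vision, audition])
self_model = MoralCommandAndControl(
    goals=["make cooperative social life possible"],
    values=["suppress or regulate selfishness"],
    priorities=["safety first"],
    layers=[scene],
)
```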

The Bright Red Line of Responsibility

Presentation PowerPoint * Proceedings PDF presented July 15, 2013 at IACAP 2013, the Annual Meeting of the International Association for Computing and Philosophy: “Minds, Machines and Morals”

The last six months have seen a rising tsunami of interest in “killer robots” and “autonomy” in weapons systems. We argue that most of the “debate” has been derailed by emotionally inflammatory terms, by red herrings facilitated by the overuse of “suitcase” words rather than precisely defined terms, and by entirely spurious arguments. There have been numerous proposals for “robot arms control”, but we contend that such framing is a major part of the problem even when done by responsible parties. Our proposal is to move forward by referring back to a single necessary but simple core concept: responsibility.