Instructions for Engineering Sustainable People

Video * Presentation PowerPoint * Proceedings PDF, presented Aug. 2, 2014 at The Seventh Conference on Artificial General Intelligence (Quebec City)

Exactly as Artificial Intelligence (AI) did before it, Artificial General Intelligence (AGI) has lost its way. Having forgotten the field's original intentions, AGI researchers will continue to stumble over the problems of inflexibility, brittleness, lack of generality and safety until they realize that tools simply cannot possess adaptability greater than their innate intentionality and cannot provide assurances and promises that they cannot understand. The current short-sighted, static, and reductionist definition of intelligence, which focuses on goals, must be replaced by a long-term adaptive one focused on learning, flexibility, improvement, and safety. We argue that AGI must claim an intent to create safe artificial people via autopoiesis before its promises can be fulfilled.

Evaluating Human Drives and Needs for a Safe Motivational System

Presentation PowerPoint * Proceedings PDF, presented March 25, 2014 by Morgan J. Waser at the AAAI 2014 Spring Symposium on Implementing Selves with Safe Motivational Systems & Self-Improvement (Palo Alto, CA)

The human motivational system can be viewed as composed either of drives or of needs. Our actions can be explained as based upon reflexes, desires, and goals that evolved from pressures to maintain or fulfill instrumental sub-goals; alternatively, Maslow’s hierarchy of needs provides a different lens and a different view. Both views correlate well with the ways we look at decisions as we are making them, as well as with how drives and needs interact over time and build upon one another to better meet our needs and fulfill our goals. We also focus on two drives in particular that appear to drive the factionalism in machine intelligence safety.

Implementing a Safe “Seed” Self

Proceedings PDF for the AAAI 2014 Spring Symposium on Implementing Selves with Safe Motivational Systems & Self-Improvement (Palo Alto, CA)

An intentional “self” is a necessity for answering quandaries ranging from Hume’s is-ought problem to artificial intelligence’s philosophical “frame problem” to questions about meaning and understanding. However, without a good blueprint for that intentionality, the new self could conceivably pose an existential risk for humanity. A critical early design decision is how human-like to make the self, particularly with respect to requiring moral emotions that, like those of humans, cannot be self-modified, in order to ensure safety, stability, and sociability. We argue that Haidt’s definition of morality – to suppress or regulate selfishness and make cooperative social life possible – can be reliably implemented via a combination of top-down intentionality to fulfill this requirement and bottom-up emotional reinforcement to support it. We suggest how a moral utility function can be implemented to quantify and evaluate actions, and propose several additional terms that should help to rein in entities without human restrictions.
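
The abstract does not specify the form of this moral utility function. As a purely illustrative sketch, assuming a simple weighted-sum form in the spirit of Haidt’s definition (all names, fields, and weights below are hypothetical and not taken from the paper), a candidate action might be scored by rewarding task value and cooperative benefit while penalizing harm and purely selfish gain:

```python
from dataclasses import dataclass

@dataclass
class ActionAssessment:
    """Hypothetical scores for a candidate action (all fields are illustrative assumptions)."""
    task_value: float         # expected progress toward the agent's assigned goal
    benefit_to_others: float  # expected cooperative benefit to other people/agents
    harm_to_others: float     # expected harm imposed on others
    selfish_gain: float       # gain accruing only to the agent itself

def moral_utility(a: ActionAssessment,
                  w_task: float = 1.0,
                  w_coop: float = 1.0,
                  w_harm: float = 2.0,
                  w_selfish: float = 0.5) -> float:
    """Illustrative weighted-sum sketch: reward cooperation, penalize harm and selfishness.
    The weights are arbitrary placeholders, not values from the paper."""
    return (w_task * a.task_value
            + w_coop * a.benefit_to_others
            - w_harm * a.harm_to_others
            - w_selfish * a.selfish_gain)

# Usage: rank candidate actions and choose the one with the highest moral utility.
candidates = {
    "cooperate": ActionAssessment(0.6, 0.8, 0.0, 0.1),
    "defect":    ActionAssessment(0.9, 0.0, 0.5, 0.7),
}
best = max(candidates, key=lambda name: moral_utility(candidates[name]))
print(best)  # with these illustrative numbers, "cooperate" scores higher than "defect"
```

With these made-up numbers the cooperative action outranks the purely self-interested one; additional restraining terms of the kind the abstract mentions could be added to the sum in the same way.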