Bootstrapping A Structured Self-Improving & Safe Autopoietic Self

Presentation PowerPoint * Proceedings PDF presented November 9, 2014 at the 2014 Annual International Conference on Biologically Inspired Cognitive Architectures (Boston, Massachusetts).

After nearly sixty years of failing to program artificial intelligence (AI), it is now time to grow it using an enactive approach instead. Critically, however, we need to ensure that it matures with a “moral sense” that will ensure the safety and well-being of the human race. Implementing consciousness and conscience is the next step on the way towards creating safe and cooperative machine entities.

Instructions for Engineering Sustainable People

Video * Presentation PowerPoint * Proceedings PDF presented Aug. 2, 2014 at The Seventh Conference on Artificial General Intelligence (Quebec City)

Exactly as Artificial Intelligence (AI) did before, Artificial General Intelligence (AGI) has lost its way. Having forgotten our original intentions, AGI researchers will continue to stumble over the problems of inflexibility, brittleness, lack of generality, and safety until they realize that tools simply cannot possess adaptability greater than their innate intentionality and cannot provide assurances and promises that they cannot understand. The current short-sighted, static, and reductionist definition of intelligence focused on goals must be replaced by a long-term, adaptive one focused on learning, flexibility, improvement, and safety. We argue that AGI must claim an intent to create safe artificial people via autopoiesis before its promise(s) can be fulfilled.

Evaluating Human Drives and Needs for a Safe Motivational System

Presentation PowerPoint * Proceedings PDF presented March 25, 2014 by Morgan J. Waser at the AAAI 2014 Spring Symposium on Implementing Selves with Safe Motivational Systems & Self-Improvement (Palo Alto, CA)

The human motivational system can be viewed as being composed either of drives or of needs. Our actions can be explained as arising from reflexes, desires, and goals that evolved under pressure to maintain or fulfill instrumental sub-goals. Alternatively, Maslow’s hierarchy of needs offers a different lens and a different view. Both correlate well with the ways we look at decisions as we are making them, as well as with how drives and needs interact over time and build upon one another to better meet our needs and fulfill our goals. We also focus on two drives in particular that appear to underlie the factionalism in machine intelligence safety.