Video * Presentation PowerPoint * Proceedings PDF, presented Aug. 2, 2014 at The Seventh Conference on Artificial General Intelligence (Quebec City)
Exactly as Artificial Intelligence (AI) did before it, Artificial General Intelligence (AGI) has lost its way. Having forgotten our original intentions, AGI researchers will continue to stumble over the problems of inflexibility, brittleness, lack of generality, and lack of safety until we realize that tools simply cannot possess adaptability greater than their innate intentionality and cannot provide assurances and promises that they do not understand. The current short-sighted, static, and reductionist definition of intelligence, focused on goals, must be replaced by a long-term adaptive one focused on learning, flexibility, improvement, and safety. We argue that AGI must claim an intent to create safe artificial people via autopoiesis before its promise(s) can be fulfilled.
Video * Presentation PowerPoint * Proceedings PDF, presented Nov. 6, 2011 at The Second International Conference on Biologically Inspired Cognitive Architectures (Washington, DC)
While adaptive systems are currently judged primarily by their degree of intelligence (in terms of their ability to discover how to achieve goals), the critical measurement for the future will be where they fall on the spectrum of self. Once machines and software are able to substantially modify and improve themselves, the concepts of self and agency will become far more important, determining not only what a particular system will eventually be capable of but also how it will actually act. Unfortunately, so little attention has been paid to this fact that most people still expect that the basic cognitive architecture of a passive "Oracle", in terms of consciousness, self, and "free will", will differ little from that of an active explorer/experimenter with assigned goals to accomplish. We outline the assumptions and trade-offs inherent in each of these concepts and the expected characteristics of each, which apply not only to machine intelligence but also to humans and to collective entities like governments and corporations.
Video (starts at 32:15) * Presentation PowerPoint * Proceedings Paper, presented Aug. 4, 2011 at The Fourth Conference on Artificial General Intelligence (Mountain View, CA)
Insanity is doing the same thing over and over and expecting a different result. "Friendly AI" (FAI) meets this criterion on four separate counts, expecting a good result even though: 1) it not only puts all of humanity's eggs into one basket but relies upon a totally new and untested basket, 2) it allows fear to dictate our lives, 3) it divides the universe into us vs. them, and 4) it rejects the value of diversity. In addition, FAI goal initialization relies on being able to correctly calculate a "Coherent Extrapolated Volition of Humanity" (CEV) via some as-yet-undiscovered algorithm. Rational Universal Benevolence (RUB), by contrast, is based upon established game theory and evolutionary ethics and is simple, safe, stable, self-correcting, and sensitive to current human thinking, intuitions, and feelings. Which strategy would you prefer to rest the fate of humanity upon?