Video * Presentation PowerPoint presented Nov. 14, 2010 at The First International Conference on Biologically Inspired Cognitive Architectures (Washington, DC)
Reverse engineering human ethics so that they can be reconstructed from first principles reveals not only that evolution has, as would be expected, located a locally optimal solution, but also that there exists a clear path to a better solution for all forms of intelligence.
1. First Principles
Defining intelligence as the ability to fulfill complex goals in complex environments leads to defining all intelligent entities as goal-driven entities. Actions that those entities take should be judged by …
Video (starts 27 minutes in) * PowerPoint (unfortunately not visible in the video) presented March 8, 2010 at The Third Conference on Artificial General Intelligence (Lugano, Switzerland)
Inspired by a question after the previous day’s Designing a Safe Motivational System for Intelligent Machines presentation. Immediately followed by a rebuttal from, and debate/Q&A with, SIAI’s Roko Mijic.
Video * Presentation PowerPoint * Proceedings Paper presented March 7, 2010 at The Third Conference on Artificial General Intelligence (Lugano, Switzerland)
As machines become more intelligent, more flexible, more autonomous, and more powerful, the questions of how they should choose their actions and what goals they should pursue become critically important. Drawing upon examples of, and lessons learned from, humans and lesser creatures, we propose a hierarchical motivational system that flows from an abstract, invariant super-goal that is optimal for all (including the machines themselves) down to low-level reflexive “sensations, emotions, and attentional effects” and other enforcing biases, to ensure reasonably “correct” behavior even under conditions of uncertainty, immaturity, error, malfunction, and even sabotage.