A Game-Theoretically Optimal Basis for Safe and Ethical Intelligence

Video * Presentation Powerpoint presented Nov. 14, 2010 at The First International Conference on Biologically Inspired Cognitive Architectures (Washington, DC)

Reverse engineering human ethics so that they can be reconstructed from first principles reveals not only that evolution, as would be expected, has located a locally optimal solution, but also that there exists a clear path to a better solution for all forms of intelligence.

1. First Principles
Defining intelligence as the ability to fulfill complex goals in complex environments leads to defining all intelligent entities as goal-driven entities. The actions those entities take should then be judged by… Continue reading

Deriving a Safe Ethical Architecture for Intelligent Machines

Presentation Powerpoint presented Oct. 5, 2010 at ECAP 2010 : The 8th European Conference on Computing and Philosophy (Munich, Germany)

The biggest challenge now facing humanity is how to advance without being enslaved or rendered extinct by our own actions and creations or their unintended consequences. In particular, intelligent machines (IMs) appear to pose a tremendous risk. For everyone’s safety and well-being, as IMs become more intelligent and take on more responsibilities, their decisions must be informed and constrained by a coherent, integrated ethical structure with no internal inconsistencies.

Unfortunately, no such structure is currently agreed to exist. Indeed, there is little agreement even on exactly what morality is. Worse, human ethics are evidently implemented as emotional rules of thumb that are culture-dependent; provably not accessible to conscious reasoning (Hauser et al., 2007); often not optimal in a given situation; and frequently not applied at all, whether due to selfishness or to inappropriate us-vs.-them distinctions. Indeed, it is the attempt to analyze such rules under varying, incomplete, and inappropriate circumstances that has stymied philosophers for millennia and blocked Wallach and Allen (2009), who discussed top-down and bottom-up approaches to morality, and the merging of the two, but could reach no conclusions. Continue reading

Does a “Lovely” Have a Slave Mentality? – OR – Why a Super-Intelligent God *WON’T* “Crush Us Like A Bug”

Video (starts 27 minutes in) * Powerpoint (unfortunately not visible in the video) presented March 8, 2010 at The Third Conference on Artificial General Intelligence (Lugano, Switzerland)

Inspired by a question after the previous day’s Designing a Safe Motivational System for Intelligent Machines presentation. Immediately followed by a rebuttal from, and debate/Q&A with, SIAI’s Roko Mijic.