Deriving a Safe Ethical Architecture for Intelligent Machines

PowerPoint presentation delivered Oct. 5, 2010 at ECAP 2010: The 8th European Conference on Computing and Philosophy (Munich, Germany)

The biggest challenge now facing humanity is how to advance without being enslaved, or rendered extinct, by our own actions and creations or their unintended consequences. Intelligent machines (IMs) in particular appear to pose a tremendous risk. For everyone's safety and well-being, as IMs become more intelligent and take on more responsibilities, their decisions must be informed and constrained by a coherent, integrated ethical structure free of internal inconsistencies.

Unfortunately, no such structure is currently agreed to exist. Indeed, there is little agreement even on exactly what morality is. Human ethics are evidently implemented as emotional rules of thumb that are culture-dependent; not accessible to conscious reasoning (Hauser et al., 2007); often not optimal in a given situation; and frequently not applied at all, whether due to selfishness or to inappropriate us-vs.-them distinctions. Indeed, it is the attempt to analyze such rules under varying, incomplete, and inappropriate circumstances that has stymied philosophers for millennia, and that blocked Wallach and Allen (2009) when they discussed top-down and bottom-up approaches to morality, and the merging of the two, but reached no conclusions.