PowerPoint presentation delivered March 24, 2014, by Mark R. Waser at the AAAI 2014 Spring Symposium on Implementing Selves with Safe Motivational Systems & Self-Improvement (Palo Alto, CA).
An intentional “self” is a necessity for answering quandaries ranging from Hume’s is-ought problem to artificial intelligence’s philosophical “frame problem” to questions about meaning and understanding. However, without a good blueprint for that intentionality, the new self could conceivably pose an existential risk to humanity. A critical early design decision is how human-like to make the self, particularly with respect to requiring moral emotions that, like those of humans, cannot be self-modified, in order to ensure safety, stability, and sociability. We argue that Haidt’s definition of morality – to suppress or regulate selfishness and make cooperative social life possible – can be reliably implemented via a combination of top-down intentionality to fulfill this requirement and bottom-up emotional reinforcement to support it. We suggest how a moral utility function can be implemented to quantify and evaluate actions, and we propose several additional terms that should help to rein in entities lacking human restrictions.
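To make the abstract's idea of a moral utility function concrete, here is a minimal Python sketch of what such a scoring scheme might look like. It is an illustrative assumption, not the formulation from the paper or slides: all names, weights, and penalty terms (harm, deception, self-modification risk) are hypothetical stand-ins for the "additional terms" the abstract alludes to.

from dataclasses import dataclass

@dataclass
class ActionAssessment:
    cooperative_benefit: float     # estimated benefit to the wider community
    self_benefit: float            # benefit accruing only to the acting agent
    harm_to_others: float          # estimated harm imposed on other agents
    deception: float               # degree to which the action relies on deception
    self_modification_risk: float  # risk of weakening the agent's own moral constraints

def moral_utility(a: ActionAssessment,
                  w_coop: float = 1.0,
                  w_selfish: float = 0.5,
                  w_harm: float = 2.0,
                  w_deception: float = 1.5,
                  w_self_mod: float = 3.0) -> float:
    """Reward cooperation; penalize selfishness plus the extra terms meant
    to rein in agents that lack built-in human restrictions."""
    return (w_coop * a.cooperative_benefit
            - w_selfish * max(0.0, a.self_benefit - a.cooperative_benefit)
            - w_harm * a.harm_to_others
            - w_deception * a.deception
            - w_self_mod * a.self_modification_risk)

# Example: compare two candidate actions and pick the higher-scoring one.
honest_help = ActionAssessment(0.8, 0.2, 0.0, 0.0, 0.0)
selfish_trick = ActionAssessment(0.1, 0.9, 0.3, 0.7, 0.0)
best = max([honest_help, selfish_trick], key=moral_utility)

The heavy weight on self-modification risk reflects the abstract's point that moral safeguards, like human moral emotions, should not be freely self-modifiable.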
By Mark R. Waser (originally appeared Jan. 31, 2014, in The Wave Chronicle)
The headline over at Huff Post Tech actually reads “Google’s New A.I. Ethics Board Might Save Humanity From Extinction,” and the article is filled with a lot of the typical nonsensical, fear-mongering idiocy. BUT the predominant side effect of a well-funded, high-profile, COMPETENT Ethics Board could well mitigate a world of pain . . . Continue reading