Human-like Emotional Responses in a Simplified Independent Core Observer Model System

Proceedings PDF for the 2017 Annual International Conference on Biologically Inspired Cognitive Architectures (Moscow, Russia).

Abstract. Most artificial general intelligence (AGI) system developers have focused on intelligence (the ability to achieve goals, perform tasks, or solve problems) rather than motivation (*why* the system does what it does). As a result, most AGIs have a non-human-like, and arguably dangerous, top-down hierarchical goal structure as the sole driver of their choices and actions. The independent core observer model (ICOM), by contrast, was specifically designed to have a human-like “emotional” motivational system. We report here on the most recent versions of, and experiments on, our latest ICOM-based systems. We have moved from a partial implementation of the abstruse and overly complex Wilcox model of emotions to a more complete implementation of the simpler Plutchik model. We have seen responses that, at first glance, were surprising and seemingly illogical, but which mirror human responses and make complete sense when considered more fully in the context of surviving in the real world. For example, in “isolation studies”, we find that any input, even pain, is preferred over having no input at all. We believe that the system’s generation of such unexpected but human-like behavior is a very good sign that we are successfully capturing the essence of the only known operational motivational system.
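To make the idea of a Plutchik-based motivational state concrete, the following is a minimal illustrative sketch, not the actual ICOM implementation. It assumes a toy representation in which Plutchik's eight primary emotions are modeled as four opposing pairs, each with a signed intensity, and "motivation" is read off as total emotional arousal. All names (`EmotionalState`, `apply_input`, `arousal`) and the decay parameter are hypothetical choices for this sketch.

```python
from dataclasses import dataclass, field

# Plutchik's eight primary emotions, arranged as four opposing pairs.
PLUTCHIK_AXES = [
    ("joy", "sadness"),
    ("trust", "disgust"),
    ("fear", "anger"),
    ("surprise", "anticipation"),
]

@dataclass
class EmotionalState:
    """Toy emotional state: one intensity in [-1, 1] per Plutchik axis,
    where the sign selects which pole of the pair is active."""
    values: dict = field(
        default_factory=lambda: {pair: 0.0 for pair in PLUTCHIK_AXES}
    )

    def apply_input(self, deltas, decay=0.9):
        """Decay the current state toward neutral, then add the input's
        emotional deltas, clamping each axis to [-1, 1]."""
        for axis in self.values:
            v = self.values[axis] * decay + deltas.get(axis, 0.0)
            self.values[axis] = max(-1.0, min(1.0, v))

    def arousal(self):
        """Total emotional activation, regardless of valence."""
        return sum(abs(v) for v in self.values.values())

# A painful stimulus still produces arousal; no input produces none.
state = EmotionalState()
state.apply_input({("fear", "anger"): -0.6})  # hypothetical "pain" input

idle = EmotionalState()
idle.apply_input({})  # isolation: no input at all

assert state.arousal() > idle.arousal()
```

Under this toy model, the isolation-study result above falls out directly: a system that seeks emotional activation will prefer even a negatively valenced input to no input, because only the magnitude of activation, not its sign, contributes to arousal.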

Implementing A Seed Safe/Moral Motivational System with the Independent Core Observer Model (ICOM)

Proceedings PDF presented July 17, 2016 at the 2016 Annual International Conference on Biologically Inspired Cognitive Architectures (New York City, USA).

Abstract: Arguably, the most important questions about machine intelligences revolve around how they will decide which actions to take. If they decide to take actions that are deliberately, or even incidentally, harmful to humanity, they would likely become an existential risk. If they were naturally inclined, or could be convinced, to help humanity, it would likely lead to a much brighter future than would otherwise be the case. This is a true fork in the road toward humanity’s future, and we must ensure that we engineer a safe solution to this most critical of issues.

Designing, Implementing and Enforcing a Coherent System of Laws, Ethics and Morals for Intelligent Machines (including Humans)

Presentation PowerPoint and Proceedings PDF presented November 9, 2015 at the 2015 Annual International Conference on Biologically Inspired Cognitive Architectures (Lyon, France).

Presentation Alternate Title: Why Your Google Car Should (Sometimes) Kill You

Recent months have seen dire warnings from Stephen Hawking, Elon Musk, and others regarding the dangers that highly intelligent machines could pose to humanity. Fortunately, even the most pessimistic agree that the majority of the danger would likely be averted if AI were “provably aligned” with human values. Problematic, however, are proposals for pure research projects that are unlikely to be completed before their own authors’ predicted dates for the appearance of super-intelligence [1]. Instead, using knowledge already possessed, we propose engineering a reasonably tractable and enforceable system of ethics compatible with current human ethical sensibilities, without unnecessary intractable claims, requirements, and research projects.