Proceedings PDF for the 2017 Annual International Conference on Biologically Inspired Cognitive Architectures (Moscow, Russia).
Abstract. Most artificial general intelligence (AGI) system developers have focused upon intelligence (the ability to achieve goals, perform tasks or solve problems) rather than motivation (*why* the system does what it does). As a result, most AGIs have a non-human-like, and arguably dangerous, top-down hierarchical goal structure as the sole driver of their choices and actions. By contrast, the independent core observer model (ICOM) was specifically designed to have a human-like “emotional” motivational system. We report here on the most recent versions of, and experiments upon, our latest ICOM-based systems. We have moved from a partial implementation of the abstruse and overly complex Wilcox model of emotions to a more complete implementation of the simpler Plutchik model. We have seen responses that, at first glance, were surprising and seemingly illogical – but which mirror human responses and make complete sense when considered more fully in the context of surviving in the real world. For example, in “isolation studies”, we find that any input, even pain, is preferred over having no input at all. We believe that the system’s generation of such unexpected but humanlike behavior is a very good sign that we are successfully capturing the essence of the only known operational motivational system.
PDF / PowerPoint (part 1, part 2) presented May 18, 2017 at GET 2017 – The Fifth Annual Conference on Governance of Emerging Technologies: Law, Policy and Ethics (Phoenix, AZ)
Abstract: Humans are absolutely correct to fear current artificial intelligence designs. Despite considerable interest and research, we are, to the best of our knowledge, the only ones to claim a design that can be implemented immediately yet not pose an existential risk later. Other current designs (goal-based systems and systems based upon restrictions) are still trapped back in the days of symbolic logic-based artificial intelligence and would require far more “understanding” than current systems possess to even start functioning. Worse, most “pure logic” designs overemphasize optimization (which rapidly leads to dangerous “externalities”) – and all seem unable to come up with any reason WHY their systems should choose to behave as humans might desire.
As pointed out by James Q. Wilson (Wilson 1993), the real questions about human behaviors are not why we are so bad but “how and why most of us, most of the time, restrain our basic appetites for food, status, and sex within legal limits, and expect others to do the same.” The fact that we are generally good, even in situations where social constraints do not apply, Wilson attributes to an evolved “moral sense” that we all possess and are constrained by (just as we wish intelligent machines to be constrained). We contend that this moral “sense” is actually a motivational and control system based upon emotions as “actionable qualia” – and could and should be implemented in machines as well.
PowerPoint/Slideshow/PDF and audio presented November 16, 2016 at The Transhumanist Party East Coast Conference (Arlington, VA)
Abstract: Blockchain is now at the stage that the Internet was at in 2004 with the call to Web 2.0 (just before the release of FaceBook, YouTube and the iPhone). Virtual money has already enabled smart contracts, innovative economics and new approaches to financing and governance. Don’t let the assumption that deep technical skills are required stop you from participating in and benefiting from the advances made possible by decentralized, crowd-supported record-keeping of valuables and automated exchange.