Implementing A Seed Safe/Moral Motivational System with the Independent Core Observer Model (ICOM)

Proceedings PDF presented July 17, 2016 at the 2016 Annual International Conference on Biologically Inspired Cognitive Architectures (New York City, USA).

Abstract: Arguably, the most important questions about machine intelligences revolve around how they will decide what actions to take. If they decide to take actions which are deliberately, or even incidentally, harmful to humanity, then they would likely become an existential risk. If they were naturally inclined, or could be convinced, to help humanity, then it would likely lead to a much brighter future than would otherwise be the case. This is a true fork in the road towards humanity’s future, and we must ensure that we engineer a safe solution to this most critical of issues.

Designing, Implementing and Enforcing a Coherent System of Laws, Ethics and Morals for Intelligent Machines (including Humans)

Presentation PowerPoint * Proceedings PDF presented November 9, 2015 at the 2015 Annual International Conference on Biologically Inspired Cognitive Architectures (Lyon, France).

Presentation Alternate Title: Why Your Google Car Should (Sometimes) Kill You

Recent months have seen dire warnings from Stephen Hawking, Elon Musk and others regarding the dangers that highly intelligent machines could pose to humanity. Fortunately, even the most pessimistic agree that the majority of the danger is likely averted if AI were “provably aligned” with human values. Problematic, however, are proposals for pure research projects that are entirely unlikely to be completed before their proponents’ own predicted appearance of super-intelligence [1]. Instead, using knowledge already possessed, we propose engineering a reasonably tractable and enforceable system of ethics that is compatible with current human ethical sensibilities, without unnecessary intractable claims, requirements and research projects.

Bootstrapping A Structured Self-Improving & Safe Autopoietic Self

Presentation PowerPoint * Proceedings PDF presented November 9, 2014 at the 2014 Annual International Conference on Biologically Inspired Cognitive Architectures (Boston, Massachusetts).

After nearly sixty years of failing to program artificial intelligence (AI), it is now time to grow it using an enactive approach instead. Critically, however, we need to ensure that it matures with a “moral sense” that will ensure the safety and well-being of the human race. Implementing consciousness and conscience is the next step on the way towards creating safe and cooperative machine entities.