Designing, Implementing and Enforcing a Coherent System of Laws, Ethics and Morals for Intelligent Machines (including Humans)

Presentation Powerpoint * Proceedings PDF presented November 9, 2015 at the 2015 Annual International Conference on Biologically Inspired Cognitive Architectures (Lyon, France).

Presentation Alternate Title: Why Your Google Car Should (Sometimes) Kill You

Recent months have seen dire warnings from Stephen Hawking, Elon Musk and others regarding the dangers that highly intelligent machines could pose to humanity. Fortunately, even the most pessimistic agree that the majority of the danger is likely averted if AI were “provably aligned” with human values. Problematic, however, are proposals for pure research projects that are unlikely to be completed before their own predicted date for the appearance of super-intelligence [1]. Instead, using knowledge we already possess, we propose engineering a reasonably tractable and enforceable system of ethics that is compatible with current human ethical sensibilities and avoids unnecessarily intractable claims, requirements and research projects.

Human-Robot Interaction, the Future of Work & the Meaning of Life

Powerpoint presented Wednesday, July 29, 2015 at the Juniata Transhumanism Conference (Huntingdon, PA).

Structural unemployment, particularly unemployment caused by automation, is an exemplar of Rittel and Webber’s wicked social policy problems. Unfortunately, their beliefs that, in a pluralistic society:

  • there is nothing like the undisputable public good;
  • there is no objective definition of equity;
  • policies … cannot be meaningfully correct or false;
  • it makes no sense to talk about “optimal solutions”; and
  • even worse, there are no “solutions” in the sense of definitive and objective answers

predictably lead to humanity blundering from one sub-optimal partial solution to the next. None of our wicked social policy problems will be solved until we take a coherent systems engineering approach and stop trying to solve what are really sub-problems without knowing/defining our true goal(s): The Meaning of Life.

(Yes, of course this features far more artificial intelligence than the abstract indicates :-) )

What Does It Mean To Create A Self?

Powerpoint presented Thursday, November 13, 2014 at the AAAI Fall 2014 Symposium FS14-07 on The Nature of Humans and Machines: A Multidisciplinary Discourse (Washington, DC).

The frame problem (McCarthy & Hayes, Dennett), the symbol grounding problem (Harnad) and the semantic grounding problem (Searle) all strongly indicate that we are not going to create an artificial intelligence until we create an artificial self. Of course, creating an artificial self immediately raises important safety and moral issues (while solving others). We outline an initial plan for safely and morally creating both the infrastructure for, and the actuality of, an artificial self.