Safely Crowd-Sourcing Critical Mass for a Self-Improving Human-Level Learner/“Seed AI”

Presentation PowerPoint * Proceedings PDF presented Nov. 2, 2012 at The Third International Conference on Biologically Inspired Cognitive Architectures (Palermo, Sicily)

Artificial Intelligence (AI), the “science and engineering of intelligent machines”, has yet to create even a simple “Advice Taker” (McCarthy 1959). We argue that this is primarily because most AI researchers focus on problem-solving or rigorous analyses of intelligence rather than on creating a “self” that can “learn” to be intelligent, and secondarily because of the excessive amount of time spent re-inventing the wheel. We propose a plan to architect and implement the hypothesis (Samsonovich 2011) that there is a reasonably achievable minimal set of initial cognitive and learning characteristics (called critical mass) such that a learner starting anywhere above the critical mass will acquire the vital knowledge that a typical human learner would be able to acquire. We believe that a moral, self-improving learner (“seed AI”) can be created today via a safe “sousveillance” crowd-sourcing process, and we propose a plan by which this can be done.

Safety and Morality REQUIRE the Recognition of Self-Improving Machines as Moral/Justice Patients & Agents

Presentation PowerPoint * Proceedings PDF presented July 3, 2012 at the AISB/IACAP World Congress 2012 (Birmingham, England)

One of the enduring concerns of moral philosophy is deciding who or what is deserving of ethical consideration. We argue that this concern persists solely because of an insufficient understanding of exactly what morality is and why it exists. To solve this, we draw from evolutionary biology/psychology, cognitive science, and economics to create a safe, stable, and self-correcting model that not only explains current human morality and answers the “machine question”, but also remains sensitive to current human intuitions, feelings, and logic while suggesting solutions to numerous other urgent current and future dilemmas.