Rational Universal Benevolence: Simpler, Safer, and Wiser than “Friendly AI”

Video (starts at 32:15) * Presentation Powerpoint * Proceedings Paper presented August 4, 2011 at The Fourth Conference on Artificial General Intelligence (Mountain View, CA)

Insanity is doing the same thing over and over and expecting a different result. “Friendly AI” (FAI) meets this definition on four separate counts, expecting a good result even though it: 1) not only puts all of humanity’s eggs into one basket but relies upon a totally new and untested basket, 2) allows fear to dictate our lives, 3) divides the universe into us vs. them, and 4) rejects the value of diversity. In addition, FAI goal initialization depends on correctly calculating a “Coherent Extrapolated Volition of Humanity” (CEV) via some as-yet-undiscovered algorithm. Rational Universal Benevolence (RUB), by contrast, is based upon established game theory and evolutionary ethics and is simple, safe, stable, self-correcting, and sensitive to current human thinking, intuitions, and feelings. Upon which strategy would you prefer to rest the fate of humanity?
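The abstract does not specify which game-theoretic results RUB builds on, but a classic illustration of cooperation emerging from self-interest is the iterated prisoner’s dilemma, where reciprocating strategies such as tit-for-tat sustain mutual cooperation while unconditional defectors settle into a worse equilibrium. A minimal sketch (payoff values and strategy names are the textbook standard, not drawn from the paper):

```python
def play_round(a_move, b_move):
    # Standard prisoner's dilemma payoffs: mutual cooperation (3, 3),
    # mutual defection (1, 1), lone defector 5, exploited cooperator 0.
    payoffs = {
        ("C", "C"): (3, 3),
        ("C", "D"): (0, 5),
        ("D", "C"): (5, 0),
        ("D", "D"): (1, 1),
    }
    return payoffs[(a_move, b_move)]

def tit_for_tat(opponent_history):
    # Cooperate first, then mirror the opponent's previous move.
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    return "D"

def match(strategy_a, strategy_b, rounds=200):
    score_a = score_b = 0
    hist_a, hist_b = [], []  # each list holds the *opponent's* past moves
    for _ in range(rounds):
        move_a = strategy_a(hist_a)
        move_b = strategy_b(hist_b)
        pa, pb = play_round(move_a, move_b)
        score_a += pa
        score_b += pb
        hist_a.append(move_b)
        hist_b.append(move_a)
    return score_a, score_b

# Two reciprocators sustain mutual cooperation (600 points each over 200
# rounds) and far outscore the mutual-defection outcome (200 points each).
coop_score, _ = match(tit_for_tat, tit_for_tat)
defect_score, _ = match(always_defect, always_defect)
```

Here `coop_score` is 600 and `defect_score` is 200, the kind of result behind the claim that benevolent (cooperative) strategies can be individually rational over repeated interactions.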

Wisdom DOES Imply Benevolence

Presentation Powerpoint * Proceedings PDF presented on July 14, 2011 at IACAP 2011: First International Conference of the International Association of Computing and Philosophy Celebrating 25 years of Computing and Philosophy (CAP) conferences (Aarhus, Denmark)

Fox and Shulman (ECAP 2010) ask, “If machines become more intelligent than humans, will their intelligence lead them toward beneficial behavior toward humans even without specific efforts to design moral machines?” and answer that “Superintelligence does not imply benevolence.” We argue that this is because goal selection is external to their definition of intelligence, and that an imposed evil goal will obviously prevent a superintelligence from being benevolent. We contend that benevolence is an Omohundro drive (2008) that will be present unless explicitly counteracted, and that wisdom, defined as selecting the goal of fulfilling maximal goals, does imply benevolence with increasing intelligence.