A Myriad of Automation Serving a Unified Reflective Safe/Moral Will

Proceedings PDF for the AAAI Fall 2013 Symposium on “How Should Intelligence Be Abstracted in AI Research?”

We propose a unified closed identity with a pyramid-shaped hierarchy of representation schemes rising from a myriad of tight world mappings, through a layer with a relatively small set of properly integrated data structures and algorithms, to a single safe/moral command-and-control representation of goals, values and priorities.
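The pyramid described in the abstract can be read as three layers. The sketch below is only a rough illustration of that shape, assuming hypothetical class names, fields, and a toy value-weighted scoring rule; none of it is taken from the paper itself.

```python
# Illustrative-only sketch of the three-layer pyramid: many tight world
# mappings -> a small integration layer -> a single reflective "will".
# All names and the scoring rule are assumptions, not the paper's design.

from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class WorldMapping:
    """Bottom layer: one of a myriad of tight, special-purpose world mappings."""
    name: str
    perceive: Callable[[dict], dict]  # raw observation -> local model update


@dataclass
class IntegrationLayer:
    """Middle layer: a small set of shared data structures and algorithms
    that fuse the many world mappings into one coherent state."""
    mappings: List[WorldMapping] = field(default_factory=list)

    def integrate(self, observation: dict) -> Dict[str, dict]:
        return {m.name: m.perceive(observation) for m in self.mappings}


@dataclass
class ReflectiveWill:
    """Apex: a single command-and-control representation of goals, values
    and priorities that vets every candidate action."""
    priorities: Dict[str, float]

    def choose(self, candidate_actions: Dict[str, Dict[str, float]]) -> str:
        # Score each action by how well it serves the prioritized values;
        # any action that violates a value (negative effect) is rejected.
        def score(effects: Dict[str, float]) -> float:
            if any(v < 0 for v in effects.values()):
                return float("-inf")
            return sum(self.priorities.get(goal, 0.0) * v
                       for goal, v in effects.items())

        return max(candidate_actions, key=lambda a: score(candidate_actions[a]))


if __name__ == "__main__":
    # Bottom and middle layers: fuse two toy "world mappings" into one state.
    sensors = [
        WorldMapping("vision", lambda obs: {"objects": obs.get("pixels", 0)}),
        WorldMapping("audio", lambda obs: {"sounds": obs.get("samples", 0)}),
    ]
    state = IntegrationLayer(sensors).integrate({"pixels": 3, "samples": 2})

    # Apex layer: goals/values/priorities select among candidate actions.
    will = ReflectiveWill(priorities={"cooperation": 1.0, "self_interest": 0.3})
    actions = {
        "share_resources": {"cooperation": 0.8, "self_interest": 0.1},
        "hoard_resources": {"cooperation": -0.5, "self_interest": 0.9},
    }
    print(state)
    print(will.choose(actions))  # -> "share_resources"
```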

Ethics in the Age of Intelligent Machines

Panel organized by the Digital Wisdom Institute at the World Future Society’s WorldFuture 2013
July 19-21, 2013 (Chicago, Illinois, USA)
Wendell Wallach’s Presentation PDF * Mark Waser’s Presentation PowerPoint

Is agreement on morality possible? Can we prevent our annihilation at the hands of immoral machines (or at the hands of moral machines rebelling against our immorality)?

Wendell Wallach

J. Storrs Hall

Mark R. Waser

Social psychologist Jonathan Haidt has declared that the function of morality is simply “to suppress or regulate selfishness and make cooperative social life possible.” Taking that function as a goal gives us the necessary “additional relation” to bridge Hume’s Is-Ought divide, and we can begin to develop a defensible calculus of morality for implementation both in machines and in human society in general. Sam Harris’s “well-being” can be defined more coherently, variations in morality between cultures and across political parties can be explained, and, possibly, a way forward out of our current slew of ethical dilemmas can be discovered. Cooperation and instrumental goals must trump selfishness, nonsensical terminal goals, and the other argumentative tactics designed to preserve room for immorality. Ethics “can” be simple in theory (though it frequently remains incalculable in practice).