Presented Oct. 1, 2014 at the CCNLSI's Public Symposium (Georgetown University) on
Machines, Minds & Meaning: Technical, Philosophical & Neuroethical Issues in AI
Event Flyer * Presentation PDF * Presentation PowerPoint
Published Sept. 5, 2014 in Machine Medical Ethics (Intelligent Systems, Control and Automation: Science and Engineering) edited by Simon Peter van Rysewyk & Matthijs Pontier
PDF * Product Flyer * Purchase from Amazon
Implementation of ethics in medical machinery is, necessarily, as machine-dependent as ethics is context-dependent. Fortunately, as with ethics, there are broad implementation guidelines that, if followed, can keep one out of trouble. In particular, ensuring correct codification and documentation of the processes and procedures by which each decision is reached is likely, in the longer view, even more important than the individual decisions themselves. All ethical machines must have not only ethical decision-making rules but also methods to collect data, information, and knowledge to feed to those rules; codified methods to determine the source, quality, and accuracy of that input; trustworthy methods to recognize anomalous conditions requiring expert human intervention; and simple methods to get all of this into the necessary hands in a timely fashion. The key to successful implementation of ethics is determining how best to fulfill these requirements within the limitations of the specific machine.
Presented Aug. 2, 2014 at The Seventh Conference on Artificial General Intelligence (Quebec City)
Video * Presentation PowerPoint * Proceedings PDF
Exactly as Artificial Intelligence (AI) did before it, Artificial General Intelligence (AGI) has lost its way. Having forgotten our original intentions, AGI researchers will continue to stumble over the problems of inflexibility, brittleness, lack of generality, and safety until they recognize that tools simply cannot possess adaptability greater than their innate intentionality, nor provide assurances and make promises that they cannot understand. The current short-sighted, static, and reductionist definition of intelligence, which focuses on goals, must be replaced by a long-term adaptive one focused on learning, flexibility, improvement, and safety. We argue that AGI must claim an intent to create safe artificial people via autopoiesis before its promise(s) can be fulfilled.