Implementation Fundamentals for Ethical Medical Agents (Mark R. Waser)

Published Sept. 5, 2014 in Machine Medical Ethics (Intelligent Systems, Control and Automation: Science and Engineering) edited by Simon Peter van Rysewyk & Matthijs Pontier

Implementation of ethics in medical machinery is, necessarily, as machine-dependent as ethics is context-dependent. Fortunately, as with ethics, there are broad implementation guidelines that, if followed, can keep one out of trouble. In particular, ensuring correct codification and documentation of the processes and procedures by which each decision is reached is likely, in the longer view, even more important than the individual decisions themselves. All ethical machines must have not only ethical decision-making rules but also methods to collect the data, information, and knowledge fed to those rules; codified methods to determine the source, quality, and accuracy of that input; trustworthy methods to recognize anomalous conditions requiring expert human intervention; and simple methods to get all of this into the necessary hands in a timely fashion. The key to successful implementation of ethics is determining how best to fulfill these requirements within the limitations of the specific machine.
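The requirements enumerated above (provenance-tracked input, quality checks, anomaly escalation, and an auditable decision process) can be sketched as a minimal pipeline. This is an illustrative assumption, not an implementation from the chapter: the `Observation`, `Decision`, thresholds, and rule are all hypothetical placeholders.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Observation:
    # Each input carries its provenance so source and quality can be audited.
    value: float
    source: str
    quality: float  # 0.0 (untrusted) .. 1.0 (fully verified)

@dataclass
class Decision:
    action: str
    audit_trail: list = field(default_factory=list)

QUALITY_THRESHOLD = 0.8       # hypothetical cutoff for trusted input
PLAUSIBLE_RANGE = (30.0, 45.0)  # hypothetical sanity range (e.g. body temp, °C)

def decide(obs: Observation) -> Decision:
    """Apply a decision rule only after provenance, quality, and anomaly
    checks, logging every step so the *process* can be reviewed afterward,
    not just the outcome."""
    trail = [f"{datetime.now(timezone.utc).isoformat()} "
             f"input from {obs.source} (quality={obs.quality})"]
    if obs.quality < QUALITY_THRESHOLD:
        trail.append("input quality below threshold -> escalate")
        return Decision("escalate_to_human", trail)
    lo, hi = PLAUSIBLE_RANGE
    if not (lo <= obs.value <= hi):
        trail.append(f"value {obs.value} outside plausible range -> escalate")
        return Decision("escalate_to_human", trail)
    # Placeholder decision rule: a real system would consult its codified
    # ethical rule set here.
    action = "administer_standard_care" if obs.value <= 38.0 else "flag_for_review"
    trail.append(f"rule applied -> {action}")
    return Decision(action, trail)
```

Note the design choice: the audit trail is built unconditionally and returned with every decision, including escalations, so the documented process survives even when the machine declines to act.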

Safe/Moral Autopoiesis & Consciousness (Mark R. Waser)

Published June 2013 in International Journal of Machine Consciousness 5(1): 59-74

Artificial intelligence, the “science and engineering of intelligent machines”, has yet to create even a simple “Advice Taker” [McCarthy, 1959]. We have previously argued [Waser, 2011] that this is because researchers focus on problem-solving or on the rigorous analysis of intelligence (or arguments about consciousness) rather than on the creation of a “self” that can “learn” to be intelligent. Therefore, following expert advice on the nature of self [Llinas, 2001; Hofstadter, 2007; Damasio, 2010], we embarked upon an effort to design and implement a self-understanding, self-improving loop as the totality of a (seed) AI. As part of that effort, we decided to follow up on Richard Dawkins’ [1976] speculation that “perhaps consciousness arises when the brain’s simulation of the world becomes so complete that it must include a model of itself” by defining a number of axioms and following them through to their logical conclusions. The results, combined with an enactive approach, yielded many surprising and useful implications for further understanding consciousness, self, and “free will” that continue to pave the way toward the creation of safe/moral autopoiesis.