Selfishness, Interdependence & the Algorithmic Execution of Entity-Derived Intentions & Priorities

PowerPoint presentation delivered May 27, 2014, at the Second Annual Conference on Governance of Emerging Technologies: Law, Policy & Ethics (Scottsdale, Arizona)

It’s the ultimate nightmare scenario . . . an innocent mistake is made, misinterpreted by the other side, and the ensuing escalation – easily avoided by trust, communication or even sufficient knowledge of the other – instead leads to Armageddon. The storyline is so universal that it drives just as many romantic comedies as action-packed thrillers. Him vs. her, us vs. them – our need to remain comfortable and in control wrongly supersedes our true goal of having things turn out for the best.

The problem is that, along with increasing knowledge, improved technology and enhanced possibilities, the world is moving ever faster and becoming ever more complicated. To keep up, we have made reflexive reactions, automation and pre-determined choices indispensable facts of life – at the cost of flexibility, adaptability and even safety in unknown or anomalous circumstances. And nowhere is this more obvious, or more critical, than in warfare.
Soldiers must quickly and unhesitatingly follow orders – unless those orders violate the laws of war. Autonomous machines must immediately do as they are ordered – unless doing so would violate the rules of distinction, proportionality or military necessity. And even the fast pace of modern warfare must not lead commanders to escalate an enemy mistake (much less a misunderstood anomaly, or military hardware spoofed or hacked by malicious third parties) past the point of no return.

Our lives are becoming increasingly intertwined even as algorithms, corporations and governments forcibly intermediate our relationships and reduce our choices in the name of safety and security. Worse, still more options are lost to unanalyzed, unintelligent, automated “black-box” defaults like the invisible effects of the various online filter bubbles. While blame is frequently laid at the feet of increasing automation, it is much more appropriately attributed to our culture of selfishness – greedy algorithms of over-optimization and short-sighted micro-management. Such a culture wants to treat our world, a complex adaptive system, as a single entity to be controlled, and to believe that automated systems can always be rigidly and accurately controlled, even under tremendously adverse circumstances.
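
To make the danger of greedy over-optimization concrete, here is a minimal, purely illustrative Python sketch (the landscape, function names and parameters are all hypothetical, not drawn from the presentation): a climber that accepts only immediate gains locks onto the nearest peak forever, while one willing to tolerate short-term losses can cross the valley to a better outcome.

    import random

    def landscape(x):
        # Toy objective: a deceptive local peak at x = 2 (height 4) and the
        # true peak at x = 8 (height 9), separated by a valley.
        return -(x - 2) ** 2 + 4 if x < 5 else -(x - 8) ** 2 + 9

    def greedy_climb(x, steps=500):
        # Over-optimization: accept a step only if it pays off immediately.
        for _ in range(steps):
            step = random.choice([-0.5, 0.5])
            if landscape(x + step) > landscape(x):
                x += step
        return x  # stalls at the deceptive local peak

    def climb_with_slack(x, steps=2000, tolerance=3.0):
        # The same climber, but willing to accept short-term losses up to
        # `tolerance`, trading tight local control for adaptability.
        best = x
        for _ in range(steps):
            step = random.choice([-0.5, 0.5])
            if landscape(x + step) > landscape(x) - tolerance:
                x += step
                if landscape(x) > landscape(best):
                    best = x
        return best

    print(greedy_climb(0.0))       # always 2.0: locked onto the nearest peak
    print(climb_with_slack(0.0))   # usually 8.0: the valley can be crossed

The toy numbers are beside the point; the structure is what matters: insisting that every single step be an improvement is precisely what guarantees getting stuck.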

In reality, the only way to evade the crippling conundrums of symbol grounding and the frame problem is to center, stabilize and protect any sufficiently complex, powerful and/or dangerous system as an autopoietic (self-recreating) entity rather than as a mindless tool. The problem with derived intentionality, as abundantly demonstrated by systems ranging from expert systems to robots, is that it is brittle: it breaks badly as soon as it grows beyond closed, completely specified micro-worlds and is confronted with the unexpected (“no plan survives contact with the enemy”). Instead of selfishly fearing machine “others” and what they might be trying to gain from us (or that they may destroy us), we need to learn – and to teach them – the opportunities afforded by diversity, trust and interdependence, lest we miss out on the ultimate in life-affirming relationships.
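
The brittleness of derived intentionality can be shown with an equally minimal sketch (again purely illustrative; the symbols and rules are hypothetical): within its closed, completely specified micro-world the system performs flawlessly, but the first unanticipated symbol breaks it outright, because every “meaning” it has was specified by its designer rather than grounded by the system itself.

    # A toy "expert system" whose intentionality is entirely derived: every
    # symbol and rule below was specified in advance by its designer.
    RULES = {
        ("enemy", "approaching"): "raise alert",
        ("friend", "approaching"): "open gate",
        ("enemy", "retreating"): "stand down",
    }

    def decide(actor, movement):
        action = RULES.get((actor, movement))
        if action is None:
            # Outside its closed, completely specified micro-world, the
            # system has no grounded meaning of its own to fall back on.
            raise RuntimeError(f"no rule for ({actor!r}, {movement!r})")
        return action

    print(decide("enemy", "approaching"))     # "raise alert": works as designed
    print(decide("civilian", "approaching"))  # RuntimeError: the unexpected breaks it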
