Dissolving the “Hard Problem” of Consciousness

by Mark R. Waser (originally appeared Feb. 21, 2013 at Transhumanity.Net)

Maybe I’m missing something, but it looks like everyone is overlooking the obvious when discussing the so-called “hard problem” of consciousness (per Chalmers [1995], the “explanatory gap” of “phenomenal consciousness” or “qualia” or “subjective consciousness” or “conscious experience”). So let’s make a few assumptions and try a little thought experiment.

Axioms (Definitions & Assumptions)

1. There really is an external reality composed of physical objects (and their movements, extending to complete processes like orbiting a star, etc.).

2. Per Hofstadter [2007], Damasio [2010], Llinas [2001], and others, consciousness/self is simply a process running on a physical (“strange”) loop/feedback network, created per Richard Dawkins’ [1976] speculation that “perhaps consciousness arises when the brain’s simulation of the world becomes so complete that it must include a model of itself”.

Thus, we end up with three systems, nested like Russian matryoshka dolls: external reality (objects and processes), self/consciousness (a process running on a physical substrate), and the self/consciousness’s self-model (stored in a physical substrate and constantly altered). Clearly, it is the self/consciousness itself which is the subject of phenomenal consciousness (P-consciousness) and the self-model which is the subject of access consciousness (A-consciousness) and the storage place of “memories”.
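The nesting described above can be caricatured in code. The following is a deliberately toy Python sketch — all class names and the trivial “reality” are hypothetical illustrations of the three-layer structure, not anyone’s actual formalism: reality is stepped forward, the conscious process is directly coupled to it, and the self-model only ever holds a stored record of what was experienced.

```python
class Reality:
    """External reality: objects and processes (here, a single counter)."""
    def __init__(self):
        self.state = 0

    def step(self):
        self.state += 1
        return self.state


class SelfModel:
    """A-consciousness: the stored, constantly-updated record -- 'memories'."""
    def __init__(self):
        self.memories = []

    def record(self, observation):
        self.memories.append(observation)


class ConsciousSelf:
    """P-consciousness: a process coupled to reality that maintains a model of itself."""
    def __init__(self, reality):
        self.reality = reality
        self.model = SelfModel()

    def experience(self):
        observation = self.reality.step()  # direct coupling to external reality
        self.model.record(observation)     # experience gets added to the self-model
        return observation


world = Reality()
self_process = ConsciousSelf(world)
for _ in range(3):
    self_process.experience()
print(self_process.model.memories)  # → [1, 2, 3]
```

Note what the sketch makes concrete: reasoning “from memory” means querying `self_process.model`, which is one remove from `world` — the toy analogue of the gap discussed below.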

If we are functioning effectively, Chalmers’ [1995] “double aspect” theory is obviously true, because any phenomenon we experience should be added to our self-model. However, when we try to reason solely from our “memories”/self-model, we run into Harnad’s [1990] grounding problem (though, as we are rather tightly coupled to our environment, this merely manifests as a minor bit of epistemological angst for those who over-cogitate). Is this not, exactly, the so-called “explanatory gap” that is supposedly so mysterious?

Frank Jackson’s [1982] monochrome Mary’s self-model CANNOT predict (nor can anyone else provide) the knowledge of what the phenomenal experience of seeing red is like, because it is not only true that:

a) a smaller/contained system (the self-model) cannot fully/reliably predict the behavior of a more complex, fully-interacting containing system (the self/consciousness), but also that

b) no system can fully compute the behavior of any complex system with input and/or output with infinite degrees of freedom (even systems as simple and deterministic as the three-body gravitational problem).
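The unpredictability invoked in (b) can be made concrete with an even simpler chaotic system than the three-body problem. The logistic map is my substitution here, chosen only because it shows the same sensitive dependence on initial conditions in a few lines — a minimal Python sketch:

```python
# Sensitive dependence on initial conditions: two trajectories of the logistic
# map starting 1e-10 apart become macroscopically different within ~40 steps,
# so no finite-precision model of the start can predict the later behavior.

def logistic(x, r=4.0):
    """One step of the logistic map; r=4.0 is the fully chaotic regime."""
    return r * x * (1.0 - x)

x, y = 0.3, 0.3 + 1e-10   # identical to ten decimal places
max_gap = 0.0
for _ in range(60):
    x, y = logistic(x), logistic(y)
    max_gap = max(max_gap, abs(x - y))

print(max_gap)  # macroscopic (no longer a rounding-error-sized difference)
```

The same qualitative behavior is what makes long-range prediction of chaotic physical systems — three bodies under gravity included — impossible for any bounded model, however deterministic the underlying laws.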

On the other hand, it is certainly possible that someone (including Mary) with the ability to examine internal models and an extensive knowledge of internal models in general would probably (though not certainly) be able to correctly predict Mary’s model’s eventual representation of red and implant a correct “memory” of seeing red when she hadn’t – i.e. one that she wouldn’t subsequently be able to recognize as incorrect after actually seeing red.

Blindsight (the ability of people who are cortically blind to respond to visual stimuli that they do not consciously see) is a clear example of how Chalmers’ [1996] p-zombies would function (with “zoning out” while driving or being “in the flow/zone” during sports being other possible examples – only “possible” because it is also entirely possible that we are still conscious during those operations but merely don’t “remember”/haven’t stored the data from doing so). The problems with Chalmers’ p-zombie world are the facts that a) consciousness is necessary at some point to “automatize” [Franklin et al. 2007] these actions down to unconscious reflexes, and b) consciousness is necessary to handle anomalies [Perlis 2010] and new experiences. Thus, Chalmers’ p-zombie world could exist for a (brief) period of time if created, but could not evolve itself or survive long without outside intervention (or internal consciousness).

Qualia questions should be answered from the viewpoint of the consciousness/self, without conflating it with the viewpoint of the self-model, and without forgetting both that the conscious self affects the physical self and that a given stimulus can affect the physical self directly as well as affecting the consciousness. If anesthesia merely suppressed memory formation rather than suppressing pain, this would be a bad thing – and probably a fairly obvious one, due to the stresses on the body from “conscious” pain responses over and above the stresses from “merely” physical trauma. Reversed-qualia arguments would similarly have to pay attention to whether and where qualia directly affect the body or “unconscious” mental processes.

Finally, the physical substrate of consciousness is certainly bound to be important in terms of how fast consciousness can operate with respect to the external physical world. A consciousness “running” on a substrate of macroscopic physical levers or vacuum tubes would be far too slow to experience our world in anything like the fashion we do. On the other hand, if the substrate passed information through in a timely fashion without altering it, there is absolutely no reason to expect qualia to “fade” or for consciousness not to appear in non-biological settings.

So – it seems to me as if the hard problem of consciousness doesn’t really exist. Is there something critical that I’m missing? Or is it that this is all only obvious in hindsight?

References

Chalmers, D. [1996] The Conscious Mind: In Search of a Fundamental Theory (Oxford University Press).

Chalmers, D. [1995] “Facing Up to the Problem of Consciousness”, Journal of Consciousness Studies 2(3), pp. 200-219.

Damasio, A. R. [2010] Self Comes to Mind: Constructing the Conscious Brain (Pantheon).

Damasio, A. R. [1999] The Feeling of What Happens: Body and Emotion in the Making of Consciousness (Houghton Mifflin Harcourt).

Dawkins, R. [1976] The Selfish Gene (Oxford University Press).

Franklin, S., Ramamurthy, U., D’Mello, S., McCauley, L., Negatu, A., Silva, R., & Datla, V. [2007] “LIDA: A computational model of global workspace theory and developmental learning,” In AAAI Tech Rep FS-07-01: AI and Consciousness: Theoretical Foundations and Current Approaches (AAAI Fall Symposium ‘07) (Arlington, VA), pp. 61-66.

Harnad, S. [1990] “The symbol grounding problem,” Physica D 42, pp. 335-346.

Hofstadter, D. R. [2007] I Am a Strange Loop (Basic Books).

Jackson, F. [1982] “Epiphenomenal Qualia,” Philosophical Quarterly 32, pp. 127-136.

Llinas, R. R. [2001] I of the Vortex: From Neurons to Self (MIT Press).

Perlis, D. [2010] “BICA and Beyond: How Biology and Anomalies Together Contribute to Flexible Cognition,” International J. Machine Consciousness 2(2), pp. 1-11. doi: 10.1142/S1793843010000485.