Searching for the Self: Predictive Processing and the Evolution of Subjectivity

A deflationary account of selfhood and qualia. Touches on psychedelic research, Karl Friston's Free Energy Principle, and visual illusions as intuition pumps.

The precursor to this essay, written around a year earlier, can be found here.


Physicalism satisfyingly explains much of how reality works.[1] We readily accept the explanation that heat is the motion of molecules and that DNA is the code of life. However, when we turn to the experience of seeing the color red or tasting coffee, a purely physical explanation of color perception or gustation seems to be lacking. What it lacks, according to many, is the phenomenal, what-it-is-likeness of the experience, something that can only be known from a first-person perspective (Nagel, 1974). No matter how well we explain sensation and perception in physical terms, no explanation will ever be epistemically satisfying; we will always face an “explanatory gap” between our qualitative experience and our explanations of it in physical terms (Levine, 1983).

The explanatory gap is predicated on the existence of a metaphysically-special class of properties that are distinct from the physical: the so-called mental or phenomenal properties. However, as I’ll argue, these properties do not exist, and our belief that they do is due to an erroneous belief in the existence of a metaphysically-special subject of experience to whom we attribute these mental properties—the self. Once we show that there is no subject of experience, we’ll see that the apparent existence of the explanatory gap is due to the limits of human intuition, not the existence of a unique class of mental properties.

The aim of this paper is to provide a deflationary account of selfhood. I aim to explain where our intuitions about selfhood come from, why they are a poor guide to Truth, and how they can be shifted. In doing so, I hope to shift your intuitions about the existence of a subject of experience, and by extension, the existence of mental properties. I will argue that what we call “the self”—the subject that witnesses all the events of consciousness—is a “hallucination” generated by the brain, and this hallucination can, in principle and (eventually) in practice, be explained in fully physical terms.

My argument is divided into three parts. In Part 1, I’ll explain why we ought to treat physicalism as the null hypothesis and favor it prima facie; I’ll then define the term “self” and explain where our intuitions about subjectivity come from. In Part 2, I’ll discuss the predictive processing theory, which claims that our brain predicts and generates what we call reality in order to reduce uncertainty; I’ll then situate the self within this framework and explain why it would have been selected for by evolution. In Part 3, I’ll explain how, even if we provide a comprehensive explanation of the self, we will never intuitively grasp the fact that what we call consciousness is a purely physical process; therefore, for all intents and purposes, the metaphysical fact of the matter isn’t important, because it will always intuitively “feel” like we are undergoing a subjective experience.

1.1 The physicalist null hypothesis

For the purposes of this paper, I’ll take a Bayesian perspective and argue that a physicalist (i.e., naturalistic, as opposed to some variety of dualism or panpsychism) explanation of the self (the apparent existence of a metaphysically-special self is a hallucination generated through purely physical processes) is more likely true than a dualist explanation (there exist non-physical properties which supervene on the physical). I think a physical explanation is more likely because of induction: those things which we once thought were beyond physical explanation, such as how life differs from non-life, were ultimately explained in physical terms (respiration, excretion, etc.). So it seems safe to assume that the march of reductionism will eventually explain everything in terms of basic physics. We can call this our physicalist null hypothesis. The alternative hypothesis is that we will never be able to explain certain phenomena—specifically the what-it-is-likeness of qualitative experience—in physical terms.

1.2 Intuitions about the self

The self is a nebulous, multidimensional term. As I’ll show, this is because what we call the self is a pastiche of disparate mental states. But for the purposes of argument, we can distinguish between two types of selfhood, broadly speaking. The narrative self is the volitional agent with which you identify across time. This self has goals and is referred to as “I”. It is implicated in social thinking. The embodied self is your representation of your body in space: the boundaries between you and the external world and your interoceptive, visceral sensations like hunger, pain, or drowsiness. It appears to be the immediate subject of what some call qualia (Nagel, 1974).

You refer to both these aspects of the self all the time. If someone asks you who is experiencing the sensations and percepts your brain generates, you say “I am,” not “the amalgamation of atoms that form the human being that is uttering this sentence is.” When you speak, you refer to “me” and “my,” as if there were some transtemporal entity like a soul sitting inside your skull. Thus, even if we aren’t substance dualists, we all act as if we were. We readily distinguish between “I” and the outside world. It is in this way that the existence of the self is intuitive.

But if you simply direct attention inward, toward the contents of experience in the present moment—the feeling of the seat pressing up into the body, the temperature of the air in the room—you will find that there are only sensations.[3] The feeling of self arises when you connect these sensations to another sensation: that of being a subject of experience. A modified version of Block’s (2007) refrigerator light illusion applies here: when we aren’t introspecting on the nature of selfhood—e.g. when we’re lost in a good conversation or deep in a meditative state—it’s uncertain whether the light of selfhood is on or off. But when we check to see if it’s there—when we wonder what the other person thinks of us, or think about some social fumble we made in the past, or actively think about the nature of selfhood—the light of selfhood is on (Blackmore, 2016).

1.3 An evolutionary account of intuitions

I claimed that the existence of the self is intuitive. By this I mean that the feeling of being a subject of experience produces a minimal amount of internal confusion. It simply feels like it’s true. However, as the history of science shows, just because something feels intuitive doesn’t mean it’s true; and just because something feels unintuitive doesn’t mean it’s necessarily false. The idea that the Earth orbits the Sun is unintuitive; the idea that our biological blueprint is encoded in a substrate called DNA is unintuitive; and the idea that our universe is composed of microscopic particles is unintuitive. Yet we treat these explanations as true because their explanatory power is satisfying enough to make us disregard our intuitions.

Selfhood is intuitive because evolution selected for us to be prosocial naive-realists who survive and reproduce. It selected for us to experience life as if we were looking through a transparent, diaphanous pane of glass onto the contents of experience (Moore, 1903; Tye, 2002). It did not select for us to see the pane of glass and try to explain what it is made of. This is one of the reasons why arguing that the self is a hallucination is so unintuitive: our given “epistemic situation with regards to consciousness”—feeling that we are subjects with direct, epistemically-privileged access to the contents of our subjective experience—makes it difficult to rationally question the nature of consciousness (Ball, 2009).

But again, just because feeling we are subjects of experience is intuitive does not make it true. Assuming intuitions are a good guide to truth is a naturalistic fallacy of sorts. It’s fallacious because it presupposes that evolution selected for us to understand the true nature of reality, which, as just established, is not how evolution necessarily works.

Visual illusions illustrate this well: we are constantly generating reality through an internal model, a model that uses heuristics and is biased in profound ways. We overcome these biases by extending our cognition with empirical tools: the telescope, the microscope, and mathematics, for example. In the case of the Ebbinghaus illusion (see the figure below), while the left orange circle appears smaller than the right, we can measure the diameters of the two circles to see that they are actually the same size. This empirical explanation is satisfying enough for us to disregard the intuition that one circle is smaller than the other.

The Ebbinghaus illusion.

2.1 The predictive processing theory, illusions and hallucinations

Setting aside the self for a moment, let’s turn our attention to the predictive processing theory of cognition, an overarching model of how the brain works. The model argues that our brain does not directly see reality in a naive-realist fashion; rather, our brain is constantly predicting and simulating what we call reality. We can explain the Ebbinghaus illusion through this model. Per the model, our misinterpretation of the visual data is caused by our prior expectations (i.e. our priors). Our brain uses a top-down generative model—items surrounded by smaller items look bigger—to construct our visual reality from the incoming bottom-up sense data. Illusions like the Ebbinghaus illusion occur when these models are tricked (Clark, 2016). However, these models are quite useful, as they allow us to simplify all the sensory data the brain receives; were we not to have them, our brains would be overflowing with information.

On its own, the predictive processing theory is nothing new. In fact, it originates in the work of 19th century physician Hermann von Helmholtz (Clark, 2016). However, the so-called Bayesian brain hypothesis adds a probabilistic, quantitative component to the predictive processing model and allows us to cash out our psychological models in terms of Bayesian language (Friston, 2010). Recent work (Abbott et al., 1997; Yu & Dayan, 2005; Doya, 2002; Friston, 2010) in computational neuroscience supports the idea that we are Bayesian prediction machines, showing that belief and uncertainty are instantiated on a neural level through changes in synaptic gain (i.e. changes in the strength of connection between the presynaptic and postsynaptic terminal) and competition between local populations of neurons, among other mechanisms. As Anil Seth and Karl Friston put it, “the brain is not an elaborate stimulus-response link but a statistical organ that actively generates explanations for the stimuli it encounters” (Seth & Friston, 2016).

The Bayesian brain hypothesis is intimately tied to the free-energy principle, an idea from statistical thermodynamics and information theory. It states that “any self-organizing system that is at equilibrium with its environment must minimize its free energy” (Friston, 2010). For our purposes, free energy is synonymous with entropy, a measure of disorder or uncertainty. Physical systems (like humans) must reduce uncertainty in order to resist their tendency toward entropy. We do this by changing our predictions about the environment—predictive processing—and by changing the data we receive by changing our environment—so-called “active inference” (Friston, 2010). I will expand on the importance of the free-energy principle in section 2.4.
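To make the Bayesian picture concrete, here is a toy sketch of the arithmetic behind precision-weighted prediction: a single Gaussian belief updated by noisy observations. The function, numbers, and parameterization are my own illustration, not anything taken from the predictive processing literature.

```python
# Toy sketch (my own illustration, not from the predictive processing
# literature): a single Gaussian belief updated by precision-weighted
# prediction errors from noisy observations.

def update_belief(prior_mean, prior_var, obs, obs_var):
    """One Bayesian update of a Gaussian belief given a noisy observation."""
    prediction_error = obs - prior_mean
    # Precision weighting: trust the observation in proportion to its
    # reliability relative to the prior.
    gain = prior_var / (prior_var + obs_var)
    posterior_mean = prior_mean + gain * prediction_error
    posterior_var = (1 - gain) * prior_var  # uncertainty shrinks with each update
    return posterior_mean, posterior_var

mean, var = 0.0, 1.0           # vague prior about some hidden cause
for obs in [2.1, 1.9, 2.0]:    # repeated sense data
    mean, var = update_belief(mean, var, obs, obs_var=0.5)
print(round(mean, 2), round(var, 3))
```

Each update moves the belief toward the data in proportion to how reliable the data are relative to the prior, and shrinks the remaining uncertainty; minimizing prediction error in this way is the basic move predictive processing attributes to the brain.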

2.2 Predictive processing of the self model

With the predictive processing theory and Bayesian brain hypothesis in place, we can now step outside our limited epistemic situation and put subjective experience within a broader context. If we take the predictive processing theory seriously, it has two interesting implications.

We do not directly “see,” “hear,” or “smell” anything. All our sensations and perceptions are internally generated, filtered through top-down control. For example, when you hear a car alarm go off, your brain slowly habituates, attenuating the signal. We can explain this in Bayesian terms: the sound initially had a high degree of surprise—you weren’t expecting a car alarm to go off—which is why it was so salient to you. But over time you came to expect the repeated blaring of the car horn, and it became less salient. The car horn stayed the same, but repeated sense data caused your top-down predictive models to expect the sound, allowing you to suppress your Bayesian surprise; this manifests itself in the experience of not hearing the alarm as loudly (unless you attend to the sound again). Or, instead of habituating to the sound, perhaps you went somewhere where the horn couldn’t be heard, a form of active inference.
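The habituation story can be caricatured in a few lines. This is a hedged illustration of my own (the update rule and numbers are invented for exposition): as the estimated probability of the alarm sounding rises, the Shannon surprisal −log p falls.

```python
import math

# A caricature of habituation as shrinking Bayesian surprise. The
# update rule and numbers here are invented for exposition only.

def surprisal(p):
    """Shannon surprisal of an event with probability p, in nats."""
    return -math.log(p)

p_alarm = 0.01                  # prior: car alarms are rare
surprises = []
for _ in range(5):              # five repeated blasts of the alarm
    surprises.append(surprisal(p_alarm))
    # nudge the expectation toward "the alarm is sounding"
    p_alarm += 0.2 * (1.0 - p_alarm)

# surprise falls as the sound comes to be expected
print([round(s, 2) for s in surprises])
```

The first blast carries high surprisal; each repetition raises the expectation and lowers the surprisal, which is the quantity the brain is said to be suppressing when the alarm fades from salience.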

As shown with the Ebbinghaus illusion, our internal models often cause us to misinterpret incoming sense data. However, if we are constantly predicting and constructing reality in a top-down fashion instead of passively viewing it from the bottom-up, then we might not just be subject to the occasional illusion: we may be hallucinating our reality all the time. By hallucinate, I do not mean that we are misperceiving incoming sense data. I mean that we erroneously generate sensations from within, e.g. the sensation that all the physical things happening in the present moment are part of some grand subjective experience we are undergoing. This hallucination is at the core of what it feels like to be a self, to be a locus of consciousness. It is bold to claim the self is a hallucination, but in the next section I’ll explain the ways this hallucination may be implemented.

2.3 The rubber-hand illusion and other experimental perturbations of interoception

The rubber hand illusion (RHI) is a simple example of how the embodied self is integrated and how it can easily be perturbed. In the RHI, the subject’s hand rests on the table and is occluded from view by a box; a rubber hand is then placed in front of the subject. As the subject looks at the rubber hand, the experimenter strokes both hands simultaneously. Subjects report feeling like the rubber hand is their own (Botvinick & Cohen, 1998). When the rubber hand is threatened—for example, when the experimenter stabs it with a fork—many subjects show a statistically significant response in areas of the brain associated with anxiety, such as the insula (Ehrsson et al., 2007).

The RHI is quite a simple example of how the predictive mechanisms of the brain can be fooled. But recent experimental work (Suzuki et al., 2013) shows that our sense of embodied selfhood can be perturbed in much more significant ways. In one experiment, subjects wore a virtual reality (VR) headset and a heart-rate monitor. The subjects looked at their hand through the VR headset. In one condition the hand pulsed in sync with their heartbeat; in another condition it did not. Subjects reported feeling much more identified with the VR hand when it pulsed in sync with their heartbeat, suggesting that the brain’s model of embodied selfhood inferred the hand to be part of the body.

These experiments and others (see Slater et al., 2010 for an example of a VR-induced full-body illusion) suggest that feelings of body ownership are due to multiple levels of feedback between the interoceptive signals your brain receives (e.g. your brain unconsciously monitoring your heart rate) and the brain’s predictions of these signals. For example, your hand feels like your hand because the visual data you receive when you watch it move are similar to the motor signals that generate the movement of the hand. When the visual data do not match the motor data, this creates a large “prediction error” and tells your brain to update its predictions. This self-modelling of the body through predictive coding has been called “active interoception” (Seth & Friston, 2016). Per this model, what we call emotions are high-level predictive concepts that help our brain make sense of the low-level autonomic signals it receives from throughout the body. When we stub our toe, what we identify as pain is caused by a high-level prediction of the causes of the inflammation in our toe, the localization of those signals to our toe, and a slight decrease in affect, among other things (Seth, 2013).

Both the RHI and the VR experiment show how the embodied self is a generative model our brain uses to make sense of the environment. Thus the hand that is your own is not necessarily the one that is attached to your body, but the one you feel is attached to your body. But, as I’ve argued, there is no special subject of experience to whom the hand belongs; the feeling of being a subject is merely a hallucination the brain generates in order to reduce uncertainty, as I’ll show in the next section.

2.4 An evolutionary explanation of embodied selfhood

Selfhood is a way of minimizing free energy, of better predicting future states of the environment and thus being better adapted to them. Setting aside our human conception of selfhood, imagine a single-cell organism whose survival depends on maintaining boundaries with the external world. In order to maintain homeostasis, certain things need to come in (e.g. food), and other things need to stay out (e.g. intruders). The cell accomplishes this feat by recognizing patterns in the environment, patterns of what is part of itself and what is not. Though it does not think “my DNA is part of me and this virus is not,” it acts as if this were the case: it uses its proto-immune-system to attack intruders and protect its internal environment. This is a form of basic selfhood: distinguishing between that which is me (i.e. that which shares my genetic interests) and that which is not.

What we call embodied selfhood comes from the same imperative the single-cell organism was following: maintain homeostasis by controlling the boundary between me and the outside world. Yes, our sense of selfhood is much more complicated, a pattern-recognition schema that evolution produced through billions of years of selection. But it is the same form of pattern-recognition nonetheless.

What we call the narrative self is an extremely high-level version of this embodied selfhood. There are reasons why evolution would have selected for us to attribute a higher-level narrative self to ourselves and others. Attributing a narrative self to ourselves allows us to plan our own behavior, which would have been hugely beneficial for survival and reproduction. Attributing a narrative self to others allows us to predict their behavior and therefore cooperate with them. This facilitates the emergence of prosocial behavior, hence why it was selected for (perhaps via group selection). Add on top of that the ability to articulate beliefs and desires in terms of language, and a positive feedback loop develops. Framed in this light, it’s clear why we feel we are subjects of experience.

2.5 Narrative selfhood and inducing ego dissolution

Evolution did not select for us to cut through the hallucination of the narrative self, just as it didn’t select for us to question our agency and the nature of free will. However, empirical work on altered states of consciousness induced by psychedelics and meditation shows that the hallucination of the self can dissolve temporarily.

Psychedelics and meditation often occasion what is termed “oceanic boundlessness” or “ego dissolution,” a breakdown of what we’ve thus far referred to as the narrative self (Carhart-Harris & Nutt, 2010). One might view these altered states of consciousness as hallucinations. However, I believe they are examples of the dissolution of the high-level predictive schema the brain uses to generate the hallucination of the self.

Carhart-Harris et al. (2014) recently proposed a theory to explain psychedelic states called the entropic brain hypothesis. Using functional connectivity neuroimaging, which measures correlations between different voxels in the brain while at rest, they argue that psychedelic experiences are examples of primary states, which are characterized by low information integration and thus high entropy. Conversely, normal conscious waking life is an example of a secondary state, characterized by low entropy and a high degree of information integration in the brain.[4]

Carhart-Harris et al. measured what they call “entropy” by looking at the variance within a network of the brain (e.g. the variance in activation of voxels within the default mode network); networks with high variance were defined as having high levels of entropy. There were two conditions: a placebo control condition and an experimental psilocybin condition. In the control condition, with subjects in a normal conscious state, they found little entropy in any brain network. In the experimental psilocybin condition, they found increased entropy in many of the higher-level networks, like the default mode network (which has been associated with social cognition and the narrative self) and the salience network.
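As a rough illustration of this operationalization (with simulated numbers, not real neuroimaging data), the variance-based entropy proxy amounts to something like:

```python
import random
import statistics

# Simulated numbers only, not real neuroimaging data: "entropy" is
# operationalized, as in the study, as the variance of activity
# across a network's voxels.

random.seed(0)

def network_entropy(voxel_signals):
    """Variance-based entropy proxy: higher variance, higher entropy."""
    return statistics.pvariance(voxel_signals)

# Simulated default-mode-network activity under the two conditions.
placebo = [random.gauss(0, 0.5) for _ in range(200)]     # tightly constrained
psilocybin = [random.gauss(0, 2.0) for _ in range(200)]  # more variable

print(network_entropy(placebo) < network_entropy(psilocybin))
```

On this toy construction, the psilocybin condition shows higher "entropy" simply because its simulated activity is less constrained, mirroring the study's finding of increased variance in higher-level networks.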

Homological scaffolds representing brain network connectivity in the placebo (left) and psilocybin (right) groups.

While Carhart-Harris et al. did not offer a fully mechanistic account of how psychedelics such as psilocybin cut through the hallucination of the self, their entropic brain hypothesis is a promising explanation, and it fits in nicely with the predictive processing account of cognition. They argue that psychedelics relax our top-down priors, the high-level predictive schema we employ in day-to-day life, such as “this is who I am” and “this is my body,” hence the feelings of ego dissolution and oceanic boundlessness.

Psychedelic experiences and their neural correlates are not evidence for the self being a hallucination. However, they do show that it is possible to cut through the first-person, transparent perspective through which we view the world, and thus shift our intuitions about the nature of experience in profound ways.

3.1 On shifting intuitions

In the previous two sections I’ve attempted to define the multiple facets of the term “self,” situate it within the predictive processing theory, and provide an account of how and why the brain generates it. However, you may still feel that you are a subject of experience. This is understandable. As I’ve stated, evolution selected for our intuitions about selfhood to be deeply hardwired in our brain; these intuitions are only changed through serious effort (e.g. a 3-month vipassana meditation retreat or a strong dose of psychedelics). From a functionalist perspective, I myself probably don’t even believe the self is a hallucination (see, I just referred to “I,” “myself,” and “believe”).

I’ve argued that these intuitions ought to be doubted because we are merely machines running complex software. The real authority, and where we should look for explanations of consciousness, is the designer of it all: the forces of evolution by natural selection. However, even if you have your doubts, if for a moment we assume the null hypothesis that our experience of selfhood is nothing more than physical patterns of neural firing, where are we left metaphysically?

We are left with the following: there’s no Cartesian theater or spotlight of consciousness. Every neuron in your brain is an actor and every glial cell a member of the crew, and they’re orchestrating the most magnificent theater production imaginable. It’s just something they all happen to do as an accident of billions of years of evolution. Sometimes the theatrical ensemble may act as if there is an audience there, but alas, there is no audience. There is no person sitting in the lighting room and moving the spotlight of consciousness, and there is no Cartesian theater in the brain of the hypothetical person sitting in the lighting room. If there were, we’d face an infinite regress, the homunculus problem. We can avoid this by biting the bullet and accepting that what we call the self is an extremely strong hallucination our brain generates in order to facilitate homeostasis.

With respect to qualia, if there is no metaphysically-special self, then there is no reason to think that we have epistemically-privileged access to our conscious experience; this is because there is no self, no subject of experience, that is distinct from the sensations (i.e. patterns of neural firing) being experienced. While the organism that is you has direct access to the neural firing going on in its brain, this doesn’t mean that you—the hallucinatory subject of experience—have qualitatively distinct access to what’s going on in your brain as compared to me. Thus there is no reason to treat qualia as some ineffable, metaphysically-special class of properties to which we have privileged access.

It follows that if I had an exact atom-for-atom representation of what’s going on in your brain and had a sophisticated enough model of my own neural architecture, then I’d understand your experience as well as you do. And if I had sophisticated technology that allowed me to selectively activate my brain in very precise ways, then I’d be able to recreate the “experience” you’re undergoing.

You might object, “if everything I perceive is a hallucination, does anything exist at all?” As a physicalist, I’d say yes.[5] The world is made of physical matter—atoms, quarks, etc.—and our brain represents different organizations of this matter as patterns of neural firing. But that is it. Believing—that is, acting as if you believe and saying you believe—that there exists some class of metaphysically-special properties called “the mental” is merely a result of this complex neural firing.

It also follows that the representation of “cat” in my brain does not possess some special mental property of “catness.” When we refer to the concept “cat,” we’re referring to a physical pattern found in the weighted connections between neurons in human brains; this pattern is isomorphic to those organisms out in the world we call cats. For example, there is some level of retinotopic mapping in higher-level visual areas such that my neural representation of “cat” is cat-like, in the sense that it bears some, albeit convoluted, resemblance to the physical structure of cats. We say “that’s a cat” when we see what is putatively a cat; and if you ask me about my cat, Dixie, I might report experiencing a visual representation of an animal that likes to kill mice and birds and bring them to my doorstep. But we can cash this all out in functionalist terms, and so there’s no need to appeal to a special mental experience which supervenes on the physical representation of a cat in your brain. The appearance of this special mental component is merely part of the greater hallucination of selfhood.

3.2 Realism, Pragmatism, and the intentional stance

We ought to favor a physical explanation of the self over a dualist one if only because it’s more consistent with physicalist explanations of the rest of the world, and therefore more parsimonious. In addition, physicalist theories offer greater potential explanatory power, whereas dualist theories inevitably run into a wall (or a gap, as it were).

That being said, just as we can’t live acting as if cats aren’t real, we can’t live acting as if the self isn’t real. Believing the self isn’t real would lead to a high degree of psychopathology, something akin to dissociative identity disorder. So, for the sake of social function, we ought to take the “intentional stance” toward the self, acting as if it exists and as if other human beings are real subjects of experience (Dennett, 1991). If we didn’t treat other people as agential, subjective beings, society would collapse. So in the Pragmatist, Darwinian sense of Truth, selves do indeed exist.


Abbott, L. F., Varela, J. A., Sen, K. & Nelson, S. B. 1997. “Synaptic Depression and Cortical Gain Control.” Science 275:220–224.

Ball, Derek. 2009. “There Are No Phenomenal Concepts.” Mind, 118(472): 935-962.

Blackmore, Susan. 2016. “Delusions of Consciousness.” Journal of Consciousness Studies, 23(11): 52–64.

Blanke, Olaf & Thomas Metzinger. 2008. “Full-body Illusions and Minimal Phenomenal Selfhood.” Trends in Cognitive Science, 13(1): 7-13.

Block, Ned. 2007. “Consciousness, Accessibility, and the Mesh Between Psychology and Neuroscience.” Behavioral and Brain Sciences. 30: 481-548.

Botvinick, Matthew, & Jonathan Cohen. 1998. “Rubber Hands ‘Feel’ Touch That Eyes See.” Nature 391(19): 756.

Carhart-Harris, R. L., and Nutt, D. J. 2010. “User Perceptions of the Benefits and Harms of Hallucinogenic Drug Use: a Web-based Questionnaire Study.” Journal of Substance Use, 15: 283–300.

Carhart-Harris, R.L., Leech, Robert, Hellyer, P.J., Shanahan, Murray, Feilding, Amanda, Tagliazucchi, Enzo, Chialvo, D.R., & David Nutt. 2014. “The Entropic Brain: A Theory of Conscious States Informed by Neuroimaging with Psychedelic Drugs.” Frontiers in Human Neuroscience, 8(20): 1-22.

Clark, Andy. 2016. Surfing Uncertainty: Prediction, Action, and the Embodied Mind. Oxford University Press.

Dennett, Daniel C. 1991. “Real Patterns.” The Journal of Philosophy, 88(1): 27-51.

Doya, K. 2002. “Metalearning and neuromodulation.” Neural Networks. 15: 495–506.

Ehrsson, Henrik H, Wiech, Katja, Weiskopf, Nikolaus, Dolan, Raymond J., & Richard E. Passingham. 2007. “Threatening a Rubber Hand that You Feel is Yours Elicits Cortical Anxiety Response.” PNAS, 104(23): 9828-9833.

Friston, Karl. 2010. “The Free-energy Principle: A Unified Brain Theory?” Nature Reviews Neuroscience, 11:127-138.

Levine, Joseph. 1983. “Materialism and Qualia: The Explanatory Gap.” Pacific Philosophical Quarterly, 64: 354–361.

Limanowski, J. & Blankenburg F. 2013. “Minimal Self-Models and the Free Energy Principle.” Frontiers of Human Neuroscience 7:547.

Moore, G.E. 1903. “The Refutation of Idealism.” Mind, 12: 433-53.

Nagel, Thomas. 1974. “What Is It Like to Be a Bat?” The Philosophical Review, 83(4): 435–450.

Raatikainen, Panu. 2015. “Gödel’s Incompleteness Theorems.” The Stanford Encyclopedia of Philosophy, ed. Edward N. Zalta.

Seth, Anil K. 2013. “Interoceptive Inference, Emotion, and the Embodied Self.” Trends in Cognitive Science, 17(11): 565-573.

Seth, Anil K., & Karl Friston. 2016. “Active Interoception and the Emotional Brain.” Phil Trans. R. Soc. B 371: 1-10.

Slater M., Spanlang B., Sanchez-Vives M.V., & Blanke O. 2010. “First Person Experience of Body Transfer in Virtual Reality.” PLoS ONE 5(5): e10564.

Suzuki, Keisuke, Garfinkel, S.N., Critchley, H.D., & Anil K. Seth. 2013. “Multisensory Integration across Exteroceptive and Interoceptive Domains Modulates Self-experience in the Rubber Hand Illusion.” Neuropsychologia, 51(13): 2909-2917.

Tononi, G. 2012. “Integrated Information Theory of Consciousness: An Updated Account.” Archives Italiennes de Biologie, 150: 290-326.

Tye, Michael. 2002. “Representationalism and the Transparency of Experience.” Noûs, 36(1): 137-151.

Yu, A. J. & Dayan, P. 2005. “Uncertainty, Neuromodulation and Attention.” Neuron 46: 681–692.

[1] Granted, it still currently leaves many questions unanswered, such as what preceded the Big Bang, but these questions seem to be answerable in principle.
[2] This is analogous to mathematician Kurt Gödel’s 2nd Incompleteness Theorem: no formal system of logic can prove its own consistency (Raatikainen, “Gödel’s Incompleteness Theorems”). It’s also analogous to David Hume’s claim about not being able to derive a normative “ought” from only a descriptive “is.” Only once you’ve established your axiomatic “ought”—e.g. the highest moral imperative is reducing the suffering of conscious beings—can descriptive facts about the world be brought to bear on ethics.

[3] Or, speaking from a physicalist perspective, there are only patterns of neural firing that you post-hoc identify as sensations.

[4] This measure of entropy is inversely related to what Tononi (2012) calls phi, integrated information.

[5] But, as stated in section 1.1, I have no proof of this. It’s merely a claim I take to be axiomatic.