Integrated Information Theory

Integrated Information Theory, or IIT for short, is a brave attempt by Giulio Tononi and colleagues to capture scientifically the essence of what it is to be a rational, conscious mind. Its focus on information is surely correct, but at least in its current form it suffers from serious shortcomings.

The mind is information

When we look for the "seat of consciousness" in our minds, there is a growing acceptance that it cannot be found simply by examining the brain in minute detail. The wiring of the brain's thinking centres does not encapsulate what we are thinking about any more than the wiring of a computer chip encapsulates the program running on it. To understand thoughts and experiences we must turn not to the wiring but to the information flowing along those circuits. If we stop the information flowing in our thinking centres, the wiring remains but consciousness vanishes. Reawaken consciousness and the buzz of information passing to and fro resumes. It is clearly the information in these regions of our brains which makes us consciously aware of ourselves and our experiences. Indeed this information completely characterises any given experience: change the information in the slightest degree, say with a little electrode, and you change the experience. Brain surgeons use such techniques increasingly often to help diagnose brain functions during an operation. In this way, we come to understand the human mind as a vast store of information, constantly updating itself.

The nature of complexity

But not every vast and busy database is conscious. Studies of mental states suggest that consciousness involves not only huge amounts of data but the ability to join the right bits together, say to join the sight of my fingers with the feel of the keyboard under them, the deliberate moving of my arms and the ideas I want to share, all brought together in the act of typing. Integrated Information Theory suggests that this bringing together, or integration, of information is the key to consciousness.

For Tononi, a neurologist and sleep researcher, this matters because it explains how a conscious mind can fall asleep: the brain simply stops integrating so much stuff.

Yet he makes one astonishing blunder. He suggests that a sufficiently interconnected array of information will be conscious even when the array is static and no information is being processed. This is arrant nonsense. Consider dumping that array to a printer. Are we seriously to believe that the resulting mile-high pile of paper is conscious? Even if we do subscribe to some form of panpsychism (which Tononi explicitly denies), it applies regardless of the sophistication of the Universe's information integration and has nothing whatever to do with Tononi's theory. Proponents of IIT point to the static nature of conscious experience during deep meditation, suggesting that any theory must accommodate such stasis. I would in turn suggest that there is a fundamental difference between a smoothly-flowing river and a stagnant pool. The brain is not idle during meditation; it is still measurably active.

It is not enough simply to examine different causally-related states and describe that relation as something called "time". Rather, we must understand consciousness to be a flow of information, a dynamic complexity in real time, like a film reel unwinding and being projected onto a screen as a moving picture. The film analogy is particularly apt, as the human brain employs exactly this kind of stop-frame animation to capture our visual sense data and integrate it into a conscious experience of smooth movement. For Tononi to describe the differences between frames, and expect to have thus described the experience of motion which the brain extracts from those differences, is wholly inadequate.

Axiomatic foundations

IIT rather pretentiously defines the properties of conscious experience as a set of axioms. Such axioms of consciousness place IIT squarely in the arena of philosophy, not only of the mind but also of formal logic. But Tononi is a doctor of medicine, not of philosophy. It can come as no surprise to anybody with genuine philosophical knowledge that his axiomatic edifice does not stand up to scrutiny.

His first axiom is merely Descartes' cogito ergo sum in new clothes; our conscious experience is the foundational reality which affirms our personal existence. The Buddha would have had a few things to say about that anyway, not least that all of conscious experience is by its very nature illusion.

His remaining axioms model the information that consciousness attaches itself to. To my eye they appear a muddled collection of basic imperatives for any formal system, together with some cod definitions of acceptable substructures within a complex experience. They may make sense to a psychiatrist; they make very little sense to a philosopher (and believe you me, philosophers have entertained some cray-zee notions in their time!).

It is important to understand that the mathematical fun and games which bulk out the theory are thus built on sand. They might or might not prove a consistent edifice. But even if they do, any relation to human (or any other) consciousness is far from guaranteed.

Levels of integration

The next step in the theory is to provide a mathematical description of integrated information, so that the level of integration of any data set can be calculated. The higher the level of integration, the more conscious the information becomes. Physicist Max Tegmark encapsulated the principle beautifully when he remarked:

"consciousness is what information feels like when it reaches a certain level of complexity".

I find it hard to disagree with that.

So, how is the level of integration calculated? Well, let us say that several formulae have been bandied about, but the theory is at far too early a stage for these to be more than poster children for a useful definition. Some critics have dismissed IIT because these embryonic ideas are, well, so embryonic. For example, if you were to take some massive database and compress it into one file, you might find that, according to your preferred formula, the act of compression adds hugely to the integration of the output file. Yet nobody (except perhaps Tononi) would claim that a compressed archive of Google Earth is conscious.
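
To give the flavour of such a calculation, here is a deliberately crude sketch in Python. It implements none of the published formulae; it simply scores a made-up three-node boolean network by the mutual information shared across its weakest bipartition, a drastically simplified stand-in for the real Φ. The wiring and every name in it are invented for illustration.

    import itertools
    import math
    from collections import Counter

    def step(state):
        """One update of a made-up 3-node XOR network (pure illustration)."""
        a, b, c = state
        return (a ^ b, b ^ c, c ^ a)

    def entropy(counts):
        """Shannon entropy, in bits, of an empirical distribution."""
        total = sum(counts.values())
        return -sum(n / total * math.log2(n / total) for n in counts.values())

    # Assume a uniform spread over the 8 possible current states.
    states = list(itertools.product((0, 1), repeat=3))
    nexts = [step(s) for s in states]

    def cross_partition_info(part):
        """Mutual information between one side of a bipartition of the
        network's next state and the other side."""
        rest = tuple(i for i in range(3) if i not in part)
        pairs = [(tuple(n[i] for i in part), tuple(n[i] for i in rest))
                 for n in nexts]
        joint = Counter(pairs)
        left = Counter(l for l, _ in pairs)
        right = Counter(r for _, r in pairs)
        return entropy(left) + entropy(right) - entropy(joint)

    # "Phi" as the weakest link: even the least-connected split still
    # binds the network together this tightly.
    phi = min(cross_partition_info(p) for p in [(0,), (1,), (2,)])
    print(f"toy phi = {phi:.3f} bits")   # -> toy phi = 1.000 bits

Swap the XOR wiring for an identity map (each node simply copying itself) and phi drops to zero: plenty of data, no integration. The compression worry is precisely that a cleverly packed file could score high on some such measure without possessing anything we would call a mind.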

In truth, all these early models have shown is that we do not yet understand the brain well enough to create a useful definition of conscious-level integration. Indeed, the failure to incorporate dynamic information flow and transformation into the model is disastrously naive. The qualitative and time-related characteristics of the integration are at least as important as the sheer complexity and extent. For example, one might expect complex dynamic cross-relationships, far more layered than mere data compression. Along with a high level of dynamic flow or change, key structures such as internal representations of time passing while things happen, of one's self as a conscious entity, and so forth, almost certainly need to be present.

But to dismiss the theory on account of its crude first steps is unfair. It would be more useful to propose ways of taking the theory forward.

The hard problem

In the theory of mind, there is an issue known to philosophers simply as "the hard problem". In a conscious mind, every nuance of information is accompanied by a subjective experience. For example, every time my brain signals it has seen something red, I experience a visual quality of redness. We say that the particular brain signal and the particular experience are "correlates" of one another. The trouble is that, no matter how exactly and minutely anyone may describe that pattern of visual information, nowhere does the subjective quality of redness appear in that description. Nor does asking me help you very much. I will just say, "Yes, it was red", an answer which you could have predicted from the brain signal anyway. But what I cannot communicate to you is what that quality of redness felt like. You too probably feel an experience of redness when you see something red, but equally you cannot explain its quality to me. On the other hand, perhaps you are colourblind and experience something else. Worse still, whether you are colourblind or not, there is no way of telling whether your experience and mine are anything like each other's. I have no idea of how you, personally, experience redness and, apparently, no way of ever knowing. Explaining these gaps, between the physical neural phenomenon and the internal "quale" of each different person's experience, is the hard problem.

Some people hotly deny that there is any problem. They see a complete, logical identity between the brain signals and the inner experience: there is no gap because they are just the one thing. And indeed, any good logician will tell you that if two things are, by any applicable yardstick, identical in character, then they must be the same thing. The argument then goes that, since inner experience is inaccessible to objective science, the only possible scientific yardstick is the measured brain signal. This signal can then be correlated anecdotally to the quale, but that is just how the brain signal manifests in the mind; it is not something additional or of a wholly different kind. This is in essence a flat denial that there is any kind of distinction between a pattern of signals in the brain and the feeling of redness that correlates with it. One might note that it is hard to argue with a flat denial; one can only disagree.

IIT introduces a highly significant subtlety into the picture: the idea that an experience is information. Writers on IIT tend to include the information in with the subjective aspect as part of what they mean by a quale. However, information, as encoded in the neural correlate, is accessible once one has learned the way the brain works, while the root philosophical motivation for qualia is that they are by definition inaccessible. Setting aside this philosophical hiccup, the nature of this information is still well worth pursuing.

One might seek to lump in the information with the neural signal instead, but here too there is a difficulty. No two brains are wired exactly alike, and no two signals are directly comparable synapse-by-synapse. The quale accompanying a signal also depends on the context in which that signal occurs; in one place it might be coding for blue, in another G flat, or the smell of a cabbage, or kicking out to the right. Moreover the brain changes with time, and correlations may change with it. The best we can do is to establish what information that particular signal pattern in that particular place at that particular time is carrying.

Thus, we have an apparent triptych of correlates - neural activity, information content and subjective experience. Yet the information (of a certain shade of red) remains unchanging, directly correlated only with the quale. This has led some to question the notion of a neural "correlate" at all. IIT neatly sidesteps this by focusing on the information.

However, IIT is quite unable to shed light on the hard problem. Whatever information we succeed in identifying through brain scans, or any other method, we are still none the wiser as to the experiential quality - the quale - of that information. The first axiom of IIT states that consciousness exists; it is a done deal. All the remaining axioms do is model the information it attaches itself to. In doing this, IIT in effect acknowledges the hard problem but deliberately sidesteps any attempt to grapple with the consequences.

For example, suppose that we eventually produce some marvellous and ingenious equation for integrated awareness, which captures a threshold of consciousness that doctors can reliably apply to patients, simply by plugging their brain scanner into a computer. The computer reports a conscious sensation of redness, the doctor shows the patient a red card and the patient says, "Yes, it is red". What is this quality of redness that has attached itself to the mental information and the patient has experienced? IIT leaves us not the slightest bit wiser than we have ever been.

The limitations of IIT

Besides its muddled axioms and its gross mathematical immaturity, IIT fails to sensibly model the stream of consciousness, i.e. consciousness as information flowing in time. And, through a fundamental philosophical misunderstanding of the nature of subjective experience, it fails to address the hard problem.

The first three problems can potentially be put right. Giving its axioms a professional philosophical makeover would be a good start. Turning its mathematical expression into a dynamic flow (would it be naive to suggest a derivative with respect to time along the lines of Φ = dp/dt where p is the complicated bit?) might help open the path to a more realistic mathematical model. But it had best avoid the hard problem and make no pretence of meeting it.
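
By way of a minimal sketch of the dp/dt suggestion above, and assuming only that some static integration score p(t) (such as the toy measure sketched earlier) can be sampled at successive instants, the idea amounts to taking first differences. The function and its names are hypothetical:

    # Purely illustrative: a finite-difference reading of "Phi = dp/dt".
    # p_values holds a hypothetical static integration score p(t) sampled
    # at successive instants; dt is the sampling interval.

    def dynamic_phi(p_values, dt=1.0):
        """Approximate dp/dt by first differences over the samples."""
        return [(b - a) / dt for a, b in zip(p_values, p_values[1:])]

    # A stagnant pool may score high on p, but its p never changes:
    print(dynamic_phi([3.0, 3.0, 3.0]))    # -> [0.0, 0.0]
    # A flowing river keeps the measure alive:
    print(dynamic_phi([1.0, 2.5, 1.75]))   # -> [1.5, -0.75]

On this reading a static array, however richly interconnected, scores zero, which is exactly the intuition argued for above.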

All the theory really amounts to at the moment is the idea that consciousness is a property of information rather than of physical objects, one which requires a high degree of organised complexity. It may one day help us to quantify consciousness, which might be a help to neurologists, animal psychologists and artificial intelligence researchers, but it can never explain it. At best it might shed light on the relationships between such things as intelligence, conscious states, sentience and self-awareness. Ultimately, it can never be more than a theory of mental information. It should stop pretending to be anything else.

Updated 10 Dec 2022