When I started at Oxford University in 1979, I was told I'd have to take statistics as a minor for a major in philosophy and psychology. So I nervously went to meet the professor, worrying that I hadn't studied calculus in a long time. But he had been sitting in front of an Apple II, on a table in the Psychology Department lounge, for a solid week. He had read the manuals and written a couple of programs, then he had just sat there looking at it. For a week.

What is it like to be a bat?

When I arrived, he looked up and said, "Look! It's Oxford's first personal computer! What do you think?" And that was actually true. Even by the time I left three years later, Oxford still had only two personal computers for all 8,000 students.

So there it was, the first personal computer, and I just looked at it numbly. The professor continued, "When this arrived at the beginning of term, I had to rethink for a long time what to teach. They asked me whether they should buy more for all of you lot too, but I said that would be a waste of money for a long while. They are going to improve these machines so much that they will totally change, even by the time you finish your degree. So the real question for us now is not how to use a computer, but what are they really good for?"

I finally figured out something to say, at least complimentary if still stupid. "I don't know," I said. "That's the kind of thing I came here to learn, from people like you."

He smiled and said, "The real fact is, this technology is going to make enough pointless numbers to make hell boil over. So I'm throwing all the textbooks on what I should be teaching you out the window. I used to teach how to do the equations, but now these machines will do the equations for you. So my job now is to stop you from taking this thing for granted, and make you think about which of the numbers it generates have any meaning. Because I swear to God, you will be seeing everyone else citing whatever numbers it produces as some kind of irrefutable truth for the rest of your life."

The immediate upshot of that view was that statistics came to be tested by an open-book exam, as it is to this day. But it was only this last week that I realized how profound his remark was, far beyond that, because the same applies to words as to numbers. The same flood has gradually boiled over into all of philosophy: the computer has replaced people's search for meaning with a search for some empty model that means nothing, yet appears to describe everything. I witness people discussing Wittgenstein's theory as a 'game-theory simulation,' and stating that deflationary theories, up to and including truth nihilism, are realist rather than logical positivist. And that is what some who call themselves philosophers are doing. No wonder we now live in a 'post-truth era.'

Early Hopes for Artificial Intelligence

By the time I left Oxford University, Apple had almost finished its 'Lisa' computer, named for Steve Jobs' daughter; and when I moved to Silicon Valley, I had no idea I would have a long feud with Steve Jobs. Back at Oxford, people were still experimenting with an AI program called 'Eliza,' which could talk with people (Weizenbaum, 1976). If someone typed, "I had dinner with my father last week," Eliza would reply, "Tell me about your father." Pundits noted that many people preferred talking to the computer program rather than to a therapist, even though it had no sense of humor. For example, if you told it your gerbil had beaten an elephant in a thumb-wrestling match, it would ask how you felt about your gerbil.
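Eliza's trick can be illustrated in a few lines of pattern matching. The sketch below is only in the spirit of Weizenbaum's program, not his actual script: the rule patterns and canned responses here are invented for illustration.

```python
import re

# A minimal ELIZA-style responder. The rules below are invented for
# illustration; Weizenbaum's original script was far more elaborate.
RULES = [
    # Reflect a mention of a relation (or pet) back as a question.
    (re.compile(r"\bmy (father|mother|gerbil)\b", re.IGNORECASE),
     "Tell me about your {}."),
    # Reflect a stated feeling back as a question.
    (re.compile(r"\bi feel (\w+)", re.IGNORECASE),
     "Why do you feel {}?"),
]

def respond(utterance: str) -> str:
    """Return the first matching rule's reflection, or a stock prompt."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(match.group(1).lower())
    return "Please go on."

print(respond("I had dinner with my father last week."))
# -> Tell me about your father.
print(respond("My gerbil beat an elephant in a thumb-wrestling match."))
# -> Tell me about your gerbil.
```

The program understands nothing: it merely reflects keywords back, which is exactly why it asks about your gerbil with the same equanimity as about your father.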

Yet even so, people were so enamored that a machine could talk back at all that they wondered whether computers would become so human-interactive they would replace teachers altogether. At first it did not seem so far-fetched, but as people tried, the extent of the problem became very apparent. It took 30 years for MIT to develop even a simple puppet that could help teach children (Breazeal, 2002). What computers have really taught us is not how simple and predictable reality is, but rather that our minds are far more complicated than we ever appreciated before.

While some scientists continually hope to reduce the phenomenon of consciousness to some simply explicable mechanical model, the actual size of the problem continues to defeat even the most extensive efforts. The issue of consciousness therefore remains largely beyond the reach of psychology to explain in any definitive manner; the best it can offer is suggestions of simplified models. But the extraordinary complexity of just one mind, let alone the number of possible interactions between two minds, makes the problem akin to those called 'NP-complete': not solvable within any practical limits of time.

Neocortical Complexity

How powerful a computer would be required to model just the neocortex of one human brain? While people tend to think of computing power in terms of calculations, for brain modeling the issue is more one of storing sufficient data. According to the best estimates, it would require only about 2,000 current-day personal computers to model the activity of one human brain's neocortical area, if each neuron's activity required only one calculation. However, each of those calculations actually needs to affect about a thousand others, thus requiring about 2 million computers and creating a data-size problem. To provide sufficient storage for the interactions among all human brains, every single person on the planet would need one billion 1-terabyte disk drives each. To model the whole neocortex of one person, the 2 million computers modeling neocortical activity would need to access about 10 trillion disk drives per second, which is about 10 billion times faster than the Internet.

  • Specifically, according to scientific studies, there are 23.9 billion neurons in the human neocortex (2.39x10E10) and 164 trillion synapses (1.64x10E14). With an estimated world population of 7.5 billion in April 2017, that places the total number of human neocortical cells at 1.8x10E20, and the total number of synapses at 1.2x10E30. The number of possible interneuron relationships during human interaction is therefore 3.2x10E40, and the total number of intersynaptic relationships is 1.4x10E60 (Tang, 2001; Nguyen, 2010). If we compare that to the observable universe, there are between 10E23 and 10E24 stars, and between 10E76 and 10E82 atoms. For a comparison in humanly conceivable terms, there are about the same number of synaptic interactions possible on the planet as there are atoms in our galaxy (Howell, 2014; Villeneuva, 2015). Neuronal activity is difficult to estimate in computer terms, because the human brain expends more energy suppressing synaptic activity than actually firing connections. But if we consider active neurons alone, the firing rate of any one neocortical cell averages 0.14 Hz (AI Impacts, 2015). That places the total number of neocortical synaptic events at 2.3x10E13 per second per brain.
  • The fastest personal computers can now store about as much data as IBM's Watson system, some 4 terabytes (4x10E12 bytes). Let's assume 64 bits are enough to store the data value and suppressive state of each neocortical synapse, together with data fields indicating its interconnections. That would be 9.6x10E30 bytes, or a storage space equivalent to about 10E19 1-terabyte disk drives, giving rise to the conclusion at the beginning of this section. And that is the real problem with a complete model of human brain activity.
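The parts of this arithmetic that follow directly from the cited inputs can be checked in a few lines of Python. This is only a back-of-envelope sketch using the estimates quoted above (neuron count, synapse count, world population, mean firing rate); it reproduces just the figures derivable from those inputs.

```python
# Back-of-envelope check of the neocortex arithmetic, using the
# estimates cited above (Tang, 2001; AI Impacts, 2015).
NEURONS_PER_NEOCORTEX = 2.39e10    # ~23.9 billion neurons
SYNAPSES_PER_NEOCORTEX = 1.64e14   # ~164 trillion synapses
WORLD_POPULATION = 7.5e9           # April 2017 estimate
MEAN_FIRING_RATE_HZ = 0.14         # average neocortical firing rate

# Total neocortical neurons across all living humans.
total_neurons = NEURONS_PER_NEOCORTEX * WORLD_POPULATION
# Possible pairwise interneuron relationships during human interaction.
pairwise_relations = total_neurons ** 2
# Synaptic firing events per brain per second at the average rate.
events_per_brain_per_sec = SYNAPSES_PER_NEOCORTEX * MEAN_FIRING_RATE_HZ

print(f"{total_neurons:.1e}")            # -> 1.8e+20
print(f"{pairwise_relations:.1e}")       # -> 3.2e+40
print(f"{events_per_brain_per_sec:.1e}") # -> 2.3e+13
```

Whatever precision one grants the inputs, the point survives: the combinatorics of interacting brains outruns any plausible inventory of hardware.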

Consciousness and Bats

In the 1990s, Dennett expanded the argument for machine consciousness in his landmark book, Consciousness Explained (1991). This helped open a new debate in Artificial Intelligence around what became known as the 'hard problem' of consciousness. While it had been postulated before that a computer could model brain activity (due to the capacity of a Turing machine to execute any computable operation via logical rules of functional equivalence), it had not been extensively considered before how this could result in 'computer consciousness.'

The rival camp, which holds that consciousness might not be a direct property of matter, was led by thinkers such as Thomas Nagel in "What Is It Like to Be a Bat?" (1974); its members consider that Dennett should have called his book Consciousness Ignored. Nagel in particular argues that the experience of a bat, with its echolocating senses, is so different from human experience that it is not really possible to know what any consciousness a bat possesses would be like, if a bat has consciousness at all; and the same would be true of a computer consciousness.

What is notable, overall, is that the two camps argue their cases in totally incompatible manners. Consider, for example, McDermott's overview (2007), which places many of the arguments from both sides next to each other.

  • Those advocating hard AI theories point to the ability of propositional logic to explain all rational processes, and to the necessity of materialism for producing conscious experience. Others, such as Dawkins (2006), extend the same line of argument to claim that it disproves the existence of God.
  • Those advocating that consciousness is not definable in such terms might acknowledge that it is a product of material reality, but they do not accept that the conscious experience itself can be expressed in ones and zeroes, even if logical propositions can explain how material reality produces that experience.

When one listens to each end of the argument advance its views, it is rather as if each side is talking itself into its own position, rather than trying to understand the opposing perspective and answer it on its own terms. So as the argument has advanced, the two perspectives have gradually drifted apart, deeper into detailed examinations of their own positions, and they are increasingly less able even to render the opposing view in their own language.

The Resulting Disparities

The overall result, now, is each side denying the opposing view with increasing vehemence, rather than seeking to synthesize the perspectives. Hence all I can do, in this short topic, is frame the extent of the problem.

  • AI theorists will continue to find new ways to define ‘consciousness’ in new mathematical ways, baffling the public with enough numbers to make hell boil over.
  • Some philosophers of mind will continue to point out the problematic nature of defining what consciousness actually is, in terms that would be comprehensible not only to us, but to the computers too.

The hard AI argument has not even progressed far enough to consider that consciousness might not be the product of a single brain, but rather something conceived in the interaction between brains; which, if true, would require that we educate computers to consider experience the same way we do, a larger topic I started to explore in more depth on my blog (Meyer, 2014).


- Ernest Meyer