Machine Consciousness is Inevitable

Can a robot become self-conscious? Humanity has been obsessed with this question since before it even knew what a “robot” was.

Mythologies all around the world are full of artificial creatures that come alive: from the Greek myth of Talos (created by the god Hephaestus) to the Jewish Golem, from the mechanical watchdogs of King Alcinous to the humanoid automata built by the artificer Yan Shi to entertain King Mu of Zhou.

In this century, though, due to the rapid advances in Artificial Intelligence and Robotics, the question is no longer a vague philosophical thought experiment or a mythological cautionary tale. Now, the question is very close to becoming real.

Sure, we are many decades away from an android having to defend its status as a self-aware entity in court, like in Asimov’s “The Bicentennial Man.”1 However, the question is moving from the pages of science fiction into the debates of philosophers, neuroscientists, and AI researchers.

While it is a fascinating and intellectually engaging question, in my opinion it is the wrong question. We will never be able to prove that a machine is conscious, nor should we try. In the end, we can reduce this question to the old debate between dualism and non-dualism of the human mind. If we embrace the non-dual nature of the human mind (as I am convinced we should), then machine consciousness is inevitable.

Dualism and the Philosophical Zombie

If you do not know what dualism is, let me give you a brief introduction. In the philosophy of mind, dualism is short for mind-body dualism: the idea that the subjective mind (a.k.a. consciousness) is made of a different substance than the rest of the body. Dualism is a very old idea, as many religious beliefs, such as the existence of the soul, are forms of dualism. Still, we owe its modern philosophical definition to René Descartes: the first to rationalize this view and to consider consciousness separately from intelligence (whose functions are part of the brain and, therefore, the body).

The original dualism, though, is nowadays thoroughly refuted by modern science. It is, in fact, hard to accept that our consciousness is the result of some non-physical substance that is entirely separate from ordinary matter and that cannot be known by science.

However, for some reason, dualism is not dead. On the contrary, there are many modern variations of dualism: property dualism, predicate dualism, panpsychism, and more (even if the boundary between philosophy and pseudo-philosophy gets thinner and thinner).

One of the most vocal contemporary defenders of dualism is the Australian philosopher David Chalmers. One of his most famous arguments is the so-called philosophical zombie thought experiment (also called p-zombie).2

According to this thought experiment, we are asked to imagine a being (the zombie) that is externally identical to a human being but internally lacks any kind of consciousness, self-awareness, or qualia. The zombie behaves like a human being: it reacts to pain, talks, and does everything else, yet without being conscious.

For the proponents of the philosophical zombie, the mere fact that such a zombie is logically possible is a refutation of non-dualism and a proof of dualism for the problem of consciousness. Consciousness, therefore, is not a necessary consequence of intelligence, emotions, sensations, and so on: it is something else.

The problem, however, is that it is not.

Zombie, Robots, AI, and Consciousness

If we accept the p-zombie argument, we can conclude that it would be possible to create a robot without consciousness even if we made it incredibly similar to a human.

It is easy to see that we can repeat the same thought experiment replacing the zombie with an android equipped with a human-like AI.

There is a problem, though: the philosophical zombie thought experiment is a clear case of circular reasoning (as stated on several occasions by many researchers, philosophers, and AI experts, such as Marvin Minsky3, Richard Brown, and Gualtiero Piccinini4).

The premise of the zombie claim is that it is possible to construct a physical object that is identical to a human but without subjective experience. But to grant this, we must already deny that subjective experience is caused by the very physical interactions and inner mechanisms that the zombie emulates perfectly. In other words, the argument starts by assuming what it is trying to prove: the negation of non-dualism cannot be used to prove dualism.

The fact is that there is no way to prove that the zombie/android does not have internal subjective experience since, by definition, the zombie is indistinguishable from other human beings (whom we assume to be self-aware). If this were true for the zombie/android, why could we not say the same for other human beings? What if, by some cosmic dysfunction, only 50% of human beings were born with “consciousness” while the other 50% were just zombies? There would be no way to know.

And that’s the point: even for our advanced androids, there will be no way to prove whether they are self-aware or not. So we will have to assume they are: in this sense, machine consciousness is inevitable.

Degrees of Self-Awareness

Of course, we are decades or centuries away from having to face an android that is a perfect replica of a human being. Meanwhile, though, we will meet increasingly complex artificial entities, and we will have to answer more difficult questions. For example, beyond which level of complexity can we assume a robot to be self-aware? Do different degrees of consciousness exist? At which level should we start to care about them? Which “mental elements” will we consider in this regard? Intelligence? Emotions? Suffering?

As you can see, we could fill a book with just these questions, and several more books attempting to answer them. I am not the right person to answer them. However, I want to give you a better sense of which questions are genuinely fascinating and essential. “Will robots ever have consciousness?” is not one of them: as I said, that is just inevitable. The good questions are in all the details we will face along the road.

It will be an exciting journey.


  1. To be perfectly candid, I am bored by how this question is treated in many recent books, movies, and games. It is time to stop milking this cow in such a superficial way. I am talking to you, “Detroit: Become Human.” It is time to move on to more exciting details and consequences of artificial consciousness. ↩︎

  2. Technically, the idea of the philosophical zombie was introduced by Robert Kirk. However, the current formulation has been widely popularized by Chalmers. ↩︎

  3. https://www.edge.org/3rd_culture/minsky/minsky_p2.html ↩︎

  4. Piccinini, Gualtiero (2017). Access Denied to Zombies. Topoi 36 (1):81-93. ↩︎
