AI meets Human Consciousness – Is there Machine Consciousness?


Last December, the Luminous[1] project organized a networking session at the ICT 2018 Exhibition, a central event for R&D activities in the EU. The session aimed to open an interdisciplinary discussion on consciousness. Specifically, Prof. Joanna Bryson (Artificial Intelligence expert, Dept. of Computer Science, University of Bath), Prof. Antonio Chella (Machine Consciousness expert, Dept. of Computer Engineering, University of Palermo) and I (Luminous coordinator) addressed the fundamental question of whether AI and machine consciousness share commonalities with human consciousness.


We started by discussing the definition of consciousness. In my opinion, consciousness is an emergent property of any complex system that mediates its relationship with its environment. Luminous studies consciousness in the human brain, but the definition can be broadened to other complex systems (e.g. AI systems, societies). The Information Integration Theory of consciousness assigns consciousness to those complex systems that maintain a balance between information (in this theoretical context also called differentiation, i.e. diversity) and integration (i.e. unified experience). If this balance between differentiation and integration is indeed what gives rise to consciousness[2], then quantifying it can yield accurate metrics of consciousness, and these quantities can be measured with EEG.


One way to measure this balance, as discussed in the networking session, is through complexity metrics. In this context, complexity characterizes the interaction between the different components of a system; in our case, those components are the nodes of a brain network. Complexity can be estimated from EEG signals either after a perturbation or in the resting state (spontaneous EEG). The best-established perturbational metric is based on Non-Invasive Brain Stimulation: a Transcranial Magnetic Stimulation (TMS) pulse is applied and the complexity of the evoked EEG response is then measured. This metric, the Perturbational Complexity Index (PCI), was proposed by the group of Marcello Massimini (a Luminous partner)[3].


Because the TMS equipment needed to obtain PCI is bulky, Luminous also aims to find metrics that can be applied to spontaneous EEG. Two different approaches have been proposed within the project. The first is based on Kolmogorov complexity[4], which for brain signals is related to the brain's ability to build models of its environment. The complexity of these models (or rather, a proxy for it computed on the EEG signals) can be related to the brain's level of consciousness through an electrophysiological response.
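The idea of approximating Kolmogorov complexity with a real compressor can be sketched in a few lines of Python. This is only an illustrative toy, not the Luminous pipeline: the median binarization and the use of zlib as the compressor are my own assumptions. True Kolmogorov complexity is uncomputable, so a compressor only ever gives an upper bound.

```python
import zlib

def compression_complexity(samples):
    """Upper-bound proxy for the Kolmogorov complexity of a 1-D signal.

    Binarizes the signal around its median (a common preprocessing step
    in EEG complexity studies) and reports the compressed size relative
    to the raw size: lower values mean a more regular, more
    'modelable' signal.
    """
    med = sorted(samples)[len(samples) // 2]            # median threshold
    bits = bytes(1 if x > med else 0 for x in samples)  # binarized signal
    return len(zlib.compress(bits, 9)) / len(bits)
```

A flat or strongly rhythmic signal compresses far better than an irregular one, so a sine wave yields a much smaller ratio than uniform noise of the same length.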

Alternatively, we can measure the complexity of the networks the brain establishes in the resting state, as captured by EEG. Luminous has also proposed this approach for patients suffering from Disorders of Consciousness[5]: the integration and differentiation of the networks observed in resting-state EEG are quantified using complex network features.
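As a rough illustration of what "complex network features" can mean here, the sketch below builds a binary connectivity graph from pairwise correlations between channels and computes two toy features: mean degree (a crude integration proxy) and mean clustering coefficient (a crude segregation proxy). The correlation threshold and the choice of features are my own illustrative assumptions, not the measures used in Luminous.

```python
import math

def pearson(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy) if sx > 0 and sy > 0 else 0.0

def connectivity_graph(channels, threshold=0.5):
    """Binary adjacency matrix: channels are linked when |r| >= threshold."""
    n = len(channels)
    adj = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if abs(pearson(channels[i], channels[j])) >= threshold:
                adj[i][j] = adj[j][i] = 1
    return adj

def mean_degree(adj):
    """Average number of links per node (a crude integration proxy)."""
    return sum(map(sum, adj)) / len(adj)

def mean_clustering(adj):
    """Average clustering coefficient (a crude segregation proxy)."""
    n, total = len(adj), 0.0
    for i in range(n):
        nbrs = [j for j in range(n) if adj[i][j]]
        k = len(nbrs)
        if k < 2:
            continue
        links = sum(adj[a][b] for ai, a in enumerate(nbrs) for b in nbrs[ai + 1:])
        total += 2.0 * links / (k * (k - 1))
    return total / n
```

In practice such features are computed with dedicated graph libraries and weighted, frequency-resolved connectivity estimators; the point here is only the pipeline: signals to correlations to a graph to summary statistics.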


Given the relationship of these metrics to the broad concepts of information and complexity, we can link the understanding of consciousness to other disciplines. Since EEG recordings are digitized time series, the same type of metrics may be applicable to time series generated by artificial systems. In fact, we already use a closely related complexity measure, Lempel-Ziv[6], every time we compress a file into ZIP format. The same complexity measure is used in the Kolmogorov Theory of Consciousness (KT), developed within Luminous, as a proxy for the complexity of brain models.
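For the curious, the Lempel-Ziv complexity of a binary sequence can be computed with the classic Kaspar-Schuster counting scheme: it counts the number of distinct "phrases" encountered while scanning the sequence left to right. This is a minimal sketch of the general measure, not the exact implementation used in KT, and the binarization of a real EEG signal is left out.

```python
def lz_complexity(s):
    """Lempel-Ziv (LZ76) complexity of a string: the number of distinct
    phrases found while scanning s left to right (Kaspar-Schuster scheme).
    Regular sequences yield small counts; irregular ones, large counts."""
    n = len(s)
    if n < 2:
        return n
    i, k, l = 0, 1, 1      # i: search position, k: match length, l: phrase start
    c, k_max = 1, 1        # c: phrase count, k_max: longest match so far
    while True:
        if s[i + k - 1] == s[l + k - 1]:
            k += 1
            if l + k > n:           # matched through the end of the sequence
                c += 1
                break
        else:
            k_max = max(k, k_max)
            i += 1
            if i == l:              # no earlier match: start a new phrase
                c += 1
                l += k_max
                if l + 1 > n:
                    break
                i, k, k_max = 0, 1, 1
            else:
                k = 1
    return c
```

A constant sequence such as "0000…" gives a complexity of 2 and an alternating "0101…" gives 3, while an irregular binary sequence yields a count that grows roughly as its length divided by the logarithm of its length.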

But are we measuring the consciousness of a file when compressing it? Obviously not. I would say an entire aspect of consciousness is missing: the phenomenological one, which is related to the environment. This aspect may emerge in more elaborate artificial systems, such as robots, decision support systems, and automated personal assistants. We tried to initiate a transdisciplinary consensus document on consciousness, the “Vienna Declaration on Artificial and Human Consciousness”. This declaration deals mainly with the phenomenological aspects of consciousness and lists the key capabilities needed to bring artificial agents and machines closer to becoming “conscious” (if that is possible at all, which I honestly cannot say).


The Vienna Declaration supports the further development of artificial systems towards several features: self-awareness, subjective experience, social capabilities, biological memory, intelligence, and emotions. The question, in my opinion, is whether this is possible without a change in how computers work today, without a new paradigm for Artificial Intelligence. AI today is dominated by rule-based expert systems, artificial neural networks, and data-driven computational intelligence. In this context, Joanna Bryson advocated for the importance of embodiment. Antonio Chella closed the session with a beautiful idea: “all know consciousness, but just as an intuitive idea”. Advancing and deepening our understanding of consciousness is one of the main goals of the Luminous project. Stay tuned for our updates a year from now, at project closure.


[1] http://www.luminous-project.eu/

[2] https://www.pnas.org/content/pnas/103/28/10799.full.pdf

[3] https://www.ncbi.nlm.nih.gov/pubmed/23946194

[4] https://academic.oup.com/nc/article/2017/1/nix019/4470874

[5] https://www.sciencedirect.com/science/article/pii/S2213158219301913

[6] https://en.wikipedia.org/wiki/Lempel-Ziv_complexity

