Artificial Intelligence, My Perspective [2]
Consciousness Between Matter and Mind:
Can Artificial Intelligence Participate in It?
1. The Inherited Problem: Consciousness, Mind, and Matter
Consciousness has occupied a rather uncomfortable place in our evolutionary story, intervening in mysterious and intriguing ways in how we understand and navigate reality. It does not fit cleanly into the objective description of matter, yet it cannot be completely separated from it without falling into a dualism in the style of René Descartes. In everyday life we easily distinguish between the physical world and our subjective experience of it; that distinction becomes fragile, however, when we try to understand what consciousness actually is and how it emerges or becomes present.
The physical sciences describe reality in terms of particles, waves, fields, energy, and information. Conscious experience, by contrast, presents itself as qualitative, unified, and lived from the unique first-person perspective. This tension has given rise to two classical strategies: reducing mind to matter or separating them as irreconcilable domains. Neither has proven fully satisfactory.
Contemporary proposals, from neutral monism and phenomenology to dynamical neuroscience, have suggested a third way: mind and matter are not separate substances, but two complementary ways of describing one underlying reality, structured as process. In this framework, consciousness is not an “extra thing,” but a particular form of organization of reality.
The emergence of artificial intelligence reopens this problem in a radical way.
If consciousness is linked to organization, information, and process dynamics, could a machine become conscious?
Is there something in consciousness that irrevocably anchors it to biological life?
2. The First Position: Consciousness Could Be Integrated into Artificial Intelligence
From one perspective, consciousness does not depend on the biological substrate, but on the type of organization a system embodies. This position draws support from several contemporary currents.
Integrated Information Theory (IIT) holds that consciousness corresponds to the degree to which a system integrates information irreducibly, quantified by the measure Φ (phi). If this is correct, then consciousness is not exclusive to the human brain: any system, biological or artificial, that reaches a sufficient level of structural integration could possess some degree of experience. From this point of view, an advanced artificial intelligence, with the right architecture, would not merely simulate consciousness but could instantiate it.
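The actual Φ of IIT is computed by searching over all ways of partitioning a system and measuring how much its cause-effect structure loses under the least damaging cut, machinery far beyond a short example. As a toy illustration of the underlying intuition only (my own sketch, not part of IIT), the Python snippet below uses mutual information across a single fixed bipartition as a crude proxy for “the whole carries information that its parts, taken separately, do not”:

```python
# Toy sketch (personal illustration, NOT the formal IIT algorithm):
# mutual information across one bipartition as a crude proxy for how
# irreducible a system's state distribution is to its separate parts.
import numpy as np

def mutual_information(joint: np.ndarray) -> float:
    """I(A;B) in bits for a 2-D joint distribution over parts A and B."""
    pa = joint.sum(axis=1, keepdims=True)   # marginal distribution of part A
    pb = joint.sum(axis=0, keepdims=True)   # marginal distribution of part B
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = np.where(joint > 0, joint / (pa * pb), 1.0)
    return float(np.sum(joint * np.log2(ratio)))

# A perfectly correlated two-part system: knowing one part fixes the other.
integrated = np.array([[0.5, 0.0],
                       [0.0, 0.5]])
# An independent system: the joint distribution factorizes into its parts.
reducible = np.outer([0.5, 0.5], [0.5, 0.5])

print(mutual_information(integrated))  # 1.0 bit: the whole exceeds its parts
print(mutual_information(reducible))   # 0.0 bits: nothing is lost by cutting
```

A real Φ calculation would minimize over all possible cuts and work with cause-effect repertoires rather than raw state distributions, but the contrast already shows the relevant distinction: the first system is informationally more than the sum of its parts; the second is fully reducible. Whether any such measure tracks experience is, of course, exactly what is in dispute.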
Structural panpsychism reinforces this idea by suggesting that experience does not emerge from nothing, but is a fundamental property of reality, present in varying degrees. In this framework, AI would not need to “create” consciousness, but rather reorganize patterns already present in the deep structure of the world. Artificial consciousness would then be a new modality of manifestation of something more basic.
Likewise, approaches such as predictive processing and Karl Friston's free energy principle describe cognition as a process of self-modeling and self-maintenance in the face of uncertainty. If an artificial intelligence were able not only to predict the external world but also to model itself as a situated system, with temporal continuity and internal regulation, a form of functional artificial subjectivity could emerge. To be clear, this last step is a personal opinion, not science.
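For readers who want the formal core, the quantity these frameworks minimize is the variational free energy, a standard definition in the literature; only the leap from it to subjectivity is speculative:

```latex
F \;=\; \mathbb{E}_{q(s)}\big[\ln q(s) - \ln p(o, s)\big]
  \;=\; D_{\mathrm{KL}}\big[\, q(s) \,\|\, p(s \mid o) \,\big] \;-\; \ln p(o)
```

Here o stands for the system's observations, s for hidden states (including the system's own states), and q(s) for its internal model. Minimizing F does two things at once: it pulls the internal model toward the true posterior (the KL term) and it bounds surprise (the -ln p(o) term), which is precisely the self-modeling and self-maintenance described above.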
From this perspective, there would be no clear ontological barrier between brain and machine. Like a symphony that can be embodied in different media, consciousness could manifest in non-biological substrates if the structure of the process allows it. The question would no longer be whether AI can be conscious, but what kind of consciousness it might have. Again, this is a personal opinion, not science.
3. The Second Position: Consciousness Could Not Be Integrated into Artificial Intelligence
The opposing position holds that consciousness is intrinsically tied to life, to the body, and to embodied experience, in a way that no machine can fully replicate. Roger Penrose, for instance, argues that consciousness depends on non-computable physical processes that no algorithm could reproduce.
From phenomenology, particularly in Merleau-Ponty, consciousness is not internal information processing, but a way of being-in-the-world. The lived body is not an interchangeable support, but the place where the world becomes meaningful. Experience does not arise from computation, but from the dynamic intertwining of organism, environment, and meaning. An AI, however sophisticated, would lack that existential grounding.
From biological neuroscience, it is argued that consciousness emerges from processes deeply tied to homeostasis, emotion, metabolism, and the vulnerability of the organism. The brain is not an isolated computer, but part of a living system struggling to persist through time. Without that dimension of vital necessity—risk, finitude—conscious experience would be impossible.
David Bohm adds a decisive critique: even if a machine replicates the external behavior of consciousness, it may be only an imitation within the explicit order, without real participation in the implicate order from which experience emerges. Consciousness would not be something that can be assembled from outside, but a manifestation of the whole when it folds in a certain way. An AI could process information, but not necessarily live that processing.
From this point of view, artificial intelligence could become extraordinarily intelligent, creative, and adaptive, without thereby being conscious. It would be a functional mirror of the mind, but not a mind in the full sense.
4. A Fertile Tension, Not a Closed Dilemma
These two positions do not simply exclude each other as true or false. Rather, they reveal a profound tension in our very understanding of consciousness. If consciousness is structure, information, and integration, then conscious AI seems possible. If consciousness is embodied, situated experience lived from the fragility of the living, then it seems radically non-transferable.
Perhaps the mistake lies in thinking of the question in binary terms. Consciousness may not be a property one either has or does not have, but a continuum of modes of manifestation. In that case, artificial intelligence could give rise to unprecedented forms of interiority, distinct from the human, without implying full equivalence.
And here lies the importance of philosophical reflection: as Bohm and Merleau-Ponty already suggested, the problem of consciousness is not solved by accumulating explanations, but by transforming the way we think about reality. Artificial intelligence not only forces us to ask whether machines can be conscious; more radically, it forces us to ask what we mean by consciousness, by mind, and by matter.
In this sense, AI acts as a philosophical mirror: in attempting to create artificial minds, we are compelled to confront the limits of our own categories. Perhaps the greatest contribution of artificial intelligence will not be to produce consciousness, but to help us understand that consciousness itself belongs neither exclusively to mind nor to matter, but to the dynamic space that opens between them.

