
While the whole world watches with interest as Sam Altman refines ChatGPT and OpenAI progresses toward creating AGI, the Canadian startup Nirvanic has set itself a different goal: creating conscious AI. Their approach is radically different from traditional methods: Nirvanic plans to achieve this breakthrough using a quantum computer, building on the ideas of Nobel Prize laureate Roger Penrose.
In collaboration with anesthesiologist Stuart Hameroff, Penrose developed the theory of orchestrated objective reduction (Orch-OR), which locates consciousness in quantum processes unfolding inside microtubules, tiny protein structures within neurons.
Penrose and Hameroff proposed that these microtubules can hold quantum superpositions that collapse under the influence of gravity. It is this process that generates subjective experience—the foundation of consciousness.
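To make the physics vocabulary concrete, here is a toy sketch of what "a superposition that collapses on measurement" means for a single qubit. This is purely illustrative and is in no way a model of Orch-OR or of microtubules; the state, seed, and variable names are invented for the example.

```python
import numpy as np

# Toy illustration (NOT a model of Orch-OR): one qubit held in an
# equal superposition, then "collapsed" by a measurement.
rng = np.random.default_rng(0)

# |psi> = (|0> + |1>) / sqrt(2): amplitudes for outcomes 0 and 1
psi = np.array([1.0, 1.0]) / np.sqrt(2)

# Born rule: outcome probabilities are the squared amplitudes
probs = np.abs(psi) ** 2  # equal probabilities, 0.5 each

# Measurement picks one outcome and collapses the state onto it
outcome = rng.choice([0, 1], p=probs)
collapsed = np.zeros(2)
collapsed[outcome] = 1.0

print("measured:", outcome, "post-measurement state:", collapsed)
```

Before measurement the state carries both possibilities at once; afterward only one survives. Penrose's proposal is that in the brain such collapses happen objectively, triggered by gravity rather than by an external observer.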
If Penrose’s hypothesis is correct, then creating true artificial consciousness would require more than just a powerful computer; it would need a device capable of operating with quantum states. This is precisely what the Canadian startup Nirvanic is working on today, aiming to develop the world’s first AI endowed with consciousness by harnessing quantum computing.
Despite the lack of conclusive evidence, Penrose’s ideas continue to inspire researchers by offering an alternative perspective on the nature of the mind. If the Orch-OR theory is confirmed, it could revolutionize not only our understanding of consciousness but also the very concept of artificial intelligence.
But let’s return to the question of whether a “conscious” AI can be created. Do we actually know what consciousness is and how it originated in humans? And does humankind truly possess consciousness at all?
Unfortunately, there is no single, universally accepted definition of consciousness, but I will attempt to offer a concise, synthesized overview.
Consciousness is the capacity for subjective experience and for awareness of oneself and of the surrounding world. It encompasses thinking, perception, emotions, memory, and self-reflection. Most commonly, it is viewed as a product of brain activity that enables a person to be aware of their actions and thoughts.
But I would like to highlight David Chalmers' definition of consciousness, which he presents in his book "The Conscious Mind": consciousness is subjective experience, the fact that there is "something it is like" to be a conscious being. Explaining why physical processes in the brain are accompanied by such experience at all is what Chalmers famously calls the "hard problem of consciousness."
It might seem that all signs point to the impossibility of recreating human consciousness in a "machine," given that virtually all definitions of "consciousness" extend beyond mere computation and algorithms. How can we teach a machine intuition, emotions, and self-reflection? More importantly, is it possible to imbue it with the capacity for subjective experience and an inner life?
However, it’s worth noting that there are also academic perspectives that question whether consciousness itself truly exists. Daniel Dennett argues that consciousness is an illusion arising from numerous unconscious processes.
In his book "Consciousness Explained," Dennett describes the "multiple drafts" model: the brain continuously runs many parallel processes of interpretation, and what we call the "self" is a narrative the system composes from them after the fact, with no central observer watching from inside.
This concept can be somewhat unsettling; I'd like to believe I'm writing this article consciously and of my own free will! However, Dennett's definition brings us closer to the idea of creating a "conscious" AI, at least in his sense of the term. Neural networks, especially Large Language Models (LLMs), have already reached a level where they can be said to "simulate" multiple parallel processes, commonly referred to as reasoning. This, in turn, could give rise to that "illusion of a unified self."
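Dennett's "multiple drafts" idea can be sketched in a few lines of code. The sketch below is a deliberately crude toy, not a cognitive model: several hypothetical "processes" each produce a candidate interpretation of a stimulus, and whichever draft happens to dominate becomes the single story the system reports. All names and the salience scores are invented for the illustration.

```python
import random

# Toy sketch of the "multiple drafts" idea (illustrative only):
# parallel interpretive processes each produce a draft; the winning
# draft becomes the reported "experience," with no central theater.
random.seed(42)

def make_draft(process_name, stimulus):
    # Each hypothetical process offers its own reading of the stimulus
    # with a (randomly assigned) salience score.
    return {
        "by": process_name,
        "text": f"{process_name} reading of {stimulus!r}",
        "salience": random.random(),
    }

stimulus = "a sudden loud noise"
drafts = [make_draft(p, stimulus) for p in ("auditory", "memory", "threat")]

# The "unified self" is just whichever draft dominates at query time.
winner = max(drafts, key=lambda d: d["salience"])
print(winner["text"])
```

The point of the toy is the absence of any supervising module: the reported narrative is simply the draft that won the competition, which is roughly how Dennett dissolves the intuition of a single inner observer.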
After all, if consciousness isn’t regarded as something mystical but rather as a structure that emerges in the brain (or any complex information-processing system), then an AI could theoretically develop something akin to consciousness, provided its architecture is sufficiently intricate to replicate similar processes. The real question is which criteria for consciousness we would use and what level of complexity and organization AI must achieve for us to consider it “conscious.”
Despite their differing views on how human consciousness originates, Roger Penrose and Daniel Dennett share a common conviction that consciousness is grounded in the physical world and does not require immaterial explanations. Both reject classical Cartesian dualism, the notion that mind exists as a substance separate from matter.