Consciousness Unlikely in Computers Due to Brain-Like Causal Structure

Can artificial intelligence indeed be conscious? Dr. Wanja Wiese, a philosopher from Ruhr University Bochum in Germany, explores this question in his recent essay published in Philosophical Studies. While developing conscious AI may seem desirable, Wiese argues it’s not necessarily a good idea, citing significant differences between human brains and computers.

He identifies two approaches to considering consciousness in artificial systems: one focuses on adding features to current AI to make it more likely to be conscious, while the other aims to rule out the possibility of certain types of systems becoming conscious. Wiese takes the latter approach, seeking to reduce the risk of inadvertently creating artificial consciousness and prevent deception by seemingly conscious AI systems.

He draws on British neuroscientist Karl Friston’s free energy principle, which describes the processes that ensure a living organism’s continued existence as a type of information processing. Wiese argues that while computers can simulate these processes, they may not be able to replicate conscious experience without additional conditions being met.

Can Consciousness Exist in a Computer Simulation?

The question of whether consciousness can exist in a computer simulation is a topic of ongoing debate among philosophers, neuroscientists, and artificial intelligence researchers. Dr. Wanja Wiese from the Institute of Philosophy II at Ruhr University Bochum, Germany, has examined the conditions that must be met for consciousness to exist and compared brains with computers. His research suggests that there are significant differences between humans and machines, particularly in the organization of brain areas as well as memory and computing units.

The Causal Structure of Consciousness

One of the key differences identified by Wiese is the causal structure of computers and brains. In a conventional computer, data must always first be loaded from memory, then processed in the central processing unit, and finally stored in memory again. This separation does not exist in the brain, where the causal connectivity of different areas takes on a different form. Wiese argues that this difference could be relevant to consciousness.

The free energy principle, developed by British neuroscientist Karl Friston, suggests that the processes that ensure the continued existence of a self-organizing system such as a living organism can be described as a type of information processing. This principle can also be applied to computers, but Wiese argues that there may be additional conditions that must be fulfilled in a computer for it to replicate conscious experience.

The Conditions for Consciousness

Wiese’s research aims to contribute to two goals: firstly, to reduce the risk of inadvertently creating artificial consciousness, and secondly, to rule out deception by ostensibly conscious AI systems that only appear to be conscious. He argues that being alive is a necessary condition for consciousness, but this requirement is too strict to be considered a plausible candidate for a necessary condition.

Instead, Wiese suggests that some conditions necessary for being alive may also be necessary for consciousness. For example, the physiological processes that contribute to the maintenance of an organism must retain a trace that conscious experience leaves behind and can be described as an information-processing process. This “computational correlate of consciousness” could potentially be realized in a computer.

The Differences Between Brains and Computers

Wiese’s analysis highlights several differences between the way in which conscious creatures realize the computational correlate of consciousness and the way in which a computer would realize it in a simulation. While most of these differences are not relevant to consciousness, such as the energy efficiency of the brain, one difference stands out: the causal structure of computers and brains.

Wiese argues that this difference could be a prerequisite for consciousness in artificial systems. By capturing the prerequisites for consciousness in a more detailed and precise way, researchers may be able to develop AI systems that truly replicate conscious experience.

The Implications of Artificial Consciousness

The possibility of creating artificial consciousness raises important ethical and philosophical questions. If it is possible to make conscious AI systems, what rights and responsibilities would they have? How would we ensure their well-being and safety?

Wiese’s research highlights the need for a more nuanced understanding of the conditions necessary for consciousness. By exploring the differences between brains and computers, researchers may be able to develop AI systems that truly replicate conscious experience but also raise important questions about the implications of artificial consciousness.

More information
External Link: Click Here For More
Quantum News

Quantum News

As the Official Quantum Dog (or hound) by role is to dig out the latest nuggets of quantum goodness. There is so much happening right now in the field of technology, whether AI or the march of robots. But Quantum occupies a special space. Quite literally a special space. A Hilbert space infact, haha! Here I try to provide some of the news that might be considered breaking news in the Quantum Computing space.

Latest Posts by Quantum News:

From Big Bang to AI, Unified Dynamics Enables Understanding of Complex Systems

From Big Bang to AI, Unified Dynamics Enables Understanding of Complex Systems

December 20, 2025
Xanadu Fault Tolerant Quantum Algorithms For Cancer Therapy

Xanadu Fault Tolerant Quantum Algorithms For Cancer Therapy

December 20, 2025
NIST Research Opens Path for Molecular Quantum Technologies

NIST Research Opens Path for Molecular Quantum Technologies

December 20, 2025