Embedding Remapping Advances Cognition Via Error Minimization in 3D Systems

Scientists are increasingly exploring the fundamental principles underlying intelligence, regardless of whether it arises in biological or artificial systems. Benedikt Hartl, Léo Pio-Lopez, and Chris Fields, all from the Allen Discovery Center at Tufts University, alongside Michael Levin, demonstrate a compelling link between how living organisms and modern artificial intelligence solve problems. Their research proposes that both natural and synthetic cognitive systems rely on remapping information into an ‘embedding space’ and then navigating within it via iterative error minimization, a process observed from single cells regenerating tissue to complex AI models like transformers. This discovery is significant because it suggests a universal, scale-invariant principle of cognition, offering a unifying framework to understand and engineer adaptive intelligence across diverse substrates and scales.

Remapping and Navigation Underpin Universal Intelligence

Scientists have demonstrated a groundbreaking principle underlying cognition, revealing that intelligence, regardless of its substrate, hinges on the interplay between remapping embedding spaces and navigating within them. The study unveils a unifying framework for understanding how diverse agents, be they biological or synthetic, solve problems and adapt to their environments. The team achieved a deeper understanding of how these seemingly disparate systems share a common organizational principle. The research argues that cognition isn’t limited to brains or nervous systems, but rather is an operational process defined by interaction protocols and the ability to be influenced.
This operational perspective allows researchers to apply behavioural tools to a wider range of systems, assessing their “persuadability” as a measure of cognitive capacity. Furthermore, the study highlights the evolutionary homology of intelligence mechanisms, suggesting that the building blocks of cognition (ion channels, electrical synapses, and their associated algorithms) are ancient and ubiquitous, present in non-neural living substrates long before the emergence of brains. This challenges conventional views of intelligence as solely a product of complex neural networks. The resulting framework is deeply interdisciplinary, bridging biology, computer science, materials science, and cognitive science to reveal novel phenomena.

The work opens exciting possibilities for engineering adaptive intelligence across scales, with implications for regenerative medicine, bioengineering, and artificial intelligence. By recognizing remapping and navigation as core cognitive invariants, scientists are developing a more comprehensive, substrate-independent understanding of intelligence, paving the way for the creation of novel, adaptable synthetic and chimeric collective intelligences capable of pursuing alternative end-states. This breakthrough reveals a fundamental organizational principle applicable to diverse agents, agnostic to their problem spaces, scale, or material composition.

Remapping Embeddings for Stability and Regeneration

The research team meticulously examined how systems, from subcellular processes to swarms and engineered AI, maintain stability and regenerate structure through these dual processes. Experiments focused on identifying scale-invariant principles of decision-making applicable to varied agents, hypothesising a substrate-independent basis for intelligence. The study pioneered an analysis of biological collectives, demonstrating how they remap transcriptional, morphological, and physiological states, alongside 3D structures, to achieve homeostasis and regeneration, simultaneously employing distributed error correction mechanisms. This work established a framework where iterative error minimisation serves as a core mechanism for both living and artificial systems, suggesting a shared computational substrate.
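The iterative error-minimisation loop described above can be illustrated with a toy set-point controller. This is a minimal sketch, not code from the study: the state, target, step size, and function name are all hypothetical, chosen only to show how repeated partial corrections drive a system toward a stable end-state.

```python
import numpy as np

def navigate_to_target(state, target, step=0.1, tol=1e-3, max_iters=1000):
    """Toy error correction: repeatedly measure the error between the
    current state and a set-point, correct a fraction of it, and stop
    once the residual error falls below a tolerance."""
    for _ in range(max_iters):
        error = target - state
        if np.linalg.norm(error) < tol:
            break
        state = state + step * error  # partial correction toward the target
    return state

# A 2D "physiological state" relaxing toward its homeostatic set-point.
start = np.array([5.0, -3.0])
setpoint = np.array([1.0, 1.0])
final = navigate_to_target(start, setpoint)
```

Each pass shrinks the remaining error by a constant factor, so the state converges geometrically to the set-point, the same qualitative behaviour as the distributed error correction the article attributes to regenerating biological collectives.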

To quantify this principle, scientists explored the concept of coarse-grained embeddings as constraints on remapping, specifically examining how high-dimensional spaces are projected into lower-dimensional latent spaces via structure-preserving maps. The team analysed biological embeddings in 3D space, considering metabolic and gene-expression spaces with dynamic ranges spanning at least one order of magnitude. They posited that viable latent spaces encode combinatorially explosive constraining relations, and that nature implicitly encodes these constraints by embedding biochemistry within bounded volumes in 3D space using a Galilean metric.

Furthermore, the research harnessed sheaf theory, a category-theoretic description of coherent data attachment to spatial locations, to formalise the concept of coherent embeddings. By defining a 4D spacetime, the study demonstrated how a coherent embedding of biochemistry ensures that every state and change can be implemented by a time-evolving 3D structure respecting local constraints, such as molecular interactions and diffusion, effectively translating navigation in complex chemical spaces into observable molecular behaviour in 3D. This innovative application of sheaf theory provides a precise language for understanding how embeddings impose constraints on biological systems and, by extension, artificial intelligence.
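As a concrete, hedged illustration of a coarse-grained embedding (a structure-preserving projection of a high-dimensional space into a lower-dimensional latent one), the sketch below applies PCA via SVD to synthetic data. PCA is a standard linear example of such a map, not the paper's sheaf-theoretic construction; the data and dimensions are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic high-dimensional states (e.g. gene-expression-like vectors)
# that actually lie near a 2D subspace embedded in 50 dimensions.
latent = rng.normal(size=(200, 2))
mixing = rng.normal(size=(2, 50))
X = latent @ mixing + 0.01 * rng.normal(size=(200, 50))

# PCA via SVD: a linear, structure-preserving coarse-graining map.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:2].T                     # 2D latent embedding
X_rec = Z @ Vt[:2] + X.mean(axis=0)   # lift back to the full space

# Little information is lost: the latent space captures the constraints.
rel_err = np.linalg.norm(X - X_rec) / np.linalg.norm(X)
```

Because the high-dimensional states obey low-dimensional constraining relations, almost nothing is lost in the projection; navigation can then happen in the small latent space and be lifted back to the full space, mirroring the remapping-then-navigation scheme described above.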

Remapping and Navigation Unify Life and AI

Simultaneously, these systems navigate these remapped spaces through distributed error correction mechanisms, ensuring stability and adaptation. The study shows that such systems dynamically contrast internal task representations with the external world, utilizing mechanisms like retrieval-augmented generation and tool-augmented action loops to refine their understanding. The team argues that this interplay between remapping and navigation constitutes a substrate-independent invariant of cognition, a key finding in the study. Large Language Models (LLMs) efficiently query collective knowledge, but the true challenge lies in integrating them into multi-scale architectures capable of learning and navigating the world.

The authors note that a thorough understanding of multimodal embedding-space geometries and learning paradigms is a promising direction for advancing world-model architectures toward bio-inspired problem solving. Diffusion models (DMs), they argue, provide the clearest implementation of generative models based on self-regulatory error-correction dynamics during both training and inference, effectively denoising corrupted data. In this generative process, diffusion models learn to counter entropic effects, restoring relevant information and transforming random noise into structured data. This principle underlies cutting-edge image and video generation techniques, and extends into biomedically relevant applications such as in-silico protein folding and genomics. Specifically, the research highlights that the generative process implements iterative self-regulatory error correction, performing incremental corrections on noisy data and reconstructing complex structure. This positions diffusion models as associative memories undergoing symmetry breaking and phase transitions, mirroring biological processes in organismal development and cognition.
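The iterative denoising idea can be sketched in miniature with Langevin dynamics, a standard sampling scheme closely related to diffusion-model inference. Here the "data" distribution is a 1D Gaussian whose score (gradient of the log-density) is known in closed form and stands in for a learned denoiser; real diffusion models learn this score from data, so everything below is an illustrative assumption, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Target "data" distribution: N(mu, sigma^2). Its score function is
# available in closed form here; a trained diffusion model would
# approximate it with a neural network.
mu, sigma = 3.0, 0.5

def score(x):
    return -(x - mu) / sigma**2

# Start from unstructured noise and apply many small self-corrective
# steps: follow the score, plus a little injected noise (Langevin dynamics).
x = 4.0 * rng.normal(size=5000)
eps = 1e-3
for _ in range(2000):
    x = x + eps * score(x) + np.sqrt(2 * eps) * rng.normal(size=x.shape)

sample_mean, sample_std = x.mean(), x.std()
```

Iterated error correction transforms random noise into samples concentrated around the target distribution, the same self-regulatory dynamic the article attributes to diffusion models countering entropic corruption.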

Remapping and Navigation Underpin Intelligence

Scientists propose a unifying principle of cognition centred around the interplay between the remapping of embeddings and navigation within them. This perspective links developmental robustness with generative modelling, and connects collective AI with multicellular coordination, suggesting that operating near critical states facilitates effective remapping. Future work might also investigate how this framework can inform the design of more robust and adaptable artificial intelligence, potentially leading to systems capable of greater resilience and innovation.

👉 More information
🗞 Remapping and navigation of an embedding space via error minimization: a fundamental organizational principle of cognition in natural and artificial systems
🧠 arXiv: https://arxiv.org/abs/2601.14096

Rohail T.


I am a quantum scientist exploring the frontiers of physics and technology. My work focuses on uncovering how quantum mechanics, computing, and emerging technologies are transforming our understanding of reality. I share research-driven insights that make complex ideas in quantum science clear, engaging, and relevant to the modern world.
