Efficient Coding Advances Neural Representation with Tractable Models and Optimisation of Similarity

The question of how neurons represent information remains a central challenge in neuroscience, and researchers continually seek to define the underlying principles governing neural activity. William Dorrell, Peter E. Latham, and James Whittington, all from University College London, present a new theoretical framework demonstrating that a broad range of neural coding problems can be understood through the lens of convex optimisation. This approach offers a powerful balance between mathematical simplicity and biological realism, sidestepping the limitations of both overly simplistic and computationally intractable models. By optimising representational similarity rather than neural activity itself, the team reveals previously unrecognised connections between different coding schemes and provides novel insights into the organisation of visual processing, specifically explaining differences between retinal and cortical coding strategies. The findings offer a new mathematical foundation for understanding neural codes and a means to rigorously analyse their properties.

Instead of directly optimising neural activities, the team optimised representational similarity, a matrix constructed from the dot products of neural responses, revealing a surprisingly broad family of problems that are mathematically convex. This innovative technique allows for a more tractable and understandable analysis of how neurons encode information, offering new insights into the fundamental principles governing brain function.
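To make the central object concrete, here is a minimal sketch of how such a similarity matrix is built from a matrix of neural responses. The shapes and random data are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Toy population: responses of n_neurons to n_stimuli stimuli.
rng = np.random.default_rng(0)
n_neurons, n_stimuli = 50, 8
Y = rng.standard_normal((n_neurons, n_stimuli))   # response matrix

# Representational similarity: dot products between the population
# response vectors for every pair of stimuli.
S = Y.T @ Y                                       # (n_stimuli, n_stimuli)

# Every such Gram matrix is symmetric positive semidefinite, which is
# the structural fact that makes optimising over S tractable.
assert np.allclose(S, S.T)
assert np.all(np.linalg.eigvalsh(S) >= -1e-9)
```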

The study shows that a significant range of optimisation problems, including those corresponding to both linear and non-linear neural networks, can be expressed as convex functions over representational dot-product similarity matrices. This result builds upon prior work by Sengupta et al. (2018), extending the application of convexity to a wider array of neuroscience problems. By composing arbitrary convex constraints and objectives, the researchers created diverse optimisation problems, each amenable to analysis, and successfully reframed previously studied problems, such as semi-nonnegative matrix factorisation and nonnegative sparse coding, within this convex framework. This approach unlocks analytical tools previously unavailable for these complex neural coding models.
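As a hedged illustration of what a convex problem over similarity matrices can look like, the sketch below optimises a generic concave objective (an infomax-style log-determinant, chosen here as a stand-in, not the paper's specific objective) over a positive-semidefinite similarity matrix with CVXPY:

```python
import cvxpy as cp
import numpy as np

n = 8        # number of stimuli
P = 10.0     # resource budget on total response power

# Optimise the similarity matrix directly; positive semidefiniteness is
# the only structural constraint a Gram matrix must satisfy.
S = cp.Variable((n, n), PSD=True)

# Generic concave objective with a convex power constraint -- a
# stand-in for the paper's family of composable convex objectives
# and constraints.
problem = cp.Problem(
    cp.Maximize(cp.log_det(S + 1e-6 * np.eye(n))),
    [cp.trace(S) <= P],
)
problem.solve()
print(np.round(S.value, 3))   # optimum spreads power evenly: S = (P/n) I
```

Because any Gram matrix is positive semidefinite, a PSD variable plus composable convex constraints captures the whole structure of the problem, which is what makes this family tractable.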

The paper demonstrates the power of this method through three specific applications. The team provides the first complete set of identifiability conditions for a form of semi-nonnegative matrix factorisation, a crucial step towards understanding modularity and mixed selectivity in the entorhinal cortex. Furthermore, the research establishes that, under certain conditions, neural tunings are uniquely linked to the optimal representational similarity, validating the common neuroscience practice of analysing single-neuron tuning curves. This finding suggests that observing individual neuron activity can reliably reveal information about the overall population representation.
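For readers unfamiliar with semi-nonnegative matrix factorisation, the sketch below fits the generic problem X ≈ WH with H ≥ 0 by alternating minimisation. This is a standard baseline, assumed here purely for illustration; the paper's contribution concerns when such factors are identifiable, not this fitting procedure:

```python
import numpy as np

def semi_nmf(X, k, iters=500, seed=0):
    """Fit X ~ W @ H with H >= 0 and W unconstrained, by alternating
    an exact least-squares W-step with a projected-gradient H-step."""
    rng = np.random.default_rng(seed)
    H = rng.random((k, X.shape[1]))
    for _ in range(iters):
        W = X @ np.linalg.pinv(H)                  # unconstrained weights
        grad = W.T @ (W @ H - X)                   # gradient w.r.t. H
        step = 1.0 / (np.linalg.norm(W, 2) ** 2 + 1e-12)
        H = np.clip(H - step * grad, 0.0, None)    # keep tunings nonnegative
    return W, H

# Recover a planted factorisation.
rng = np.random.default_rng(1)
W_true = rng.standard_normal((30, 3))
H_true = rng.random((3, 100))                      # nonnegative "tunings"
X = W_true @ H_true
W, H = semi_nmf(X, k=3)
print(np.linalg.norm(X - W @ H) / np.linalg.norm(X))   # small residual
```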

The work opens new avenues for understanding the differences in coding strategies between brain regions. Specifically, the researchers explain why dense retinal codes optimally split the coding of a single variable into ON and OFF channels while sparse cortical codes do not, leveraging the tractable nonlinearity inherent in their convex problems. By identifying a space of convex problems and applying them to derive neural coding results, this research provides a powerful new toolkit for neuroscientists and artificial intelligence researchers alike, promising to accelerate progress in both fields.

Representational Similarity Optimisation via Convex Duality

The study's central move is to shift the optimisation focus from neural activities themselves to representational similarity, a matrix constructed from the dot products of neural responses. Rather than directly optimising activity patterns, the researchers optimised this representational similarity matrix, revealing a surprisingly large family of optimisation problems that are convex. This methodological shift enabled the team to analyse complex models with greater tractability and interpretability than previously possible, moving beyond simple linear models while retaining analytical clarity. Using convex duality, they provide the first definitive result on the identifiability of a specific form of semi-nonnegative matrix factorisation.

The authors employed this framework to demonstrate that, given sufficiently ‘different’ neural tunings, a unique link exists between these tunings and the optimal representational similarity. This finding partially validates the prevalent practice of single-neuron tuning analysis within neuroscience, offering a theoretical justification for its continued use. The team harnessed mathematical tools, including the Schur complement and concepts from convex optimisation, to rigorously establish these relationships. Further extending this work, the research details how the tractable nonlinearity inherent in some of the identified problems explains a key functional difference between retinal and cortical coding schemes.
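The Schur complement result underpinning such arguments can be stated compactly. This is the standard textbook form, included for reference rather than quoted from the paper:

```latex
% Standard Schur-complement test for positive semidefiniteness:
% for a symmetric block matrix with A \succ 0,
M \;=\; \begin{pmatrix} A & B \\ B^{\top} & C \end{pmatrix} \succeq 0
\quad\Longleftrightarrow\quad
C - B^{\top} A^{-1} B \;\succeq\; 0 .
```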

Specifically, the study demonstrates why dense retinal codes optimally split the coding of a single variable into ON and OFF channels, while sparse cortical codes do not. This analysis relied on precise mathematical formulations of the optimisation problems and their corresponding solutions, allowing a clear prediction of coding strategies based on the underlying neural architecture. The framework delivers a powerful new lens through which to view the principles governing information representation in the nervous system. Its methodological advances extend to the identification of a comprehensive set of convex problems, which were then leveraged to derive concrete results about neural coding. This enables researchers to move beyond descriptive analyses of neural activity towards normative theories grounded in mathematical optimality, offering a more principled understanding of why neurons encode information the way they do. The work builds upon previous research in sparse coding and nonnegative matrix factorisation, but advances the field by establishing conditions for identifiability and convexity.
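The ON/OFF split itself is easy to state in code: a signed variable is carried by two nonnegative, rectified channels. The snippet below illustrates only the representational move; the paper's contribution is deriving when this split is optimal, which the snippet does not reproduce:

```python
import numpy as np

# A signed variable sampled over time.
x = np.sin(np.linspace(0, 4 * np.pi, 200))

# ON/OFF split: two nonnegative channels, each rectifying one sign,
# as in retinal ON and OFF cells.
on = np.maximum(x, 0.0)     # carries positive excursions
off = np.maximum(-x, 0.0)   # carries negative excursions

# The signed variable is exactly recoverable from the pair, while both
# channels respect the nonnegativity of firing rates.
assert np.allclose(on - off, x)
```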

Convexity Reveals Principles of Neural Coding

Scientists have established a novel framework for understanding neural coding by demonstrating that a broad range of optimisation problems are, in fact, convex when applied to representational similarity matrices. The research, building on prior work by Sengupta et al. (2018), shifts the focus from optimising neural activities directly to optimising the relationships between neural responses, measured as dot products forming a representational similarity matrix. This approach unlocks analytical tractability previously unavailable in more complex neural models, allowing for a deeper understanding of how brains encode information. The analysis revealed that this framework applies to both linear and certain non-linear neural networks, offering a unified approach to analysing diverse coding schemes.

The team characterised identifiability in a form of semi-nonnegative matrix factorisation, deriving the first set of necessary and sufficient conditions under which its solutions are unique. Results demonstrate that when neural tunings are sufficiently distinct, they are uniquely linked to the optimal representational similarity, thereby validating the use of single-neuron tuning analysis as a meaningful approach in neuroscience. This finding confirms that analysing individual neuron responses can provide valuable insights into the overall function of neural populations, provided certain conditions on the diversity of tuning curves are met. Further work used the tractable nonlinearity of these convex problems to explain a key feature of retinal coding.

The analysis shows that dense retinal codes optimally split the coding of a single variable into ON and OFF channels, a strategy that is not optimal for sparse cortical codes. This suggests that the coding strategy employed in the retina is suited to its specific computational demands, and that the proposed framework can illuminate the functional differences between brain regions. The work delivers a space of convex problems, enabling the derivation of new results in neural coding and offering a powerful tool for future research. The researchers also reframed previously intractable problems, such as nonnegative-affine autoencoding, as convex optimisation problems. This allows a precise characterisation of the conditions under which latent neurons encode single sources, providing insights into the modularity and mixed selectivity observed in the entorhinal cortex. The study’s findings have implications for both artificial intelligence and neuroscience, offering a new lens through which to understand the principles underlying efficient information processing.
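To unpack "nonnegative-affine autoencoding": with an affine decoder and nonnegative latents, inferring the best code for a datum is a convex nonnegative-least-squares problem. The sketch below assumes this generic formulation (the decoder D, offset b, and all shapes are hypothetical) and is not the paper's exact model:

```python
import numpy as np
from scipy.optimize import nnls

# Affine decoder x ~ D @ z + b with nonnegative codes z >= 0: finding
# the best code for each datum is a convex nonnegative-least-squares
# problem, solvable exactly.
rng = np.random.default_rng(2)
D = rng.standard_normal((10, 4))     # decoder weights (hypothetical)
b = rng.standard_normal(10)          # decoder offset (hypothetical)
Z = rng.random((4, 50))              # planted nonnegative sources
X = D @ Z + b[:, None]

# Infer codes column by column via nonnegative least squares.
Z_hat = np.stack([nnls(D, x - b)[0] for x in X.T], axis=1)
print(np.linalg.norm(Z_hat - Z) / np.linalg.norm(Z))   # ~0: exact recovery
```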

Representational Similarity and Convex Optimisation of Coding

This research introduces a framework for understanding neural coding by modelling it as a series of convex optimisation problems. Rather than directly optimising neural activity, the authors optimise representational similarity, a matrix describing the relationships between neural responses, demonstrating that a broad range of problems within this framework are mathematically tractable. This approach allows for analytical solutions previously unavailable in more complex models of neural coding. The work yields several specific results, including the first complete proof of identifiability for a particular form of semi-nonnegative matrix factorisation, and a demonstration that unique links exist between neural tuning and optimal representational similarity under certain conditions.

Furthermore, the researchers explain why dense retinal codes optimally separate the coding of a single variable into opposing ON and OFF channels, while sparse cortical codes do not. These findings offer a novel perspective on how neural systems might efficiently represent information. The authors acknowledge limitations in verifying certain conditions required for their results, noting that these checks can be computationally challenging. They suggest future work could explore non-convex optimisation approaches and leverage ongoing developments in computational methods to address these difficulties. The presented framework is intended to facilitate connections between neural activity and function in both artificial and biological networks, offering a valuable tool for further investigation in the field.

👉 More information
🗞 Convex Efficient Coding
🧠 ArXiv: https://arxiv.org/abs/2601.10482

Rohail T.

I am a quantum scientist exploring the frontiers of physics and technology. My work focuses on uncovering how quantum mechanics, computing, and emerging technologies are transforming our understanding of reality. I share research-driven insights that make complex ideas in quantum science clear, engaging, and relevant to the modern world.

Latest Posts by Rohail T.:

Quantum Light’s Wave-Particle Balance Now Fully Tunable
March 2, 2026

AI Swiftly Answers Questions by Focusing on Key Areas
February 27, 2026

Machine Learning Sorts Quantum States with High Accuracy
February 27, 2026