Machine Learning Accurately Recovers Hidden Functions Within Complex Equations

Researchers are increasingly focused on recovering unknown functional components within partial differential equations, a challenge that limits predictive modelling. Torkel E. Loman from the Mathematical Institute, University of Oxford, Yurij Salmaniw from the Department of Mathematics, Physics and Geology, Cape Breton University, and Antonio Leon Villares from the Department of Engineering, University of Oxford, working with colleagues including Jose A. Carrillo and Ruth E. Baker from the Mathematical Institute, University of Oxford, demonstrate a novel method for learning these functions directly from data. Their study embeds neural networks into partial differential equations, enabling accurate approximation of unknown functions during training. By applying this approach to nonlocal aggregation-diffusion equations, the team successfully recovers interaction kernels and external potentials from steady-state data, whilst systematically investigating the impact of data quantity, quality, and properties on recovery accuracy. This work is significant because it extends existing parameter-fitting workflows to functional recovery, allowing the resulting partial differential equation to be used for standard system predictions.

Scientists often encounter partial differential equations containing unknown functions that are difficult or impossible to measure directly, which hinders predictive modelling. In ecology, for example, such unknowns can describe the underlying landscapes in which populations move; if these landscapes can be recovered from observations of population density, they can inform ecological management decisions.

In physics and chemical engineering, thermal or mass-transport systems are ubiquitous, often involving known temperatures or concentrations while the relevant source field, conductivity, or diffusivity is unknown, leading to well-studied inverse problems. Similar roles for spatial properties arise in phase-field and pattern-formation models, where heterogeneous tilts or mobility maps encode the effective energetic landscape.

In nonlocal aggregation-diffusion systems, the spatial structure to be inferred may be an external stimulus or the interaction kernel itself. Such unobserved components motivate inverse problems that aim to recover them from data, with classical parabolic source identification in heat and transport settings being a familiar instance.
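For orientation, a standard form of such a model (written here as a generic illustration rather than the authors' exact formulation) is ∂u/∂t = ∇·(∇u + u∇(W ∗ u) + u∇V), where u is the density, W is the interaction kernel, and V is the external potential.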

Classical parameter-fitting methods can recover scalar system parameters from data, and the study shows how such workflows can be extended to recover full functional parameters. The authors focus on the aggregation-diffusion equation and successfully recover interaction kernels and external potentials directly from steady-state data. The core of the approach lies in minimising the fixed-point residual, quantified as ∥T(u) − u∥, which vanishes exactly at numerical equilibria of the forward model, thereby avoiding differentiation of noisy data.
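
To make the approach concrete, the following is a minimal sketch, not the authors' implementation, of how a neural network standing in for an unknown interaction kernel can be trained by minimising a fixed-point residual on a single steady-state profile. It assumes linear diffusion on a one-dimensional periodic domain, where steady states satisfy u = M·exp(−(W ∗ u + V))/Z; this gives one convenient choice of the map T. The Gaussian kernel, cosine potential, grid size, and network architecture are all illustrative assumptions.

```python
# Minimal sketch (PyTorch), not the authors' code: embed a neural network for the
# unknown interaction kernel W inside a steady-state fixed-point map T, and train
# it by minimising the residual ||T(u) - u|| on an observed density profile.
import torch

torch.manual_seed(0)
n = 128
x = torch.linspace(0.0, 1.0, n + 1)[:-1]      # periodic grid on [0, 1)
dx = 1.0 / n

def true_W(r):
    return torch.exp(-20.0 * r**2)            # hypothetical "true" kernel (data generation only)

V = 0.5 * torch.cos(2 * torch.pi * x)         # assumed known external potential

def periodic_dist(x):
    d = torch.abs(x[:, None] - x[None, :])
    return torch.minimum(d, 1.0 - d)          # pairwise distances on the periodic domain

def fixed_point_map(u, W_vals, V):
    """T(u): mass-preserving Boltzmann-type map whose fixed points are steady states."""
    conv = (W_vals * u[None, :]).sum(dim=1) * dx   # discrete convolution (W * u)
    g = torch.exp(-(conv + V))
    mass = (u * dx).sum()
    return mass * g / (g * dx).sum()

D = periodic_dist(x)

# Manufacture a steady-state "observation" by iterating T with the true kernel.
u = torch.ones(n)
with torch.no_grad():
    for _ in range(500):
        u = fixed_point_map(u, true_W(D), V)
u_obs = u.clone()

# Small neural network standing in for the unknown kernel W(r).
W_net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1)
)
opt = torch.optim.Adam(W_net.parameters(), lr=1e-2)

for step in range(2000):
    opt.zero_grad()
    W_vals = W_net(D.reshape(-1, 1)).reshape(n, n)
    residual = fixed_point_map(u_obs, W_vals, V) - u_obs
    loss = (residual**2).sum() * dx           # discrete ||T(u_obs) - u_obs||^2
    loss.backward()
    opt.step()
```

In practice several observed profiles would be pooled in the loss, since, as discussed below, a single steady state generally does not determine the unknown components uniquely.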

Initial experiments utilising exact, noise-free data confirmed successful recovery of both single and multiple functional parameters. Subsequent analysis incorporated synthetic empirical measurements, revealing that recovery remains viable even under sparse sampling conditions. However, recovery performance degrades as measurement noise increases, with the extent of this degradation varying depending on the specific system and data quality.
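
Continuing the hypothetical sketch above (again, not the authors' protocol), this kind of robustness experiment can be mimicked by corrupting and subsampling the clean observation before it enters the residual loss:

```python
# Hypothetical corruption of the clean profile u_obs from the sketch above.
noise_level = 0.05
u_noisy = u_obs * (1.0 + noise_level * torch.randn_like(u_obs))  # multiplicative noise
keep = torch.arange(0, n, 4)                  # retain every 4th grid point (sparse sampling)
# The training loss would then compare T(u_noisy) and u_noisy only at the kept indices, e.g.
# loss = ((fixed_point_map(u_noisy, W_vals, V) - u_noisy)[keep]**2).sum() * dx
```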

The ease of function recovery is heavily influenced by factors such as the number of available solutions and their inherent properties; solutions that were more diverse and mutually informative enabled more accurate recovery. Furthermore, recovery failure can stem either from a lack of structural identifiability, an analytically predictable outcome, or from implementation details and data quality. The work establishes the feasibility of functional-component recovery: it holds at the theoretical level with exact data and remains achievable in practical scenarios with partial or noisy observations.

The Bigger Picture

Researchers have demonstrated a method for effectively ‘learning’ unknown functions embedded within partial differential equations, moving beyond simply identifying numerical parameters to reconstructing the functional relationships that govern a system’s behaviour. This isn’t merely a refinement of existing techniques; it represents a shift in how we tackle inverse problems in mathematical modelling.

For years, scientists have struggled to extract meaningful insights when crucial elements of a model remain unobservable. Traditional methods often rely on simplifying assumptions or extensive, painstaking experimentation. This new work bypasses those limitations by integrating neural networks directly into the partial differential equation itself, allowing the model to learn the missing functions from observed data.

The ability to recover these functions accurately, even with limited or noisy data, is particularly noteworthy. Nevertheless, the inherent non-identifiability of the problem remains a significant hurdle: the research shows that a single observation is rarely enough to uniquely determine the hidden functions, since one steady state typically constrains only a combined effect of the unknown components. At least two independent data points, or ‘solutions’, are needed to constrain the problem sufficiently.

This highlights the importance of experimental design and data acquisition strategies. Looking ahead, this technique could be extended to far more complex systems, potentially unlocking insights in fields ranging from fluid dynamics to materials science and even biological modelling. The next step isn’t simply about improving the accuracy of the recovery, but about developing methods to intelligently sample the solution space and overcome the limitations imposed by data scarcity. Ultimately, the goal is to create models that are not just predictive, but truly reflective of the underlying physical reality.

👉 More information
🗞 Learning functional components of PDEs from data using neural networks
🧠 ArXiv: https://arxiv.org/abs/2602.13174

Rohail T.

I am a quantum scientist exploring the frontiers of physics and technology. My work focuses on uncovering how quantum mechanics, computing, and emerging technologies are transforming our understanding of reality. I share research-driven insights that make complex ideas in quantum science clear, engaging, and relevant to the modern world.
