Partial differential equations underpin our understanding of countless physical phenomena, and scientists are increasingly exploring the potential of advanced machine learning models to solve them. Min Zhu from Yale University, Jingmin Sun from Johns Hopkins University, Zecheng Zhang from the University of Notre Dame, Hayden Schaeffer from the University of California Los Angeles, and Lu Lu present a new framework, PI-MFM, which significantly advances this field. Their work addresses two key limitations of current models: the need for vast amounts of training data and a disregard for the underlying physics governing the equations. It does so by directly incorporating physical laws during both initial training and subsequent adaptation. The team demonstrates that PI-MFM not only surpasses purely data-driven approaches, particularly when data is limited or noisy, but also learns remarkably efficiently, achieving high accuracy on unseen equations with minimal labeled data and even offering the possibility of solving problems without any labeled solutions at all. This achievement represents a substantial step towards practical, scalable, and transferable tools for solving partial differential equations across a wide range of scientific and engineering disciplines.
Partial differential equations (PDEs) govern a wide range of physical systems, and recent multimodal foundation models have shown promise for learning PDE solution operators across diverse equation families. However, existing multi-operator learning approaches are data-hungry and often neglect the underlying physics during training. This work proposes a physics-informed multimodal foundation model (PI-MFM) framework that directly enforces governing equations during pretraining and adaptation, improving data efficiency and generalization.
Foundation Models Generalize Across Partial Differential Equations
This research centers on foundation models for solving and learning about PDEs, aiming to move beyond models trained for specific equations toward models that generalize to new PDEs and related tasks. The approach mirrors the success of large language models in natural language processing, and researchers are exploring how to build similar foundation models for scientific computing. A central idea is to train models on a distribution of PDEs rather than a single equation, improving generalization. The authors explore combining neural networks with symbolic representations of equations, allowing the model to learn both functional relationships and the underlying mathematical structure.
Physics-Informed Neural Networks (PINNs) are a core technique, incorporating the physics of the PDE into the loss function to encourage physically plausible solutions. The strategy involves pre-training models on a large dataset of PDEs and then fine-tuning them on specific problems. Researchers are also investigating in-context learning, uncertainty quantification, federated learning, and the Neural Operator Element Method to further enhance model capabilities. This research builds upon existing models such as LeMON, BCAT, and Unisolver, striving for improved generalization, scalability, efficiency, and data efficiency.
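To make the PINN idea concrete, here is a minimal sketch in PyTorch of a physics-informed loss for a 1D diffusion equation u_t = nu * u_xx; the network architecture, sampling scheme, and initial condition are illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn as nn

# Minimal PINN sketch (illustrative, not the authors' code) for the
# 1D diffusion equation u_t = nu * u_xx. The network maps (x, t) to
# u(x, t); the PDE residual is computed with automatic differentiation
# and penalized alongside an initial-condition data term.

class PINN(nn.Module):
    def __init__(self, width=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, 1),
        )

    def forward(self, x, t):
        return self.net(torch.cat([x, t], dim=-1))

def pde_residual(model, x, t, nu=0.1):
    x = x.requires_grad_(True)
    t = t.requires_grad_(True)
    u = model(x, t)
    ones = torch.ones_like(u)
    u_t = torch.autograd.grad(u, t, ones, create_graph=True)[0]
    u_x = torch.autograd.grad(u, x, ones, create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x, x, ones, create_graph=True)[0]
    return u_t - nu * u_xx  # vanishes for an exact solution

model = PINN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x_c = torch.rand(256, 1)           # interior collocation points
t_c = torch.rand(256, 1)
x0 = torch.rand(64, 1)             # initial-condition points at t = 0
u0 = torch.sin(torch.pi * x0)      # example initial condition

for step in range(1000):
    opt.zero_grad()
    loss_pde = pde_residual(model, x_c, t_c).pow(2).mean()
    loss_ic = (model(x0, torch.zeros_like(x0)) - u0).pow(2).mean()
    (loss_pde + loss_ic).backward()
    opt.step()
```

The residual term acts as label-free supervision: it can be evaluated at arbitrary collocation points without ever solving the equation, which is what makes the physics loss so valuable when labeled data is scarce.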
Physics-Informed Foundation Model Solves PDEs
Scientists have developed a physics-informed multimodal foundation model (PI-MFM) that represents a significant advance in solving partial differential equations. The framework directly incorporates governing equations during both training and adaptation, improving data efficiency and generalization across diverse equation families. PI-MFM accepts symbolic representations of PDEs as input and automatically constructs the corresponding physics-based losses using vectorized derivative computations. Experiments on a benchmark of 13 parametric, one-dimensional, time-dependent PDE families demonstrate that PI-MFM consistently outperforms purely data-driven methods, especially with limited labeled data or partially observed domains. The inclusion of physics losses enhances robustness against noise, and strategic resampling of collocation points further refines accuracy. A key breakthrough is the demonstration of zero-shot physics-informed fine-tuning, where the model adapts to unseen PDE families using only PDE residuals and initial/boundary conditions, without any labeled solution data.
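The paper's actual parser is not reproduced here, but the following hypothetical Python sketch illustrates the general idea of mapping a symbolic PDE specification to a residual loss: derivatives are computed once with automatic differentiation over a batch of collocation points, then reused across the terms of whichever equation is specified. The derivative table and the `assemble_residual` helper are illustrative names, not the authors' API.

```python
import torch
import torch.nn as nn

# Hypothetical illustration (not the authors' implementation) of turning
# a symbolic PDE specification into a residual loss. Derivative tensors
# are computed once with autograd, then a term list such as
# [(1.0, "u_t"), (-0.1, "u_xx")] is assembled into sum(c_i * term_i),
# which should vanish for an exact solution.

model = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 1))
x = torch.rand(256, 1, requires_grad=True)
t = torch.rand(256, 1, requires_grad=True)
u = model(torch.cat([x, t], dim=-1))

ones = torch.ones_like(u)
u_t = torch.autograd.grad(u, t, ones, create_graph=True)[0]
u_x = torch.autograd.grad(u, x, ones, create_graph=True)[0]
u_xx = torch.autograd.grad(u_x, x, ones, create_graph=True)[0]

# Derivative table shared by every equation built from these terms.
derivs = {"u": u, "u_t": u_t, "u_x": u_x, "u_xx": u_xx, "u*u_x": u * u_x}

def assemble_residual(terms):
    """terms: list of (coefficient, name) pairs encoding sum c_i * d_i = 0."""
    return sum(c * derivs[name] for c, name in terms)

# Example: Burgers' equation u_t + u*u_x - nu*u_xx = 0 as a term list.
nu = 0.1
residual = assemble_residual([(1.0, "u_t"), (1.0, "u*u_x"), (-nu, "u_xx")])
loss_pde = residual.pow(2).mean()
```

Because the derivative tensors are computed once per batch and shared across terms, this kind of assembly naturally vectorizes across collocation points and across equations in a family.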
Physics-Informed Model Solves Equations Accurately
This research presents PI-MFM, a new physics-informed multimodal foundation model that significantly advances the solution of partial differential equations. The team developed a framework that incorporates the governing equations of physical systems directly into the training process, allowing the model to learn more efficiently and accurately than purely data-driven approaches. By accepting symbolic representations of these equations as input and automatically calculating residual losses, PI-MFM scales effectively across different equation types. The results demonstrate substantial performance improvements, particularly when dealing with limited or noisy data and partially observed domains.
Notably, the model achieves high accuracy with minimal labeled data and exhibits robust performance even in the presence of significant noise. Furthermore, the researchers demonstrated a zero-shot fine-tuning capability, where the model rapidly adapts to solve previously unseen equations using only the physics-informed objectives and boundary conditions. Future work will focus on refining the training process and extending PI-MFM to handle more complex problems, incorporating reliable extrapolation and uncertainty quantification for real-world applications.
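As a rough illustration of what zero-shot physics-informed fine-tuning could look like, the sketch below adapts a stand-in model to a new equation using only the PDE residual and an initial-condition penalty, with residual-based resampling of collocation points. None of this code is from the paper: the tiny MLP merely stands in for the pretrained foundation model, and the resampling rule is one plausible reading of "strategic resampling".

```python
import torch
import torch.nn as nn

# Hypothetical sketch of zero-shot physics-informed adaptation with
# residual-based resampling of collocation points. No labeled solution
# data is used: the loss is the PDE residual of u_t = nu * u_xx plus
# an initial-condition penalty.

model = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

def residual(m, x, t, nu=0.1):  # residual of u_t - nu * u_xx = 0
    x, t = x.requires_grad_(True), t.requires_grad_(True)
    u = m(torch.cat([x, t], dim=-1))
    ones = torch.ones_like(u)
    u_t = torch.autograd.grad(u, t, ones, create_graph=True)[0]
    u_x = torch.autograd.grad(u, x, ones, create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x, x, ones, create_graph=True)[0]
    return u_t - nu * u_xx

for step in range(500):
    # Draw a large candidate pool, then keep the points where the
    # current residual is largest (the scoring pass is discarded).
    x_pool, t_pool = torch.rand(2048, 1), torch.rand(2048, 1)
    r = residual(model, x_pool, t_pool).abs().detach().squeeze(-1)
    idx = torch.topk(r, k=512).indices
    x_c, t_c = x_pool[idx].detach(), t_pool[idx].detach()

    x0 = torch.rand(64, 1)
    u0 = torch.sin(torch.pi * x0)  # example initial condition
    opt.zero_grad()
    loss = residual(model, x_c, t_c).pow(2).mean()
    loss = loss + (model(torch.cat([x0, torch.zeros_like(x0)], 1)) - u0).pow(2).mean()
    loss.backward()
    opt.step()
```

Concentrating collocation points where the residual is largest spends the physics loss where the model is currently worst, which is the intuition behind the accuracy gains the authors attribute to resampling.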
👉 More information
🗞 PI-MFM: Physics-informed multimodal foundation model for solving partial differential equations
🧠 ArXiv: https://arxiv.org/abs/2512.23056
