Solving complex equations that describe physical phenomena often presents significant challenges for modern computational methods, particularly when those equations involve sharp changes and intricate boundaries. Naseem Abbas, Vittorio Colao, Davide Macri, and William Spataro, from several Italian research institutions, present a new approach to tackling these problems using physics-informed neural networks. Their work introduces a dual-network framework that separates the task of modelling the overall solution from accurately capturing behaviour near boundaries, a strategy that substantially improves accuracy and efficiency. The team demonstrates large reductions in error and improved boundary satisfaction across several benchmark problems, including equations governing heat flow and probability distributions, offering a broadly applicable and easily implemented advance in scientific computing.
Physics-Informed Neural Networks Solve PDEs
This research details the development of Physics-Informed Neural Networks (PINNs), a machine learning technique used to solve and learn from partial differential equations (PDEs). PINNs combine the power of neural networks with the governing equations of physical systems, allowing them to solve PDEs even with limited data and extrapolate beyond the training data, offering the potential to discover hidden physics within data. However, PINNs present challenges, including difficult training that requires careful parameter adjustment and sophisticated optimization strategies. Issues like unstable gradients and convergence to suboptimal solutions are common. Neural networks also exhibit a spectral bias: they learn low-frequency functions more easily than high-frequency ones, which can hinder accurate representation of solutions with sharp changes.
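The core idea described above, combining a data-free PDE residual with a boundary penalty in one training objective, can be illustrated with a minimal numpy sketch. This is not the authors' implementation: the function names are hypothetical, the PDE is a toy 1D Poisson problem, and central finite differences stand in for the automatic differentiation a real PINN would use.

```python
import numpy as np

def pinn_style_loss(u, f, x, u_left, u_right, h=1e-3, bc_weight=10.0):
    """Toy PINN-style loss for the 1D Poisson problem u''(x) = f(x) on [0, 1].

    `u` is any callable candidate solution (e.g. a neural network); central
    finite differences stand in for automatic differentiation here.
    """
    # PDE residual r(x) = u''(x) - f(x) at interior collocation points
    u_xx = (u(x + h) - 2.0 * u(x) + u(x - h)) / h**2
    pde_loss = np.mean((u_xx - f(x)) ** 2)
    # Boundary penalty: mismatch at the two endpoints
    bc_loss = (u(0.0) - u_left) ** 2 + (u(1.0) - u_right) ** 2
    return pde_loss + bc_weight * bc_loss

# The exact solution of u'' = -pi^2 sin(pi x), u(0) = u(1) = 0 is sin(pi x),
# so it scores a near-zero loss; a wrong candidate scores much higher.
x = np.linspace(0.05, 0.95, 64)
f = lambda x: -np.pi**2 * np.sin(np.pi * x)
exact = lambda x: np.sin(np.pi * x)
wrong = lambda x: np.full_like(np.asarray(x, dtype=float), 0.5)
```

Training a PINN amounts to minimizing such a loss over the network's weights; the fixed `bc_weight` here is exactly the kind of hand-tuned parameter that the constrained formulations discussed later try to remove.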
Solving PDEs on complex shapes or with complicated boundary conditions can also be difficult for standard PINN designs, and performance can suffer when dealing with high-dimensional problems. Researchers have developed numerous techniques to overcome these limitations, including adaptive sampling which intelligently focuses computational effort on areas where the PDE is most violated, and domain decomposition which breaks down complex problems into smaller, more manageable parts. Residual-based methods guide the network towards satisfying the governing equations by minimizing the error in the PDE, while gradient enhancement techniques improve the flow of information during training, preventing instability. Other advancements include adaptive activation functions, multi-scale methods, transfer learning, specialized convolutional neural networks for complex shapes, and implicit neural representations of solutions.
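Of the remedies listed above, residual-based adaptive sampling is simple enough to sketch directly. The snippet below is an illustrative version, not any specific published scheme: from a random pool of candidate points, it keeps the collocation points where a (here synthetic) PDE residual is largest.

```python
import numpy as np

def adaptive_resample(residual_fn, pool, k):
    """Residual-based adaptive sampling: from a pool of candidate points,
    keep the k collocation points where the PDE is most violated."""
    r = np.abs(residual_fn(pool))
    top = np.argsort(r)[-k:]          # indices of the k largest residuals
    return pool[top]

# Toy residual sharply peaked near x = 0.8: adaptive sampling should
# concentrate the retained collocation points around that feature.
rng = np.random.default_rng(0)
pool = rng.uniform(0.0, 1.0, size=2000)
residual = lambda x: np.exp(-((x - 0.8) / 0.05) ** 2)
picked = adaptive_resample(residual, pool, k=100)
```

In a real training loop the residual would come from the current network, and resampling would be repeated every few hundred optimizer steps so effort follows the regions where the equation is still violated.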
Recent work has explored Fourier Feature Networks to address spectral bias, Variational PINNs to improve stability, Multiplicative Filter Networks to better represent complex functions, and Finite Basis PINNs to improve solution representation. PINNs are a promising approach for solving PDEs and learning from physical systems, but addressing their challenges requires careful consideration of the problem, network design, and optimization strategy. Ongoing research continues to improve their performance, scalability, and robustness, demonstrating the exciting potential of this rapidly evolving field.
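The Fourier Feature Networks mentioned above combat spectral bias by mapping inputs through random sinusoids before the network sees them. A minimal sketch of that embedding, with illustrative shapes and frequency scale chosen here for demonstration:

```python
import numpy as np

def fourier_features(x, B):
    """Random Fourier feature map gamma(x) = [sin(2*pi*B*x), cos(2*pi*B*x)].

    Feeding gamma(x) instead of x into an MLP injects high-frequency
    structure that plain coordinate networks are slow to learn.
    """
    proj = 2.0 * np.pi * np.outer(x, B)            # (n_points, n_features)
    return np.concatenate([np.sin(proj), np.cos(proj)], axis=1)

rng = np.random.default_rng(1)
B = rng.normal(scale=10.0, size=16)   # random frequencies; scale sets bandwidth
x = np.linspace(0.0, 1.0, 8)
phi = fourier_features(x, B)          # shape (8, 32)
```

The standard deviation of `B` is a bandwidth knob: too small and the embedding behaves like the raw coordinates, too large and training becomes noisy.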
Decomposing Solutions with Dual Neural Networks
Scientists have developed a dual-network framework to improve the accuracy of physics-informed neural networks (PINNs) when solving complex equations featuring sharp changes and intricate boundaries. This work addresses limitations of standard PINNs in high-frequency and multi-scale scenarios by decomposing the solution into two components: one representing interior dynamics and another providing corrections near boundaries. Both networks share a unified physics model, ensuring consistency, while being specialized to their respective roles. To achieve this specialization, researchers implemented distance-weighted priors, prioritizing the boundary network near boundaries and the domain network in the interior.
The team then regularized network roles, further reinforcing their distinct functions. Training proceeds in two phases, beginning with uniform sampling to establish the roles of each network and stabilize boundary condition satisfaction. The second phase employs focused sampling, specifically concentrating on areas near the boundary, combined with a technique that gradually adjusts the influence of the role weights. This targeted approach concentrates computational effort on areas requiring the most refinement, improving accuracy and reducing computational cost. The method was evaluated on four benchmark problems, demonstrating a significant reduction in error and an improvement in boundary satisfaction compared to a single-network PINN. Ablation studies confirmed the contributions of soft boundary-interior specialization, annealed role regularization, and the two-phase training curriculum to the overall performance gains, highlighting the effectiveness of this innovative approach.
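The distance-weighted composition described in this section can be sketched in a few lines. The paper summary does not specify the exact functional form of the priors, so an exponential decay in the distance to the boundary is assumed here purely for illustration, with stand-in functions in place of the two trained subnetworks.

```python
import numpy as np

def blend(x, u_domain, u_boundary, eps=0.05):
    """Distance-weighted composition of the two subnetworks on [0, 1].

    The weight w(x) is ~1 at the boundary and decays into the interior,
    so the boundary network dominates near the edges and the domain
    network dominates inside. Exponential decay is an assumed form.
    """
    d = np.minimum(x, 1.0 - x)          # distance to the nearest boundary
    w = np.exp(-d / eps)
    return w * u_boundary(x) + (1.0 - w) * u_domain(x), w

u_dom = lambda x: np.sin(np.pi * x)     # stand-in interior prediction
u_bnd = lambda x: np.zeros_like(x)      # stand-in boundary correction
x = np.array([0.0, 0.5, 1.0])
u, w = blend(x, u_dom, u_bnd)
```

At the endpoints the boundary network's output is reproduced exactly, while at the midpoint the domain network takes over almost entirely, which is the "soft specialization" the role-regularization loss then reinforces during training.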
Dual Network PINNs Stabilize Complex Solutions
This research presents a dual-network physics-informed neural network (PINN) designed to improve the accuracy and stability of solutions to partial differential equations (PDEs), particularly those with complex boundary conditions or sharp gradients. The team addressed the challenge of balancing physics-based calculations with boundary enforcement by decomposing the solution into two subnetworks: a domain network responsible for interior dynamics and a boundary network focused on near-boundary corrections. Both networks share a common physics model, ensuring consistent physical behavior across the entire domain. The key innovation lies in softly specializing these networks using distance-weighted priors.
These priors encourage the domain network to dominate in the interior and the boundary network to focus on the boundary region. This specialization is achieved through a loss function that penalizes contributions from the incorrect network in each region. Experiments demonstrate that this approach reduces error, improves boundary satisfaction, and decreases mean absolute error compared to a single-network PINN across Laplace and Poisson equation benchmarks. Training proceeds in two phases, further enhancing performance. Phase 1 establishes the roles of each network using uniform sampling across the domain.
Phase 2 employs focused sampling, specifically concentrating on areas near the boundary, combined with a technique that gradually reduces the influence of the role-based loss. This allows the physics model to dominate as training progresses, refining the solution and improving accuracy. The team also implemented an augmented Lagrangian method for enforcing boundary conditions, eliminating the need for manual adjustment of penalty parameters. The method's effectiveness is confirmed through benchmarks on the 1D Fokker-Planck, Laplace, and Poisson equations, along with a fourth one-dimensional problem, demonstrating its broad applicability to PDEs with challenging characteristics.
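The augmented Lagrangian idea, enforcing a constraint without hand-tuning a fixed penalty weight, can be shown on a scalar toy problem. This is a generic textbook form of the method, not the paper's formulation: in the real framework the constraint would be the boundary-condition mismatch of the network, and gradient descent here stands in for the actual optimizer.

```python
def augmented_lagrangian(f_grad, c, c_grad, theta, mu=10.0,
                         outer=20, inner=200, lr=0.01):
    """Minimise f(theta) subject to c(theta) = 0 via the augmented
    Lagrangian L = f + lam*c + (mu/2)*c^2, updating the multiplier
    lam between inner minimisation passes instead of tuning a penalty."""
    lam = 0.0
    for _ in range(outer):
        for _ in range(inner):
            # gradient of f + lam*c + (mu/2)*c^2 with respect to theta
            g = f_grad(theta) + (lam + mu * c(theta)) * c_grad(theta)
            theta -= lr * g
        lam += mu * c(theta)            # dual update drives c(theta) -> 0
    return theta, lam

# Toy problem: minimise f(theta) = theta^2 subject to theta = 1.
# The constrained optimum is theta* = 1 with multiplier lam* = -2.
theta, lam = augmented_lagrangian(
    f_grad=lambda t: 2.0 * t,
    c=lambda t: t - 1.0,
    c_grad=lambda t: 1.0,
    theta=0.0,
)
```

The multiplier update replaces the manual search over boundary weights: as `lam` converges, the constraint is satisfied exactly rather than traded off against the physics loss.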
Dual Networks Resolve Multi-Scale Equations
This research presents a dual-network physics-informed neural network (PINN) architecture designed to improve the accuracy and efficiency of solving multi-scale partial differential equations. The team decomposed the solution into separate components representing interior dynamics and boundary corrections, coupling these networks through a unified physics model and a novel distance-weighted specialization technique. This approach addresses challenges faced by traditional single-network PINNs, specifically optimization interference between physics and boundary conditions, and difficulty in resolving sharp gradients within solutions. Across benchmark problems including Laplace, Poisson, and Fokker-Planck equations, the dual-network framework consistently outperforms standard PINNs, achieving substantially lower errors, improved boundary adherence, and better resolution of steep solution variations.
Ablation studies confirm the complementary roles of the boundary-interior specialization, annealed role regularization, and the two-phase sampling curriculum employed during training. The method is notable for its simplicity, minimal computational overhead, and seamless integration with existing PINN workflows. The authors identify the framework's foundational elements, namely the shared physics model, soft boundary-interior specialization, augmented Lagrangian formulation, and two-phase training, as crucial to its success. Future work could extend this approach to higher-dimensional, time-dependent, and stochastic systems, potentially broadening the applicability of physics-informed models to a wider range of complex problems.
👉 More information
🗞 A Multi-Phase Dual-PINN Framework: Soft Boundary-Interior Specialization via Distance-Weighted Priors
🧠 ArXiv: https://arxiv.org/abs/2511.23409
