ECCOMAS 2024

Regularity-Conforming Neural Networks for PDEs

  • Taylor, Jamie


Neural Networks (NNs) are increasingly applied as discretised function spaces for solving Partial Differential Equations (PDEs). Whilst the Universal Approximation Theorem guarantees that a sufficiently large NN can approximate Sobolev functions (the natural functions that arise in PDEs), we will demonstrate that, in practice, low-regularity solutions can lead to convergence issues during training. To overcome this, we propose regularity-conforming architectures, in which a priori regularity information is built into the NN. Such architectures are also inherently explainable, allowing the definition of novel loss functions. As our case study, we use a 2D transmission problem with discontinuous materials, which arises in several application domains, e.g., geophysics, and whose regularity is known precisely throughout the domain: solutions may admit power-like singularities and discontinuities in the gradient across material interfaces. In the classical L-shape problem, our proposed architecture improves the H1-error by a factor of ten with respect to a classical architecture. In the case of four distinct materials, where both jump discontinuities in the derivative and power-type singularities are present, our explainable architecture permits the definition of a PINNs-type loss based on the strong formulation of the PDE with an interface condition, obtaining relative H1-errors of 0.5%.
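A minimal sketch of what "building a priori regularity information into the NN" might look like for the L-shape problem, assuming (this is an illustration, not the authors' exact architecture) a multiplicative-additive ansatz: the known corner singularity r^(2/3) sin(2θ/3) is hard-coded, and small smooth networks only need to learn regular functions. Tiny untrained numpy MLPs stand in for the trainable components.

```python
import numpy as np

# Sketch of a regularity-conforming ansatz for the L-shape problem
# (assumption: illustrative, not the architecture from the talk).
# The leading corner singularity is known a priori, so it is built in
# explicitly and multiplied by a smooth, trainable coefficient network.

rng = np.random.default_rng(0)

def mlp_init(sizes):
    """Random weights for a tiny tanh MLP (stand-in for a trained network)."""
    return [(rng.normal(0, 0.5, (m, n)), np.zeros(n))
            for m, n in zip(sizes, sizes[1:])]

def mlp(params, x):
    """Forward pass: tanh hidden layers, linear output layer."""
    for W, b in params[:-1]:
        x = np.tanh(x @ W + b)
    W, b = params[-1]
    return x @ W + b

smooth_net = mlp_init([2, 16, 16, 1])  # learns the regular part of u
coeff_net  = mlp_init([2, 16, 16, 1])  # learns a smooth singular coefficient

def singular_factor(xy):
    """Known leading singularity r^(2/3) sin(2*theta/3) at the re-entrant
    corner (placed at the origin here; a real implementation would use the
    angle measured within the 3*pi/2 opening of the L-shaped domain)."""
    r = np.linalg.norm(xy, axis=-1, keepdims=True)
    theta = np.arctan2(xy[..., 1:2], xy[..., 0:1])
    return r ** (2.0 / 3.0) * np.sin(2.0 * theta / 3.0)

def u(xy):
    # Ansatz: smooth part + (smooth coefficient) * (known singular function).
    # Gradient descent then only has to fit smooth functions, avoiding the
    # convergence issues that low-regularity targets cause in training.
    return mlp(smooth_net, xy) + mlp(coeff_net, xy) * singular_factor(xy)

pts = rng.uniform(-1.0, 1.0, (5, 2))
print(u(pts).shape)  # one scalar value per collocation point
```

The same pattern extends to the four-material transmission case: because the singular and smooth parts are separate, explicit terms for each material subdomain can be exposed, which is what makes a strong-form PINNs loss with an interface condition expressible.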