ECCOMAS 2024

UltraPINNs: Exploiting ultraweak implementations to boost the performance of Variational PINNs

  • Bastidas, Manuela (University of the Basque Country (UPV/EHU))
  • Uriarte, Carlos (BCAM and UPV/EHU)
  • Taylor, Jamie M (CUNEF University)
  • Rojas, Sergio (Pontificia Universidad Católica de Valparaíso)
  • Pardo, David (UPV/EHU, BCAM, and Ikerbasque)


We propose UltraPINNs, a method for solving PDEs with neural networks that offers two approaches: one based on the ultraweak variational formulation of the PDE, and another based on its weak formulation. In the latter, we compute the loss function by selecting test functions in $H^2 \cap H^1_0$ and transferring derivatives from the trial to the test space through integration by parts, thus avoiding numerical differentiation of the trial function with respect to space. We refer to this strategy as an ``ultraweak implementation'' rather than an ``ultraweak formulation'' because we retain the norms, spaces, and error bounds of the weak formulation.

Our method offers two main advantages: (i) due to the increased regularity of the integrand, classical quadrature rules yield higher precision without increasing the number of integration points; (ii) despite the often suboptimal convergence rates of gradient-based optimizers such as Adam, convergence can be accelerated by interpreting the neurons in the last hidden layer of the network as trial basis functions and employing a least-squares (LS) solver to compute the last-layer weights [1]. However, when assembling the LS matrix requires the spatial derivatives of the trial function, this assembly dominates the computational cost, which grows linearly with the matrix size. In UltraPINNs, the cost of constructing the LS matrix is significantly lower than in weak-type implementations, resulting in substantially faster training.

We demonstrate the performance of UltraPINNs equipped with a hybrid Adam/LS solver on numerical examples in 1D, 2D, and 3D. We observe substantial improvements in both convergence rate and computational cost, surpassing Adam or Adam/LS with weak-type implementations while also reducing the integration error.

[1] E. C. Cyr et al., Robust training and initialization of deep neural networks: An adaptive basis viewpoint, Mathematical and Scientific Machine Learning, 2020.
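
The following is a minimal illustrative sketch, not taken from the abstract, of what an ultraweak implementation of a weak-form (VPINN-type) loss can look like for the 1D Poisson model problem $-u''=f$ on $(0,1)$ with homogeneous Dirichlet conditions. Sinusoidal test functions lie in $H^2 \cap H^1_0$ and have analytic second derivatives, so both derivatives are moved onto the test space and the loss needs only point evaluations of the trial network. All names (`u_net`, `ultraweak_loss`, the chosen source term, layer sizes) are hypothetical choices for the example.

```python
import numpy as np
import jax
import jax.numpy as jnp

n_test, n_quad = 10, 64
xg, wg = np.polynomial.legendre.leggauss(n_quad)   # Gauss-Legendre rule on [-1, 1]
xq = jnp.asarray(0.5 * (xg + 1.0))                 # map nodes to (0, 1)
wq = jnp.asarray(0.5 * wg)                         # rescale weights accordingly

k = jnp.arange(1, n_test + 1)[:, None]             # test-function indices
V = jnp.sin(k * jnp.pi * xq[None, :])              # v_k(x_q),   shape (n_test, n_quad)
Vpp = -(k * jnp.pi) ** 2 * V                       # v_k''(x_q), known analytically

def f(x):
    # Manufactured source term with exact solution u(x) = sin(pi*x) (illustrative).
    return (jnp.pi ** 2) * jnp.sin(jnp.pi * x)

def u_net(params, x):
    # Trial function: the cutoff x*(1-x) enforces the Dirichlet conditions strongly.
    h = jnp.tanh(params["W1"] * x + params["b1"])
    return x * (1.0 - x) * (h @ params["W2"] + params["b2"])

def ultraweak_loss(params):
    # Weak residual after a second integration by parts:
    #   r_k = -int u v_k'' dx - int f v_k dx,  so no du/dx is ever formed.
    u = jax.vmap(lambda x: u_net(params, x))(xq)
    r = -(Vpp * u * wq).sum(axis=1) - (V * f(xq) * wq).sum(axis=1)
    return jnp.sum(r ** 2)

width = 20
params = {
    "W1": jax.random.normal(jax.random.PRNGKey(0), (width,)),
    "b1": jnp.zeros(width),
    "W2": jax.random.normal(jax.random.PRNGKey(1), (width,)) / width,
    "b2": jnp.zeros(()),
}
grads = jax.grad(ultraweak_loss)(params)           # usable inside any Adam-style loop
```

Note that the gradient of the loss with respect to the network parameters is still computed by automatic differentiation; what the ultraweak implementation removes is the differentiation of the trial function with respect to the spatial variable inside the integrand.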
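A second sketch, again hypothetical and reusing `xq`, `wq`, `V`, `Vpp`, `f`, and the hidden-layer parameters from the example above, illustrates the hybrid Adam/LS idea in the spirit of [1]: the last-hidden-layer neurons are treated as trial basis functions and the outer weights are obtained from a least-squares solve. Because derivatives act only on the test functions, assembling the LS matrix requires plain evaluations of the hidden features rather than their spatial gradients, which is the cost reduction the abstract attributes to UltraPINNs.

```python
def hidden_features(params, x):
    # Last-hidden-layer outputs multiplied by the boundary cutoff x*(1-x).
    return x * (1.0 - x) * jnp.tanh(params["W1"] * x + params["b1"])

def solve_last_layer(params):
    Phi = jax.vmap(lambda x: hidden_features(params, x))(xq)   # (n_quad, width)
    cutoff = (xq * (1.0 - xq))[:, None]                        # bias contributes the basis function x*(1-x)
    Psi = jnp.concatenate([Phi, cutoff], axis=1)               # trial basis psi_j evaluated at quadrature nodes
    A = -(Vpp * wq) @ Psi                                      # A[k, j] = -int psi_j v_k'' dx  (no d/dx of psi_j)
    b = (V * wq) @ f(xq)                                       # b[k]    =  int f v_k dx
    c, *_ = jnp.linalg.lstsq(A, b)                             # outer weights minimizing sum_k r_k^2
    return {**params, "W2": c[:-1], "b2": c[-1]}

params = solve_last_layer(params)   # e.g. alternate with Adam steps on the hidden-layer parameters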