ECCOMAS 2024

Sparsifying dimensionality reduction of PDE solution data with Bregman learning

  • Heeringa, Tjeerd Jan (University of Twente)
  • Brune, Christoph (University of Twente)
  • Guo, Mengwu (University of Twente)


Classical model reduction techniques project the governing equations onto a linear subspace of the original state space. More recent data-driven techniques use neural networks to enable nonlinear projections. While those approaches are not limited by the Kolmogorov $n$-width, they lack the ability to guarantee a physically meaningful representation of PDE-generated data and may overestimate the latent dimensionality. To overcome these challenges, we propose a multistep algorithm that induces sparsity in the encoder-decoder networks for an effective compression of the latent space. In the first step, we train a sparsely initialized network using linearized Bregman iterations. In the second step, we further compress the latent space dimensionality by POD. Afterwards, we use a bias propagation technique to reduce the network density even further. We apply this algorithm to three representative PDE models: 1D diffusion, 1D advection, and 2D reaction-diffusion. Compared to conventional training methods like Adam, our method achieves similar accuracy with 30\% fewer parameters and a significantly smaller latent space.
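As a rough illustration of the sparsity-inducing training mentioned above, the following NumPy sketch shows a generic linearized Bregman iteration for an elastic-net-type regularizer $J(\theta) = \lambda\|\theta\|_1 + \tfrac{1}{2\delta}\|\theta\|_2^2$, applied to a toy sparse least-squares problem rather than to an encoder-decoder network. All names (`soft_threshold`, `linearized_bregman`, the step sizes, and the toy data) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def soft_threshold(v, lam):
    # Component-wise shrinkage: the proximal map of lam * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def linearized_bregman(grad, theta0, lam=0.05, delta=1.0, tau=0.1, n_iters=2000):
    """Linearized Bregman iteration for min_theta L(theta) with regularizer
    J(theta) = lam*||theta||_1 + (1/(2*delta))*||theta||_2^2.

    `grad` returns the gradient of the loss L at theta. The subgradient
    variable v starts at zero, so the iterates are sparse from the outset
    and parameters only "activate" once enough gradient mass accumulates.
    """
    v = np.zeros_like(theta0)                    # dual (subgradient) variable
    theta = delta * soft_threshold(v, lam)       # initial primal iterate (all zeros)
    for _ in range(n_iters):
        v = v - tau * grad(theta)                # dual update: accumulate gradients
        theta = delta * soft_threshold(v, lam)   # primal update: prox of J
    return theta

# Toy demo: recover a sparse vector from noiseless linear measurements
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20)
x_true[[2, 7, 11]] = [1.5, -2.0, 0.8]
y = A @ x_true
grad = lambda x: A.T @ (A @ x - y) / len(y)      # gradient of 0.5*||Ax - y||^2 / m

x_hat = linearized_bregman(grad, np.zeros(20))
```

In contrast to plain $\ell_1$-penalized gradient descent, the iteration behaves like an inverse scale space flow: coefficients enter the support one by one as their accumulated subgradient exceeds the threshold, which is what makes a sparse initialization effective during training.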