Rephrasing High-dimensional, PDE-based, Bayesian Inverse Problems
We present a novel approach for solving high-dimensional, Bayesian inverse problems involving partial differential equations (PDEs). Conventional methods that treat the forward model as a black box scale poorly with the number of unknowns (curse of dimensionality) due to the exorbitant number of costly likelihood evaluations they require. This either precludes solving such problems or necessitates access to high-performance computing resources. Our strategy addresses this issue while preserving the advantageous features of Bayesian formulations, such as the ability to handle stochastic noise and to furnish probabilistic estimates. We leverage the governing equations (PDEs), expressed as weighted residuals, and integrate them into a \textit{virtual} likelihood term [Kaltenbach, 2020]. We also treat both the PDE solution and the unknown parameters as latent random variables. We then employ stochastic variational inference to determine the posterior, with the specific algorithmic form depending on the number and choice of weight functions employed. In contrast to traditional solution strategies, each iteration requires computing only a single integral over the problem domain [Bao, 2020], enabling the efficient approximation of the posterior. Notably, our formulation yields a solution to the inverse problem even when a well-posed forward problem cannot be formulated. We showcase its effectiveness in diverse applications, including linear elasticity and diffusion problems.
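To make the construction concrete, the sketch below shows one way the ingredients described above can fit together for a one-dimensional diffusion problem: the PDE is imposed only through a virtual likelihood on a weighted residual, the solution and the unknown diffusivity are both latent, and a mean-field Gaussian posterior is fitted by stochastic variational inference, with each iteration estimating a single domain integral by Monte Carlo. This is a minimal sketch under stated assumptions, not the authors' implementation; the sine parameterizations, the single weight function, the priors, and all hyperparameters (K, LAM, SIG_N, the SGD settings) are illustrative choices not given in the abstract.

```python
# Minimal SVI sketch with a "virtual", weighted-residual likelihood for the
# 1-D diffusion problem -d/dx( a(x) du/dx ) = f(x) on (0, 1), u(0) = u(1) = 0.
# All parameterizations and hyperparameters are illustrative assumptions.
import jax
import jax.numpy as jnp

K = 8                                   # basis functions per field (assumed)
LAM = 1e-3                              # virtual-likelihood tolerance (assumed)
SIG_N = 0.01                            # observation noise std (assumed)
x_obs = jnp.linspace(0.1, 0.9, 9)       # sparse observation locations
f_src = lambda x: jnp.ones_like(x)      # known source term (assumed)

def u_fn(x, cu):                        # latent PDE solution; satisfies the BCs
    return jnp.sum(cu * jnp.sin(jnp.pi * jnp.arange(1, K + 1) * x))

def a_fn(x, ca):                        # latent, positive diffusivity
    return jnp.exp(jnp.sum(ca * jnp.sin(jnp.pi * jnp.arange(1, K + 1) * x)))

def pde_residual(x, cu, ca):            # strong-form residual at a point x
    flux = lambda t: a_fn(t, ca) * jax.grad(u_fn)(t, cu)
    return jax.grad(flux)(x) + f_src(x)

def weighted_residual(x_q, cu, ca):     # one Monte Carlo integral over (0, 1)
    w = jnp.sin(jnp.pi * x_q)           # a single, fixed weight function (assumed)
    r = jax.vmap(pde_residual, (0, None, None))(x_q, cu, ca)
    return jnp.mean(w * r)

def log_joint(z, x_q, y_obs):
    cu, ca = z[:K], z[K:]
    u_pred = jax.vmap(u_fn, (0, None))(x_obs, cu)
    log_lik = -0.5 * jnp.sum((y_obs - u_pred) ** 2) / SIG_N ** 2    # data likelihood
    log_virtual = -0.5 * weighted_residual(x_q, cu, ca) ** 2 / LAM  # residual "observed" as 0
    log_prior = -0.5 * jnp.sum(z ** 2)                              # standard-normal prior
    return log_lik + log_virtual + log_prior

def neg_elbo(phi, key, y_obs):          # single-sample, reparameterized ELBO
    mu, log_sig = phi
    k1, k2 = jax.random.split(key)
    z = mu + jnp.exp(log_sig) * jax.random.normal(k1, mu.shape)
    x_q = jax.random.uniform(k2, (64,)) # fresh integration points each iteration
    entropy = jnp.sum(log_sig)          # Gaussian entropy up to an additive constant
    return -(log_joint(z, x_q, y_obs) + entropy)

key = jax.random.PRNGKey(0)
cu_ref = 0.1 / jnp.arange(1, K + 1)     # placeholder "truth" for synthetic data only
y_obs = jax.vmap(u_fn, (0, None))(x_obs, cu_ref) + SIG_N * jax.random.normal(key, x_obs.shape)

phi = (jnp.zeros(2 * K), -2.0 * jnp.ones(2 * K))          # mean-field Gaussian q
grad_fn = jax.jit(jax.grad(neg_elbo))
for _ in range(2000):                                     # plain SGD for brevity
    key, sub = jax.random.split(key)
    g = grad_fn(phi, sub, y_obs)
    phi = tuple(p - 1e-3 * gp for p, gp in zip(phi, g))

mu, log_sig = phi                       # approximate posterior over (cu, ca)
```

In this toy setting the weighted-residual term plays the role usually taken by a forward solver: no well-posed forward problem is ever solved within the iteration, consistent with the remark above, and each step costs only one Monte Carlo estimate of a domain integral.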