Geometric Neural Operators for Bayesian Inverse Problems and Optimization Under Uncertainty
Deep neural networks (DNNs) have emerged as leading contenders for constructing surrogates of infinite-dimensional maps, known as neural operators. We show that black-box application of DNNs to problems with infinite-dimensional parameter fields leads to poor results when training data are limited by the expense of the high-fidelity model. Instead, by constructing a network architecture that captures the geometry of the map (in particular its smoothness, anisotropy, intrinsic low-dimensionality, and sensitivity), one can obtain a dimension-independent reduced basis neural operator that is accurate and optimization-aware using limited training data. We employ this reduced basis neural operator to make tractable the solution of PDE-constrained Bayesian inverse problems and optimal control under uncertainty, with applications to problems governed by wave propagation, hyperelasticity, and viscous flows.
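The abstract does not specify the architecture, but the reduced basis construction it describes can be sketched as follows: project the high-dimensional parameter field onto a small input basis, apply a small network in reduced coordinates, and lift the result with an output basis. All dimensions, the random bases, and the tiny network below are hypothetical placeholders (in practice the bases would come from, e.g., derivative-informed or POD subspaces of the map).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical discretization sizes and reduced-basis ranks (illustrative only).
d_in, d_out = 512, 256   # discretized parameter / output dimensions
r_in, r_out = 8, 8       # reduced ranks; the network size depends only on these

# Placeholder orthonormal reduced bases; in practice these would capture the
# map's dominant directions (anisotropy, low-dimensionality, sensitivity).
Phi_in, _ = np.linalg.qr(rng.standard_normal((d_in, r_in)))
Phi_out, _ = np.linalg.qr(rng.standard_normal((d_out, r_out)))

# Small dense network acting only on reduced coordinates, so its cost is
# independent of the discretization dimension.
W1 = 0.1 * rng.standard_normal((r_in, 32))
W2 = 0.1 * rng.standard_normal((32, r_out))

def reduced_basis_operator(m):
    """Map a discretized parameter field m to an approximate PDE output."""
    z = Phi_in.T @ m      # encode: project onto the input reduced basis
    h = np.tanh(z @ W1)   # cheap nonlinear map in reduced coordinates
    q = h @ W2            # reduced output coordinates
    return Phi_out @ q    # decode: lift back to the full output space

m = rng.standard_normal(d_in)   # a random stand-in for a parameter sample
u = reduced_basis_operator(m)
print(u.shape)
```

Because the trainable weights live entirely in the reduced coordinates, refining the discretization changes only the (fixed, untrained) bases, which is one way to read the "dimension-independent" claim.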