Scientific Machine Learning for Closure Models of Multiscale Problems - a Differentiable Physics Approach
Discovering physics models is an ongoing, fundamental challenge in computational science and a crucial ingredient for predictive digital twins. In fluid flow problems, this discovery problem is usually known as the "closure problem", and the challenge is to discover a "closure model" that represents the effect of the small scales on the large scales. Well-known examples appear in large eddy simulations (LES) and in reduced-order models (ROMs). Recent work has shown that highly accurate closure models can be constructed using neural networks. However, integrating the neural network into the physics model ("neural closure models") is typically prone to numerical instabilities, as the training environment does not match the prediction environment.

Our new approach avoids numerical instability by enforcing kinetic energy conservation (or dissipation). To achieve this, we extend the system of equations for the large scales with an additional equation that represents compressed small scales. The closure model for this extended system is designed to be energy-conserving. We train our closure models to give accurate "a posteriori" results, meaning that they are trained while being embedded in the PDE solver, which requires so-called differentiable programming.

We will show how our new differentiable physics framework in Julia can be used to tackle the closure problem and, more generally, how it allows the coupling of PDE solvers acting at different scales while preserving gradient information.
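To illustrate the "a posteriori" training idea described above, the following is a minimal sketch in JAX (the abstract's actual framework is in Julia; this is not the authors' code). The toy 1D diffusion equation, the pointwise `tanh` "closure network", the forward-Euler rollout, and all names (`rhs`, `rollout`, `loss`) are illustrative assumptions. The point it demonstrates is that the closure is embedded in the time stepper, and the loss gradient is taken through the entire solver rollout rather than against instantaneous closure targets.

```python
# Illustrative sketch (not the authors' implementation): a neural closure
# embedded in a coarse-grid PDE solver, trained "a posteriori" by
# differentiating through the full time-stepping rollout.
import jax
import jax.numpy as jnp

N = 32          # coarse grid points (periodic domain)
dx = 1.0 / N
dt = 1e-3
nu = 1e-2       # viscosity

def rhs(u, params):
    # Resolved coarse-grid physics (diffusion) plus a closure correction.
    lap = (jnp.roll(u, -1) - 2.0 * u + jnp.roll(u, 1)) / dx**2
    w, b = params
    closure = jnp.tanh(u * w + b)   # tiny pointwise stand-in for a network
    return nu * lap + closure

def rollout(u0, params, steps=50):
    # Forward-Euler integration; jax.lax.scan keeps it differentiable.
    def step(u, _):
        return u + dt * rhs(u, params), None
    u_final, _ = jax.lax.scan(step, u0, None, length=steps)
    return u_final

def loss(params, u0, u_ref):
    # "A posteriori" loss: compare the rolled-out solution to a reference
    # state, so gradients flow through every solver step.
    return jnp.mean((rollout(u0, params) - u_ref) ** 2)

u0 = jnp.sin(2.0 * jnp.pi * jnp.arange(N) / N)
u_ref = jnp.zeros(N)                 # placeholder reference data
params = (jnp.array(0.1), jnp.array(0.0))

# Gradient of the trajectory-level loss w.r.t. the closure parameters,
# obtained by automatic differentiation through the PDE solver.
grads = jax.grad(loss)(params, u0, u_ref)
```

In a real setting the reference data would come from a filtered fine-grid (DNS) trajectory, and the scalar parameters would be the weights of an actual network; the mechanism of differentiating through the solver is the same.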