Reducing Uncertainty in Digital Twin Models by Leveraging Data from a Population of Related Assets
While the concept of digital twins has tremendous potential to revolutionize a wide range of applications, work is still needed to enable mathematically robust, stable, and efficient algorithms for data assimilation and optimal experimental design to realize this potential. In the context of employing Bayesian inference to learn the relative likelihood of parameter values for a particular asset, the prior may be too general, or the data may not be sufficiently explanatory, to inform predictions about the individual asset. Consequently, we seek to leverage data from a population of related assets to construct informative priors that reduce uncertainty in the digital twin of any single asset. This may be the same type of data that we collect from the asset itself, or different data collected during destructive testing of the related assets. To accomplish this objective, we use data-consistent inversion [1], a measure-theoretic framework that, in this context, constructs a population-informed prior as the pullback of the observed probability measure from the population; that is, the push-forward of this prior through the observation operator matches the observed probability density of the population data. Our numerical examples demonstrate that population-informed priors significantly increase the Kullback–Leibler divergence from the posterior to the prior in comparison to uninformative priors. These results are complemented by theory for linear-Gaussian inference that establishes the conditions under which our approach is guaranteed to improve posterior estimates of uncertainty. We also develop a metric to detect when a particular asset is out-of-distribution relative to the population, in which case the population data may be uninformative.
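
For readers unfamiliar with data-consistent inversion, the following is a minimal sketch of the pullback construction in the density-based form common in the literature on [1]; the symbols (an initial density pi_init, observation operator Q, observed population density pi_obs) are introduced here for illustration and are not reproduced from the abstract.

```latex
% Sketch of the population-informed prior as a data-consistent update.
% pi_init: initial (uninformative) density on the parameters lambda
% Q:       observation operator
% pi_Q^init: push-forward of pi_init through Q
% pi_obs:  observed probability density of the population data
\[
  \pi_{\mathrm{pop}}(\lambda)
  \;=\;
  \pi_{\mathrm{init}}(\lambda)\,
  \frac{\pi_{\mathrm{obs}}\bigl(Q(\lambda)\bigr)}
       {\pi_{Q}^{\mathrm{init}}\bigl(Q(\lambda)\bigr)}.
\]
```

By construction, the push-forward of pi_pop through Q equals pi_obs, which is the pullback property described above.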
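
A minimal numerical sketch of this construction follows, assuming kernel density estimates for the push-forward and population densities and rejection sampling for the update; the function `observe` and all data below are hypothetical placeholders, not the assets, operators, or data from the work.

```python
# Sketch: population-informed prior via data-consistent inversion.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

def observe(lam):
    # Placeholder observation operator Q(lambda); scalar-valued here.
    return np.sin(lam) + 0.1 * lam**2

# 1. Draw samples from an uninformative initial density pi_init.
lam_init = rng.uniform(-3.0, 3.0, size=10_000)

# 2. Push the initial samples through Q and estimate the
#    push-forward density pi_Q^init with a KDE.
q_init = observe(lam_init)
pf_init = gaussian_kde(q_init)

# 3. Estimate the observed density pi_obs from population data
#    (synthetic stand-in data for illustration only).
q_pop = observe(rng.normal(1.0, 0.3, size=2_000))
pi_obs = gaussian_kde(q_pop)

# 4. Reweight by the ratio pi_obs(Q(lam)) / pi_Q^init(Q(lam)) and
#    accept/reject to sample the population-informed prior.
ratio = pi_obs(q_init) / pf_init(q_init)
accept = rng.uniform(0.0, 1.0, size=ratio.size) < ratio / ratio.max()
lam_pop_prior = lam_init[accept]

print(f"accepted {lam_pop_prior.size} population-informed prior samples")
```

The accepted samples could then serve as the prior for asset-specific Bayesian inference, which is the role the population-informed prior plays in the abstract.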
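
The abstract does not reproduce the linear-Gaussian conditions, but for context, the standard quantities involved in such an analysis are the Gaussian posterior update and the Kullback–Leibler divergence between Gaussians; these are textbook formulas, not the specific conditions established in the work.

```latex
% Linear-Gaussian model: prior N(mu_0, Sigma_0), data y = A*lambda + eps,
% noise eps ~ N(0, Gamma), parameter dimension d.
\[
  \Sigma_{\mathrm{post}} = \bigl(\Sigma_0^{-1} + A^{\top}\Gamma^{-1}A\bigr)^{-1},
  \qquad
  \mu_{\mathrm{post}} = \Sigma_{\mathrm{post}}\bigl(\Sigma_0^{-1}\mu_0 + A^{\top}\Gamma^{-1}y\bigr),
\]
\[
  D_{\mathrm{KL}}\bigl(\mathcal{N}(\mu_{\mathrm{post}},\Sigma_{\mathrm{post}})\,\big\|\,
                       \mathcal{N}(\mu_0,\Sigma_0)\bigr)
  = \tfrac{1}{2}\Bigl[\operatorname{tr}\bigl(\Sigma_0^{-1}\Sigma_{\mathrm{post}}\bigr)
  + (\mu_0-\mu_{\mathrm{post}})^{\top}\Sigma_0^{-1}(\mu_0-\mu_{\mathrm{post}})
  - d + \ln\tfrac{\det\Sigma_0}{\det\Sigma_{\mathrm{post}}}\Bigr].
\]
```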