Applying Adjoints Twice: An Efficient Gradient Implementation for Models with Linear Structure with Applications in Reconstruction for Electron Probe Microanalysis
Computational techniques to solve inverse problems are often built upon gradient information of a scalar quantity. Both gradient-based optimization methods (steepest descent, BFGS) and stochastic methods (Hamiltonian Monte Carlo) require gradients. For such methods to be applied efficiently, efficient computation of the gradient is key. Gradient implementations are often based on adjoint methods, e.g. the adjoint mode of algorithmic differentiation or the continuous adjoint method for PDE-constrained models. Fundamentally, adjoint methods exploit the linearity of the derivative and of adjoint operators; however, they are not restricted to derivatives. We consider a PDE-constrained model with a particular structure: the PDE solution is linear in an excitation, and linear extractions are taken from the PDE solution. Both may depend non-linearly on the unknowns of the inverse problem. Additionally, we assume that the number of excitations is much larger than the number of extractions per PDE solution. A naive implementation of the gradient would require [number of unknowns] × [number of excitations] expensive PDE solutions. Applying an adjoint method twice yields an implementation that requires only [number of extractions] PDE solutions, as sketched below. Our main application is the inverse problem of material reconstruction in electron probe microanalysis (EPMA), a non-destructive imaging method for solid material samples based on X-ray measurements. A sample is scanned successively by an electron beam (many excitations) while X-ray intensities at characteristic wavelengths are measured (a few extractions). The goal is to reconstruct the mass concentrations of the sample (many unknowns) based on a deterministic model for electron transport and X-ray emission. We also demonstrate the approach on an academic example problem.
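A minimal sketch of the exploited structure follows; the notation (PDE operator L, excitations b_j, extraction functionals c_i, unknown p) is introduced here for illustration only and is not taken from the abstract, and the excitations are assumed independent of p. Writing the discretized forward problem as L(p) u_j = b_j for j = 1, ..., m excitations, with extractions y_{ij} = c_i(p)^T u_j for i = 1, ..., n extraction functionals and n << m, the key identity is

\[
  y_{ij}(p) \;=\; c_i(p)^\top L(p)^{-1} b_j \;=\; \bigl(L(p)^{-\top} c_i(p)\bigr)^\top b_j \;=:\; \lambda_i(p)^\top b_j ,
\]

so all m·n extracted quantities can be assembled from the n adjoint solutions \lambda_i = L(p)^{-\top} c_i(p) instead of the m forward solutions u_j = L(p)^{-1} b_j (first adjoint application). Applying adjoint-mode differentiation to the scalar objective built from the y_{ij} (second adjoint application) then yields the full gradient at a cost that scales with the number of extractions rather than with [number of unknowns] × [number of excitations].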