ECCOMAS 2024

New results in interpretability of models and algorithms with dependent inputs

  • Il Idrissi, Marouane (EDF R&D)
  • Herin, Margot (EDF R&D)
  • Chabridon, Vincent (EDF R&D)
  • Iooss, Bertrand (EDF R&D)
  • Bousquet, Nicolas (EDF R&D)

By analogy with the global sensitivity analysis of numerical models, the interpretability of a learned model, or more generally of an algorithm embedded in artificial intelligence tools, can be defined as the production of clear (explainable) diagnostics on the influence of input quantities varying according to probability distributions (e.g., features, raw data) on the predicted quantities. When these inputs are dependent, which is often the case in practice, producing such diagnostics remains challenging. Shapley values, derived from cooperative game theory, have become an extremely common answer in the applied community. However, they are known to yield flawed attributions when applied to correlated inputs, some of which play no role in the prediction. To overcome this difficulty, we recently developed Proportional Marginal Effects, which significantly refine these diagnostics. In addition, new results enable a generalized Hoeffding decomposition to be constructed, so that marginal, correlation and interaction effects in the model or algorithm can be distinguished.
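
As background, the Shapley effects commonly used in global sensitivity analysis can be sketched as follows; the notation below (d inputs X_1, ..., X_d, output Y, value function val) is the standard construction from the literature, given for illustration rather than taken from the talk. With val(A) = Var(E[Y | X_A]) / Var(Y) the share of output variance explained by a subset A of inputs, the Shapley effect of input X_i is

  \[
    \mathrm{Sh}_i = \frac{1}{d} \sum_{A \subseteq \{1,\dots,d\} \setminus \{i\}}
      \binom{d-1}{|A|}^{-1}
      \bigl( \mathrm{val}(A \cup \{i\}) - \mathrm{val}(A) \bigr),
    \qquad \sum_{i=1}^{d} \mathrm{Sh}_i = 1.
  \]

The efficiency property (the effects sum to one even for dependent inputs) explains much of their appeal; the flaw alluded to above is that an input that does not appear in the model at all can still receive a strictly positive effect when it is correlated with influential inputs.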
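
A minimal sketch of this flaw in code, assuming a linear model with Gaussian inputs (an illustrative setting chosen here, not the authors' code): conditional variances are available in closed form, so exact Shapley effects can be computed by brute-force subset enumeration. The function names val and shapley_effects are hypothetical.

  # Exact Shapley effects for Y = beta^T X with X ~ N(0, Sigma), by
  # brute-force enumeration (cost 2^d, fine for small d). The third input
  # has a zero coefficient yet receives a positive Shapley effect because
  # it is correlated with the second, influential, input.
  import itertools
  import math

  import numpy as np

  def val(A, beta, Sigma):
      # Share of Var(Y) explained by the inputs indexed by A. For a
      # Gaussian linear model, Var(Y | X_A) = b_C^T Sigma_{C|A} b_C with
      # C the complement of A, so val(A) = 1 - Var(Y | X_A) / Var(Y).
      d = len(beta)
      A = list(A)
      C = [i for i in range(d) if i not in A]
      var_y = beta @ Sigma @ beta
      if not C:
          return 1.0
      cond = Sigma[np.ix_(C, C)]
      if A:
          cond = cond - Sigma[np.ix_(C, A)] @ np.linalg.solve(
              Sigma[np.ix_(A, A)], Sigma[np.ix_(A, C)]
          )
      return 1.0 - beta[C] @ cond @ beta[C] / var_y

  def shapley_effects(beta, Sigma):
      # Direct implementation of the Shapley formula displayed above.
      d = len(beta)
      sh = np.zeros(d)
      for i in range(d):
          others = [j for j in range(d) if j != i]
          for k in range(d):
              for A in itertools.combinations(others, k):
                  w = 1.0 / (d * math.comb(d - 1, k))
                  sh[i] += w * (val(A + (i,), beta, Sigma) - val(A, beta, Sigma))
      return sh

  # X3 is inert (beta_3 = 0) but strongly correlated with X2.
  beta = np.array([1.0, 1.0, 0.0])
  rho = 0.9
  Sigma = np.array([[1.0, 0.0, 0.0],
                    [0.0, 1.0, rho],
                    [0.0, rho, 1.0]])
  print(shapley_effects(beta, Sigma))  # entries sum to 1; third entry > 0

Running this gives a strictly positive share to X3 even though it never enters the model; the Proportional Marginal Effects mentioned in the abstract were proposed precisely to refine this kind of diagnostic.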