XAI
The first transversal research axis in MAIA primarily concerns the concept of eXplainable AI (XAI). It consists in interfacing the black boxes produced by ML models with human users, also called the explainees. In MAIA, explainees will typically be scientists, mostly specialists in the targeted areas (Health, Chemistry, Environment), but in the Health case we also plan to consider patients as explainees, so as to help physicians explain to patients the AI-based predictions on which diagnoses and therapies are elaborated.
When a learner is used in sensitive applications or critical areas, its decisions must be mediated by an understandable model. Thus, predictors must be verified and their global explainability assessed (the absence of bias must be tested, and the predictions made must be consistent with expert knowledge). Furthermore, the predictions themselves must also be explained, which calls for local explainability methods. The explanation of predictions can be broken down into several dimensions, depending on the type of explanations we are looking for (e.g., abductive or contrastive), the explanatory model provided to the user, the quantity of data available, and the approach used to explain. These dimensions bear on the well-known dilemma between the accuracy and the interpretability of learning: parsimonious models such as logical rules or decision trees are, in practice, significantly less accurate than non-interpretable models such as deep neural networks or random forests. This also brings into focus the trade-offs between formal approaches to XAI, which are rigorous and model-based, and heuristic approaches, which are model-agnostic and offer weaker guarantees but tend to scale better.
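As a rough illustration of the heuristic, model-agnostic family mentioned above, the sketch below computes an occlusion-style local explanation: each feature of a single instance is perturbed with background values and the resulting drop in the black box's predicted probability is taken as that feature's contribution. The dataset, model, and function names are hypothetical placeholders chosen for the example, not MAIA components.

```python
# Minimal sketch of a heuristic, model-agnostic local explanation:
# perturb each feature of one instance and measure how much the
# black box's predicted-class probability drops.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Hypothetical data standing in for a Health/Chemistry/Environment dataset.
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

def local_attribution(model, X_background, x, n_samples=100, seed=0):
    """Score each feature by the average drop in the predicted-class
    probability when that feature is replaced by background values."""
    rng = np.random.default_rng(seed)
    predicted = model.predict(x.reshape(1, -1))[0]
    target = int(np.where(model.classes_ == predicted)[0][0])
    base = model.predict_proba(x.reshape(1, -1))[0, target]
    scores = np.zeros(x.shape[0])
    for j in range(x.shape[0]):
        perturbed = np.tile(x, (n_samples, 1))
        perturbed[:, j] = rng.choice(X_background[:, j], size=n_samples)
        scores[j] = base - model.predict_proba(perturbed)[:, target].mean()
    return scores  # higher score = feature more responsible for the prediction

print(local_attribution(black_box, X, X[0]))
```

Such scores offer no formal guarantees, which is precisely the kind of limitation the formal, model-based approaches above are meant to overcome.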
In MAIA, we plan to develop research activities on both global and local explainability so as to improve the acceptability of AI. We will consider a wide range of ML models and explore several paths (including model-based techniques, heuristic approaches, and the distillation of black boxes into more interpretable models) to reach this objective. Specialists in Health, Chemistry, and Environment involved in MAIA will be in charge of evaluating the quality of the solutions to the explanation issues that will be addressed. We will leverage their expertise to tackle the challenging issue of determining how to update predictors when the predictions are not good enough. Since the quality of an explanation is not intrinsic but depends heavily on the explainee who receives it, we plan to define explainee models and to exploit them so as to focus explanation generation on the explanations that best fit the explainee's preferences. To build and evaluate such explainee models, the help of colleagues from the Humanities will also be of the utmost value.
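To make the distillation path mentioned above concrete, the following sketch fits a shallow decision tree (the surrogate) on the predictions of a random forest (the black box) and reports the surrogate's fidelity to the black box. It is only an illustration under assumed data and hyperparameters, not a description of the methods MAIA will ultimately adopt.

```python
# Minimal sketch of distilling a black box into an interpretable surrogate:
# a shallow decision tree is fitted on the black box's own predictions,
# and its fidelity to the black box is measured on held-out data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

black_box = RandomForestClassifier(n_estimators=300, random_state=0)
black_box.fit(X_train, y_train)

# Fit the surrogate on the teacher's labels, not on the ground truth.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

fidelity = accuracy_score(black_box.predict(X_test), surrogate.predict(X_test))
accuracy = accuracy_score(y_test, surrogate.predict(X_test))
print(f"fidelity to black box: {fidelity:.3f}, test accuracy: {accuracy:.3f}")
print(export_text(surrogate))  # the tree itself serves as a global explanation
```

The gap between fidelity and accuracy makes the accuracy/interpretability trade-off discussed above directly measurable, which is one way the domain specialists could assess the quality of such surrogate explanations.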