Research Centre for Machine Learning meeting on Explainable AI
When: Fri, 29 November 2019, 4:00pm
Where: AG01, College Building
SHAP is an increasingly popular method for providing local explanations of AI system predictions. SHAP is based on the game-theoretic concept of Shapley values. The Shapley value is the unique solution for fairly attributing the total payoff of a cooperative game among its players, subject to a set of local accuracy and consistency constraints (an excellent introduction to Shapley values is provided at https://www.youtube.com/watch?v=qcLZMYPdpH4&t=437s).
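For attendees who would like a concrete feel for the idea before the meeting, here is a minimal sketch of exact Shapley value computation for a made-up three-player cooperative game (the players and payoffs are purely illustrative, not from the paper): each player's Shapley value is their marginal contribution averaged over every order in which the coalition could form.

```python
from itertools import permutations

# Hypothetical characteristic function v: maps each coalition (frozenset of
# players) to its payoff. These numbers are invented for illustration only.
v = {
    frozenset(): 0,
    frozenset({"A"}): 10,
    frozenset({"B"}): 20,
    frozenset({"C"}): 30,
    frozenset({"A", "B"}): 40,
    frozenset({"A", "C"}): 50,
    frozenset({"B", "C"}): 60,
    frozenset({"A", "B", "C"}): 90,
}

def shapley_values(players, v):
    """Exact Shapley values: average each player's marginal contribution
    over all orderings in which the grand coalition can be built up."""
    orderings = list(permutations(players))
    phi = {p: 0.0 for p in players}
    for order in orderings:
        coalition = frozenset()
        for p in order:
            # Marginal contribution of p given the players already present.
            phi[p] += v[coalition | {p}] - v[coalition]
            coalition = coalition | {p}
    return {p: total / len(orderings) for p, total in phi.items()}

print(shapley_values(["A", "B", "C"], v))  # → {'A': 20.0, 'B': 30.0, 'C': 40.0}
```

Note that the attributions sum to v of the grand coalition (90 here), which is the efficiency property; in Lundberg and Lee's framing this corresponds to local accuracy. SHAP itself approximates this computation, since enumerating all orderings is exponential in the number of players (features).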
We will be discussing Lundberg and Lee’s paper ‘A Unified Approach to Interpreting Model Predictions’ (2017), in which they propose SHAP and claim that it unifies six existing explainable AI methods. The aim of the meeting will be both to gain a better understanding of SHAP and to evaluate its usefulness. Dr Adam White will begin the meeting by providing a critical overview of SHAP.
As always – all welcome!