Machine Learning seminar
When: Wed, 19 June 2019, 2:00pm
Where: A225, College Building
Who: Adam White; City, University of London.
Title: Measurable Counterfactual Explanations for Any Classifier
Abstract: The predictions of machine learning systems need to be explainable to the individuals they affect. Yet the inner workings of many machine learning systems seem unavoidably opaque. In this talk we will introduce a new system, Counterfactual Local Explanations viA Regression (CLEAR). CLEAR is based on the view that a satisfactory explanation of a prediction needs both to explain the value of that prediction and to answer ‘what-if-things-had-been-different’ questions. Furthermore, it must be measurable and state how well it explains a machine learning system. It must know what it does not know. CLEAR generates counterfactuals that specify the minimum changes necessary to flip a prediction’s classification. It then builds local regression models, using the counterfactuals to measure and improve the fidelity of its regressions. By contrast, the popular LIME method, which also uses regression to generate local explanations, neither measures its own fidelity nor generates counterfactuals. When applied to multi-layer perceptrons trained on four datasets, CLEAR improves on the fidelity of LIME by approximately 40%.
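To give a flavour of the ideas in the abstract, the sketch below is a rough illustration only, not the CLEAR implementation: it finds a minimal single-feature counterfactual that flips a black-box classifier's prediction, fits a LIME-style local regression on perturbations around the instance, and checks whether that surrogate reproduces the counterfactual's class flip. The dataset, models, search ranges, and all parameter choices here are illustrative assumptions.

```python
# Illustrative sketch only -- not the CLEAR code. All names/parameters are assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
black_box = MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000,
                          random_state=0).fit(X, y)

x = X[0]                                 # instance whose prediction we explain
orig_class = black_box.predict([x])[0]   # black-box classification to be flipped

# 1. Counterfactual search: smallest single-feature change that flips the class.
best = None  # (feature index, counterfactual point, size of change)
for j in range(x.size):
    for delta in np.linspace(-3.0, 3.0, 121):
        cand = x.copy()
        cand[j] += delta
        if black_box.predict([cand])[0] != orig_class:
            if best is None or abs(delta) < best[2]:
                best = (j, cand, abs(delta))

# 2. Local surrogate: regress the black box's class-1 probability on
#    perturbations drawn around x (the LIME-style local regression step).
neighbours = x + np.random.default_rng(0).normal(scale=0.3, size=(1000, x.size))
surrogate = LinearRegression().fit(neighbours,
                                   black_box.predict_proba(neighbours)[:, 1])

# 3. Fidelity check: does the surrogate also predict a class flip at the
#    counterfactual? A comparison of this kind is what lets the explanation
#    "state how well it explains" the underlying model.
if best is not None:
    j, x_cf, _ = best
    surrogate_flips = (surrogate.predict([x_cf])[0] > 0.5) != (orig_class == 1)
    print(f"minimal flip found on feature {j}; surrogate agrees: {surrogate_flips}")
```

In this toy version, fidelity is simply whether the surrogate agrees with the black box about the counterfactual's class; the talk will describe how CLEAR itself measures and improves the fidelity of its local regressions.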
Bio: Adam White is currently a Research Assistant at City, University of London. His research interests are in explainable AI and causality. Adam received a PhD in philosophy of science from the London School of Economics in 2017; his thesis was on the causal discovery of nonlinear dynamics in biochemistry. He then completed the MSc in Data Science at City in 2017/2018. Adam worked for 17 years as an Operational Research analyst at British Airways and Barclays Bank.
All welcome!