Explainability Snippets

version 2018

Why model interpretation? #

Understanding how a model makes decisions, known as model interpretation, has been on the front burner since the end of 2017. Decision support systems, and the models they are built on, that don't explain which features influenced their decisions are known as black boxes. Model interpretability is not only important for companies that need to fulfill legal obligations to customers; it serves a technical purpose as well. Every ML model considers input features (problem properties) to predict results (outputs). The more relevant features we create and use to train an ML model during feature engineering, the more accurate the results and the simpler the model we can get. That's why the ability to understand how the model makes predictions is crucial for debugging it. source

What is model interpretability? #

Machine learning models are often complex, so-called black-box classifiers that don't offer straightforward, human-interpretable decision rules.

As data scientists, we should be able to provide an explanation to end users about how a model works. However, this does not necessarily mean understanding every piece of the model or generating a set of decision rules. There are also cases where an explanation is not required:

  • the problem is well studied,
  • the model's results have no consequences,
  • end users who understand the model could game the system.

source

How to do it? #

  • Feature importance analysis
  • Feature correlations
  • Existing packages and tools (LIME, SHAP, Manifold…)
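The first two bullets can be covered with a tree ensemble and plain pandas. The snippet below is a minimal sketch on a built-in scikit-learn dataset; the dataset, model, and hyperparameters are placeholders for your own problem.

```python
import pandas as pd
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Placeholder data: any DataFrame of features X and target vector y works here.
data = load_breast_cancer(as_frame=True)
X, y = data.data, data.target

# Feature importance analysis: impurity-based importances from a tree ensemble.
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
importances = pd.Series(model.feature_importances_, index=X.columns)
print(importances.sort_values(ascending=False).head(10))

# Feature correlations: pairwise correlations between the input features,
# useful for spotting redundant or leaking features.
print(X.corr().round(2))
```

For per-observation explanations, the packages in the last bullet wrap a trained model; with SHAP, for example, shap.TreeExplainer(model).shap_values(X) returns one attribution per feature per observation.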

Model Interpretation Guidelines #

  • Global Interpretability: How well can we understand the relationship between each feature and the predicted value at a global level — for our entire observation set. Can we understand both the magnitude and direction of the impact of each feature on the predicted value?
  • Local Interpretability: How well can we understand the relationship between each feature and the predicted value at a local level — for a specific observation.
  • Feature Selection: Does the model help us focus on only the features that matter? Can it zero out the features that are just “noise”?

We can quickly identify that Feature X may be the most important, but does it make Outcome Y more or less likely? There may not be a yes-or-no answer.
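A minimal sketch of the three guidelines, using a sparse linear model (Lasso) on a built-in scikit-learn dataset as a stand-in for a real problem: coefficient signs and magnitudes give global direction and strength, zeroed coefficients perform feature selection, and coefficient times feature value gives a local attribution for one observation.

```python
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Lasso
from sklearn.preprocessing import StandardScaler

# Placeholder data: standardize features so coefficient magnitudes are comparable.
data = load_diabetes(as_frame=True)
feature_names = list(data.data.columns)
X = StandardScaler().fit_transform(data.data)
y = data.target

model = Lasso(alpha=1.0).fit(X, y)

# Global interpretability: sign gives direction, magnitude gives strength.
for name, coef in zip(feature_names, model.coef_):
    print(f"{name:>6s}: {coef:+7.2f}")

# Feature selection: Lasso zeroes out coefficients of "noise" features.
print("kept features:", [n for n, c in zip(feature_names, model.coef_) if c != 0])

# Local interpretability: for one observation, each feature's contribution is
# coefficient * feature value; together with the intercept they sum to the prediction.
x0 = X[0]
contributions = model.coef_ * x0
print("prediction for first row:", model.intercept_ + contributions.sum())
top = sorted(zip(feature_names, contributions), key=lambda t: abs(t[1]), reverse=True)[:3]
print("largest local contributions:", top)
```

With a model like this, the "more or less likely" question above has a direct answer in the coefficient sign; with nonlinear models the direction can differ per observation, which is exactly where local explanations earn their keep.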

Resources #

Blogs and Forums #

Papers #