
Model Explainability and Interpretability
Instructor: Saravanan Thirumuruganathan
Caveat Emptor
• Assumption: you know how to interpret simple ML models

• The ideas in this lecture are quite simple and were published more than 5 years ago
• So they are NOT the state of the art, which is much more complex

• I have included some simple and intuitive approaches to give a taste

• Goal: introduce some ideas needed for PA4
Explainable AI

XAI for Science and Medicine by Scott Lundberg, 2019


Explainable AI

Image from the DARPA XAI Initiative


AI is increasingly used in many high-stakes tasks

https://hcixaitutorial.github.io/
Explainable AI

• Model Interpretability: understand what the model is doing by analyzing its features, parameters, weights, etc.

• Model Explainability: take an ML model and explain its behavior in human terms. Not always possible for complex models.

• Increasingly mandated by government regulations

https://hcixaitutorial.github.io/
Why Explainable AI?
• GDPR: Article 22 empowers individuals with the right to demand an explanation of how an automated system made a decision that affects them.
• Algorithmic Accountability Act 2019: Requires companies to assess the risks that an automated decision system poses to privacy or security, and the risks of inaccurate, unfair, biased, or discriminatory decisions impacting consumers.
• California Consumer Privacy Act: Requires companies to rethink their approach to capturing, storing, and sharing personal data to align with the new requirements by January 1, 2020.
• Washington Bill 1655: Establishes guidelines for the use of automated decision systems to protect consumers, improve transparency, and create more market predictability.
• Massachusetts Bill H.2701: Establishes a commission on automated decision-making, transparency, fairness, and individual rights.
• Illinois House Bill 3415: States that predictive data analytics used in determining creditworthiness or hiring decisions may not include information that correlates with the applicant's race or ZIP code.
XAI Tutorial AAAI 2020 Lecue et al
XAI Taxonomy

Guidotti et al. (2018). A Survey of Methods for Explaining Black Box Models. ACM Computing Surveys (CSUR).
Post-hoc Global Explanation: Knowledge Distillation

https://hcixaitutorial.github.io/
Global Surrogate model / Distillation
• Train a complex model M1 on training data D_T
• For a dataset D_X, get predictions from M1
• Choose a simple model M2
• Train M2 on D_X using M1's predictions as labels (not the ground truth!)
• Make sure the accuracy of M2 on D_X is not too bad compared to M1

• Observation: overfitting is okay here as we are only using M2 for explanations (see the sketch below)
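To make the recipe concrete, here is a minimal sketch using scikit-learn. The random-forest/decision-tree pairing and the synthetic dataset are illustrative placeholders, not part of the original slides; any complex M1 and simple M2 would do.

```python
# Minimal global-surrogate (distillation) sketch; models and data are placeholders.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Complex model M1 trained on the training data D_T.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
m1 = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Labels for D_X come from M1's predictions, NOT the ground truth.
y_m1 = m1.predict(X)

# Simple surrogate M2 (a shallow decision tree) trained to mimic M1.
m2 = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y_m1)

# Fidelity check: how often M2 agrees with M1 on D_X.
print(f"Surrogate fidelity to M1: {accuracy_score(y_m1, m2.predict(X)):.3f}")
```

If fidelity is high, inspecting the tree's splits gives an approximate global picture of what M1 is doing.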
Decision Tree Approximation

Dhurandhar et al. Improving Simple Models with Confidence Profiles. NeurIPS 2018
Global Explanation via Permutation Feature Importance
• Goal: measure the contribution of each feature to the model's performance

• Intuition: break the relationship between the feature and the target

• Importance of an attribute A = performance of the model with A – performance of the model without A

• Question: how do we estimate this when the model needs A to make predictions?
Permutation Feature Importance
Intuition: break the relationship between feature and target

XAI Tutorial AAAI 2020 Lecue et al


Permutation Feature Importance
Given: classifier C, dataset D, accuracy A when applying C on D

For each feature f:
    For i = 1 to K:
        Randomly shuffle f to generate a corrupted version D' of the dataset
        Compute accuracy A'_i of C on D'
    Importance(f) = A – average(A'_i)
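A direct translation of the pseudocode above into Python/NumPy might look as follows (scikit-learn also ships a ready-made version as sklearn.inspection.permutation_importance; the function below is just an illustrative hand-rolled sketch):

```python
import numpy as np
from sklearn.metrics import accuracy_score

def perm_importance(clf, X, y, n_repeats=5, seed=0):
    rng = np.random.default_rng(seed)
    base_acc = accuracy_score(y, clf.predict(X))   # accuracy A on the intact data
    importances = np.zeros(X.shape[1])
    for f in range(X.shape[1]):                    # for each feature f
        corrupted_accs = []
        for _ in range(n_repeats):                 # i = 1 to K
            X_corrupt = X.copy()
            # Shuffle feature f to break its relationship with the target.
            X_corrupt[:, f] = rng.permutation(X_corrupt[:, f])
            corrupted_accs.append(accuracy_score(y, clf.predict(X_corrupt)))
        importances[f] = base_acc - np.mean(corrupted_accs)  # A - average(A'_i)
    return importances
```

A large positive importance means the model's accuracy collapses when that feature is scrambled, i.e., the model relies on it heavily.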
Permutation Feature Importance

Eliana Pastor, XAI Course 2024


Global Explanation via Partial Dependence Plots
• A PDP shows the dependence between the target and a feature of interest by marginalizing over the values of all other input features

• Simplest application: check whether the relationship between the target and a feature is linear, monotonic, or more complex

• Visualize how changes to a feature influence the predicted target (see the sketch below)
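A minimal sketch of the marginalization for a single feature, assuming a scikit-learn-style classifier with predict_proba (the grid size and function name are illustrative; sklearn.inspection.PartialDependenceDisplay.from_estimator provides an off-the-shelf version):

```python
import numpy as np

def partial_dependence_1d(model, X, feature_idx, n_grid=20):
    # Grid of values spanning the observed range of the feature of interest.
    grid = np.linspace(X[:, feature_idx].min(), X[:, feature_idx].max(), n_grid)
    averaged = []
    for v in grid:
        X_mod = X.copy()
        X_mod[:, feature_idx] = v        # hold the feature fixed at v for ALL rows
        # Marginalize over the other features by averaging predictions over the rows.
        averaged.append(model.predict_proba(X_mod)[:, 1].mean())
    return grid, np.array(averaged)
```

Plotting the averaged predictions against the grid gives the PDP; a straight line suggests a linear relationship, a flat line suggests the feature barely matters.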


Partial Dependence Plots

Eliana Pastor, XAI Course 2024


Partial Dependence Plots

Eliana Pastor, XAI Course 2024


Partial Dependence Plots

Eliana Pastor, XAI Course 2024


Explaining a prediction: Local Feature Contribution

Dhurandhar et al. Improving Simple Models with Confidence Profiles. NeurIPS 2018
LIME

Ribeiro et al. "Why Should I Trust You?": Explaining the Predictions of Any Classifier. KDD 2016
Local Approximation via LIME

• Given an instance to explain, generate synthetic instances in its neighborhood by perturbation
• Get labels for them using the model
• Train a linear model on the synthetic data, with sample weights so that nearer instances count more (see the sketch below)

Eliana Pastor, XAI Course 2024
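A LIME-style sketch of the three steps above for one tabular instance. The Gaussian perturbation scale and the exponential distance kernel are illustrative assumptions, not the paper's exact choices; the authors' lime package is the reference implementation.

```python
import numpy as np
from sklearn.linear_model import Ridge

def lime_style_explanation(model, x, n_samples=1000, scale=0.5, seed=0):
    rng = np.random.default_rng(seed)
    # 1. Perturb: sample synthetic neighbors around the instance x.
    Z = x + rng.normal(0.0, scale, size=(n_samples, x.shape[0]))
    # 2. Label the synthetic neighbors with the black-box model.
    y_z = model.predict_proba(Z)[:, 1]
    # 3. Weighted linear fit: nearer neighbors get exponentially higher weight.
    dists = np.linalg.norm(Z - x, axis=1)
    weights = np.exp(-(dists ** 2) / (2 * scale ** 2))
    surrogate = Ridge(alpha=1.0).fit(Z, y_z, sample_weight=weights)
    return surrogate.coef_   # per-feature local contribution around x
```

The returned coefficients are a local explanation: they describe how the prediction changes around x, not the model's global behavior.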


Explaining a prediction: Similar Examples

Gurumoorthy et al. "Efficient Data Representation by Selecting Prototypes with Importance Weights". ICDM 2019
Inspecting Counterfactual: Contrastive Features

Dhurandhar et al. Explanations Based on the Missing: Towards Contrastive Explanations with Pertinent Negatives. NeurIPS 2018
