
𝐄𝐱𝐩𝐥𝐚𝐢𝐧𝐚𝐛𝐥𝐞 𝐀𝐈: 𝟏𝟎 𝐓𝐨𝐩 𝐏𝐲𝐭𝐡𝐨𝐧 𝐋𝐢𝐛𝐫𝐚𝐫𝐢𝐞𝐬 𝐟𝐨𝐫 𝐃𝐞𝐦𝐲𝐬𝐭𝐢𝐟𝐲𝐢𝐧𝐠 𝐘𝐨𝐮𝐫 𝐌𝐨𝐝𝐞𝐥'𝐬 𝐃𝐞𝐜𝐢𝐬𝐢𝐨𝐧𝐬

https://moez-62905.medium.com/top-explainable-ai-xai-python-frameworks-in-2022-94ff4610b0f5

XAI is artificial intelligence that allows humans to understand the results and decision-making
processes of the model or system.

𝟛 𝕊𝕥𝕒𝕘𝕖𝕤 𝕠𝕗 𝔼𝕩𝕡𝕝𝕒𝕟𝕒𝕥𝕚𝕠𝕟:

Pre-modeling Explainability

Explainable AI starts with explainable data and clear, interpretable feature engineering.

Modeling Explainability

When choosing a model for a particular problem, it is generally best to use the most
interpretable model that still achieves good predictive results.

Post-modeling Explainability

This covers techniques applied after training, such as perturbation, where the effect of changing a single input feature on the model's output is analyzed, and SHAP values, which attribute a prediction to individual features.
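To make the perturbation idea concrete, here is a minimal sketch (the dataset, model, and feature index are arbitrary choices for illustration, not part of the original post): train any probabilistic classifier, nudge one feature of a single instance, and watch how the predicted probability moves.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Toy setup: any trained model exposing predict_proba would work here.
X, y = load_breast_cancer(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

instance = X[0].copy()
baseline = model.predict_proba(instance.reshape(1, -1))[0, 1]

# Perturb a single feature by +/- one standard deviation and observe the output shift.
feature_idx = 3  # arbitrary feature chosen for illustration
for delta in (-1.0, 1.0):
    perturbed = instance.copy()
    perturbed[feature_idx] += delta * X[:, feature_idx].std()
    prob = model.predict_proba(perturbed.reshape(1, -1))[0, 1]
    print(f"shift of {delta:+.0f} std -> probability moves by {prob - baseline:+.4f}")
```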

I found these 𝟏𝟎 𝐩𝐲𝐭𝐡𝐨𝐧 𝐥𝐢𝐛𝐫𝐚𝐫𝐢𝐞𝐬 𝐟𝐨𝐫 𝐀𝐈 𝐞𝐱𝐩𝐥𝐚𝐢𝐧𝐚𝐛𝐢𝐥𝐢𝐭𝐲:

📚SHAP (SHapley Additive exPlanations)

SHAP is model-agnostic and works by breaking down a prediction into the contribution of each feature, attributing a score to every feature.
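A minimal sketch of the typical workflow, assuming shap's bundled adult census dataset and an XGBoost classifier (both are choices made here for illustration):

```python
import shap
import xgboost

# Train a simple model on shap's bundled census-income dataset.
X, y = shap.datasets.adult()
model = xgboost.XGBClassifier().fit(X, y)

# Attribute each prediction to the input features: one SHAP value per feature per row.
explainer = shap.Explainer(model, X)
shap_values = explainer(X.iloc[:100])

shap.plots.beeswarm(shap_values)      # global summary of feature contributions
shap.plots.waterfall(shap_values[0])  # breakdown of a single prediction
```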

📚 LIME (Local Interpretable Model-agnostic Explanations)

LIME is another model-agnostic method that works by approximating the behavior of the model locally around a specific prediction.
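A minimal sketch with lime's tabular explainer, using a toy scikit-learn model (the dataset and classifier are assumptions for illustration):

```python
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

# Toy model to explain: any classifier exposing predict_proba works.
data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)

# Fit a local surrogate model around one instance and list its top features.
exp = explainer.explain_instance(data.data[0], model.predict_proba, num_features=4)
print(exp.as_list())
```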

📚 Eli5

Eli5 is a library for debugging and explaining classifiers. It provides feature importance scores, as well as "reason codes", for scikit-learn, Keras, XGBoost, LightGBM, and CatBoost.
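For scikit-learn estimators the core calls look roughly like this (the toy dataset and model are choices made here for illustration):

```python
import eli5
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

data = load_iris()
clf = LogisticRegression(max_iter=1000).fit(data.data, data.target)

# Global view: per-class feature weights ("reason codes" for a linear model).
print(eli5.format_as_text(eli5.explain_weights(clf, feature_names=data.feature_names)))

# Local view: why the model scored one particular instance the way it did.
print(eli5.format_as_text(
    eli5.explain_prediction(clf, data.data[0], feature_names=data.feature_names)
))
```
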
📚 Shapash

Shapash is a Python library which aims to make machine learning interpretable and
understandable to everyone. Shapash provides several types of visualization with explicit labels.
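A minimal sketch of the SmartExplainer workflow (the import path and toy dataset are assumptions based on recent Shapash releases):

```python
import pandas as pd
from shapash import SmartExplainer
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Toy regression model; Shapash needs a fitted model and the data to explain.
data = load_diabetes()
X = pd.DataFrame(data.data, columns=data.feature_names)
X_train, X_test, y_train, y_test = train_test_split(X, data.target, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

# Compile contributions, then explore them with labeled plots or the web app.
xpl = SmartExplainer(model=model)
xpl.compile(x=X_test)

xpl.plot.features_importance()  # global importance with explicit labels
app = xpl.run_app()             # launches the interactive exploration app
```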

📚 Anchors

Anchors is a method for generating human-interpretable rules that can be used to explain the
predictions of a machine learning model.
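The method ships in more than one package; the sketch below uses the alibi implementation (that choice, plus the toy dataset, are assumptions made here):

```python
from alibi.explainers import AnchorTabular
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

# The explainer only needs a prediction function and the feature names.
explainer = AnchorTabular(model.predict, feature_names=data.feature_names)
explainer.fit(data.data)

# An anchor is an if-then rule that "locks in" the prediction with high precision.
explanation = explainer.explain(data.data[0], threshold=0.95)
print("Rule:     ", " AND ".join(explanation.anchor))
print("Precision:", explanation.precision)
print("Coverage: ", explanation.coverage)
```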

📚 XAI (eXplainable AI)

XAI is a library for explaining and visualizing the predictions of machine learning models, including feature importance scores.

📚 BreakDown

BreakDown is a tool that can be used to explain the predictions of linear models. It works by
decomposing the model's output into the contribution of each input feature.
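The same additive decomposition is implemented, among other places, in the dalex package; the sketch below uses that implementation (a substitution for illustration, not necessarily the exact tool meant here):

```python
import dalex as dx
import pandas as pd
from sklearn.datasets import load_diabetes
from sklearn.linear_model import LinearRegression

data = load_diabetes()
X = pd.DataFrame(data.data, columns=data.feature_names)
model = LinearRegression().fit(X, data.target)

# Wrap the model, then decompose a single prediction into per-feature contributions.
explainer = dx.Explainer(model, X, data.target)
bd = explainer.predict_parts(X.iloc[[0]], type="break_down")

print(bd.result)  # table of additive contributions summing to the prediction
bd.plot()         # waterfall-style chart of the decomposition
```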

📚 interpret-text

interpret-text is a library for explaining the predictions of natural language processing models.

📚 iml (Interpretable Machine Learning)

iml currently contains the interface and IO code from the Shap project, and it will potentially also
do the same for the Lime project.

📚 aix360 (AI Explainability 360)

aix360 includes a comprehensive set of algorithms that cover different dimensions of explainability, along with explainability metrics.

📚 OmniXAI

OmniXAI (short for Omni eXplainable AI) addresses several of the problems that arise when interpreting decisions produced by machine learning models in practice.
✍️ Have I forgotten any libraries?
