AI - Explainables - Python Libraries
https://moez-62905.medium.com/top-explainable-ai-xai-python-frameworks-in-2022-94ff4610b0f5
XAI is artificial intelligence that allows humans to understand the results and decision-making
processes of the model or system.
𝟛 𝕊𝕥𝕒𝕘𝕖𝕤 𝕠𝕗 𝔼𝕩𝕡𝕝𝕒𝕟𝕒𝕥𝕚𝕠𝕟:
Pre-modeling Explainability
Explainable AI starts with explainable data and clear, interpretable feature engineering.
Modeling Explainability
When choosing a model for a particular problem, it is generally best to use the most
interpretable model that still achieves good predictive results.
Post-modeling Explainability
This includes techniques applied after training, such as perturbation, where the effect of changing a single input variable on the model's output is analyzed, and attribution methods such as SHAP values.
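The perturbation idea above can be sketched in a few lines: nudge one input at a time and record how much the output moves. The model and feature names here are purely illustrative, not from any library.

```python
# Hypothetical fitted model: price = 3*area + 2*rooms.
# In practice this would be any trained black-box predictor.
def model(features):
    return 3 * features["area"] + 2 * features["rooms"]

def perturbation_effects(features, delta=1.0):
    """Perturb each feature by `delta` and report the output change."""
    base = model(features)
    effects = {}
    for name in features:
        perturbed = dict(features, **{name: features[name] + delta})
        effects[name] = model(perturbed) - base
    return effects

print(perturbation_effects({"area": 50, "rooms": 3}))
# → {'area': 3.0, 'rooms': 2.0}
```

For this linear toy model the effects recover the coefficients exactly; for a nonlinear model they depend on the point being explained and the size of `delta`.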
SHAP is model-agnostic and works by breaking down the contribution of each feature, attributing a score to each one.
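The idea behind SHAP scores can be shown with an exact Shapley-value computation on a toy three-feature "model" (the payoff function below is invented for illustration; this is the underlying game-theoretic formula, not the `shap` library's API).

```python
from itertools import combinations
from math import factorial

# Toy value function: each present feature adds a fixed payoff,
# plus an interaction bonus when features 0 and 1 appear together.
def model(features):
    payoff = {0: 10.0, 1: 20.0, 2: 5.0}
    v = sum(payoff[f] for f in features)
    if 0 in features and 1 in features:
        v += 8.0  # interaction shared between features 0 and 1
    return v

def shapley_value(i, all_features):
    """Exact Shapley value: average marginal contribution of feature i
    over all subsets of the remaining features, with the classic weights."""
    n = len(all_features)
    others = [f for f in all_features if f != i]
    total = 0.0
    for k in range(len(others) + 1):
        for subset in combinations(others, k):
            weight = factorial(k) * factorial(n - k - 1) / factorial(n)
            total += weight * (model(set(subset) | {i}) - model(set(subset)))
    return total

vals = [shapley_value(i, [0, 1, 2]) for i in range(3)]
print(vals)       # → [14.0, 24.0, 5.0]  (the 8.0 interaction is split evenly)
print(sum(vals))  # → 43.0, equals model({0,1,2}) - model(set())
```

The attributions sum to the model's total output (the "efficiency" property), which is exactly the guarantee SHAP values provide for real models.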
LIME is another model-agnostic method that works by approximating the model's behavior locally around a specific prediction.
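A minimal sketch of that local-approximation idea, assuming a one-feature black-box model: sample points near the instance being explained and fit an ordinary least-squares line, whose slope serves as the local explanation. (Real LIME additionally weights samples by proximity and handles many features; this strips the idea to its core and is not the `lime` library's API.)

```python
import random

# Stand-in black-box model of one feature.
def model(x):
    return x * x

def local_linear_slope(x0, width=0.1, n_samples=500, seed=0):
    """Fit a least-squares line to model outputs sampled near x0
    and return its slope as the local explanation."""
    rng = random.Random(seed)
    xs = [x0 + rng.gauss(0, width) for _ in range(n_samples)]
    ys = [model(x) for x in xs]
    mx = sum(xs) / n_samples
    my = sum(ys) / n_samples
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

# Around x0 = 3 the model x**2 behaves locally like a line of slope ~6.
print(local_linear_slope(3.0))
```

Globally the model is nonlinear, but the local surrogate is faithful in the neighborhood of the prediction, which is all LIME promises.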
📚 Eli5
Eli5 is a library for debugging and explaining classifiers. It provides feature importance scores, as well as "reason codes", for scikit-learn, Keras, XGBoost, LightGBM, and CatBoost.
📚 Shapash
Shapash is a Python library which aims to make machine learning interpretable and
understandable to everyone. Shapash provides several types of visualization with explicit labels.
📚 Anchors
Anchors is a method for generating human-interpretable rules that can be used to explain the
predictions of a machine learning model.
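The essence of an anchor rule can be sketched with a toy check: fix the features named by the rule, randomly perturb everything else, and measure how often the prediction stays the same (the rule's "precision"). The model and rule here are invented for illustration; the real Anchors implementation searches for high-precision rules automatically.

```python
import random

# Toy binary classifier: predicts 1 when the feature sum exceeds 1.5.
def model(x):
    return 1 if x[0] + x[1] > 1.5 else 0

def anchor_precision(rule_fix, instance, n_samples=1000, seed=0):
    """Estimate how often the prediction is unchanged when features
    outside the anchor rule are resampled uniformly on [0, 1]."""
    rng = random.Random(seed)
    target = model(instance)
    hits = 0
    for _ in range(n_samples):
        x = [rng.uniform(0, 1) for _ in instance]
        for i, v in rule_fix.items():  # hold anchored features fixed
            x[i] = v
        hits += (model(x) == target)
    return hits / n_samples

x = [0.9, 0.8]  # model(x) == 1
print(anchor_precision({0: 0.9}, x))          # fixing only feature 0: low precision
print(anchor_precision({0: 0.9, 1: 0.8}, x))  # fixing both: the prediction never flips
```

A rule qualifies as an anchor only when this precision is high, which is why the partial rule above would be rejected and the full rule accepted.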
📚 XAI
XAI is a library for explaining and visualizing the predictions of machine learning models, including feature importance scores.
📚 BreakDown
BreakDown is a tool that can be used to explain the predictions of linear models. It works by
decomposing the model's output into the contribution of each input feature.
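For a linear model that decomposition is simple enough to write out directly: each feature's contribution is its coefficient times its value, and the contributions plus the intercept reconstruct the prediction exactly. The coefficients below are made up for illustration and this is not the BreakDown package's API.

```python
# Hypothetical fitted linear model.
weights = {"age": 0.5, "income": 2.0, "tenure": -1.0}
intercept = 10.0

def predict(x):
    return intercept + sum(weights[k] * v for k, v in x.items())

def breakdown(x):
    """Per-feature contributions: coefficient * value."""
    return {k: weights[k] * v for k, v in x.items()}

x = {"age": 30, "income": 4, "tenure": 2}
print(breakdown(x))  # → {'age': 15.0, 'income': 8.0, 'tenure': -2.0}
print(predict(x))    # → 31.0  (intercept 10 + 15 + 8 - 2)
```

For non-additive models the contributions depend on the order in which features are introduced, which is the harder case BreakDown is designed to handle.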
📚 interpret-text
interpret-text is a library for explaining the predictions of natural language processing models.
📚 iml
iml currently contains the interface and IO code from the Shap project, and it will potentially also do the same for the Lime project.
📚 OmniXAI
OmniXAI (short for Omni eXplainable AI) addresses several practical problems in interpreting the judgments produced by machine learning models.
✍️Have I forgotten any libraries?