Alibi is a Python library aimed at machine learning model inspection and interpretation. The focus of the library is to provide high-quality implementations of black-box, white-box, local and global explanation methods for classification and regression models.
If you're interested in outlier detection, concept drift or adversarial instance detection, check out our sister project alibi-detect.
Example explanations produced by Alibi: Anchor explanations for images, Integrated Gradients for text, counterfactual examples, and Accumulated Local Effects.
Alibi can be installed from:
- PyPI (with pip)
- conda-forge (with conda or mamba)

Alibi can be installed from PyPI:
pip install alibi
Alternatively, the development version can be installed:
pip install git+https://github.com/SeldonIO/alibi.git
To take advantage of distributed computation of explanations, install alibi with ray:
pip install alibi[ray]
For SHAP support, install alibi as follows:
pip install alibi[shap]
To install from conda-forge it is recommended to use mamba, which can be installed into the base conda environment with:
conda install mamba -n base -c conda-forge
For the standard Alibi install:
mamba install -c conda-forge alibi
For distributed computing support:
mamba install -c conda-forge alibi ray
For SHAP support:
mamba install -c conda-forge alibi shap
The alibi explanation API takes inspiration from scikit-learn, consisting of distinct initialize, fit and explain steps. We will use the `AnchorTabular` explainer to illustrate the API:
from alibi.explainers import AnchorTabular
# initialize and fit explainer by passing a prediction function and any other required arguments
explainer = AnchorTabular(predict_fn, feature_names=feature_names, category_map=category_map)
explainer.fit(X_train)
# explain an instance
explanation = explainer.explain(x)
The explanation returned is an `Explanation` object with attributes `meta` and `data`. `meta` is a dictionary containing the explainer metadata and any hyperparameters, and `data` is a dictionary containing everything related to the computed explanation. For example, for the Anchor algorithm the explanation can be accessed via `explanation.data['anchor']` (or `explanation.anchor`). The exact details of the available fields vary from method to method, so we encourage the reader to become familiar with the types of methods supported.
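As a brief illustrative sketch (continuing the `AnchorTabular` example above; the `precision` and `coverage` fields shown are specific to the Anchor method, and other methods expose different fields):

```python
# explanation.meta: explainer metadata and hyperparameters
print(explanation.meta['name'])    # name of the explainer, e.g. 'AnchorTabular'

# explanation.data: everything related to the computed explanation;
# data fields are also exposed as attributes for convenience
print(explanation.data['anchor'])  # the anchor as a list of feature predicates
print(explanation.anchor)          # equivalent attribute access
print(explanation.precision)       # precision of the anchor
print(explanation.coverage)        # coverage of the anchor
```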
The following tables summarize the possible use cases for each method.
Method | Models | Explanations | Classification | Regression | Tabular | Text | Images | Categorical features | Train set required | Distributed |
---|---|---|---|---|---|---|---|---|---|---|
ALE | BB | global | ✔ | ✔ | ✔ |  |  |  |  |  |
Partial Dependence | BB WB | global | ✔ | ✔ | ✔ |  |  | ✔ |  |  |
PD Variance | BB WB | global | ✔ | ✔ | ✔ |  |  | ✔ |  |  |
Permutation Importance | BB | global | ✔ | ✔ | ✔ |  |  | ✔ |  |  |
Anchors | BB | local | ✔ |  | ✔ | ✔ | ✔ | ✔ | For Tabular |  |
CEM | BB* TF/Keras | local | ✔ |  | ✔ |  | ✔ |  | Optional |  |
Counterfactuals | BB* TF/Keras | local | ✔ |  | ✔ |  | ✔ |  | No |  |
Prototype Counterfactuals | BB* TF/Keras | local | ✔ |  | ✔ |  | ✔ | ✔ | Optional |  |
Counterfactuals with RL | BB | local | ✔ |  | ✔ |  | ✔ | ✔ | ✔ |  |
Integrated Gradients | TF/Keras | local | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | Optional |  |
Kernel SHAP | BB | local, global | ✔ | ✔ | ✔ |  |  | ✔ | ✔ | ✔ |
Tree SHAP | WB | local, global | ✔ | ✔ | ✔ |  |  | ✔ | Optional |  |
Similarity explanations | WB | local | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |  |
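Global methods follow the same initialize/explain pattern. As a minimal sketch (reusing the `predict_fn` and `feature_names` from the API example above, and a reference dataset `X_train`):

```python
from alibi.explainers import ALE

# ALE is a black-box global method: it only needs a prediction function
ale = ALE(predict_fn, feature_names=feature_names)

# compute accumulated local effects for each feature over the reference data
ale_explanation = ale.explain(X_train)
```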
The following algorithms provide instance-specific scores measuring the model's confidence in a particular prediction.
Method | Models | Classification | Regression | Tabular | Text | Images | Categorical Features | Train set required |
---|---|---|---|---|---|---|---|---|
Trust Scores | BB | ✔ |  | ✔ | ✔(1) | ✔(2) |  | Yes |
Linearity Measure | BB | ✔ | ✔ | ✔ |  | ✔ |  | Optional |
Key:
- BB - black-box (only requires a prediction function)
- BB* - black-box but assumes the model is differentiable
- WB - requires white-box model access
- TF/Keras - TensorFlow models via the Keras API
- Local - instance-specific explanation
- Global - explains the model with respect to a set of instances
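As an illustrative sketch of the confidence API (assuming a classification task with `n_classes` classes and model predictions `y_pred` on `X_test`, both of which are assumptions of this example):

```python
from alibi.confidence import TrustScore

ts = TrustScore()
# fit on the training data; classes is the number of classes in the task
ts.fit(X_train, y_train, classes=n_classes)

# trust scores for the predictions y_pred on X_test, together with
# the closest class other than the predicted one
score, closest_class = ts.score(X_test, y_pred, k=2)
```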
The following algorithms provide a distilled view of the dataset and help construct an interpretable 1-KNN classifier.
Method | Classification | Regression | Tabular | Text | Images | Categorical Features | Train set labels |
---|---|---|---|---|---|---|---|
ProtoSelect | ✔ |  | ✔ | ✔ | ✔ | ✔ | Optional |
Accumulated Local Effects (ALE, Apley and Zhu, 2016)
Partial Dependence (J.H. Friedman, 2001)
Partial Dependence Variance (Greenwell et al., 2018)
Permutation Importance (Breiman, 2001; Fisher et al., 2018)
Anchor explanations (Ribeiro et al., 2018)
Contrastive Explanation Method (CEM, Dhurandhar et al., 2018)
Counterfactual Explanations (extension of Wachter et al., 2017)
Counterfactual Explanations Guided by Prototypes (Van Looveren and Klaise, 2019)
Model-agnostic Counterfactual Explanations via RL (Samoilescu et al., 2021)
Integrated Gradients (Sundararajan et al., 2017)
Kernel Shapley Additive Explanations (Lundberg et al., 2017)
Tree Shapley Additive Explanations (Lundberg et al., 2020)
Trust Scores (Jiang et al., 2018)
Linearity Measure
ProtoSelect
Similarity explanations
If you use alibi in your research, please consider citing it.
BibTeX entry:
@article{JMLR:v22:21-0017,
author = {Janis Klaise and Arnaud Van Looveren and Giovanni Vacanti and Alexandru Coca},
title = {Alibi Explain: Algorithms for Explaining Machine Learning Models},
journal = {Journal of Machine Learning Research},
year = {2021},
volume = {22},
number = {181},
pages = {1-7},
url = {http://jmlr.org/papers/v22/21-0017.html}
}