Alibi Detect is a source-available Python library focused on outlier, adversarial and drift detection. The package aims to cover both online and offline detectors for tabular data, text, images and time series. Both TensorFlow and PyTorch backends are supported for drift detection.
For more background on the importance of monitoring outliers and distributions in a production setting, check out this talk from the Challenges in Deploying and Monitoring Machine Learning Systems ICML 2020 workshop, based on the paper Monitoring and explainability of models in production and referencing Alibi Detect.
For a thorough introduction to drift detection, check out Protecting Your Machine Learning Against Drift: An Introduction. The talk covers what drift is and why it pays to detect it, the different types of drift, and how it can be detected in a principled manner; it also describes the anatomy of a drift detector.
The package, `alibi-detect`, can be installed from:
- PyPI or GitHub source (with `pip`)
- Anaconda (with `conda`/`mamba`)
alibi-detect can be installed from PyPI:
```bash
pip install alibi-detect
```
Alternatively, the development version can be installed:
```bash
pip install git+https://github.com/SeldonIO/alibi-detect.git
```
To install with the TensorFlow backend:
```bash
pip install alibi-detect[tensorflow]
```
To install with the PyTorch backend:
```bash
pip install alibi-detect[torch]
```
To install with the KeOps backend:
```bash
pip install alibi-detect[keops]
```
To use the Prophet time series outlier detector:
```bash
pip install alibi-detect[prophet]
```
To install from conda-forge it is recommended to use mamba, which can be installed to the base conda environment with:
```bash
conda install mamba -n base -c conda-forge
```
To install alibi-detect:
```bash
mamba install -c conda-forge alibi-detect
```
We will use the VAE outlier detector to illustrate the API.
```python
from alibi_detect.od import OutlierVAE
from alibi_detect.saving import save_detector, load_detector

# initialize and fit detector
od = OutlierVAE(threshold=0.1,
                encoder_net=encoder_net,
                decoder_net=decoder_net,
                latent_dim=1024)
od.fit(x_train)

# make predictions
preds = od.predict(x_test)

# save and load detectors
filepath = './my_detector/'
save_detector(od, filepath)
od = load_detector(filepath)
```
The predictions are returned in a dictionary with keys `meta` and `data`. `meta` contains the detector's metadata, while `data` is itself a dictionary with the actual predictions: the outlier, adversarial or drift scores and thresholds, as well as the predictions of whether instances are e.g. outliers or not. The exact details can vary slightly from method to method, so we encourage the reader to become familiar with the types of algorithms supported.
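For the VAE outlier detector above, the returned dictionary can be inspected as follows (the exact keys under `data` differ between detectors):
```python
print(preds['meta'])                      # detector metadata, e.g. name and data type

scores = preds['data']['instance_score']  # outlier score per test instance
is_outlier = preds['data']['is_outlier']  # binary outlier flag per test instance
```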
The following tables show the advised use cases for each algorithm. The column Feature Level indicates whether the detection can be done at the feature level, e.g. per pixel for an image. Check the algorithm reference list for more information, with links to the documentation and original papers as well as examples for each of the detectors.
Outlier detection:

Detector | Tabular | Image | Time Series | Text | Categorical Features | Online | Feature Level |
---|---|---|---|---|---|---|---|
Isolation Forest | ✔ |  |  |  | ✔ |  |  |
Mahalanobis Distance | ✔ |  |  |  | ✔ | ✔ |  |
AE | ✔ | ✔ |  |  |  |  | ✔ |
VAE | ✔ | ✔ |  |  |  |  | ✔ |
AEGMM | ✔ | ✔ |  |  |  |  |  |
VAEGMM | ✔ | ✔ |  |  |  |  |  |
Likelihood Ratios | ✔ | ✔ | ✔ |  | ✔ |  | ✔ |
Prophet |  |  | ✔ |  |  |  |  |
Spectral Residual |  |  | ✔ |  |  | ✔ | ✔ |
Seq2Seq |  |  | ✔ |  |  |  | ✔ |
Adversarial detection:

Detector | Tabular | Image | Time Series | Text | Categorical Features | Online | Feature Level |
---|---|---|---|---|---|---|---|
Adversarial AE | ✔ | ✔ |  |  |  |  |  |
Model distillation | ✔ | ✔ | ✔ | ✔ | ✔ |  |  |
Drift detection:

Detector | Tabular | Image | Time Series | Text | Categorical Features | Online | Feature Level |
---|---|---|---|---|---|---|---|
Kolmogorov-Smirnov | ✔ | ✔ |  | ✔ | ✔ |  | ✔ |
Cramér-von Mises | ✔ | ✔ |  |  |  | ✔ | ✔ |
Fisher's Exact Test | ✔ |  |  |  | ✔ | ✔ | ✔ |
Maximum Mean Discrepancy (MMD) | ✔ | ✔ |  | ✔ | ✔ | ✔ |  |
Learned Kernel MMD | ✔ | ✔ |  | ✔ | ✔ |  |  |
Context-aware MMD | ✔ | ✔ | ✔ | ✔ | ✔ |  |  |
Least-Squares Density Difference | ✔ | ✔ |  | ✔ | ✔ | ✔ |  |
Chi-Squared | ✔ |  |  |  | ✔ |  | ✔ |
Mixed-type tabular data | ✔ |  |  |  | ✔ |  | ✔ |
Classifier | ✔ | ✔ | ✔ | ✔ | ✔ |  |  |
Spot-the-diff | ✔ | ✔ | ✔ | ✔ | ✔ |  | ✔ |
Classifier Uncertainty | ✔ | ✔ | ✔ | ✔ | ✔ |  |  |
Regressor Uncertainty | ✔ | ✔ | ✔ | ✔ | ✔ |  |  |
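As an illustration of the Feature Level column, a feature-level detector such as Kolmogorov-Smirnov returns a p-value per feature in addition to the overall drift decision. A minimal sketch on synthetic tabular data (the array shapes and the applied shift are illustrative):
```python
import numpy as np
from alibi_detect.cd import KSDrift

x_ref = np.random.randn(1000, 10)   # reference data: 1000 instances, 10 features
x = np.random.randn(200, 10) + 1.   # test batch with a mean shift

cd = KSDrift(x_ref, p_val=.05)
preds = cd.predict(x, return_p_val=True)
print(preds['data']['is_drift'])    # 0/1 drift decision for the whole batch
print(preds['data']['p_val'])       # one K-S p-value per feature
```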
The drift detectors support TensorFlow, PyTorch and (where applicable) KeOps backends. However, Alibi Detect does not install these by default. See the installation options for more details.
```python
from alibi_detect.cd import MMDDrift

cd = MMDDrift(x_ref, backend='tensorflow', p_val=.05)
preds = cd.predict(x)
```
The same detector in PyTorch:
```python
cd = MMDDrift(x_ref, backend='pytorch', p_val=.05)
preds = cd.predict(x)
```
Or in KeOps:
```python
cd = MMDDrift(x_ref, backend='keops', p_val=.05)
preds = cd.predict(x)
```
Alibi Detect also comes with various preprocessing steps such as randomly initialized encoders, pretrained text embeddings to detect drift on using the transformers library, and extraction of hidden layers from machine learning models. This makes it possible to detect different types of drift such as covariate and predicted distribution shift. The preprocessing steps are again supported in TensorFlow and PyTorch.
```python
from functools import partial

from alibi_detect.cd import MMDDrift
from alibi_detect.cd.tensorflow import HiddenOutput, preprocess_drift

model = ...  # TensorFlow model; tf.keras.Model or tf.keras.Sequential
preprocess_fn = partial(preprocess_drift, model=HiddenOutput(model, layer=-1), batch_size=128)

cd = MMDDrift(x_ref, backend='tensorflow', p_val=.05, preprocess_fn=preprocess_fn)
preds = cd.predict(x)
```
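As an illustration of the pretrained text embedding route, the sketch below assumes the TensorFlow backend with `TransformerEmbedding` and a `transformers` tokenizer; the model name, layer selection, `max_len` and batch size are illustrative choices rather than prescribed values:
```python
from functools import partial

from transformers import AutoTokenizer
from alibi_detect.cd import MMDDrift
from alibi_detect.cd.tensorflow import preprocess_drift
from alibi_detect.models.tensorflow import TransformerEmbedding

model_name = 'bert-base-cased'
tokenizer = AutoTokenizer.from_pretrained(model_name)
# embed text using the hidden states of the last 5 transformer layers
embedding = TransformerEmbedding(model_name, embedding_type='hidden_state', layers=[-5, -4, -3, -2, -1])

# x_ref and x are lists of raw text strings
preprocess_fn = partial(preprocess_drift, model=embedding, tokenizer=tokenizer, max_len=100, batch_size=32)
cd = MMDDrift(x_ref, backend='tensorflow', p_val=.05, preprocess_fn=preprocess_fn)
preds = cd.predict(x)
```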
Check the example notebooks (e.g. CIFAR10, movie reviews) for more details.
Isolation Forest (FT Liu et al., 2008)
- Example: Network Intrusion

Mahalanobis Distance (Mahalanobis, 1936)
- Example: Network Intrusion

Auto-Encoder (AE)
- Example: CIFAR10

Variational Auto-Encoder (VAE) (Kingma et al., 2013)
- Examples: Network Intrusion, CIFAR10

Auto-Encoding Gaussian Mixture Model (AEGMM) (Zong et al., 2018)
- Example: Network Intrusion

Variational Auto-Encoding Gaussian Mixture Model (VAEGMM)
- Example: Network Intrusion

Likelihood Ratios (Ren et al., 2019)
- Examples: Genome, Fashion-MNIST vs. MNIST

Prophet Time Series Outlier Detector (Taylor et al., 2018)
- Example: Weather Forecast

Spectral Residual Time Series Outlier Detector (Ren et al., 2019)
- Example: Synthetic Dataset

Sequence-to-Sequence (Seq2Seq) Outlier Detector (Sutskever et al., 2014; Park et al., 2017)
- Examples: ECG, Synthetic Dataset
Adversarial Auto-Encoder (Vacanti and Van Looveren, 2020)
- Example: CIFAR10

Model distillation
- Example: CIFAR10

Kolmogorov-Smirnov
- Example: CIFAR10, molecular graphs, movie reviews

Cramér-von Mises
- Example: Penguins

Fisher's Exact Test
- Example: Penguins

Learned Kernel MMD (Liu et al., 2020)
- Example: CIFAR10

Context-aware MMD (Cobb and Van Looveren, 2022)
- Example: ECG, news topics

Chi-Squared
- Example: Income Prediction

Mixed-type tabular data
- Example: Income Prediction

Classifier (Lopez-Paz and Oquab, 2017)
- Example: CIFAR10, Amazon reviews

Spot-the-diff (adaptation of Jitkrittum et al., 2016)
- Example: MNIST and Wine quality

Classifier and Regressor Uncertainty
- Example: CIFAR10 and Wine, molecular graphs

Online Maximum Mean Discrepancy (see the sketch after this list)
- Example: Wine Quality, Camelyon medical imaging

Online Least-Squares Density Difference (Bu et al., 2017)
- Example: Wine Quality
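Online detectors such as the online MMD detector above are configured with an expected run time (the average number of time steps to a false detection under no drift) and process data one instance at a time. A minimal sketch (the `ert`, `window_size` and data are illustrative):
```python
import numpy as np
from alibi_detect.cd import MMDDriftOnline

x_ref = np.random.randn(1000, 10)  # reference data

# calibrate the detector so that, absent drift, a false alarm occurs on average every 150 steps
cd = MMDDriftOnline(x_ref, ert=150, window_size=20, backend='tensorflow')

# feed instances one at a time; the detector updates its test statistic at every step
for _ in range(50):
    x_t = np.random.randn(10)
    pred = cd.predict(x_t)
    if pred['data']['is_drift']:
        break
```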
The package also contains functionality in `alibi_detect.datasets` to easily fetch a number of datasets for different modalities. For each dataset, either the data and labels or a Bunch object with the data, labels and optional metadata is returned. Example:
```python
from alibi_detect.datasets import fetch_ecg

(X_train, y_train), (X_test, y_test) = fetch_ecg(return_X_y=True)
```
Genome Dataset: `fetch_genome`
- Bacteria genomics dataset for out-of-distribution detection, released as part of Likelihood Ratios for Out-of-Distribution Detection. From the original TL;DR: The dataset contains genomic sequences of 250 base pairs from 10 in-distribution bacteria classes for training, 60 OOD bacteria classes for validation, and another 60 different OOD bacteria classes for test. There are respectively 1, 7 and again 7 million sequences in the training, validation and test sets. For detailed info on the dataset check the README.
```python
from alibi_detect.datasets import fetch_genome

(X_train, y_train), (X_val, y_val), (X_test, y_test) = fetch_genome(return_X_y=True)
```
ECG 5000: `fetch_ecg`
- 5,000 ECGs, originally obtained from Physionet.
NAB: `fetch_nab`
- Any univariate time series in a DataFrame from the Numenta Anomaly Benchmark. A list with the available time series can be retrieved using `alibi_detect.datasets.get_list_nab()`.
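A usage sketch, assuming `fetch_nab` follows the same `return_X_y` convention as the other loaders (the choice of series is illustrative):
```python
from alibi_detect.datasets import fetch_nab, get_list_nab

ts_names = get_list_nab()                        # names of the available NAB time series
X, y = fetch_nab(ts_names[0], return_X_y=True)   # time series DataFrame and outlier labels
```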
CIFAR-10-C: `fetch_cifar10c`
- CIFAR-10-C (Hendrycks & Dietterich, 2019) contains the test set of CIFAR-10, but corrupted and perturbed by various types of noise, blur, brightness etc. at different levels of severity, leading to a gradual decline in a classification model's performance trained on CIFAR-10. `fetch_cifar10c` allows you to pick any severity level or corruption type. The list with available corruption types can be retrieved with `alibi_detect.datasets.corruption_types_cifar10c()`. The dataset can be used in research on robustness and drift. The original data can be found here. Example:
```python
from alibi_detect.datasets import fetch_cifar10c

corruption = ['gaussian_noise', 'motion_blur', 'brightness', 'pixelate']
X, y = fetch_cifar10c(corruption=corruption, severity=5, return_X_y=True)
```
Adversarial CIFAR-10: `fetch_attack`
- Load adversarial instances on a ResNet-56 classifier trained on CIFAR-10. Available attacks: Carlini-Wagner ('cw') and SLIDE ('slide'). Example:
```python
from alibi_detect.datasets import fetch_attack

(X_train, y_train), (X_test, y_test) = fetch_attack('cifar10', 'resnet56', 'cw', return_X_y=True)
```
KDD Cup '99: `fetch_kdd`
- Dataset with different types of computer network intrusions. `fetch_kdd` allows you to select a subset of network intrusions as targets or pick only specified features. The original data can be found here.
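A usage sketch, assuming the default target and feature selection and the same `return_X_y` convention as the other loaders:
```python
from alibi_detect.datasets import fetch_kdd

X, y = fetch_kdd(return_X_y=True)  # network traffic features and intrusion labels
```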
Models and/or building blocks that can be useful outside of outlier, adversarial or drift detection can be found under `alibi_detect.models`. Main implementations:

- PixelCNN++: `alibi_detect.models.pixelcnn.PixelCNN`
- Variational Autoencoder: `alibi_detect.models.autoencoder.VAE`
- Sequence-to-sequence model: `alibi_detect.models.autoencoder.Seq2Seq`
- ResNet: `alibi_detect.models.resnet`
  - Pre-trained ResNet-20/32/44 models on CIFAR-10 can be found on our Google Cloud Bucket and can be fetched as follows:
```python
from alibi_detect.utils.fetching import fetch_tf_model

model = fetch_tf_model('cifar10', 'resnet32')
```
Alibi Detect is integrated into the machine learning model deployment platform Seldon Core and the model serving framework KFServing.
If you use alibi-detect in your research, please consider citing it.
BibTeX entry:
```bibtex
@software{alibi-detect,
  title = {Alibi Detect: Algorithms for outlier, adversarial and drift detection},
  author = {Van Looveren, Arnaud and Klaise, Janis and Vacanti, Giovanni and Cobb, Oliver and Scillitoe, Ashley and Samoilescu, Robert and Athorne, Alex},
  url = {https://github.com/SeldonIO/alibi-detect},
  version = {0.12.1.dev0},
  date = {2024-04-17},
  year = {2019}
}
```