AI Explainability 360 (Trusted-AI/AIX360)

Interpretability and explainability of data and machine learning models

The AI Explainability 360 toolkit is an open-source library that supports the interpretability and explainability of datasets and machine learning models. The AI Explainability 360 Python package includes a comprehensive set of algorithms that cover different dimensions of explanations, along with proxy explainability metrics. The toolkit supports tabular, text, image, and time-series data.

The AI Explainability 360 interactive experience provides a gentle introduction to the concepts and capabilities by walking through an example use case for different consumer personas. The tutorials and example notebooks offer a deeper, data-scientist-oriented introduction. The complete API is also available.

There is no single approach to explainability that works best. There are many ways to explain: data vs. model, directly interpretable vs. post hoc explanation, local vs. global, etc. It may therefore be confusing to figure out which algorithms are most appropriate for a given use case. To help, we have created some guidance material and a taxonomy tree that can be consulted.

We have developed the package with extensibility in mind. This library is still in development. We encourage you to contribute your explainability algorithms, metrics, and use cases. To get started as a contributor, please join the AI Explainability 360 Community on Slack by requesting an invitation here. Please review the instructions for contributing code and Python notebooks here.

Supported explainability algorithms

Data explanations

Local post-hoc explanations

Time-Series local post-hoc explanations

  • Time Series Saliency Maps using Integrated Gradients (inspired by Sundararajan et al.; see the sketch after this list)
  • Time Series LIME (time-series adaptation of the classic paper by Ribeiro et al. 2016)
  • Time Series Individual Conditional Expectation (time-series adaptation of Individual Conditional Expectation plots, Goldstein et al.)
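
As a rough illustration of the integrated-gradients idea behind the saliency-map explainer above (this is not the AIX360 tssaliency API, just a minimal NumPy sketch of the underlying technique; f_grad is a hypothetical gradient callable you would supply):

```python
import numpy as np

def integrated_gradients(f_grad, x, baseline=None, steps=50):
    """Minimal integrated-gradients sketch (after Sundararajan et al. 2017).

    f_grad   -- hypothetical callable you supply: returns the model's
                gradient d f / d x at an input of the same shape as x
    x        -- input time series, e.g. shape (timesteps, features)
    baseline -- reference input; an all-zero series is a common choice
    """
    if baseline is None:
        baseline = np.zeros_like(x)
    # Average the gradients along the straight-line path from the
    # baseline to x, then scale by the input difference to obtain
    # per-timestep, per-feature attributions.
    alphas = np.linspace(0.0, 1.0, steps)
    grads = np.stack([f_grad(baseline + a * (x - baseline)) for a in alphas])
    return (x - baseline) * grads.mean(axis=0)
```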

Local direct explanations

Certifying local explanations

Global direct explanations

Global post-hoc explanations

Supported explainability metrics

Setup

Supported Configurations:

| Installation keyword | Explainer(s) | OS | Python version |
|----------------------|--------------|----|----------------|
| cofrnet | cofrnet | macOS, Ubuntu, Windows | 3.10 |
| contrastive | cem, cem_maf | macOS, Ubuntu, Windows | 3.6 |
| dipvae | dipvae | macOS, Ubuntu, Windows | 3.10 |
| gce | gce | macOS, Ubuntu, Windows | 3.10 |
| ecertify | ecertify | macOS, Ubuntu, Windows | 3.10 |
| imd | imd | macOS, Ubuntu | 3.10 |
| lime | lime | macOS, Ubuntu, Windows | 3.10 |
| matching | matching | macOS, Ubuntu, Windows | 3.10 |
| nncontrastive | nncontrastive | macOS, Ubuntu, Windows | 3.10 |
| profwt | profwt | macOS, Ubuntu, Windows | 3.6 |
| protodash | protodash | macOS, Ubuntu, Windows | 3.10 |
| rbm | brcg, glrm | macOS, Ubuntu, Windows | 3.10 |
| rule_induction | ripper | macOS, Ubuntu, Windows | 3.10 |
| shap | shap | macOS, Ubuntu, Windows | 3.6 |
| ted | ted | macOS, Ubuntu, Windows | 3.10 |
| tsice | tsice | macOS, Ubuntu, Windows | 3.10 |
| tslime | tslime | macOS, Ubuntu, Windows | 3.10 |
| tssaliency | tssaliency | macOS, Ubuntu, Windows | 3.10 |

(Optional) Create a virtual environment

AI Explainability 360 requires specific versions of many Python packages which may conflict with other projects on your system. A virtual environment manager is strongly recommended so that dependencies can be installed safely. If you have trouble installing the toolkit, try this first.

Conda

Conda is recommended for all configurations, though Virtualenv is generally interchangeable for our purposes. Miniconda is sufficient (see the difference between Anaconda and Miniconda if you are curious) and can be installed from here if you do not already have it.

Then, create a new Python environment based on the explainability algorithms you wish to use, referring to the table above. For example, for Python 3.10, use the following commands:

conda create --name aix360 python=3.10
conda activate aix360

The shell should now look like (aix360) $. To deactivate the environment, run:

(aix360)$ conda deactivate

The prompt will return to $ or (base)$.

Note: Older versions of conda may use source activate aix360 and source deactivate (activate aix360 and deactivate on Windows).

Installation

Clone the latest version of this repository:

(aix360)$ git clone https://github.com/Trusted-AI/AIX360

If you'd like to run the examples and tutorial notebooks, download the datasets now and place them in their respective folders as described in aix360/data/README.md.

Then, navigate to the root directory of the project, which contains the setup.py file, and run:

(aix360)$ pip install -e .[<algo1>,<algo2>, ...]

The above command installs the packages required by specific algorithms. Here <algo> refers to an installation keyword in the table above. For instance, to install the packages needed by the BRCG, DIPVAE, and TSICE algorithms, one could use:

(aix360)$ pip install -e .[rbm,dipvae,tsice]

The default command pip install . installs the default dependencies alone.

Note that you may not be able to install two algorithms that require different versions of Python in the same environment (for instance contrastive along with rbm).
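
In that case, use separate environments, one per Python version. For example, following the table above for the contrastive keyword (the environment name aix360-contrastive is just an example):

conda create --name aix360-contrastive python=3.6
conda activate aix360-contrastive
(aix360-contrastive)$ pip install -e .[contrastive]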

If you face any issues, please try upgrading pip and setuptools, and uninstall any previous versions of aix360, before attempting the above step again:

(aix360)$ pip install --upgrade pip setuptools
(aix360)$ pip uninstall aix360
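
To confirm afterwards that the package is visible in your environment, a quick sanity check (standard library only, Python 3.8+; nothing aix360-specific) is:

```python
# Sanity check after installation (standard library only, Python 3.8+).
from importlib.metadata import version

print("aix360", version("aix360"))  # raises PackageNotFoundError if missing
import aix360                       # confirms the package itself imports
```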

PIP Installation of AI Explainability 360

If you would like to quickly start using the AI Explainability 360 toolkit without explicitly cloning this repository, you can use one of these options:

  • Install v0.3.0 via repository link
(your environment)$ pip install -e git+https://github.com/Trusted-AI/AIX360.git#egg=aix360[<algo1>,<algo2>,...]

For example, use pip install -e git+https://github.com/Trusted-AI/AIX360.git#egg=aix360[rbm,dipvae,tsice] to install BRCG, DIPVAE, and TSICE. You may need to install cmake, if it is not already available in your environment, using conda install cmake.

  • Install v0.3.0 (or previous versions) via PyPI
(your environment)$ pip install aix360

If you follow either of these two options, you will need to download the notebooks available in the examples folder separately.
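
One simple way to get them (assuming git is available) is to clone the repository solely for its examples folder:

(your environment)$ git clone https://github.com/Trusted-AI/AIX360
(your environment)$ cd AIX360/examples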

Dealing with installation errors

The AI Explainability 360 toolkit is tested on Windows, macOS, and Linux. However, if you still face installation issues due to package dependencies, please try installing the corresponding package via conda (e.g. conda install package-name) and then install the toolkit by following the usual steps. For example, if you face issues related to pygraphviz during installation, use conda install pygraphviz and then install the toolkit.

Please use the right Python environment based on the table above.

Running in Docker

  • Under the AIX360 directory, build the container image from the Dockerfile using docker build -t aix360_docker .
  • Start the container image using the command docker run -it -p 8888:8888 aix360_docker:latest bash, assuming port 8888 is free on your machine.
  • Inside the container, start Jupyter Lab using the command jupyter lab --allow-root --ip 0.0.0.0 --port 8888 --no-browser
  • Access the sample tutorials on your machine using the URL localhost:8888
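
Putting these steps together (the same commands as above; port 8888 assumed free):

docker build -t aix360_docker .
docker run -it -p 8888:8888 aix360_docker:latest bash
jupyter lab --allow-root --ip 0.0.0.0 --port 8888 --no-browser   # inside the container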

Using AI Explainability 360

The examples directory contains a diverse collection of Jupyter notebooks that use AI Explainability 360 in various ways. Both the examples and tutorial notebooks illustrate working code using the toolkit. Tutorials provide additional discussion that walks the user through the various steps of the notebook. See the details about tutorials and examples here.
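
For a flavor of what the notebooks do, here is a minimal sketch using the Protodash explainer to pick representative prototypes from a dataset. It assumes pip install -e .[protodash] and the documented ProtodashExplainer.explain(X, Y, m) signature; the random matrix is only a stand-in for real features, so consult the API docs if your installed version differs:

```python
# Minimal sketch: summarize a dataset with Protodash prototypes.
# Assumes `pip install -e .[protodash]`; the data below is a random
# stand-in for a real feature matrix.
import numpy as np
from aix360.algorithms.protodash import ProtodashExplainer

X = np.random.rand(200, 10)  # 200 samples, 10 features (placeholder data)

explainer = ProtodashExplainer()
# Select m=5 prototypes from X that best summarize X itself:
# W are importance weights, S are row indices of the chosen prototypes.
W, S, _ = explainer.explain(X, X, m=5)
print("prototype rows:", S)
print("weights:", np.round(W, 3))
```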

Citing AI Explainability 360

If you are using AI Explainability 360 for your work, we encourage you to

  • Cite the following paper. The BibTeX entry is as follows:

@misc{aix360-sept-2019,
  title = "One Explanation Does Not Fit All: A Toolkit and Taxonomy of AI Explainability Techniques",
  author = {Vijay Arya and Rachel K. E. Bellamy and Pin-Yu Chen and Amit Dhurandhar and Michael Hind
            and Samuel C. Hoffman and Stephanie Houde and Q. Vera Liao and Ronny Luss and Aleksandra Mojsilovi\'c
            and Sami Mourad and Pablo Pedemonte and Ramya Raghavendra and John Richards and Prasanna Sattigeri
            and Karthikeyan Shanmugam and Moninder Singh and Kush R. Varshney and Dennis Wei and Yunfeng Zhang},
  month = sep,
  year = {2019},
  url = {https://arxiv.org/abs/1909.03012}
}

AIX360 Videos

  • Introductory video to AI Explainability 360 by Vijay Arya and Amit Dhurandhar, September 5, 2019 (35 mins)

Acknowledgements

AIX360 is built with the help of several open-source packages, all of which are listed in setup.py.

License Information

Please view both the LICENSE file and the supplementary license folder in the root directory for license information.

