Explainable AI framework for data scientists. Explain & debug any blackbox machine learning model with a single line of code. We are looking for co-authors to take this project forward. Reach out @ ms8909@nyu.edu

explainX/explainx


We are looking for co-authors to take this project forward. Reach out @ms8909@nyu.edu

ExplainX is a model explainability/interpretability framework for data scientists and business users.


Use explainX to understand overall model behavior, explain the "why" behind model predictions, remove biases, and create convincing explanations for your business stakeholders.


Why do we need model explainability & interpretability?

Essential for:

  1. Explaining model predictions
  2. Debugging models
  3. Detecting biases in data
  4. Gaining trust of business users
  5. Successfully deploying AI solutions

What questions can we answer with explainX?

  1. Why did my model make a mistake?
  2. Is my model biased? If yes, where?
  3. How can I understand and trust the model's decisions?
  4. Does my model satisfy legal & regulatory requirements?

We have deployed the app on our server so you can play around with the dashboard. Check it out:

Dashboard Demo: http://3.128.188.55:8080/

Getting Started

Installation

Python 3.5+ | Linux, Mac, Windows

```shell
pip install explainx
```

To install on Windows, please install Microsoft C++ Build Tools first, then install the explainX package via pip.

Installation on the cloud

If you are using a notebook instance on the cloud (AWS SageMaker, Colab, Azure), please follow our step-by-step guide to install & run explainX cloud.

Usage (Example)

After successfully installing explainX, open your Python IDE or Jupyter Notebook and follow the code below to use it:

  1. Import the required modules.

```python
from explainx import *
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
```

  2. Load your dataset into X_data and Y_data.

```python
# Load Dataset: X_Data, Y_Data
# X_Data = Pandas DataFrame
# Y_Data = Numpy Array or List
X_data, Y_data = explainx.dataset_heloc()
```

  3. Split the dataset into training & testing sets.

```python
X_train, X_test, Y_train, Y_test = train_test_split(X_data, Y_data, test_size=0.3, random_state=0)
```

  4. Train your model.

```python
# Train a RandomForest model
model = RandomForestClassifier()
model.fit(X_train, Y_train)
```

After you're done training the model, you can either access the complete explainability dashboard or access individual techniques.

Complete Explainability Dashboard

To access the entire dashboard, with all the explainability techniques under one roof, follow the code below. It is great for sharing your work with your peers and managers in an interactive, easy-to-understand way.

5.1. Pass your model and dataset into the explainX function:

```python
explainx.ai(X_test, Y_test, model, model_name="randomforest")
```

5.2. Click on the dashboard link to start exploring model behavior:

```
App running on https://127.0.0.1:8080
```

Explainability Modules

In this latest release, we have also made the explainability techniques available individually. This allows users to choose the technique that fits their AI use case.

6.1. Pass your model, X_Data and Y_Data into the explainx_modules function.

```python
explainx_modules.ai(model, X_test, Y_test)
```

As an upgrade, we have eliminated the need to pass in the model name: explainX is smart enough to identify the model type and the problem type (classification or regression) by itself.
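This kind of automatic detection is straightforward to sketch. Below is a minimal illustration of how a problem type can be inferred from a scikit-learn estimator; `detect_problem_type` is our own hypothetical helper, not part of the explainX API:

```python
from sklearn.base import is_classifier
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

def detect_problem_type(model):
    """Return 'classification' or 'regression' based on the estimator type."""
    return "classification" if is_classifier(model) else "regression"

print(detect_problem_type(RandomForestClassifier()))  # classification
print(detect_problem_type(RandomForestRegressor()))   # regression
```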

You can access multiple modules:

Module 1: Dataframe with Predictions

```python
explainx_modules.dataframe_graphing()
```

Module 2: Model Metrics

```python
explainx_modules.metrics()
```
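For reference, the kind of metrics such a module reports can be computed directly with scikit-learn. This is a generic sketch on synthetic data, not the explainX implementation or its output format:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=300, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

preds = model.predict(X_test)
proba = model.predict_proba(X_test)[:, 1]
metrics = {
    "accuracy": accuracy_score(y_test, preds),
    "f1": f1_score(y_test, preds),
    "roc_auc": roc_auc_score(y_test, proba),
}
print(metrics)
```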

Module 3: Global Level SHAP Values

```python
explainx_modules.shap_df()
```

Module 4: What-If Scenario Analysis (Local Level Explanations)

```python
explainx_modules.what_if_analysis()
```
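At its core, a what-if analysis edits a single observation and re-scores it. A minimal stand-alone sketch of the idea using plain scikit-learn on synthetic data (the `what_if` helper is our own illustration, not part of the explainX API):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Train a small model on synthetic data
X, y = make_classification(n_samples=200, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

def what_if(model, row, feature_idx, new_value):
    """Re-score one observation after changing a single feature value."""
    before = model.predict_proba(row.reshape(1, -1))[0, 1]
    edited = row.copy()
    edited[feature_idx] = new_value
    after = model.predict_proba(edited.reshape(1, -1))[0, 1]
    return before, after

before, after = what_if(model, X[0], feature_idx=2, new_value=X[:, 2].max())
print(f"P(class 1) before: {before:.2f}, after: {after:.2f}")
```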

Module 5: Partial Dependence Plot & Summary Plot

```python
explainx_modules.feature_interactions()
```

Module 6: Model Performance Comparison (Cohort Analysis)

```python
explainx_modules.cohort_analysis()
```
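Cohort analysis compares a performance metric across subgroups of the data. A toy sketch splitting on one feature's median and reporting accuracy per cohort (our own illustration, not the explainX internals):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=400, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Define two cohorts by thresholding feature 0 at its median
mask = X_test[:, 0] > np.median(X_test[:, 0])
preds = model.predict(X_test)
acc_high = accuracy_score(y_test[mask], preds[mask])
acc_low = accuracy_score(y_test[~mask], preds[~mask])
print(f"accuracy (feature 0 high): {acc_high:.2f}")
print(f"accuracy (feature 0 low):  {acc_low:.2f}")
```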

To access the modules within your Jupyter Notebook as IFrames, just pass the mode='inline' argument.

Cloud Installation

If you are running explainX on the cloud (e.g., AWS SageMaker), https://0.0.0.0:8080 will not work.

After installation is complete, just open your terminal and run the following command.

```shell
lt -h "https://serverless.social" -p [port number]
lt -h "https://serverless.social" -p 8080
```


Walkthrough Video Tutorial

Please click on the image below to load the tutorial:

here

(Note: Please manually set it to 720p or greater to have the text appear clearly)

Supported Techniques

| Interpretability Technique | Status |
| --- | --- |
| SHAP Kernel Explainer | Live |
| SHAP Tree Explainer | Live |
| What-if Analysis | Live |
| Model Performance Comparison | Live |
| Partial Dependence Plot | Live |
| Surrogate Decision Tree | Coming Soon |
| Anchors | Coming Soon |
| Integrated Gradients (IG) | Coming Soon |

Main Models Supported

| No. | Model Name | Status |
| --- | --- | --- |
| 1. | Catboost | Live |
| 2. | XGboost==1.0.2 | Live |
| 3. | Gradient Boosting Regressor | Live |
| 4. | RandomForest Model | Live |
| 5. | SVM | Live |
| 6. | KNeighboursClassifier | Live |
| 7. | Logistic Regression | Live |
| 8. | DecisionTreeClassifier | Live |
| 9. | All Scikit-learn Models | Live |
| 10. | Neural Networks | Live |
| 11. | H2O.ai AutoML | Live |
| 12. | TensorFlow Models | Coming Soon |
| 13. | PyTorch Models | Coming Soon |

Contributing

Pull requests are welcome. To make changes to explainX, the ideal approach is to fork the repository and then clone the fork locally.

For major changes, please open an issue first to discuss what you would like to change. Please make sure to update tests as appropriate.

Report Issues

Please help us by reporting any issues you may have while using explainX.

License

MIT
