TeamHG-Memex/eli5

A library for debugging/inspecting machine learning classifiers and explaining their predictions


ELI5 is a Python package which helps to debug machine learning classifiers and explain their predictions.

Screenshot: explain_prediction for text data

Screenshot: explain_prediction for image data
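
The screenshots above correspond to calls like the one below. This is a minimal sketch of explain_prediction on text data; the tiny spam/ham corpus and labels are invented for illustration (a real example would use a dataset such as 20 newsgroups).

```python
# A minimal sketch of eli5.explain_prediction on text data.
# The tiny spam/ham corpus below is invented for illustration only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

import eli5

docs = ["free money offer", "meeting at noon", "win a free prize",
        "lunch meeting today", "free prize offer", "project meeting notes"]
labels = [1, 0, 1, 0, 1, 0]  # 1 = spam, 0 = ham (toy labels)

vec = CountVectorizer()
X = vec.fit_transform(docs)
clf = LogisticRegression().fit(X, labels)

# In a Jupyter notebook, eli5.show_prediction(clf, doc, vec=vec) renders the
# same explanation as HTML with the contributing words highlighted.
expl = eli5.explain_prediction(clf, "free lunch meeting", vec=vec)
print(eli5.format_as_text(expl))
```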

It provides support for the following machine learning frameworks and packages:

  • scikit-learn. ELI5 can explain weights and predictions of scikit-learn linear classifiers and regressors, print decision trees as text or as SVG, show feature importances, and explain predictions of decision trees and tree-based ensembles (a minimal sketch follows this list). ELI5 understands text processing utilities from scikit-learn and can highlight text data accordingly. Pipeline and FeatureUnion are supported. It also allows debugging scikit-learn pipelines which contain HashingVectorizer, by undoing hashing.
  • Keras - explain predictions of image classifiers via Grad-CAM visualizations.
  • xgboost - show feature importances and explain predictions of XGBClassifier, XGBRegressor and xgboost.Booster.
  • LightGBM - show feature importances and explain predictions of LGBMClassifier and LGBMRegressor.
  • CatBoost - show feature importances of CatBoostClassifier, CatBoostRegressor and catboost.CatBoost.
  • lightning - explain weights and predictions of lightning classifiers and regressors.
  • sklearn-crfsuite. ELI5 allows checking the weights of sklearn_crfsuite.CRF models.
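
As a concrete illustration of the scikit-learn support, here is a minimal sketch using the toy Iris dataset; the models and parameters below are only examples.

```python
# A minimal sketch of explaining scikit-learn estimators with eli5,
# using the toy Iris dataset for illustration.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

import eli5

iris = load_iris()
X, y = iris.data, iris.target

# Linear classifier: per-class feature weights.
logreg = LogisticRegression(max_iter=1000).fit(X, y)
print(eli5.format_as_text(eli5.explain_weights(
    logreg,
    feature_names=iris.feature_names,
    target_names=list(iris.target_names))))

# Tree-based ensemble: global feature importances plus a per-prediction explanation.
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(eli5.format_as_text(eli5.explain_weights(
    forest, feature_names=iris.feature_names)))
print(eli5.format_as_text(eli5.explain_prediction(
    forest, X[0], feature_names=iris.feature_names)))
```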

ELI5 also implements several algorithms for inspecting black-box models (see Inspecting Black-Box Estimators):

  • TextExplainer allows explaining predictions of any text classifier using the LIME algorithm (Ribeiro et al., 2016). There are utilities for using LIME with non-text data and arbitrary black-box classifiers as well, but this feature is currently experimental.
  • Permutation importance can be used to compute feature importances for black-box estimators (a minimal sketch of both helpers follows this list).
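
Here is a minimal sketch of both black-box helpers; the toy text corpus, the pipeline and the Iris train/validation split are invented for illustration.

```python
# A minimal sketch of the black-box inspection helpers described above.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

import eli5
from eli5.lime import TextExplainer
from eli5.sklearn import PermutationImportance

# --- TextExplainer: LIME-style explanation of a single text prediction ---
docs = ["good movie", "bad movie", "great film", "terrible film",
        "good plot", "bad acting"]
labels = [1, 0, 1, 0, 1, 0]
pipe = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(docs, labels)

te = TextExplainer(random_state=42)
te.fit("a really good movie", pipe.predict_proba)  # only needs predict_proba
print(eli5.format_as_text(te.explain_prediction()))

# --- Permutation importance, measured on held-out data ---
iris = load_iris()
X_train, X_val, y_train, y_val = train_test_split(
    iris.data, iris.target, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
perm = PermutationImportance(clf, random_state=0).fit(X_val, y_val)
print(eli5.format_as_text(
    eli5.explain_weights(perm, feature_names=iris.feature_names)))
```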

Explanation and formatting are separated; you can get a text-based explanation to display in the console, an HTML version embeddable in an IPython notebook or web dashboards, a pandas.DataFrame object if you want to process results further, or a JSON version which allows implementing custom rendering and formatting on a client.
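
For example, a single Explanation object can be rendered in all of these formats. This sketch reuses a small Iris model and assumes pandas is installed for the DataFrame output.

```python
# A minimal sketch of the explanation / formatting split.
import json

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

import eli5

iris = load_iris()
clf = LogisticRegression(max_iter=1000).fit(iris.data, iris.target)

# One Explanation object ...
expl = eli5.explain_weights(clf, feature_names=iris.feature_names,
                            target_names=list(iris.target_names))

# ... rendered in several ways.
text = eli5.format_as_text(expl)           # text for the console
html = eli5.format_as_html(expl)           # HTML for notebooks / dashboards
df = eli5.format_as_dataframe(expl)        # pandas.DataFrame for further processing
payload = json.dumps(eli5.format_as_dict(expl))  # JSON-serializable dict for custom renderers
print(text)
```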

License is MIT.

Check the docs for more.

