# explainability

Here are 451 public repositories matching this topic...

A game theoretic approach to explain the output of any machine learning model.

  • Updated Nov 27, 2025
  • Jupyter Notebook
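The "game theoretic approach" the entry above refers to is the Shapley value from cooperative game theory: a feature's attribution is its average marginal contribution across all feature subsets. As a minimal, self-contained sketch of that idea (a brute-force toy, not the SHAP library's API or its efficient estimators):

```python
# Exact Shapley values for a toy model by brute-force enumeration.
# Illustrative only: exponential in the number of features.
from itertools import combinations
from math import factorial

def shapley_values(value_fn, n_features):
    """Average marginal contribution of each feature over all subsets,
    weighted by the standard Shapley coefficient."""
    players = list(range(n_features))
    phi = [0.0] * n_features
    n_fact = factorial(n_features)
    for i in players:
        others = [p for p in players if p != i]
        for k in range(len(others) + 1):
            for S in combinations(others, k):
                weight = factorial(len(S)) * factorial(n_features - len(S) - 1) / n_fact
                phi[i] += weight * (value_fn(set(S) | {i}) - value_fn(set(S)))
    return phi

# Toy additive "model": the prediction is the sum of active feature
# effects, so each Shapley value should recover that feature's effect.
effects = [1.0, 2.0, 3.0]
v = lambda S: sum(effects[j] for j in S)
print(shapley_values(v, 3))  # → [1.0, 2.0, 3.0]
```

For an additive model the Shapley values exactly recover the per-feature effects, and in general they sum to the difference between the full-coalition value and the empty-coalition value (the "efficiency" property that SHAP-style explanations rely on).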

🔅 Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent Machine Learning Models

  • Updated Nov 10, 2025
  • Jupyter Notebook

[CVPR 2021] Official PyTorch implementation for Transformer Interpretability Beyond Attention Visualization, a novel method to visualize classifications by Transformer-based networks.

  • Updated Jan 24, 2024
  • Jupyter Notebook
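Methods like the one above go "beyond attention visualization" by propagating relevance through the network; a simpler baseline in the same family is attention rollout (Abnar & Zuidema), which composes per-layer attention maps while accounting for residual connections. A minimal NumPy sketch of that baseline (an assumption for illustration, not the repository's own algorithm):

```python
# Attention rollout: estimate token-to-token influence by multiplying
# per-layer attention matrices, mixing in the identity for residuals.
import numpy as np

def attention_rollout(attentions):
    """attentions: list of (tokens, tokens) row-stochastic attention maps,
    one per layer, already averaged over heads. Returns rolled-out attention."""
    n = attentions[0].shape[0]
    rollout = np.eye(n)
    for A in attentions:
        A_res = 0.5 * (A + np.eye(n))                       # residual connection
        A_res = A_res / A_res.sum(axis=-1, keepdims=True)   # renormalize rows
        rollout = A_res @ rollout
    return rollout

# Usage: three random attention layers over 4 tokens.
rng = np.random.default_rng(0)
atts = []
for _ in range(3):
    A = rng.random((4, 4))
    atts.append(A / A.sum(axis=-1, keepdims=True))
print(attention_rollout(atts).round(3))
```

Because each renormalized layer map is row-stochastic, the rolled-out matrix is too: each row is a distribution over input tokens describing how much each one influenced that output position.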

Responsible AI Toolbox is a suite of tools providing model and data exploration and assessment user interfaces and libraries that enable a better understanding of AI systems. These interfaces and libraries empower developers and stakeholders of AI systems to develop and monitor AI more responsibly, and take better data-driven actions.

  • Updated Feb 7, 2025
  • TypeScript

[ICCV 2021- Oral] Official PyTorch implementation for Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers, a novel method to visualize any Transformer-based network. Including examples for DETR, VQA.

  • Updated Aug 24, 2023
  • Jupyter Notebook

Visualization toolkit for neural networks in PyTorch (demo available).

  • Updated Sep 21, 2023
  • HTML
shapiq

Making decision trees competitive with neural networks on CIFAR10, CIFAR100, TinyImagenet200, Imagenet

  • Updated May 1, 2023
  • Python

[Pattern Recognition 25] CLIP Surgery for Better Explainability with Enhancement in Open-Vocabulary Tasks

  • Updated Mar 1, 2025
  • Jupyter Notebook
explainx

Explainable AI framework for data scientists. Explain & debug any blackbox machine learning model with a single line of code. We are looking for co-authors to take this project forward. Reach out @ ms8909@nyu.edu

  • Updated Aug 21, 2024
  • Jupyter Notebook
adversarial-explainable-ai

An open-source version of the representation engineering framework for stopping harmful outputs or hallucinations at the level of activations. 100% free, self-hosted and open-source.

  • Updated Nov 29, 2025
  • Python


