# visual-explanations

Here are 13 public repositories matching this topic...

PyTorch implementation of recent visual attribution methods for model interpretability

  • Updated Feb 27, 2020
  • Jupyter Notebook

Model-agnostic breakDown plots

  • Updated Mar 12, 2024
  • R
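
breakDown-style explanations attribute a single model prediction to additive per-variable contributions. As a rough illustration, the sketch below uses the Python dalex package (a related implementation, not the R repository listed above) with a toy model and placeholder data.

```python
# Sketch: additive break-down of one prediction with the dalex package.
# Toy model and data; not code from the repository above.
import dalex as dx
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

X = pd.DataFrame({"rooms": [2, 3, 4, 5], "area": [40, 60, 80, 100]})
y = [150, 210, 280, 360]
model = RandomForestRegressor(random_state=0).fit(X, y)

# Wrap the fitted model, then decompose a single prediction
# into per-feature contributions.
explainer = dx.Explainer(model, X, y, label="toy model")
bd = explainer.predict_parts(X.iloc[[0]], type="break_down")
print(bd.result)  # table of variable contributions
bd.plot()         # waterfall-style breakDown plot
```
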
fastcam

A toolkit for efficient computation of saliency maps for explainable AI attribution. This tool was developed at Lawrence Livermore National Laboratory.

  • Updated Sep 29, 2020
  • Jupyter Notebook
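
For context, a basic gradient saliency map can be produced directly in PyTorch; the sketch below is a generic example of that technique, not code from the fastcam toolkit, and the image path is a placeholder.

```python
# Sketch: plain gradient saliency for an ImageNet classifier.
# Generic example; "example.jpg" is a placeholder path.
import torch
from torchvision import models, transforms
from PIL import Image

model = models.resnet18(weights="IMAGENET1K_V1").eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
img = preprocess(Image.open("example.jpg").convert("RGB")).unsqueeze(0)
img.requires_grad_(True)

# Backpropagate the top-class score to the input pixels.
scores = model(img)
scores[0, scores.argmax()].backward()

# Saliency = maximum absolute gradient across colour channels.
saliency = img.grad.abs().max(dim=1)[0].squeeze()  # shape (224, 224)
```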

Local Interpretable (Model-agnostic) Visual Explanations - model visualization for regression problems and tabular data, based on the LIME method. Available on CRAN.

  • Updated Aug 21, 2019
  • R
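
The repository above is an R package; a comparable sketch with the Python lime package, applied to a toy tabular regression model with placeholder feature names, looks like this:

```python
# Sketch: LIME local explanation for a tabular regression model
# (Python lime package; toy data, not the R package above).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = 3.0 * X[:, 0] - X[:, 1] ** 2 + rng.normal(scale=0.1, size=200)
model = GradientBoostingRegressor().fit(X, y)

explainer = LimeTabularExplainer(
    X,
    feature_names=["f0", "f1", "f2", "f3"],
    mode="regression",
)

# Fit a local surrogate around one row and report feature contributions.
exp = explainer.explain_instance(X[0], model.predict, num_features=4)
print(exp.as_list())  # (feature condition, contribution) pairs
```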

[ICCVW 2019] PyTorch code for Class Visualization Pyramid for interpreting spatio-temporal class-specific activations throughout the network

  • Updated Mar 9, 2020
  • Python

An XAI framework that provides Contrastive Whole-output Explanation for image classification.

  • Updated Jul 28, 2023
  • Jupyter Notebook

Code, model and data for our paper: K. Tsigos, E. Apostolidis, S. Baxevanakis, S. Papadopoulos, V. Mezaris, "Towards Quantitative Evaluation of Explainable AI Methods for Deepfake Detection", Proc. ACM Int. Workshop on Multimedia AI against Disinformation (MAD’24) at the ACM Int. Conf. on Multimedia Retrieval (ICMR’24), Thailand, June 2024.

  • Updated Nov 5, 2024
  • Python

Code for the paper "ViConEx-Med: Visual Concept Explainability via Multi-Concept Token Transformer for Medical Image Analysis", 2025.

  • Updated Oct 14, 2025
  • Python

This repository provides training code for classifying aerial images with a custom-built model (transfer learning with InceptionResNetV2 as the backbone), together with LIME and GradCAM explainers and an interface where you can upload or paste images for classification and view the visual explanations.

  • Updated Jul 29, 2024
  • Jupyter Notebook
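
A minimal sketch of that kind of transfer-learning setup, assuming Keras with a frozen InceptionResNetV2 backbone; the class count, input size, and commented-out training call are placeholders, not the repository's actual configuration.

```python
# Sketch: transfer learning with a frozen InceptionResNetV2 backbone.
# NUM_CLASSES and the input size are placeholders.
from tensorflow.keras import layers, models
from tensorflow.keras.applications import InceptionResNetV2

NUM_CLASSES = 10  # placeholder number of aerial-image classes

backbone = InceptionResNetV2(include_top=False, weights="imagenet",
                             input_shape=(299, 299, 3))
backbone.trainable = False  # keep the pretrained feature extractor frozen

model = models.Sequential([
    backbone,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # datasets are placeholders
```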

Official implementation of CASE: Contrastive Activation for Saliency Estimation; a diagnostic exploration and method for faithful, class-discriminative saliency maps.

  • Updated Jun 13, 2025
  • Python

Language-Aware Visual Explanations (LAVE) is a framework designed for image classification tasks, particularly focusing on the ImageNet dataset. Unlike conventional methods that necessitate extensive training, LAVE leverages SHAP (SHapley Additive exPlanations) values to provide insightful textual and visual explanations.

  • Updated Oct 22, 2024
  • Jupyter Notebook
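
As a rough illustration of the SHAP side of such a pipeline, the sketch below computes SHAP values for a standard Keras ImageNet classifier with the shap package's GradientExplainer; the MobileNetV2 backbone and random placeholder images are assumptions, not part of LAVE.

```python
# Sketch: SHAP attributions for an image classifier via GradientExplainer.
# MobileNetV2 and the random images are placeholders, not the LAVE pipeline.
import numpy as np
import shap
from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras.applications.mobilenet_v2 import preprocess_input

model = MobileNetV2(weights="imagenet")

# Background and test batches: random pixels standing in for real images.
background = preprocess_input(np.random.rand(10, 224, 224, 3).astype(np.float32) * 255)
test_image = preprocess_input(np.random.rand(1, 224, 224, 3).astype(np.float32) * 255)

# Expected-gradients SHAP values for the two highest-scoring classes.
explainer = shap.GradientExplainer(model, background)
shap_values, indexes = explainer.shap_values(test_image, ranked_outputs=2)

# Overlay the per-class attributions on the input image.
shap.image_plot(shap_values, test_image)
```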

