visual-explanations
Here are 13 public repositories matching this topic.
Official implementation of Score-CAM in PyTorch
- Updated Aug 6, 2022 - Python
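Score-CAM's core idea is to weight each activation map by the model's confidence on the input masked with that (normalized) map, rather than by gradients. A minimal NumPy sketch of that idea follows; the `score_cam` function, the toy `model_score` callable, and the example maps are illustrative assumptions, not the repository's API, and the maps are assumed to already match the image's spatial size (the real method upsamples them):

```python
import numpy as np

def score_cam(model_score, image, activation_maps):
    """Toy Score-CAM: weight each activation map by the gain in the
    model's target-class score when the input is masked with that
    (min-max normalized) map. `activation_maps` has shape (K, H, W)."""
    baseline = model_score(np.zeros_like(image))
    weights = []
    for m in activation_maps:
        rng = m.max() - m.min()
        norm = (m - m.min()) / rng if rng > 0 else np.zeros_like(m)
        weights.append(model_score(image * norm) - baseline)
    weights = np.maximum(weights, 0)           # keep positive evidence only
    cam = np.tensordot(weights, activation_maps, axes=1)
    return np.maximum(cam, 0)                  # ReLU over the combined map

# Toy model: the target score is the mean intensity of the top-left quadrant.
score = lambda img: img[:2, :2].mean()
img = np.ones((4, 4))
maps = np.stack([np.pad(np.ones((2, 2)), ((0, 2), (0, 2))),   # top-left blob
                 np.pad(np.ones((2, 2)), ((2, 0), (2, 0)))])  # bottom-right blob
cam = score_cam(score, img, maps)
# The top-left map raises the score, so the CAM highlights the top-left.
```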
PyTorch implementation of recent visual attribution methods for model interpretability
- Updated Feb 27, 2020 - Jupyter Notebook
Model-agnostic breakDown plots
- Updated Mar 12, 2024 - R
A toolkit for efficient computation of saliency maps for explainable AI attribution. This tool was developed at Lawrence Livermore National Laboratory.
- Updated Sep 29, 2020 - Jupyter Notebook
Local Interpretable (Model-agnostic) Visual Explanations - model visualization for regression problems and tabular data based on LIME method. Available on CRAN
- Updated Aug 21, 2019 - R
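LIME's core recipe, which the package above adapts to regression problems, is: perturb the instance, weight the perturbed samples by proximity to it, and fit a weighted linear surrogate whose coefficients serve as local attributions. A minimal NumPy sketch of that recipe; the function name, kernel, and all parameters here are illustrative assumptions, not the CRAN package's API:

```python
import numpy as np

def lime_tabular(predict, x, n_samples=500, width=1.0, seed=0):
    """Toy tabular LIME: sample around x, weight by an RBF proximity
    kernel, fit a weighted linear surrogate, and return its
    coefficients as local feature attributions."""
    rng = np.random.default_rng(seed)
    Z = x + rng.normal(scale=0.5, size=(n_samples, x.size))
    y = np.array([predict(z) for z in Z])
    d = np.linalg.norm(Z - x, axis=1)
    w = np.exp(-(d ** 2) / width ** 2)         # proximity weights
    # Weighted least squares with an intercept column.
    A = np.hstack([np.ones((n_samples, 1)), Z]) * np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(A, y * np.sqrt(w), rcond=None)
    return coef[1:]                            # drop the intercept

# A model that locally depends only on the first feature.
predict = lambda z: 2.0 * z[0]
attributions = lime_tabular(predict, np.array([1.0, 1.0]))
# The surrogate recovers the local slope 2 for feature 0 and ~0 for feature 1.
```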
[ICCVW 2019] PyTorch code for Class Visualization Pyramid for interpreting spatio-temporal class-specific activations throughout the network
- Updated Mar 9, 2020 - Python
An XAI framework providing contrastive whole-output explanations for image classification.
- Updated Jul 28, 2023 - Jupyter Notebook
Code, model and data for our paper: K. Tsigos, E. Apostolidis, S. Baxevanakis, S. Papadopoulos, V. Mezaris, "Towards Quantitative Evaluation of Explainable AI Methods for Deepfake Detection", Proc. ACM Int. Workshop on Multimedia AI against Disinformation (MAD’24) at the ACM Int. Conf. on Multimedia Retrieval (ICMR’24), Thailand, June 2024.
- Updated Nov 5, 2024 - Python
Code for the paper "ViConEx-Med: Visual Concept Explainability via Multi-Concept Token Transformer for Medical Image Analysis", 2025.
- Updated Oct 14, 2025 - Python
This repository provides training code for classifying aerial images with a custom-built model (transfer learning with InceptionResNetV2 as the backbone), along with LIME and GradCAM explainers. An interface lets you upload or paste images for classification and view the resulting visual explanations.
- Updated Jul 29, 2024 - Jupyter Notebook
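Grad-CAM, one of the two explainers the repository above wires into its interface, pools the gradients of the target score over each feature map's spatial dimensions, uses those pooled values as channel weights, and applies a ReLU to the weighted sum. A minimal NumPy sketch under the assumption that the feature maps and their gradients have already been extracted from the network (the toy arrays below are illustrative, not the repository's data):

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Toy Grad-CAM: global-average-pool the gradients per channel
    (the alpha_k weights), take the weighted sum of the feature maps,
    and keep only positive evidence. Shapes: (K, H, W) for both."""
    weights = gradients.mean(axis=(1, 2))            # alpha_k
    cam = np.tensordot(weights, feature_maps, axes=1)
    return np.maximum(cam, 0)                        # ReLU

# Channel 0 (a diagonal pattern) has positive gradient; channel 1 has none.
fmaps = np.stack([np.eye(3), np.ones((3, 3))])
grads = np.stack([np.ones((3, 3)), np.zeros((3, 3))])
cam = grad_cam(fmaps, grads)
# Only the diagonal pattern survives in the class activation map.
```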
Similarity Differences and Uniqueness Explainable AI method
- Updated Apr 29, 2022 - Python
Official implementation of CASE: Contrastive Activation for Saliency Estimation; a diagnostic exploration and method for faithful, class-discriminative saliency maps.
- Updated Jun 13, 2025 - Python
Language-Aware Visual Explanations (LAVE) is a framework designed for image classification tasks, particularly focusing on the ImageNet dataset. Unlike conventional methods that necessitate extensive training, LAVE leverages SHAP (SHapley Additive exPlanations) values to provide insightful textual and visual explanations.
- Updated Oct 22, 2024 - Jupyter Notebook
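The SHAP values that LAVE builds on are Shapley values from cooperative game theory: each feature's average marginal contribution over all orderings in which it can join a coalition. For small feature counts the exact computation is direct. A minimal stdlib sketch; the toy linear "model" and the zero baseline for absent features are assumptions for illustration, not LAVE's setup:

```python
from itertools import combinations
from math import factorial

def shapley_values(value, players):
    """Exact Shapley values: weight each marginal contribution
    value(S + {p}) - value(S) by |S|! * (n - |S| - 1)! / n!."""
    n = len(players)
    phi = {}
    for p in players:
        others = [q for q in players if q != p]
        total = 0.0
        for r in range(n):
            for subset in combinations(others, r):
                s = frozenset(subset)
                w = factorial(r) * factorial(n - r - 1) / factorial(n)
                total += w * (value(s | {p}) - value(s))
        phi[p] = total
    return phi

# Toy model: output = 3*x1 + 1*x2; a coalition's value is the model
# output with absent features zeroed out (the baseline).
x = {"x1": 2.0, "x2": 4.0}
coef = {"x1": 3.0, "x2": 1.0}
v = lambda s: sum(coef[f] * x[f] for f in s)
phi = shapley_values(v, list(x))
# For a linear model with zero baseline, each feature's Shapley value
# is simply coefficient * value: phi[x1] = 6, phi[x2] = 4.
```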