hallucination-mitigation
Here are 17 public repositories matching this topic...
UQLM (Uncertainty Quantification for Language Models) is a Python package for UQ-based LLM hallucination detection
- Updated Jul 11, 2025 - Python
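A minimal sketch of the sampling-based (self-consistency) flavor of UQ-based hallucination detection, assuming agreement across repeated generations as the uncertainty signal; this is a generic illustration, not UQLM's actual API:

```python
# Generic self-consistency sketch: sample several responses to the same
# prompt and flag low agreement as a hallucination signal. The similarity
# measure and threshold are illustrative assumptions, not UQLM's API.
from itertools import combinations

def token_jaccard(a: str, b: str) -> float:
    """Crude lexical agreement between two responses."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 1.0

def consistency_score(responses: list[str]) -> float:
    """Mean pairwise agreement across sampled responses."""
    pairs = list(combinations(responses, 2))
    return sum(token_jaccard(a, b) for a, b in pairs) / len(pairs) if pairs else 1.0

def likely_hallucination(responses: list[str], threshold: float = 0.5) -> bool:
    # Low cross-sample agreement implies high uncertainty about the answer.
    return consistency_score(responses) < threshold

samples = ["Paris is the capital of France.",
           "The capital of France is Paris.",
           "Lyon is the capital of France."]
print(round(consistency_score(samples), 3), likely_hallucination(samples))
```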
An up-to-date curated list of state-of-the-art research, papers, and resources on hallucinations in large vision-language models
- Updated May 10, 2025
[ICLR 2025] MLLM can see? Dynamic Correction Decoding for Hallucination Mitigation
- Updated Dec 10, 2024 - Python
[ACL 2024] ANAH & [NeurIPS 2024] ANAH-v2 & [ICLR 2025] Mask-DPO
- Updated Apr 30, 2025 - Python
A novel alignment framework that leverages image retrieval to mitigate hallucinations in Vision Language Models.
- Updated Apr 21, 2025 - Python
✨ Official code for our paper: "Uncertainty-o: One Model-agnostic Framework for Unveiling Epistemic Uncertainty in Large Multimodal Models".
- Updated Mar 13, 2025 - Python
[ICLR 2025] Data-Augmented Phrase-Level Alignment for Mitigating Object Hallucination
- Updated Jan 27, 2025 - Python
[CVPR 2025 Workshop] PAINT (Paying Attention to INformed Tokens) is a plug-and-play framework that intervenes in the self-attention of the LLM and selectively boosts attention to informed visual tokens to mitigate hallucinations in Vision Language Models
- Updated Jun 2, 2025 - Python
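A rough sketch of this kind of attention intervention, assuming additive boosting of pre-softmax attention logits at image-token key positions; the shapes, indices, and boost factor below are illustrative, not PAINT's implementation:

```python
# Illustrative sketch of boosting attention to selected visual tokens
# inside a self-attention layer. Shapes, the boost value, and the token
# indices are assumptions for demonstration, not PAINT's code.
import torch

def boosted_attention(scores: torch.Tensor,
                      visual_idx: torch.Tensor,
                      boost: float = 1.5) -> torch.Tensor:
    """scores: (batch, heads, q_len, k_len) pre-softmax attention logits."""
    scores = scores.clone()
    # Additively boost logits at the key positions of informed visual tokens,
    # so those tokens receive more attention mass after the softmax.
    scores[..., visual_idx] += boost
    return torch.softmax(scores, dim=-1)

scores = torch.randn(1, 8, 16, 64)       # random pre-softmax logits
visual_idx = torch.arange(0, 32)         # pretend the first 32 keys are image tokens
attn = boosted_attention(scores, visual_idx)
print(attn.shape, attn.sum(dim=-1).allclose(torch.ones(1)))  # rows still sum to 1
```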
Agentic-AI framework w/o the headaches
- Updated Jun 8, 2025 - Python
Fully automated LLM evaluator
- Updated Oct 14, 2024 - Python
[NAACL Findings 2025] Code and data of "Mitigating Hallucinations in Multimodal Spatial Relations through Constraint-Aware Prompting"
- Updated May 2, 2025 - Python
Official PyTorch implementation of "LPOI: Listwise Preference Optimization for Vision Language Models" (ACL 2025 Main)
- Updated May 28, 2025 - Python
This repository contains all code to support the paper: "On the Importance of Text Preprocessing for Multimodal Representation Learning and Pathology Report Generation".
- Updated Jun 6, 2025 - Jupyter Notebook
[ACL Findings 2025] "Retrieval Visual Contrastive Decoding to Mitigate Object Hallucinations in Large Vision-Language Models"
- Updated Jun 19, 2025 - Python
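A generic contrastive-decoding sketch, assuming the common form that amplifies an evidence-conditioned distribution against an evidence-free one; the alpha value and random logits are illustrative assumptions, not the paper's retrieval-based variant:

```python
# Generic contrastive-decoding sketch: amplify what the grounded
# (e.g., retrieval-augmented) distribution knows beyond the ungrounded one.
import torch

def contrastive_logits(logits_grounded: torch.Tensor,
                       logits_plain: torch.Tensor,
                       alpha: float = 1.0) -> torch.Tensor:
    # (1 + alpha) * grounded - alpha * plain: a common contrastive form
    # that pushes the next-token choice toward evidence-supported tokens.
    return (1 + alpha) * logits_grounded - alpha * logits_plain

vocab = 32000
g = torch.randn(vocab)   # logits conditioned on retrieved visual evidence
p = torch.randn(vocab)   # logits without the evidence
next_token = contrastive_logits(g, p).argmax().item()
print(next_token)
```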
Detecting Hallucinations in LLMs
- Updated Feb 12, 2025 - Python
An interactive Python chatbot demonstrating real-time contextual hallucination detection in Large Language Models using the "Lookback Lens" method. This project implements the attention-based ratio feature extraction and a trained classifier to identify when an LLM deviates from the provided context during generation.
- Updated May 16, 2025 - Python
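A minimal sketch of the lookback-ratio feature and classifier idea, assuming a single attention snapshot and synthetic labels; the actual method aggregates per-head ratios across layers and generation steps:

```python
# Sketch of the Lookback Lens idea: per-head "lookback ratio" =
# attention mass on context tokens / (context + generated tokens),
# used as features for a simple classifier. The shapes, synthetic
# data, and classifier choice are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

def lookback_ratios(attn: np.ndarray, n_context: int) -> np.ndarray:
    """attn: (heads, k_len) attention of the current query over all keys."""
    ctx = attn[:, :n_context].sum(axis=1)
    gen = attn[:, n_context:].sum(axis=1)
    return ctx / (ctx + gen + 1e-9)          # one ratio feature per head

rng = np.random.default_rng(0)
heads, k_len, n_context = 32, 128, 96
# Fake feature matrix: one row of per-head ratios per generated span.
X = np.stack([lookback_ratios(rng.random((heads, k_len)), n_context)
              for _ in range(200)])
y = rng.integers(0, 2, size=200)             # 1 = hallucinated span (synthetic)
clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict(X[:5]))
```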
MedRAG-2 is an enhanced Retrieval-Augmented Generation pipeline that addresses LLM hallucinations through a prompt redesign of the MedRAG framework: it enforces strict grounding, validates structured output with Pydantic schemas, and uses cross-encoder-based re-ranking for improved retrieval precision. Built for KAUST's RAG course.
- Updated Jul 9, 2025 - Python
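A small sketch of the structured-output-validation step, assuming a hypothetical schema; the fields and sample JSON below are not MedRAG-2's actual schema:

```python
# Sketch of validating an LLM's JSON output against a Pydantic schema
# so malformed or ungrounded answers are rejected before being returned.
# The schema fields and sample output are hypothetical.
from pydantic import BaseModel, ValidationError

class GroundedAnswer(BaseModel):
    answer: str
    source_ids: list[str]   # retrieved passages the answer must cite

raw = '{"answer": "Metformin is first-line therapy for type 2 diabetes.", "source_ids": ["doc_12"]}'
try:
    parsed = GroundedAnswer.model_validate_json(raw)   # Pydantic v2 API
    if not parsed.source_ids:
        raise ValueError("answer cites no retrieved sources")
    print(parsed.answer)
except (ValidationError, ValueError) as err:
    # Malformed or ungrounded output: reject, retry generation, or fall back.
    print("Rejected:", err)
```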