hallucination
Here are 64 public repositories matching this topic...
Loki: an open-source solution that automates factuality verification
Updated Oct 3, 2024 · Python
Awesome-LLM-Robustness: a curated list of work on uncertainty, reliability, and robustness in large language models
Updated Feb 28, 2025
✨✨Woodpecker: Hallucination Correction for Multimodal Large Language Models
Updated Dec 23, 2024 · Python
RefChecker provides an automatic checking pipeline and a benchmark dataset for detecting fine-grained hallucinations generated by Large Language Models.
Updated Nov 7, 2024 · Python
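To illustrate the claim-level checking idea behind tools like RefChecker, here is a minimal, hypothetical sketch: split a response into atomic claims, then verify each one against a reference. The extractor and checker below are trivial stand-ins (sentence splitting and lexical overlap), not RefChecker's actual API; real pipelines use an LLM to decompose claims and an NLI model or LLM judge to verify them.

```python
import re

def _tokens(text: str) -> set[str]:
    # Lowercased alphanumeric tokens, punctuation stripped.
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def extract_claims(response: str) -> list[str]:
    # Stand-in extractor: one claim per sentence. Real pipelines decompose
    # responses into finer-grained triplets with an LLM.
    return [s.strip() for s in response.split(".") if s.strip()]

def check_claim(claim: str, reference: str) -> str:
    # Stand-in checker: lexical overlap instead of an NLI model or LLM judge.
    overlap = len(_tokens(claim) & _tokens(reference)) / max(len(_tokens(claim)), 1)
    return "supported" if overlap >= 0.6 else "unverified"

response = "The Eiffel Tower is in Paris. It was completed in 1850."
reference = "The Eiffel Tower, completed in 1889, stands in Paris, France."
for claim in extract_claims(response):
    print(f"{claim!r} -> {check_claim(claim, reference)}")
```

Run as written, the correct claim comes back "supported" and the fabricated date "unverified"; swapping the overlap heuristic for a real entailment model is the step that makes this fine-grained detection rather than string matching.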
[ICLR'24] Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
Updated Mar 13, 2024 · Python
[CVPR'24] HallusionBench: You See What You Think? Or You Think What You See? An Image-Context Reasoning Benchmark Challenging for GPT-4V(ision), LLaVA-1.5, and Other Multi-modality Models
Updated Nov 13, 2024 · Python
Explore concepts like Self-Correct, Self-Refine, Self-Improve, Self-Contradict, Self-Play, and Self-Knowledge, alongside o1-like reasoning elevation🍓 and hallucination alleviation🍄.
Updated Dec 7, 2024 · Jupyter Notebook
[ACL 2024] User-friendly evaluation framework: Eval Suite & Benchmarks: UHGEval, HaluEval, HalluQA, etc.
Updated Nov 12, 2024 · Python
😎 A curated list of awesome LMM hallucination papers, methods & resources.
Updated Mar 23, 2024
Code for ACL 2024 paper "TruthX: Alleviating Hallucinations by Editing Large Language Models in Truthful Space"
Updated Mar 26, 2024 · Python
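TruthX belongs to the family of representation-editing methods that steer a model's hidden states during decoding. Here is a minimal sketch of that general mechanism using a forward hook on GPT-2: nudge one layer's activations along a fixed direction. The random direction is a placeholder; TruthX's actual contribution, learning a truthful space with an autoencoder, is not reproduced here.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

layer, alpha = 6, 4.0
direction = torch.randn(model.config.n_embd)  # placeholder "truthful" direction
direction = direction / direction.norm()

def steer(module, inputs, output):
    # GPT2Block returns a tuple; hidden states are its first element.
    hidden = output[0] + alpha * direction
    return (hidden,) + output[1:]

handle = model.transformer.h[layer].register_forward_hook(steer)
ids = tok("The capital of France is", return_tensors="pt").input_ids
out = model.generate(ids, max_new_tokens=8, do_sample=False,
                     pad_token_id=tok.eos_token_id)
handle.remove()
print(tok.decode(out[0]))
```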
[NeurIPS 2024] Knowledge Circuits in Pretrained Transformers
Updated Feb 20, 2025 · Python
An up-to-date curated list of state-of-the-art research, papers & resources on hallucinations in large vision-language models
Updated Feb 22, 2025
[IJCAI 2024] FactCHD: Benchmarking Fact-Conflicting Hallucination Detection
Updated Apr 28, 2024 · Python
The official repo for Debiasing Large Visual Language Models, including a post-hoc debiasing method and a Visual Debias Decoding strategy.
Updated Feb 22, 2025 · Python
Code & data for our paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations"
Updated Feb 27, 2024 · Python
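The paper's induce-then-contrast idea scores each next token by the factual model's logits minus those of a deliberately hallucination-prone model. A minimal sketch of that decoding rule follows; distilgpt2 merely stands in for the paper's induced hallucinatory model (which is fine-tuned on fabricated data), and alpha is an illustrative penalty strength.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
expert = AutoModelForCausalLM.from_pretrained("gpt2").eval()       # factual model
amateur = AutoModelForCausalLM.from_pretrained("distilgpt2").eval()  # stand-in for the induced model

alpha = 0.5
ids = tok("The capital of Australia is", return_tensors="pt").input_ids
with torch.no_grad():
    for _ in range(8):  # greedy contrastive decoding
        good = expert(ids).logits[:, -1, :]
        bad = amateur(ids).logits[:, -1, :]
        scores = (1 + alpha) * good - alpha * bad  # penalize hallucination-prone tokens
        next_id = scores.argmax(dim=-1, keepdim=True)
        ids = torch.cat([ids, next_id], dim=-1)
print(tok.decode(ids[0]))
```

The two models must share a tokenizer (gpt2 and distilgpt2 do), since the logits are combined position-wise over the same vocabulary.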
Official repo for the paper PHUDGE: Phi-3 as Scalable Judge. Evaluate your LLMs with or without a custom rubric or reference answer, in absolute or relative mode, and more. It also collects available tools, methods, repos, and code for hallucination detection, LLM evaluation, grading, and much more.
Updated Jul 10, 2024 · Jupyter Notebook
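For readers new to the LLM-as-judge setup PHUDGE implements, here is a minimal sketch of its two halves: assembling a grading prompt from a rubric plus an optional reference answer (the anchor for absolute grading), and parsing a score out of the judge's reply. The template and regex are illustrative, not PHUDGE's actual prompt format.

```python
import re

def judge_prompt(question: str, answer: str, rubric: str,
                 reference: str | None = None) -> str:
    parts = [
        f"Question: {question}",
        f"Answer to grade: {answer}",
        f"Rubric: {rubric}",
    ]
    if reference:  # absolute grading can anchor on a reference answer
        parts.append(f"Reference answer: {reference}")
    parts.append("Rate the answer from 1 to 5. Reply as 'Score: <n>'.")
    return "\n".join(parts)

def parse_score(judge_reply: str) -> int | None:
    # Pull the first 1-5 score out of the judge model's free-text reply.
    m = re.search(r"Score:\s*([1-5])", judge_reply)
    return int(m.group(1)) if m else None

prompt = judge_prompt(
    "What causes tides?", "The moon's gravity.",
    "Factual accuracy and completeness.",
)
print(prompt)                                   # send this to the judge model
print(parse_score("Score: 4 - accurate but terse"))  # -> 4
```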
"Enhancing LLM Factual Accuracy with RAG to Counter Hallucinations: A Case Study on Domain-Specific Queries in Private Knowledge-Bases" by Jiarui Li and Ye Yuan and Zehua Zhang
- Updated
Mar 18, 2024 - HTML
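The RAG pattern this case study applies is simple to sketch: retrieve the knowledge-base passages most similar to the query, then ground the prompt in them so the model answers from retrieved text rather than parametric memory. In the toy version below, TF-IDF stands in for the paper's embedding model, and the knowledge-base snippets are invented for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical private knowledge base, for illustration only.
knowledge_base = [
    "The support portal resets passwords via the /auth/reset endpoint.",
    "Billing disputes must be filed within 30 days of the invoice date.",
    "VPN access requires a ticket approved by the security team.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    # Rank documents by cosine similarity to the query in TF-IDF space.
    vec = TfidfVectorizer().fit(knowledge_base + [query])
    scores = cosine_similarity(vec.transform([query]),
                               vec.transform(knowledge_base))[0]
    top = scores.argsort()[::-1][:k]
    return [knowledge_base[i] for i in top]

query = "How do I reset my password?"
context = "\n".join(retrieve(query))
prompt = (
    "Answer using ONLY the context below. Say 'not found' otherwise.\n"
    f"Context:\n{context}\n\nQuestion: {query}"
)
print(prompt)  # feed this grounded prompt to any LLM
```

The instruction to answer only from the context, and to say "not found" otherwise, is the part that directly targets hallucination: it gives the model a sanctioned way to decline instead of inventing an answer.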
[ICLR 2025] MLLM can see? Dynamic Correction Decoding for Hallucination Mitigation
Updated Dec 10, 2024 · Python
OLAPH: Improving Factuality in Biomedical Long-form Question Answering
Updated Sep 10, 2024 · Python