# hallucination

Here are 64 public repositories matching this topic...

Loki: Open-source solution designed to automate the process of verifying factuality

  • Updated Oct 3, 2024
  • Python

✨✨Woodpecker: Hallucination Correction for Multimodal Large Language Models

  • Updated Dec 23, 2024
  • Python

RefChecker provides an automatic checking pipeline and a benchmark dataset for detecting fine-grained hallucinations generated by Large Language Models (a toy sketch of the extract-then-verify pattern follows below).

  • Updated Nov 7, 2024
  • Python
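
RefChecker's real pipeline uses LLM-based claim extraction and entailment checking; the sketch below only illustrates the general extract-then-verify pattern. Every function, threshold, and the lexical-overlap "verifier" in it is a toy stand-in, not RefChecker's API.

    # Toy illustration of claim-level checking: split an answer into claims,
    # then test each claim against a reference text. The lexical-overlap
    # "verifier" is a stand-in for an LLM-based entailment checker.

    def extract_claims(answer: str) -> list[str]:
        # Naive extraction: treat each sentence as one claim.
        return [s.strip() for s in answer.split(".") if s.strip()]

    def supported(claim: str, reference: str, threshold: float = 0.6) -> bool:
        # Toy entailment proxy: share of claim tokens found in the reference.
        tokens = claim.lower().split()
        hits = sum(t in reference.lower() for t in tokens)
        return hits / max(len(tokens), 1) >= threshold

    reference = "Marie Curie won the Nobel Prize in Physics in 1903."
    answer = "Marie Curie won the Nobel Prize in 1903. She was born in Berlin."
    for claim in extract_claims(answer):
        verdict = "supported" if supported(claim, reference) else "possible hallucination"
        print(f"{claim!r}: {verdict}")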

[ICLR'24] Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning

  • Updated Mar 13, 2024
  • Python

[CVPR'24] HallusionBench: You See What You Think? Or You Think What You See? An Image-Context Reasoning Benchmark Challenging for GPT-4V(ision), LLaVA-1.5, and Other Multi-modality Models

  • Updated Nov 13, 2024
  • Python

Explore concepts like Self-Correct, Self-Refine, Self-Improve, Self-Contradict, Self-Play, and Self-Knowledge, alongside o1-like reasoning elevation🍓 and hallucination alleviation🍄 (a minimal self-refine loop is sketched below).

  • Updated Dec 7, 2024
  • Jupyter Notebook
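
Several of the concepts listed above (Self-Refine, Self-Correct) share one control flow: generate an answer, critique it, revise, and repeat. A minimal sketch of that loop, assuming a hypothetical llm(prompt) completion function:

    # Minimal self-refine loop: the model critiques and then revises its own
    # answer until the critique passes or the round budget runs out.

    def self_refine(llm, question: str, max_rounds: int = 3) -> str:
        answer = llm(f"Answer the question.\nQ: {question}")
        for _ in range(max_rounds):
            critique = llm("List factual errors in this answer, or say OK.\n"
                           f"Q: {question}\nA: {answer}")
            if critique.strip().upper().startswith("OK"):
                break
            answer = llm("Rewrite the answer, fixing these issues.\n"
                         f"Q: {question}\nA: {answer}\nIssues: {critique}")
        return answer

    def stub_llm(prompt: str) -> str:
        # Stand-in so the sketch runs; a real llm() would call a model API.
        return "OK" if "List factual errors" in prompt else "Paris."

    print(self_refine(stub_llm, "What is the capital of France?"))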

😎 curated list of awesome LMM hallucinations papers, methods & resources.

  • Updated Mar 23, 2024

Code for the ACL 2024 paper "TruthX: Alleviating Hallucinations by Editing Large Language Models in Truthful Space" (a simplified representation-editing step is sketched below).

  • Updated Mar 26, 2024
  • Python
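
TruthX itself learns an autoencoder that separates a truthful latent space from a semantic one and edits representations there; the sketch below shows only the simpler underlying idea of nudging a hidden state along a precomputed "truthful" direction. The shapes, names, and scaling factor are all assumptions.

    # Simplified representation-editing step: shift one layer's hidden state
    # along a precomputed "truthful" direction. TruthX learns this direction
    # in a dedicated latent space; here it is just a given vector.
    import numpy as np

    def edit_hidden_state(h, truth_dir, alpha=2.0):
        direction = truth_dir / np.linalg.norm(truth_dir)
        return h + alpha * direction

    h = np.random.randn(4096)          # hidden state at some layer (assumed size)
    truth_dir = np.random.randn(4096)  # stand-in for a learned truthful direction
    h_edited = edit_hidden_state(h, truth_dir)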

[IJCAI 2024] FactCHD: Benchmarking Fact-Conflicting Hallucination Detection

  • Updated Apr 28, 2024
  • Python

This is the official repo for Debiasing Large Visual Language Models, including a post-hoc debiasing method and a Visual Debias Decoding strategy.

  • Updated Feb 22, 2025
  • Python

Code & Data for our Paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations"

  • Updated Feb 27, 2024
  • Python
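
The paper's method contrasts the model against a deliberately hallucination-prone copy of itself at decoding time. The generic contrastive-decoding step looks roughly like the sketch below; the paper's exact scoring and plausibility filtering differ, and alpha here is an assumed weight.

    # Contrast the base model against a hallucination-induced copy: tokens the
    # induced model favors are penalized before picking the next token.
    import numpy as np

    def contrastive_next_token(base_logits, induced_logits, alpha=1.0):
        scores = (1 + alpha) * base_logits - alpha * induced_logits
        return int(np.argmax(scores))

    base = np.array([1.8, 2.0, 0.3])     # base model slightly favors token 1
    induced = np.array([0.1, 2.5, 0.2])  # induced model strongly favors token 1
    print(int(np.argmax(base)))                   # 1: plain greedy decoding
    print(contrastive_next_token(base, induced))  # 0: the contrast vetoes token 1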

Official repo for the paper "PHUDGE: Phi-3 as Scalable Judge". Evaluate your LLMs with or without a custom rubric or reference answer, with absolute or relative grading, and more. It also collects available tools, methods, repos, and code for hallucination detection, LLM evaluation, and grading (a generic judge call is sketched below).

  • Updated Jul 10, 2024
  • Jupyter Notebook
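
PHUDGE ships its own prompts and rubric format; the sketch below only shows the general shape of an absolute-grading judge call, with a hypothetical llm completion function and a made-up rubric string.

    # Generic LLM-as-judge call in the absolute-grading style: score one
    # answer from 1 to 5 against a rubric and an optional reference answer.

    def judge(llm, question: str, answer: str,
              rubric: str = "factual accuracy", reference: str = "") -> str:
        prompt = (f"Rate the answer from 1 to 5 on: {rubric}.\n"
                  f"Question: {question}\nAnswer: {answer}\n")
        if reference:
            prompt += f"Reference answer: {reference}\n"
        prompt += "Reply with a short justification, then 'Score: N'."
        return llm(prompt)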

"Enhancing LLM Factual Accuracy with RAG to Counter Hallucinations: A Case Study on Domain-Specific Queries in Private Knowledge-Bases" by Jiarui Li and Ye Yuan and Zehua Zhang

  • Updated Mar 18, 2024
  • HTML
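
The RAG pattern the paper applies can be reduced to: retrieve passages from the private knowledge base, then force the model to answer from them. In the sketch below the token-overlap retriever stands in for a real embedding index, the final model call is omitted, and all names are illustrative.

    # Minimal RAG shape: rank knowledge-base entries against the query, then
    # build a prompt that restricts the model to the retrieved context.

    def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
        q = set(query.lower().split())
        ranked = sorted(docs, key=lambda d: len(q & set(d.lower().split())),
                        reverse=True)
        return ranked[:k]

    def build_prompt(query: str, docs: list[str]) -> str:
        context = "\n".join(f"- {d}" for d in retrieve(query, docs))
        return ("Answer using only the context below; say 'unknown' otherwise.\n"
                f"Context:\n{context}\nQuestion: {query}")

    kb = ["The API rate limit is 100 requests per minute.",
          "Support hours are 9am to 5pm CET.",
          "Billing runs on the first of each month."]
    print(build_prompt("What is the API rate limit?", kb))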

OLAPH: Improving Factuality in Biomedical Long-form Question Answering

  • Updated Sep 10, 2024
  • Python
