Computer Science > Artificial Intelligence

arXiv:2308.11914 (cs)
[Submitted on 23 Aug 2023 (v1), last revised 12 Feb 2025 (this version, v4)]

Title: Towards CausalGPT: A Multi-Agent Approach for Faithful Knowledge Reasoning via Promoting Causal Consistency in LLMs

Abstract: Despite the progress of foundation models, knowledge-based reasoning remains a persistent challenge due to their limited capacity for knowledge recall and inference. Existing methods primarily focus on encouraging these models to plan and solve problems or to extensively sample reasoning chains independently. However, these methods often overlook conceptual errors and inferential fallacies, inevitably leading to a series of notorious issues such as misleading conclusions, cognitive biases, and reduced decision quality. While explicit modeling of causality is argued to hold promise in addressing these issues, contemporary research efforts have thus far fallen short in achieving causality-based foundation models. Drawing inspiration from the orchestration of diverse specialized agents collaborating to tackle intricate tasks, we propose a framework named Causal-Consistency Chain-of-Thought (CaCo-CoT) that harnesses multi-agent collaboration to bolster the faithfulness and causality of foundation models, involving a set of reasoners and evaluators. These agents collaboratively work within a reasoning-and-consensus paradigm to improve faithfulness. The reasoners are tasked with generating reasoning chains for knowledge-intensive problems by mimicking human causal reasoning. Meanwhile, the evaluators scrutinize the causal consistency of a reasoner's reasoning chain from a non-causal and a counterfactual perspective. Our framework demonstrates significant superiority over state-of-the-art methods through extensive and comprehensive evaluations across text-based and multi-modal knowledge reasoning tasks (e.g., science question answering and commonsense reasoning).
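To make the reasoning-and-consensus paradigm concrete, below is a minimal, hypothetical Python sketch of the loop the abstract describes. It is not the authors' implementation: the `query_llm` helper stands in for any chat-completion API, and the prompts, consensus rule, and function names are illustrative assumptions.

```python
# Hypothetical sketch of a CaCo-CoT-style reasoning-and-consensus loop.
# All prompts, names, and the consensus rule here are assumptions for
# illustration; see the paper for the authors' actual design.
from dataclasses import dataclass


def query_llm(prompt: str) -> str:
    """Placeholder for a call to a foundation model (any chat endpoint).
    Replace with a real client call before running."""
    raise NotImplementedError


@dataclass
class ReasonerOutput:
    chain: str   # step-by-step causal reasoning chain
    answer: str  # final answer extracted from the chain


def reason(question: str) -> ReasonerOutput:
    # Reasoner agent: produce a causal reasoning chain ending in an answer.
    text = query_llm(
        "Solve the problem by reasoning causally, step by step, then state "
        f"the final answer after 'ANSWER:'.\nQuestion: {question}"
    )
    chain, _, answer = text.partition("ANSWER:")
    return ReasonerOutput(chain=chain.strip(), answer=answer.strip())


def evaluate(question: str, out: ReasonerOutput) -> bool:
    # Evaluator agent: scrutinize causal consistency from the two
    # perspectives named in the abstract (non-causal and counterfactual).
    non_causal = query_llm(
        "Does each step below follow from the previous ones without "
        "spurious, non-causal leaps? Reply YES or NO.\n"
        f"Question: {question}\nChain: {out.chain}"
    )
    counterfactual = query_llm(
        "If the key premises of this chain were altered, would the "
        "conclusion change accordingly? Reply YES or NO.\n"
        f"Question: {question}\nChain: {out.chain}"
    )
    return "YES" in non_causal.upper() and "YES" in counterfactual.upper()


def caco_cot(question: str, n_reasoners: int = 3, max_rounds: int = 2) -> str:
    # Consensus: accept answers whose chains pass evaluation; otherwise
    # sample another round of reasoners.
    outputs: list[ReasonerOutput] = []
    for _ in range(max_rounds):
        outputs = [reason(question) for _ in range(n_reasoners)]
        approved = [o.answer for o in outputs if evaluate(question, o)]
        if approved:
            # Simple majority vote over approved answers.
            return max(set(approved), key=approved.count)
    # Fall back to the last round's majority answer if none are approved.
    answers = [o.answer for o in outputs]
    return max(set(answers), key=answers.count)
```

Under these assumptions, a caller would simply invoke `caco_cot("Which gas do plants absorb during photosynthesis?")`; the key design point is that evaluation gates consensus, so a fluent but causally inconsistent chain cannot win by sampling volume alone.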
Comments: 12 pages, 9 figures, 3 tables
Subjects: Artificial Intelligence (cs.AI); Multiagent Systems (cs.MA)
Cite as: arXiv:2308.11914 [cs.AI]
  (or arXiv:2308.11914v4 [cs.AI] for this version)
  https://doi.org/10.48550/arXiv.2308.11914
arXiv-issued DOI via DataCite

Submission history

From: Ziyi Tang
[v1] Wed, 23 Aug 2023 04:59:21 UTC (3,828 KB)
[v2] Mon, 4 Sep 2023 10:15:51 UTC (3,826 KB)
[v3] Tue, 26 Nov 2024 11:39:04 UTC (5,846 KB)
[v4] Wed, 12 Feb 2025 08:28:49 UTC (5,844 KB)