
ToRA
ToRA: A Tool-Integrated Reasoning Agent



[🌐 Website] [📜 Paper] [🤗 HF Models] [🐱 GitHub]
[🐦 Twitter] [💬 Reddit] [🍀 Unofficial Blog]

Repo for "ToRA: A Tool-Integrated Reasoning Agent for Mathematical Problem Solving" [ICLR'2024]


Figure 1: Comparing ToRA with baselines on LLaMA-2 base models from 7B to 70B.

🔥 News

  • [2023/10/08] 🔥🔥🔥 All ToRA models released at 🤗 HuggingFace!
  • [2023/09/29] ToRA paper, repo, and website released.

💡 Introduction

ToRA is a series of Tool-integrated Reasoning Agents designed to solve challenging mathematical reasoning problems by interacting with tools, e.g., computation libraries and symbolic solvers. The ToRA series seamlessly integrates natural language reasoning with the use of external tools, thereby combining the analytical power of language with the computational efficiency of external tools.

| Model | Size | GSM8k | MATH | AVG@10 math tasks |
|---|---|---|---|---|
| GPT-4 | - | 92.0 | 42.5 | 78.3 |
| GPT-4 (PAL) | - | 94.2 | 51.8 | 86.4 |
| ToRA-7B | 7B | 68.8 | 40.1 | 62.4 |
| ToRA-Code-7B | 7B | 72.6 | 44.6 | 66.5 |
| ToRA-Code-7B + self-consistency (k=50) | 7B | 76.8 | 52.5 | - |
| ToRA-13B | 13B | 72.7 | 43.0 | 65.9 |
| ToRA-Code-13B | 13B | 75.8 | 48.1 | 71.3 |
| ToRA-Code-13B + self-consistency (k=50) | 13B | 80.4 | 55.1 | - |
| ToRA-Code-34B* | 34B | 80.7 | 51.0 | 74.8 |
| ToRA-Code-34B + self-consistency (k=50) | 34B | 85.1 | 60.0 | - |
| ToRA-70B | 70B | 84.3 | 49.7 | 76.9 |
| ToRA-70B + self-consistency (k=50) | 70B | 88.3 | 56.9 | - |
  • *ToRA-Code-34B is currently the first and only open-source model to achieve over 50% accuracy (pass@1) on the MATH dataset. It significantly outperforms GPT-4's CoT result (51.0 vs. 42.5) and is competitive with GPT-4 solving problems with programs. By open-sourcing our code and models, we hope more breakthroughs will come!

  • 10 math tasks include GSM8k, MATH, GSM-Hard, SVAMP, TabMWP, ASDiv, SingleEQ, SingleOP, AddSub, and MultiArith.

Tool-Integrated Reasoning


Figure 2: A basic example of single-round tool interaction, which interleaves rationales with program-based tool use.
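The single-round interaction in Figure 2 generalizes to a loop: the model interleaves natural-language rationales with fenced Python programs, each program is executed externally, and its output is appended to the context so the model can continue reasoning on it. A minimal sketch of one such step, assuming the model marks programs with markdown-style `python` fences (the delimiters and the `tool_integrated_step` helper here are illustrative, not the repo's actual inference code):

```python
import re
import subprocess
import sys

FENCE = "`" * 3  # markdown code-fence delimiter

def run_program(code: str, timeout: int = 10) -> str:
    """Execute a model-emitted Python snippet and capture its stdout."""
    try:
        proc = subprocess.run(
            [sys.executable, "-c", code],
            capture_output=True, text=True, timeout=timeout,
        )
        return proc.stdout.strip() or proc.stderr.strip()
    except subprocess.TimeoutExpired:
        return "TimeoutError"

def tool_integrated_step(response: str) -> str:
    """Run the last fenced python block in a model response and append
    its output, ready to be fed back to the model as new context."""
    pattern = FENCE + r"python\n(.*?)" + FENCE
    blocks = re.findall(pattern, response, re.DOTALL)
    if not blocks:
        return response  # pure natural-language rationale, nothing to run
    result = run_program(blocks[-1])
    return response + f"\n{FENCE}output\n{result}\n{FENCE}\n"
```

Running model-written code in a plain subprocess is only for illustration; a real deployment would sandbox execution.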

ToRA Training Pipeline


Figure 3: Training ToRA consists of ① imitation learning and ② output space shaping.
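In output space shaping, additional valid tool-use trajectories sampled from the model (the paper also corrects invalid ones with a teacher model) are added back to the training corpus to diversify solution styles. A minimal sketch of the sampling-and-filtering half, where `model_sample` and `grade` are illustrative stand-ins for the sampler and answer checker:

```python
def output_space_shaping(problems, model_sample, grade, k=8):
    """Sample k candidate trajectories per problem and keep those whose
    final answer grades as correct; the kept trajectories augment the
    imitation-learning corpus with diverse valid solutions."""
    shaped = []
    for prob in problems:
        for _ in range(k):
            traj = model_sample(prob["question"])
            if grade(traj["answer"], prob["gt_answer"]):
                shaped.append({"question": prob["question"], "trajectory": traj})
    return shaped
```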

🚀 Quick Start

⚙️ Setup

We recommend using Conda to manage your environment. We use vLLM (0.1.4) to accelerate inference. Run the following commands to set up your environment:

```sh
git clone https://github.com/microsoft/ToRA.git && cd ToRA/src
conda create -n tora python=3.10
conda activate tora
pip install packaging==22.0
pip install torch==2.0.1 --index-url https://download.pytorch.org/whl/cu118  # CUDA 11.8 for example
pip install -r requirements.txt
```

🪁 Inference

We provide a script for inference; simply configure `MODEL_NAME_OR_PATH` and `DATA` in `src/scripts/infer.sh` and run the following command:

```sh
bash scripts/infer.sh
```

We also open-source the model outputs from our best models (ToRA-Code-34B and ToRA-70B) in the `src/outputs/` folder.

⚖️ Evaluation

The `src/eval/grader.py` file contains the grading logic that assesses the accuracy of a predicted answer by comparing it to the ground truth. This logic builds on Hendrycks' MATH grading system, which we have manually verified on the MATH dataset to minimize false positives and false negatives.
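As an illustration of what such grading must handle (the repo's `grader.py` covers far more LaTeX and formatting variants than this), a toy equivalence check might compare normalized strings first and fall back to numeric comparison:

```python
from fractions import Fraction

def grade_answer(pred: str, gt: str, tol: float = 1e-6) -> bool:
    """Toy MATH-style answer equivalence: compare normalized surface
    forms, then fall back to exact-rational / decimal comparison."""
    def norm(s: str) -> str:
        return s.strip().replace(" ", "").rstrip(".")
    if norm(pred) == norm(gt):
        return True
    try:
        # Fraction parses both "1/2" and "0.5", so the two compare equal
        return abs(float(Fraction(norm(pred))) - float(Fraction(norm(gt)))) < tol
    except (ValueError, ZeroDivisionError):
        return False
```

A string match would call "1/2" and "0.5" different answers; the numeric fallback is what makes them grade as equivalent.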

To evaluate the predicted answer, run the following command:

```sh
python -m eval.evaluate \
    --data_name "math" \
    --prompt_type "tora" \
    --file_path "outputs/llm-agents/tora-code-34b-v1.0/math/test_tora_-1_seed0_t0.0_s0_e5000.jsonl" \
    --execute
```

Then you will get:

```
Num samples: 5000
Num scores: 5000
Timeout samples: 0
Empty samples: 2
Mean score: [51.0]
Type scores: {'Algebra': 67.3, 'Counting & Probability': 42.2, 'Geometry': 26.1, 'Intermediate Algebra': 40.0, 'Number Theory': 59.3, 'Prealgebra': 63.8, 'Precalculus': 34.2}
```

⚡️ Training

We are currently undergoing an internal review to open-source ToRA-Corpus-16k; stay tuned! We also open-source our complete training scripts for the community, and you may construct your own dataset for training. We provide some example data in `data/tora/`.

To train a model, run the following command:

```sh
bash scripts/train.sh codellama 7b
```

☕️ Citation

If you find this repository helpful, please consider citing our paper:

```bibtex
@inproceedings{gou2024tora,
  title={To{RA}: A Tool-Integrated Reasoning Agent for Mathematical Problem Solving},
  author={Zhibin Gou and Zhihong Shao and Yeyun Gong and yelong shen and Yujiu Yang and Minlie Huang and Nan Duan and Weizhu Chen},
  booktitle={The Twelfth International Conference on Learning Representations},
  year={2024},
  url={https://openreview.net/forum?id=Ep0TtjVoap}
}
```

🍀 Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

🌟 Star History

Star History Chart
