[Quick Start][Paper][Citation]

Repo for the paper "From Exploration to Mastery: Enabling LLMs to Master Tools via Self-Driven Interactions" [ICLR'25 Oral]

🔥 News

  • [2025/2/24] We release all the code for DRAFT.
  • [2025/2/11] DRAFT is selected to be presented as an Oral (1.8%).
  • [2025/1/23] DRAFT is accepted by ICLR 2025.
  • [2024/10/10] Our paper and code are released.

💡 Introduction

Due to the inherent understanding gap between LLMs and humans, inefficiencies and inaccuracies in existing tool documentation hamper the effective use of tools by LLMs. Humans acquire tool proficiency through repeated interactions and hands-on experience, and they maintain an up-to-date understanding of these tools even as their functionalities evolve. In light of this, we propose DRAFT, which automates the adjustment and optimization of tool documentation based on feedback derived from the LLM's interactions with the tool.

DRAFT dynamically adjusts and optimizes tool documentation based on the interaction feedback between LLMs and external tools. This significantly bridges the gap between them by enabling LLMs to better comprehend and utilize the tools at their disposal, thereby enhancing the overall tool-using capabilities of LLMs.
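At a high level, DRAFT can be viewed as an iterative explore / learn / rewrite loop over the tool documentation. The sketch below is a conceptual illustration only, not the code in this repository: all LLM-driven components are left as placeholder callables, and DRAFT.py contains the actual implementation.

```python
# Conceptual sketch of DRAFT's self-driven rewriting loop (illustration, not this repo's code).
def draft_loop(tool_doc, explore, learn, rewrite, converged, max_rounds=5):
    """Iteratively refine `tool_doc`.

    explore(doc)            -> trial tool calls plus their execution feedback
    learn(doc, feedback)    -> analysis of gaps between the doc and observed behavior
    rewrite(doc, analysis)  -> revised documentation
    converged(old, new)     -> True when successive revisions stop changing much

    All four callables stand in for LLM-driven components.
    """
    for _ in range(max_rounds):
        feedback = explore(tool_doc)           # experience gathering
        analysis = learn(tool_doc, feedback)   # learning from experience
        new_doc = rewrite(tool_doc, analysis)  # documentation rewriting
        if converged(tool_doc, new_doc):
            break
        tool_doc = new_doc
    return tool_doc
```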

🛠️ Setup

Environment Setup

Our experimental environment is shown below:

openai version: 0.28.0
numpy version: 1.26.4
pandas version: 2.2.2
torch version: 2.3.1
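If you are creating a fresh environment, these versions can be installed with pip (example command; adjust the torch install to match your CUDA setup if needed):

pip install openai==0.28.0 numpy==1.26.4 pandas==2.2.2 torch==2.3.1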

API Key Setup

Get an OpenAI key from OpenAI, a RapidAPI key from RapidAPI or a ToolBench key from the ToolBench repo, a TMDB key from TMDB, and a Spotify key from Spotify.
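How the keys are passed to the scripts depends on this repository's code (check DRAFT.py and the tool wrappers). The snippet below is only one illustrative way to wire them up via environment variables, using the openai 0.28-style global configuration; the environment-variable names are hypothetical.

```python
# Illustrative only: variable names are hypothetical; see the repo's scripts
# for how the keys are actually read.
import os
import openai  # openai==0.28.0-style configuration

openai.api_key = os.environ["OPENAI_API_KEY"]    # OpenAI key
rapidapi_key = os.environ.get("RAPIDAPI_KEY")    # RapidAPI / ToolBench key
tmdb_key = os.environ.get("TMDB_KEY")            # TMDB key
spotify_key = os.environ.get("SPOTIFY_KEY")      # Spotify key
```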

Data Setup

You can download the ToolBench dataset from Google Drive or Tsinghua Cloud and the RestBench dataset from the RestBench repo, then extract all tool documentation. Alternatively, you can directly use our preprocessed tool documentation.
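If you use the preprocessed documentation, loading it is straightforward; the filename below is hypothetical, so adjust it to the files actually shipped with this repository.

```python
# Hypothetical example: adjust the path to the preprocessed documentation
# file(s) included in this repository.
import json

with open("tool_documentation.json", "r", encoding="utf-8") as f:
    tool_docs = json.load(f)

print(f"Loaded {len(tool_docs)} tool documents")
```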

🚀 Quick Start

DRAFT

Run DRAFT to get the revised tool documentation:

python DRAFT.py

Inference

Run Inference_DFSDT to perform inference with the tool documentation revised by DRAFT and examine its effectiveness.

python Inference_DFSDT.py -model_name gpt-4o-2024-08-06 -data_type G3 -method DRAFT

You can specify the model, dataset, and method via command-line arguments.

Evaluation

Run Cal_path_rate to calculate the path rate for evaluating the results.

python Cal_path_rate.py
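For reference, a common way to define path rate (e.g., in RestBench) is the fraction of queries whose predicted tool-call sequence contains the ground-truth tool path in order. The sketch below assumes that definition; Cal_path_rate.py remains the authoritative implementation.

```python
# Rough sketch under an assumed definition of path rate;
# Cal_path_rate.py defines the metric actually used in the paper.
def contains_in_order(predicted, gold):
    """True if `gold` appears in `predicted` as an in-order subsequence."""
    it = iter(predicted)
    return all(tool in it for tool in gold)

def path_rate(predictions, gold_paths):
    """Both arguments are lists of tool-name sequences, one entry per query."""
    hits = sum(contains_in_order(p, g) for p, g in zip(predictions, gold_paths))
    return hits / len(gold_paths)
```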

We use the official code provided by ToolBench to calculate the win rate. You can find the calculation method in the ToolEval repo.

☕️ Citation

If you find our code or work useful for your research, please cite our work.

@inproceedings{quexploration,
  title={From Exploration to Mastery: Enabling LLMs to Master Tools via Self-Driven Interactions},
  author={Qu, Changle and Dai, Sunhao and Wei, Xiaochi and Cai, Hengyi and Wang, Shuaiqiang and Yin, Dawei and Xu, Jun and Wen, Ji-Rong},
  booktitle={The Thirteenth International Conference on Learning Representations},
  year={2025},
  url={https://openreview.net/forum?id=QKBu1BOAwd}
}
