An experimental project using Monte Carlo Tree Search (MCTS) to refine Language Model (LLM) responses for better accuracy and decision-making.
This project leverages MCTS to explore multiple answer candidates generated by an LLM. By iteratively generating an initial answer, evaluating it, and refining it based on targeted self-feedback, the system strives to improve response quality and decision-making. This approach leverages test-time compute to enhance the precision and robustness of model outputs.
The process follows these key steps:
- Initial Answer Generation: Uses greedy decoding to generate an initial response.
- Feedback Generation: Provides constructive, concise feedback on initial answers. The feedback is generated by the model itself.
- Iterative Refinement: Refines responses based on the feedback through additional model queries.
- Monte Carlo Tree Search: Employs MCTS to explore and evaluate multiple answer paths.
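The loop above can be sketched as follows. This is a toy illustration, not the project's actual implementation: `generate_answer` and `score_answer` stand in for the real LLM calls (self-feedback and self-evaluation), and the tree search uses standard UCB1 selection.

```python
import math
import random

random.seed(0)  # deterministic toy run

def generate_answer(prompt, feedback=None):
    """Stand-in for an LLM call; refinement with feedback tends to score higher."""
    quality = random.random() if feedback is None else min(1.0, random.random() + 0.3)
    return {"text": f"answer(q={quality:.2f})", "quality": quality}

def score_answer(answer):
    """Stand-in for model self-evaluation of an answer."""
    return answer["quality"]

class Node:
    def __init__(self, answer, parent=None):
        self.answer = answer
        self.parent = parent
        self.children = []
        self.visits = 0
        self.value = 0.0

    def ucb1(self, c=1.4):
        # Unvisited nodes are explored first; otherwise balance exploitation
        # (mean value) against exploration (visit-count bonus).
        if self.visits == 0:
            return float("inf")
        return self.value / self.visits + c * math.sqrt(
            math.log(self.parent.visits) / self.visits
        )

def mcts_refine(prompt, iterations=5, max_children=3):
    root = Node(generate_answer(prompt))  # initial (greedy) answer
    for _ in range(iterations):
        # Selection: descend via UCB1 until a node that can still expand.
        node = root
        while len(node.children) == max_children:
            node = max(node.children, key=Node.ucb1)
        # Expansion: refine the selected answer using self-feedback.
        feedback = f"improve: {node.answer['text']}"
        child = Node(generate_answer(prompt, feedback), parent=node)
        node.children.append(child)
        # Evaluation + backpropagation: score and propagate up the tree.
        reward = score_answer(child.answer)
        while child is not None:
            child.visits += 1
            child.value += reward
            child = child.parent
    # Return the best-scoring answer found anywhere in the tree.
    best, stack = root, [root]
    while stack:
        n = stack.pop()
        if score_answer(n.answer) > score_answer(best.answer):
            best = n
        stack.extend(n.children)
    return best.answer
```

In the real system the reward comes from model self-evaluation rather than a stored number, but the selection/expansion/backpropagation skeleton is the same.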
The performance of this approach was evaluated on a subset of the GSM8k test split using the Llama3.2-1B-instruct model with vLLM. A baseline run using zero-shot prompting achieved a pass@8 score of 74% and a majority@8 score of 27%. When applying MCTS for iterative refinement, the pass@8 score marginally increased to 75%, while the majority@8 score improved significantly to 39%. The evaluation was done with llm-eval.
These results suggest that while MCTS does not drastically improve the probability of generating at least one correct answer (pass@8), it significantly enhances response consistency (majority@8), making the model more reliable in decision-making scenarios.
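To make the two metrics concrete: pass@8 succeeds if any of the eight sampled answers is correct, while majority@8 succeeds only if the most frequent answer is correct. A minimal per-question check might look like this (the sample strings are made up for illustration, and the actual llm-eval implementation may differ):

```python
from collections import Counter

def pass_at_k(samples, correct):
    """Pass@k: at least one of the k samples matches the reference answer."""
    return any(s == correct for s in samples)

def majority_at_k(samples, correct):
    """Majority@k: the most frequent of the k samples matches the reference."""
    most_common, _ = Counter(samples).most_common(1)[0]
    return most_common == correct

# Eight hypothetical samples for one question with reference answer "72":
samples = ["72", "68", "72", "65", "68", "68", "70", "68"]
print(pass_at_k(samples, "72"))      # True: at least one sample is correct
print(majority_at_k(samples, "72"))  # False: "68" is the most frequent answer
```

This example shows exactly the gap the results describe: a model can almost always produce one correct sample (high pass@8) while still failing to make the correct answer its consensus (low majority@8).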
A smaller model was selected for this experiment to better illustrate the impact of MCTS. Larger models already achieve high accuracy on GSM8k, making it difficult to demonstrate meaningful improvements. The 1B parameter model provides a more realistic proof of concept by:

- Being resource-efficient, allowing for scalable experimentation.
- Providing a challenging test case, as smaller models struggle more with GSM8k, making improvements more noticeable.
- Ensuring the evaluation remains relevant, since GSM8k has been extensively benchmarked by larger models, leaving little room for additional gains.
- Python: Version 3.11 or higher
The project depends mainly on the following packages:

- `instructor` for guided generation
- `litellm` for a unified API to interact with multiple LLM providers
To install the package directly from PyPI, run the following command:

```bash
pip install llm-mcts-inference
```
To install from source, follow these steps:
Clone the Repository:

```bash
git clone https://github.com/brotSchimmelt/llm-mcts-inference.git
cd llm-mcts-inference
```

Install the Project Dependencies:

If you use `uv`, run the following commands to create a virtual environment and install all requirements:

```bash
uv venv --python 3.11
uv sync
```

Otherwise, install the project and its dependencies with pip:

```bash
pip install .
```
Configure Environment Variables:

Rename the provided `example.env` file to `.env` and update it with your API keys or other configuration details as needed.
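As an illustration, the `.env` file might contain a provider key like the one below. `OPENAI_API_KEY` is the variable `litellm` reads for OpenAI models, but check `example.env` for the exact keys this project expects:

```shell
OPENAI_API_KEY=your-api-key-here
```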
Use the MonteCarloLLM class to generate and improve responses via MCTS:
```python
from llm_mcts_inference.MonteCarloLLM import MonteCarloLLM

# Initialize with a specific model; defaults are defined in settings
llm = MonteCarloLLM(model_name="openai/gpt-4o-mini")

# Define your prompt
prompt = "What is the capital of France?"

# Generate a response using Monte Carlo Tree Search
result = llm.generate(prompt=prompt, iterations=5, max_children=3)

# Output the final improved answer
print("Final Answer:", result.answer)

# Optionally, display the sequence of nodes (answers) along the best path
print("Best Path:", [node.answer for node in result.valid_path])
```
This project is licensed under the MIT license.