Computer Science > Machine Learning
arXiv:2409.15254 (cs)
[Submitted on 23 Sep 2024 (v1), last revised 3 Oct 2024 (this version, v5)]
Title: Archon: An Architecture Search Framework for Inference-Time Techniques
Authors: Jon Saad-Falcon, Adrian Gamarra Lafuente, Shlok Natarajan, Nahum Maru, Hristo Todorov, Etash Guha, E. Kelly Buchanan, Mayee Chen, Neel Guha, Christopher Ré, Azalia Mirhoseini
Abstract: Inference-time techniques are emerging as highly effective tools to enhance large language model (LLM) capabilities. However, best practices for developing systems that combine these techniques remain underdeveloped due to our limited understanding of the utility of individual inference-time techniques and the interactions between them. Additionally, efficiently and automatically searching the space of model choices, inference-time techniques, and their compositions is challenging due to the large design space. To address these challenges, we introduce Archon, a modular framework for selecting, combining, and stacking layers of inference-time techniques to construct optimized LLM systems for target benchmarks. Rather than relying on a single LLM called once, we leverage a diverse set of LLMs and inference-time techniques, creating LLM systems greater than the sum of their parts. Archon defines an extensible design space, encompassing techniques such as generation ensembling, repeated sampling, ranking, fusion, critiquing, verification, and unit testing. It transforms the problem of building LLM systems into a hyperparameter optimization objective. Given the available LLMs, inference-time techniques, and compute budget, Archon utilizes hyperparameter search techniques to discover optimized architectures for target benchmark(s). We evaluate Archon architectures across a range of instruction-following, reasoning, and coding benchmarks, including MT-Bench, Arena-Hard-Auto, AlpacaEval 2.0, MixEval, MixEval Hard, MATH, and CodeContests. Archon architectures outperform frontier models, such as GPT-4o and Claude 3.5 Sonnet, on these benchmarks, achieving an average accuracy increase of 15.1 percentage points by using all available LLMs. We make our code and datasets publicly available on GitHub: this https URL.
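The abstract frames system construction as a hyperparameter search over stacks of inference-time layers. The sketch below illustrates that framing with a toy random search: architectures are stacks drawn from the technique names listed above, and the scoring function is a hypothetical stand-in, not the paper's actual benchmark evaluation or API.

```python
import random

# Hypothetical design space: an architecture is a stack of
# inference-time "layers"; names mirror the techniques named
# in the abstract. MODELS are placeholder identifiers.
MODELS = ["model_a", "model_b", "model_c"]
LAYERS = ["ensemble", "sample", "rank", "fuse", "critique", "verify"]

def evaluate(architecture, models):
    """Stand-in benchmark score (toy): rewards model diversity
    and stack depth up to a cap, standing in for running the
    candidate system on a target benchmark."""
    depth_bonus = min(len(architecture), 3)
    return len(set(models)) + depth_bonus

def random_search(budget, rng):
    """Sample `budget` candidate (architecture, models) pairs,
    evaluate each, and keep the best-scoring one."""
    best_score, best = float("-inf"), None
    for _ in range(budget):
        depth = rng.randint(1, len(LAYERS))
        arch = rng.sample(LAYERS, depth)          # stack of layers
        models = [rng.choice(MODELS)              # model pool per stack
                  for _ in range(rng.randint(1, 3))]
        score = evaluate(arch, models)
        if score > best_score:
            best_score, best = score, (arch, models)
    return best, best_score

best, score = random_search(budget=50, rng=random.Random(0))
```

In the paper's setting, `evaluate` would be replaced by running the candidate system on held-out benchmark examples under a compute budget, and the random sampler by a more sample-efficient hyperparameter search.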
Subjects: Machine Learning (cs.LG); Artificial Intelligence (cs.AI); Computation and Language (cs.CL)
Cite as: arXiv:2409.15254 [cs.LG]
(or arXiv:2409.15254v5 [cs.LG] for this version)
https://doi.org/10.48550/arXiv.2409.15254 (arXiv-issued DOI via DataCite)
Submission history
From: Jon Saad-Falcon
[v1] Mon, 23 Sep 2024 17:53:42 UTC (1,500 KB)
[v2] Tue, 24 Sep 2024 05:08:18 UTC (1,500 KB)
[v3] Thu, 26 Sep 2024 08:01:39 UTC (1,410 KB)
[v4] Fri, 27 Sep 2024 21:39:30 UTC (1,410 KB)
[v5] Thu, 3 Oct 2024 05:41:48 UTC (1,321 KB)