AdalFlow: The library to build & auto-optimize LLM applications.
⚡ AdalFlow is a PyTorch-like library to build and auto-optimize any LM workflows, from Chatbots, RAG, to Agents. ⚡
AdalFlow proudly powers AdaL CLI, the AI coding agent.

- 100% Open-source Agents SDK: Lightweight, and requires no additional API to set up Human-in-the-Loop and Tracing functionality.
- Say goodbye to manual prompting: AdalFlow provides a unified auto-differentiative framework for both zero-shot optimization and few-shot prompt optimization. Our research, LLM-AutoDiff and Learn-to-Reason Few-shot In-Context Learning, achieves the highest accuracy among all auto-prompt optimization libraries.
- Switch your LLM app to any model via a config: AdalFlow provides model-agnostic building blocks for LLM task pipelines, ranging from RAG and Agents to classical NLP tasks; see the model-swap sketch after the quickstart below.
View Documentation
Install AdalFlow with pip:
```bash
pip install adalflow
```
```python
from adalflow import Agent, Runner
from adalflow.components.model_client.openai_client import OpenAIClient
from adalflow.core.types import (
    ToolCallActivityRunItem,
    RunItemStreamEvent,
    ToolCallRunItem,
    ToolOutputRunItem,
    FinalOutputItem,
)
import asyncio

# Define tools
def calculator(expression: str) -> str:
    """Evaluate a mathematical expression."""
    try:
        result = eval(expression)
        return f"The result of {expression} is {result}"
    except Exception as e:
        return f"Error: {e}"

async def web_search(query: str = "what is the weather in SF today?") -> str:
    """Web search on query."""
    await asyncio.sleep(0.5)
    return "San Francisco will be mostly cloudy today with some afternoon sun, reaching about 67 °F (20 °C)."

def counter(limit: int):
    """A counter that counts up to a limit, streaming each step."""
    final_output = []
    for i in range(1, limit + 1):
        stream_item = f"Count: {i}/{limit}"
        final_output.append(stream_item)
        yield ToolCallActivityRunItem(data=stream_item)
    yield final_output

# Create an agent with tools
agent = Agent(
    name="MyAgent",
    tools=[calculator, web_search, counter],
    model_client=OpenAIClient(),
    model_kwargs={"model": "gpt-4o", "temperature": 0.3},
    max_steps=5,
)
runner = Runner(agent=agent)
```
```python
# Sync call - returns RunnerResult with complete execution history
result = runner.call(prompt_kwargs={"input_str": "Calculate 15 * 7 + 23 and count to 5"})
print(result.answer)
# Output: The result of 15 * 7 + 23 is 128. The counter counted up to 5: 1, 2, 3, 4, 5.

# Access step history
for step in result.step_history:
    print(f"Step {step.step}: {step.function.name} -> {step.observation}")
# Output:
# Step 0: calculator -> The result of 15 * 7 + 23 is 128
# Step 1: counter -> ['Count: 1/5', 'Count: 2/5', 'Count: 3/5', 'Count: 4/5', 'Count: 5/5']
```
```python
# Async call - similar output structure to the sync call
result = await runner.acall(prompt_kwargs={"input_str": "What's the weather in SF and calculate 42 * 3"})
print(result.answer)
# Output: San Francisco will be mostly cloudy today with some afternoon sun,
# reaching about 67 °F (20 °C). The result of 42 * 3 is 126.
```
```python
# Async streaming - real-time event processing
streaming_result = runner.astream(
    prompt_kwargs={"input_str": "Calculate 100 + 50 and count to 3"},
)

# Process streaming events as they arrive
async for event in streaming_result.stream_events():
    if isinstance(event, RunItemStreamEvent):
        if isinstance(event.item, ToolCallRunItem):
            print(f"🔧 Calling: {event.item.data.name}")
        elif isinstance(event.item, ToolCallActivityRunItem):
            print(f"📝 Activity: {event.item.data}")
        elif isinstance(event.item, ToolOutputRunItem):
            print(f"✅ Output: {event.item.data.output}")
        elif isinstance(event.item, FinalOutputItem):
            print(f"🎯 Final: {event.item.data.answer}")
# Output:
# 🔧 Calling: calculator
# ✅ Output: The result of 100 + 50 is 150
# 🔧 Calling: counter
# 📝 Activity: Count: 1/3
# 📝 Activity: Count: 2/3
# 📝 Activity: Count: 3/3
# ✅ Output: ['Count: 1/3', 'Count: 2/3', 'Count: 3/3']
# 🎯 Final: The result of 100 + 50 is 150. Counted to 3 successfully.
```
Set your `OPENAI_API_KEY` environment variable to run these examples.
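The model-agnostic bullet above is easiest to see in code. Below is a minimal sketch using AdalFlow's `Generator` building block: swapping providers means changing only `model_client` and `model_kwargs`, not the pipeline. Treat the model name and the default template's `input_str` variable as assumptions to check against the current docs.

```python
import os

from adalflow.core import Generator
from adalflow.components.model_client.openai_client import OpenAIClient

os.environ.setdefault("OPENAI_API_KEY", "sk-...")  # placeholder; use your real key

# The pipeline code stays the same; only the client and model_kwargs
# change when you switch providers.
generator = Generator(
    model_client=OpenAIClient(),
    model_kwargs={"model": "gpt-4o-mini", "temperature": 0.3},  # assumed model name
)

response = generator(prompt_kwargs={"input_str": "What is AdalFlow?"})
print(response.data)  # GeneratorOutput.data holds the model's answer
```

To target another provider, import the matching client from `adalflow.components.model_client` and update `model_kwargs`; the rest of the pipeline is untouched.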
Try the full Agent tutorial in Colab:
View Quickstart: Learn how AdalFlow optimizes LM workflows end-to-end in 15 minutes.
Go to Documentation for tracing, human-in-the-loop, and more.
- Fine-tuning-free robot planning using LLM auto-differentiation
- Integration of formal methods feedback for robot control
[Jan 2025] Auto-Differentiating Any LLM Workflow: A Farewell to Manual Prompting
- LLM Applications as auto-differentiation graphs
- Token-efficient, with better performance than DSPy
[Dec 2025] Scaling Textual Gradients via Sampling-Based Momentum
- Stable, scalable prompt optimization using momentum-weighted textual gradients
- Gumbel-Top-k sampling improves exploration and integrates seamlessly with TextGrad, DSPy-COPRO, and AdalFlow
AdalFlow is part of a growing ecosystem of libraries that automatically optimize LLM prompts and workflows. Here's how the landscape looks:
| Library | Approach | Key Idea |
|---|---|---|
| AdalFlow | PyTorch-style auto-differentiation | LLM workflows as auto-diff graphs; unified textual gradient descent + few-shot bootstrap optimization in one training loop |
| DSPy | Declarative programming | Write compositional Python code instead of prompts; compiler optimizes prompts and weights automatically |
| Agent Lightning | Framework-agnostic agent trainer | Turn any agent (LangChain, OpenAI SDK, AutoGen, etc.) into an optimizable entity with minimal code changes; supports RL, auto-prompt optimization, and supervised fine-tuning |
| TextGrad | Textual gradient descent | Automatic differentiation via text; uses LLM feedback as gradients to optimize prompts, code, and solutions |
Where AdalFlow fits: AdalFlow draws inspiration from all of the above (see Acknowledgements) and unifies them into a single PyTorch-like framework. You get textual gradients (à la TextGrad), few-shot bootstrap (à la DSPy), and instruction history (à la OPRO), all composable within `Parameter`, `Generator`, `AdalComponent`, and `Trainer`.
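To make that composition concrete, here is a minimal sketch of marking a prompt as trainable, following the pattern in the AdalFlow docs; the template and exact constructor arguments are illustrative and worth verifying against the current API.

```python
import adalflow as adal
from adalflow.components.model_client.openai_client import OpenAIClient
from adalflow.optim.types import ParameterType

# Mark the system prompt as a trainable Parameter: the textual-gradient
# optimizer can propose improved values for it during training.
system_prompt = adal.Parameter(
    data="You are a helpful assistant. Answer concisely.",
    role_desc="The system prompt guiding the assistant",
    requires_opt=True,
    param_type=ParameterType.PROMPT,
)

# An illustrative Jinja2-style template wiring the Parameter into a Generator.
template = r"""<SYS>{{system_prompt}}</SYS> User: {{input_str}}"""

generator = adal.Generator(
    model_client=OpenAIClient(),
    model_kwargs={"model": "gpt-4o-mini"},
    template=template,
    prompt_kwargs={"system_prompt": system_prompt},
)
```

An `AdalComponent` then wraps such a pipeline together with an evaluation function, and a `Trainer` runs the textual-gradient and few-shot bootstrap loop over the trainable parameters.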
We work closely with the VITA Group at the University of Texas at Austin, under the leadership of Dr. Atlas Wang and in collaboration with Dr. Junyuan Hong, who provides valuable support in driving project initiatives.
For collaboration, contact Li Yin.
We are looking for a Dev Rel to help us build the community and support our users. If you are interested, please contact Li Yin.
Full AdalFlow documentation is available at adalflow.sylph.ai.
AdalFlow is named in honor of Ada Lovelace, the pioneering female mathematician who first recognized that machines could go beyond mere calculations. As a team led by a female founder, we aim to inspire more women to pursue careers in AI.
AdalFlow is a community-driven project, and we welcome everyone to join us in building the future of LLM applications.
Join our Discord community to ask questions, share your projects, and get updates on AdalFlow.
To contribute, please read our Contributor Guide.
Many existing works greatly inspired the AdalFlow library! Here is a non-exhaustive list:
- 📚 PyTorch for the design philosophy and the design pattern of `Component`, `Parameter`, and `Sequential`.
- 📚 Micrograd: a tiny autograd engine that inspired our auto-differentiative architecture.
- 📚 Text-Grad for the Textual Gradient Descent text optimizer.
- 📚 DSPy for inspiring the `__{input/output}__` fields in our `DataClass` and the bootstrap few-shot optimizer.
- 📚 OPRO for adding past text instructions along with their accuracy in the text optimizer.
- 📚 PyTorch Lightning for the `AdalComponent` and `Trainer`.