AdalFlow: The library to build & auto-optimize LLM applications.



AdalFlow proudly powers AdaL CLI — The AI coding agent

Try Quickstart in Colab


Why AdalFlow

  1. 100% open-source agents SDK: lightweight, with no additional API needed to set up Human-in-the-Loop and Tracing functionalities.
  2. Say goodbye to manual prompting: AdalFlow provides a unified auto-differentiative framework for both zero-shot optimization and few-shot prompt optimization. Our research (LLM-AutoDiff and Learn-to-Reason Few-shot In-Context Learning) achieves the highest accuracy among all auto-prompt optimization libraries.
  3. Switch your LLM app to any model via a config: AdalFlow provides model-agnostic building blocks for LLM task pipelines, ranging from RAG and Agents to classical NLP tasks (see the sketch after this list).
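
As a rough sketch of point 3: the same pipeline can be retargeted at another model by changing a single config entry. `Generator` and `OpenAIClient` are AdalFlow's documented classes; the config layout and prompt below are illustrative assumptions, not a canonical format.

```python
# Minimal sketch of a config-driven, model-agnostic pipeline.
# Assumption: this config layout is illustrative, not AdalFlow's canonical format.
from adalflow import Generator
from adalflow.components.model_client.openai_client import OpenAIClient

# Swap this entry (e.g. to another supported model client) to retarget the
# pipeline without touching any pipeline code.
config = {
    "model_client": OpenAIClient(),
    "model_kwargs": {"model": "gpt-4o-mini", "temperature": 0.3},
}

generator = Generator(
    template="<SYS> You are a helpful assistant. </SYS> User: {{input_str}}",  # Jinja2
    **config,
)
print(generator(prompt_kwargs={"input_str": "What is AdalFlow?"}).data)
```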

[Image: AdalFlow optimized prompt]

[Image: AdalFlow MLflow integration]

View Documentation

Quick Start

Install AdalFlow with pip:

```bash
pip install adalflow
```

Hello World Agent Example

```python
import asyncio

from adalflow import Agent, Runner
from adalflow.components.model_client.openai_client import OpenAIClient
from adalflow.core.types import (
    ToolCallActivityRunItem,
    RunItemStreamEvent,
    ToolCallRunItem,
    ToolOutputRunItem,
    FinalOutputItem,
)

# Define tools
def calculator(expression: str) -> str:
    """Evaluate a mathematical expression."""
    try:
        result = eval(expression)
        return f"The result of {expression} is {result}"
    except Exception as e:
        return f"Error: {e}"

async def web_search(query: str = "what is the weather in SF today?") -> str:
    """Web search on query."""
    await asyncio.sleep(0.5)
    return "San Francisco will be mostly cloudy today with some afternoon sun, reaching about 67 °F (20 °C)."

def counter(limit: int):
    """A counter that counts up to a limit."""
    final_output = []
    for i in range(1, limit + 1):
        stream_item = f"Count: {i}/{limit}"
        final_output.append(stream_item)
        yield ToolCallActivityRunItem(data=stream_item)
    yield final_output

# Create agent with tools
agent = Agent(
    name="MyAgent",
    tools=[calculator, web_search, counter],
    model_client=OpenAIClient(),
    model_kwargs={"model": "gpt-4o", "temperature": 0.3},
    max_steps=5,
)
runner = Runner(agent=agent)
```

1. Synchronous Call Mode

```python
# Sync call - returns RunnerResult with complete execution history
result = runner.call(
    prompt_kwargs={"input_str": "Calculate 15 * 7 + 23 and count to 5"}
)
print(result.answer)
# Output: The result of 15 * 7 + 23 is 128. The counter counted up to 5: 1, 2, 3, 4, 5.

# Access step history
for step in result.step_history:
    print(f"Step {step.step}: {step.function.name} -> {step.observation}")
# Output:
# Step 0: calculator -> The result of 15 * 7 + 23 is 128
# Step 1: counter -> ['Count: 1/5', 'Count: 2/5', 'Count: 3/5', 'Count: 4/5', 'Count: 5/5']
```

2. Asynchronous Call Mode

```python
# Async call - similar output structure to sync call
result = await runner.acall(
    prompt_kwargs={"input_str": "What's the weather in SF and calculate 42 * 3"}
)
print(result.answer)
# Output: San Francisco will be mostly cloudy today with some afternoon sun,
#         reaching about 67 °F (20 °C). The result of 42 * 3 is 126.
```
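
Outside an async context (a plain script rather than a notebook), the `await` above needs an event loop. A standard pattern using only the stdlib and the `runner` defined earlier:

```python
# Standard asyncio entry point for running the async example as a script.
import asyncio

async def main():
    result = await runner.acall(
        prompt_kwargs={"input_str": "What's the weather in SF and calculate 42 * 3"}
    )
    print(result.answer)

asyncio.run(main())
```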

3. Async Streaming Mode

```python
# Async streaming - real-time event processing
streaming_result = runner.astream(
    prompt_kwargs={"input_str": "Calculate 100 + 50 and count to 3"},
)

# Process streaming events in real time
async for event in streaming_result.stream_events():
    if isinstance(event, RunItemStreamEvent):
        if isinstance(event.item, ToolCallRunItem):
            print(f"🔧 Calling: {event.item.data.name}")
        elif isinstance(event.item, ToolCallActivityRunItem):
            print(f"📝 Activity: {event.item.data}")
        elif isinstance(event.item, ToolOutputRunItem):
            print(f"✅ Output: {event.item.data.output}")
        elif isinstance(event.item, FinalOutputItem):
            print(f"🎯 Final: {event.item.data.answer}")
# Output:
# 🔧 Calling: calculator
# ✅ Output: The result of 100 + 50 is 150
# 🔧 Calling: counter
# 📝 Activity: Count: 1/3
# 📝 Activity: Count: 2/3
# 📝 Activity: Count: 3/3
# ✅ Output: ['Count: 1/3', 'Count: 2/3', 'Count: 3/3']
# 🎯 Final: The result of 100 + 50 is 150. Counted to 3 successfully.
```

Set your OPENAI_API_KEY environment variable to run these examples.

Try the full Agent tutorial in Colab.

View the Quickstart: learn how AdalFlow optimizes LM workflows end-to-end in 15 minutes.

Go to the Documentation for tracing, human-in-the-loop, and more.

Research

[Sep 2025] LAD-VF: LLM-Automatic Differentiation Enables Fine-Tuning-Free Robot Planning from Formal Methods Feedback

  • Fine-tuning-free robot planning using LLM auto-differentiation
  • Integration of formal methods feedback for robot control

[Jan 2025] Auto-Differentiating Any LLM Workflow: A Farewell to Manual Prompting

  • LLM Applications as auto-differentiation graphs
  • More token-efficient and better-performing than DSPy

[Dec 2025] Scaling Textual Gradients via Sampling-Based Momentum

  • Stable, scalable prompt optimization using momentum-weighted textual gradient
  • Gumbel-Top-k sampling improves exploration and integrates seamlessly with TextGrad, DSPy-COPRO, and AdalFlow

Auto-Prompt Optimization Ecosystem

AdalFlow is part of a growing ecosystem of libraries that automatically optimize LLM prompts and workflows. Here's how the landscape looks:

| Library | Approach | Key Idea |
| --- | --- | --- |
| AdalFlow | PyTorch-style auto-differentiation | LLM workflows as auto-diff graphs; unified textual gradient descent + few-shot bootstrap optimization in one training loop |
| DSPy | Declarative programming | Write compositional Python code instead of prompts; the compiler optimizes prompts and weights automatically |
| Agent Lightning | Framework-agnostic agent trainer | Turn any agent (LangChain, OpenAI SDK, AutoGen, etc.) into an optimizable entity with minimal code changes; supports RL, auto-prompt optimization, and supervised fine-tuning |
| TextGrad | Textual gradient descent | Automatic differentiation via text; uses LLM feedback as gradients to optimize prompts, code, and solutions |

Where AdalFlow fits: AdalFlow draws inspiration from all of the above (see Acknowledgements) and unifies them into a single PyTorch-like framework. You get textual gradients (à la TextGrad), few-shot bootstrap (à la DSPy), and instruction history (à la OPRO), all composable within Parameter, Generator, AdalComponent, and Trainer, as sketched below.
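
As a rough illustration of that composition, the sketch below wires a trainable system prompt into a Generator. `Parameter`, `ParameterType`, and `Generator` are AdalFlow classes; the template and prompt text are hypothetical, and a real setup would wrap this in an `AdalComponent` and run a `Trainer` over it.

```python
# Minimal sketch (not a full training loop): a prompt becomes a trainable node in
# the auto-diff graph by wrapping it in a Parameter with requires_opt=True.
# Assumption: the template and prompt text here are illustrative only.
import adalflow as adal
from adalflow.components.model_client.openai_client import OpenAIClient

system_prompt = adal.Parameter(
    data="You are a concise assistant. Answer in one sentence.",
    role_desc="system prompt for the language model",
    requires_opt=True,                      # textual gradients may rewrite this
    param_type=adal.ParameterType.PROMPT,
)

generator = adal.Generator(
    model_client=OpenAIClient(),
    model_kwargs={"model": "gpt-4o-mini"},
    template="<SYS>{{system_prompt}}</SYS> {{input_str}}",
    prompt_kwargs={"system_prompt": system_prompt},  # trainable node wired in
)
# A Trainer over an AdalComponent would then propose, score, and keep or revert
# edits to system_prompt.data using textual gradients and validation accuracy.
```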

Collaborations

We work closely with the VITA Group at the University of Texas at Austin, under the leadership of Dr. Atlas Wang and in collaboration with Dr. Junyuan Hong, who provides valuable support in driving project initiatives.

For collaboration, contact Li Yin.

Hiring

We are looking for a Dev Rel to help us build the community and support our users. If you are interested, please contact Li Yin.

Documentation

AdalFlow's full documentation is available at adalflow.sylph.ai.

AdalFlow: A Tribute to Ada Lovelace

AdalFlow is named in honor of Ada Lovelace, the pioneering female mathematician who first recognized that machines could go beyond mere calculations. As a team led by a female founder, we aim to inspire more women to pursue careers in AI.

Community & Contributors

AdalFlow is a community-driven project, and we welcome everyone to join us in building the future of LLM applications.

Join our Discord community to ask questions, share your projects, and get updates on AdalFlow.

To contribute, please read our Contributor Guide.

Contributors


Acknowledgements

Many existing works greatly inspired the AdalFlow library! Here is a non-exhaustive list:

  • 📚 PyTorch for the design philosophy and the design patterns of Component, Parameter, and Sequential.
  • 📚 Micrograd: a tiny autograd engine that inspired our auto-differentiative architecture.
  • 📚 TextGrad for the Textual Gradient Descent text optimizer.
  • 📚 DSPy for inspiring the __{input/output}__ fields in our DataClass and the bootstrap few-shot optimizer.
  • 📚 OPRO for adding past text instructions, along with their accuracy, to the text optimizer.
  • 📚 PyTorch Lightning for the AdalComponent and Trainer.
