The OpenAI Agents SDK is a lightweight yet powerful framework for building multi-agent workflows.
- Agents: LLMs configured with instructions, tools, guardrails, and handoffs
- Handoffs: Allow agents to transfer control to other agents for specific tasks
- Guardrails: Configurable safety checks for input and output validation (see the sketch after this list)
- Tracing: Built-in tracking of agent runs, allowing you to view, debug and optimize your workflows
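As a concrete illustration of guardrails, the sketch below screens user input with a small checker agent before the main agent runs. The `input_guardrail` decorator, `GuardrailFunctionOutput`, `InputGuardrailTripwireTriggered`, and the `input_guardrails` parameter are drawn from the SDK's guardrails documentation; treat the exact names and signatures as assumptions and verify them against your installed version.

```python
import asyncio

from pydantic import BaseModel

from agents import (
    Agent,
    GuardrailFunctionOutput,
    InputGuardrailTripwireTriggered,
    Runner,
    input_guardrail,
)


class HomeworkCheck(BaseModel):
    is_homework: bool


# A small agent whose only job is to classify the incoming request.
guardrail_agent = Agent(
    name="Guardrail check",
    instructions="Decide whether the user is asking you to do their homework.",
    output_type=HomeworkCheck,
)


@input_guardrail
async def homework_guardrail(ctx, agent, input):
    # Run the checker agent on the raw input and trip the guardrail if it flags homework.
    result = await Runner.run(guardrail_agent, input, context=ctx.context)
    return GuardrailFunctionOutput(
        output_info=result.final_output,
        tripwire_triggered=result.final_output.is_homework,
    )


support_agent = Agent(
    name="Support agent",
    instructions="You help customers with their questions.",
    input_guardrails=[homework_guardrail],
)


async def main():
    try:
        result = await Runner.run(support_agent, input="Solve problem 3 on my math worksheet.")
        print(result.final_output)
    except InputGuardrailTripwireTriggered:
        print("Guardrail tripped: homework request blocked.")


if __name__ == "__main__":
    asyncio.run(main())
```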
Explore the examples directory to see the SDK in action, and read our documentation for more details.
Notably, our SDK is compatible with any model provider that supports the OpenAI Chat Completions API format.
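As a rough sketch of that compatibility, an agent can be pointed at any Chat Completions-compatible endpoint by wrapping an `AsyncOpenAI` client in the SDK's `OpenAIChatCompletionsModel`; the base URL, API key, and model name below are placeholders, and the wrapper class name is an assumption to check against your installed version.

```python
from openai import AsyncOpenAI

from agents import Agent, OpenAIChatCompletionsModel, Runner

# Placeholder endpoint, key, and model name for a Chat Completions-compatible provider.
external_client = AsyncOpenAI(
    base_url="https://example-provider.test/v1",
    api_key="YOUR_PROVIDER_API_KEY",
)

agent = Agent(
    name="Assistant",
    instructions="You are a helpful assistant.",
    # Route this agent's LLM calls through the external client instead of the default.
    model=OpenAIChatCompletionsModel(model="example-model-name", openai_client=external_client),
)

result = Runner.run_sync(agent, "Say hello.")
print(result.final_output)
```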
- Set up your Python environment
```bash
python -m venv env
source env/bin/activate
```
- Install Agents SDK
```bash
pip install openai-agents
```
```python
from agents import Agent, Runner

agent = Agent(name="Assistant", instructions="You are a helpful assistant")

result = Runner.run_sync(agent, "Write a haiku about recursion in programming.")
print(result.final_output)

# Code within the code,
# Functions calling themselves,
# Infinite loop's dance.
```
(If running this, ensure you set the `OPENAI_API_KEY` environment variable.)
(For Jupyter notebook users, see `hello_world_jupyter.py`.)
```python
from agents import Agent, Runner
import asyncio

spanish_agent = Agent(
    name="Spanish agent",
    instructions="You only speak Spanish.",
)

english_agent = Agent(
    name="English agent",
    instructions="You only speak English",
)

triage_agent = Agent(
    name="Triage agent",
    instructions="Handoff to the appropriate agent based on the language of the request.",
    handoffs=[spanish_agent, english_agent],
)


async def main():
    result = await Runner.run(triage_agent, input="Hola, ¿cómo estás?")
    print(result.final_output)
    # ¡Hola! Estoy bien, gracias por preguntar. ¿Y tú, cómo estás?


if __name__ == "__main__":
    asyncio.run(main())
```
```python
import asyncio

from agents import Agent, Runner, function_tool


@function_tool
def get_weather(city: str) -> str:
    return f"The weather in {city} is sunny."


agent = Agent(
    name="Hello world",
    instructions="You are a helpful agent.",
    tools=[get_weather],
)


async def main():
    result = await Runner.run(agent, input="What's the weather in Tokyo?")
    print(result.final_output)
    # The weather in Tokyo is sunny.


if __name__ == "__main__":
    asyncio.run(main())
```
When you call `Runner.run()`, we run a loop until we get a final output.
1. We call the LLM, using the model and settings on the agent, and the message history.
2. The LLM returns a response, which may include tool calls.
3. If the response has a final output (see below for more on this), we return it and end the loop.
4. If the response has a handoff, we set the agent to the new agent and go back to step 1.
5. We process the tool calls (if any) and append the tool response messages. Then we go to step 1.
There is a `max_turns` parameter that you can use to limit the number of times the loop executes.
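For example, a run can be capped at a small number of turns and the resulting exception handled; the sketch below assumes `MaxTurnsExceeded` is importable from the top-level `agents` package (it may live under `agents.exceptions` in your version).

```python
import asyncio

from agents import Agent, MaxTurnsExceeded, Runner  # MaxTurnsExceeded location is an assumption

agent = Agent(name="Assistant", instructions="You are a helpful assistant.")


async def main():
    try:
        # Cap the agent loop at 3 turns; the run raises if no final output is produced in time.
        result = await Runner.run(agent, input="Plan a three-day trip to Kyoto.", max_turns=3)
        print(result.final_output)
    except MaxTurnsExceeded:
        print("The agent did not finish within the allowed number of turns.")


if __name__ == "__main__":
    asyncio.run(main())
```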
Final output is the last thing the agent produces in the loop.
- If you set an `output_type` on the agent, the final output is when the LLM returns something of that type. We use structured outputs for this.
- If there's no `output_type` (i.e. plain text responses), then the first LLM response without any tool calls or handoffs is considered the final output.
As a result, the mental model for the agent loop is:
- If the current agent has an `output_type`, the loop runs until the agent produces structured output matching that type.
- If the current agent does not have an `output_type`, the loop runs until the current agent produces a message without any tool calls/handoffs.
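To make the first case concrete, the sketch below passes a Pydantic model as the `output_type`, on the assumption that the SDK's structured outputs then parse `result.final_output` into an instance of that model; `CalendarEvent` is our own example type.

```python
import asyncio

from pydantic import BaseModel

from agents import Agent, Runner


class CalendarEvent(BaseModel):
    title: str
    date: str
    attendees: list[str]


agent = Agent(
    name="Calendar extractor",
    instructions="Extract a calendar event from the user's message.",
    output_type=CalendarEvent,
)


async def main():
    result = await Runner.run(agent, input="Lunch with Sam and Priya next Friday at noon.")
    event = result.final_output  # expected to be a CalendarEvent instance, per the loop rules above
    print(event.title, event.date, event.attendees)


if __name__ == "__main__":
    asyncio.run(main())
```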
The Agents SDK is designed to be highly flexible, allowing you to model a wide range of LLM workflows including deterministic flows, iterative loops, and more. See examples in `examples/agent_patterns`.
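A minimal sketch of one such pattern, a deterministic flow, chains two agents by feeding the first run's output into the second; it only uses `Agent` and `Runner` as shown above, and the agent names and instructions are our own.

```python
import asyncio

from agents import Agent, Runner

outline_agent = Agent(
    name="Outline agent",
    instructions="Produce a short bullet-point outline for the requested topic.",
)

writer_agent = Agent(
    name="Writer agent",
    instructions="Write a short paragraph based on the outline you are given.",
)


async def main():
    # Deterministic flow: run the outline agent first, then hand its output to the writer.
    outline = await Runner.run(outline_agent, input="Why unit tests matter")
    draft = await Runner.run(writer_agent, input=outline.final_output)
    print(draft.final_output)


if __name__ == "__main__":
    asyncio.run(main())
```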
The Agents SDK automatically traces your agent runs, making it easy to track and debug the behavior of your agents. Tracing is extensible by design, supporting custom spans and a wide variety of external destinations, including Logfire, AgentOps, Braintrust, Scorecard, and Keywords AI. For more details about how to customize or disable tracing, see Tracing.
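For instance, several runs can be grouped under one named trace; the sketch below assumes the SDK exposes a `trace` context manager for this, so check the Tracing documentation for the exact interface in your version.

```python
import asyncio

from agents import Agent, Runner, trace  # `trace` as a context manager is an assumption

agent = Agent(name="Joker", instructions="Tell short, clean jokes.")


async def main():
    # Group both runs under a single named workflow trace.
    with trace("Joke workflow"):
        first = await Runner.run(agent, input="Tell me a joke.")
        second = await Runner.run(agent, input=f"Rate this joke from 1 to 10: {first.final_output}")
        print(second.final_output)


if __name__ == "__main__":
    asyncio.run(main())
```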
- Ensure you have `uv` installed.
```bash
uv --version
```
- Install dependencies
```bash
make sync
```
- (After making changes) lint/test
```bash
make tests  # run tests
make mypy   # run typechecker
make lint   # run linter
```
We'd like to acknowledge the excellent work of the open-source community.
We're committed to continuing to build the Agents SDK as an open source framework so others in the community can expand on our approach.