A lightweight, powerful framework for multi-agent workflows

The OpenAI Agents SDK is a lightweight yet powerful framework for building multi-agent workflows. It is provider-agnostic, supporting the OpenAI Responses and Chat Completions APIs, as well as 100+ other LLMs.

Image of the Agents Tracing UI

Note

Looking for the JavaScript/TypeScript version? Check out Agents SDK JS/TS.

Core concepts:

  1. Agents: LLMs configured with instructions, tools, guardrails, and handoffs
  2. Handoffs: A specialized tool call used by the Agents SDK for transferring control between agents
  3. Guardrails: Configurable safety checks for input and output validation
  4. Sessions: Automatic conversation history management across agent runs
  5. Tracing: Built-in tracking of agent runs, allowing you to view, debug and optimize your workflows

Explore the examples directory to see the SDK in action, and read our documentation for more details.

Sessions

The Agents SDK provides built-in session memory to automatically maintain conversation history across multiple agent runs, eliminating the need to manually handle .to_input_list() between turns.

Quick start

```python
from agents import Agent, Runner, SQLiteSession

# Create agent
agent = Agent(
    name="Assistant",
    instructions="Reply very concisely.",
)

# Create a session instance
session = SQLiteSession("conversation_123")

# First turn
result = await Runner.run(
    agent,
    "What city is the Golden Gate Bridge in?",
    session=session,
)
print(result.final_output)  # "San Francisco"

# Second turn - agent automatically remembers previous context
result = await Runner.run(agent, "What state is it in?", session=session)
print(result.final_output)  # "California"

# Also works with synchronous runner
result = Runner.run_sync(agent, "What's the population?", session=session)
print(result.final_output)  # "Approximately 39 million"
```

Session options

  • No memory (default): No session memory when session parameter is omitted
  • session: Session = DatabaseSession(...): Use a Session instance to manage conversation history
```python
from agents import Agent, Runner, SQLiteSession

# Custom SQLite database file
session = SQLiteSession("user_123", "conversations.db")
agent = Agent(name="Assistant")

# Different session IDs maintain separate conversation histories
result1 = await Runner.run(agent, "Hello", session=session)
result2 = await Runner.run(
    agent,
    "Hello",
    session=SQLiteSession("user_456", "conversations.db"),
)
```

Custom session implementations

You can implement your own session memory by creating a class that follows the Session protocol:

```python
from typing import List

from agents import Agent, Runner
from agents.memory import Session

class MyCustomSession:
    """Custom session implementation following the Session protocol."""

    def __init__(self, session_id: str):
        self.session_id = session_id
        # Your initialization here

    async def get_items(self, limit: int | None = None) -> List[dict]:
        # Retrieve conversation history for the session
        pass

    async def add_items(self, items: List[dict]) -> None:
        # Store new items for the session
        pass

    async def pop_item(self) -> dict | None:
        # Remove and return the most recent item from the session
        pass

    async def clear_session(self) -> None:
        # Clear all items for the session
        pass

# Use your custom session
agent = Agent(name="Assistant")
result = await Runner.run(agent, "Hello", session=MyCustomSession("my_session"))
```
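As a concrete illustration of the protocol's shape, here is a minimal in-memory session that keeps history in a plain Python list. This is a hypothetical sketch for testing or prototyping (the class name InMemorySession is ours, not one of the SDK's built-in session classes), and it needs nothing beyond the standard library:

```python
import asyncio
from typing import List, Optional

class InMemorySession:
    """Hypothetical in-memory session: same method shape as the Session
    protocol, backed by a plain list instead of a database."""

    def __init__(self, session_id: str):
        self.session_id = session_id
        self._items: List[dict] = []

    async def get_items(self, limit: Optional[int] = None) -> List[dict]:
        # Return the full history, or only the most recent `limit` items
        return list(self._items) if limit is None else self._items[-limit:]

    async def add_items(self, items: List[dict]) -> None:
        self._items.extend(items)

    async def pop_item(self) -> Optional[dict]:
        # Remove and return the most recent item, if any
        return self._items.pop() if self._items else None

    async def clear_session(self) -> None:
        self._items.clear()

async def demo():
    session = InMemorySession("demo")
    await session.add_items([
        {"role": "user", "content": "Hello"},
        {"role": "assistant", "content": "Hi there!"},
    ])
    print(len(await session.get_items()))  # 2
    popped = await session.pop_item()
    print(popped["role"])                  # assistant
    await session.clear_session()
    print(await session.get_items())       # []

asyncio.run(demo())
```

Because the Session protocol is structural, a class like this can be passed anywhere a session is accepted without inheriting from anything.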

Get started

  1. Set up your Python environment

  • Option A: Using venv (traditional method)

```shell
python -m venv env
source env/bin/activate  # On Windows: env\Scripts\activate
```

  • Option B: Using uv (recommended)

```shell
uv venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate
```

  2. Install Agents SDK

```shell
pip install openai-agents
```

For voice support, install with the optional voice group: pip install 'openai-agents[voice]'.

Hello world example

```python
from agents import Agent, Runner

agent = Agent(name="Assistant", instructions="You are a helpful assistant")

result = Runner.run_sync(agent, "Write a haiku about recursion in programming.")
print(result.final_output)

# Code within the code,
# Functions calling themselves,
# Infinite loop's dance.
```

(If running this, ensure you set the OPENAI_API_KEY environment variable)

(For Jupyter notebook users, see hello_world_jupyter.ipynb)

Handoffs example

```python
import asyncio

from agents import Agent, Runner

spanish_agent = Agent(
    name="Spanish agent",
    instructions="You only speak Spanish.",
)

english_agent = Agent(
    name="English agent",
    instructions="You only speak English",
)

triage_agent = Agent(
    name="Triage agent",
    instructions="Handoff to the appropriate agent based on the language of the request.",
    handoffs=[spanish_agent, english_agent],
)

async def main():
    result = await Runner.run(triage_agent, input="Hola, ¿cómo estás?")
    print(result.final_output)
    # ¡Hola! Estoy bien, gracias por preguntar. ¿Y tú, cómo estás?

if __name__ == "__main__":
    asyncio.run(main())
```

Functions example

```python
import asyncio

from agents import Agent, Runner, function_tool

@function_tool
def get_weather(city: str) -> str:
    return f"The weather in {city} is sunny."

agent = Agent(
    name="Hello world",
    instructions="You are a helpful agent.",
    tools=[get_weather],
)

async def main():
    result = await Runner.run(agent, input="What's the weather in Tokyo?")
    print(result.final_output)
    # The weather in Tokyo is sunny.

if __name__ == "__main__":
    asyncio.run(main())
```

The agent loop

When you call Runner.run(), we run a loop until we get a final output.

  1. We call the LLM, using the model and settings on the agent, and the message history.
  2. The LLM returns a response, which may include tool calls.
  3. If the response has a final output (see below for more on this), we return it and end the loop.
  4. If the response has a handoff, we set the agent to the new agent and go back to step 1.
  5. We process the tool calls (if any), append the tool response messages, and go back to step 1.

There is a max_turns parameter that you can use to limit the number of times the loop executes.
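The steps above can be sketched in plain Python. This is a schematic simulation, not the SDK's actual implementation: Response, FakeAgent, run_loop, and the scripted model are all invented stand-ins, with the "LLM" stubbed out as an ordinary function.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Optional

@dataclass
class Response:
    """Stubbed model response: tool calls, a handoff, or final text."""
    text: str = ""
    tool_calls: List[str] = field(default_factory=list)
    handoff: Optional["FakeAgent"] = None

@dataclass
class FakeAgent:
    name: str
    model: Callable[[List[dict]], Response]  # stand-in for the LLM call

def run_loop(agent: FakeAgent, user_input: str, max_turns: int = 10) -> str:
    history = [{"role": "user", "content": user_input}]
    for _ in range(max_turns):
        response = agent.model(history)       # 1-2. call the LLM on the history
        if response.handoff is not None:      # 4. handoff: switch agents, loop again
            agent = response.handoff
            continue
        if not response.tool_calls:           # 3. no tool calls/handoff: final output
            return response.text
        for call in response.tool_calls:      # 5. run tools, append their results
            history.append({"role": "tool", "content": f"result of {call}"})
    raise RuntimeError("max_turns exceeded")

# A scripted model: calls a tool on the first pass, then answers.
def scripted_model(history: List[dict]) -> Response:
    if any(m["role"] == "tool" for m in history):
        return Response(text="Sunny in Tokyo")
    return Response(tool_calls=["get_weather"])

print(run_loop(FakeAgent("demo", scripted_model), "Weather in Tokyo?"))
# Sunny in Tokyo
```

A model that keeps requesting tools forever never produces a final output, which is exactly the situation max_turns guards against.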

Final output

Final output is the last thing the agent produces in the loop.

  1. If you set an output_type on the agent, the final output is when the LLM returns something of that type. We use structured outputs for this.
  2. If there's no output_type (i.e. plain text responses), then the first LLM response without any tool calls or handoffs is considered as the final output.

As a result, the mental model for the agent loop is:

  1. If the current agent has an output_type, the loop runs until the agent produces structured output matching that type.
  2. If the current agent does not have an output_type, the loop runs until the current agent produces a message without any tool calls/handoffs.

Common agent patterns

The Agents SDK is designed to be highly flexible, allowing you to model a wide range of LLM workflows including deterministic flows, iterative loops, and more. See examples in examples/agent_patterns.

Tracing

The Agents SDK automatically traces your agent runs, making it easy to track and debug the behavior of your agents. Tracing is extensible by design, supporting custom spans and a wide variety of external destinations, including Logfire, AgentOps, Braintrust, Scorecard, and Keywords AI. For more details about how to customize or disable tracing, see Tracing, which also includes a larger list of external tracing processors.

Development (only needed if you need to edit the SDK/examples)

  1. Ensure you have uv installed.

```shell
uv --version
```

  2. Install dependencies

```shell
make sync
```

  3. (After making changes) lint/test

```shell
make check  # run tests, linter and typechecker
```

Or to run them individually:

```shell
make tests        # run tests
make mypy         # run typechecker
make lint         # run linter
make format-check # run style checker
```

Acknowledgements

We'd like to acknowledge the excellent work of the open-source community.

We're committed to continuing to build the Agents SDK as an open source framework so others in the community can expand on our approach.
