Runner
Runner
Source code in src/agents/run.py
run async classmethod

run(starting_agent: Agent[TContext], input: str | list[TResponseInputItem], *, context: TContext | None = None, max_turns: int = DEFAULT_MAX_TURNS, hooks: RunHooks[TContext] | None = None, run_config: RunConfig | None = None, previous_response_id: str | None = None, auto_previous_response_id: bool = False, conversation_id: str | None = None, session: Session | None = None) -> RunResult

Run a workflow starting at the given agent.
The agent will run in a loop until a final output is generated. The loop runs like so:
- The agent is invoked with the given input.
- If there is a final output (i.e. the agent produces something of type agent.output_type), the loop terminates.
- If there's a handoff, we run the loop again with the new agent.
- Else, we run tool calls (if any) and re-run the loop.
In two cases, the agent may raise an exception:
- If the max_turns is exceeded, a MaxTurnsExceeded exception is raised.
- If a guardrail tripwire is triggered, a GuardrailTripwireTriggered exception is raised.
Note
Only the first agent's input guardrails are run.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| starting_agent | Agent[TContext] | The starting agent to run. | required |
| input | str \| list[TResponseInputItem] | The initial input to the agent. You can pass a single string for a user message, or a list of input items. | required |
| context | TContext \| None | The context to run the agent with. | None |
| max_turns | int | The maximum number of turns to run the agent for. A turn is defined as one AI invocation (including any tool calls that might occur). | DEFAULT_MAX_TURNS |
| hooks | RunHooks[TContext] \| None | An object that receives callbacks on various lifecycle events. | None |
| run_config | RunConfig \| None | Global settings for the entire agent run. | None |
| previous_response_id | str \| None | The ID of the previous response. If using OpenAI models via the Responses API, this allows you to skip passing in input from the previous turn. | None |
| conversation_id | str \| None | The conversation ID (https://platform.openai.com/docs/guides/conversation-state?api-mode=responses). If provided, the conversation will be used to read and write items. Every agent will have access to the conversation history so far, and its output items will be written to the conversation. We recommend only using this if you are exclusively using OpenAI models; other model providers don't write to the Conversation object, so you'll end up with partial conversations stored. | None |
| session | Session \| None | A session for automatic conversation history management. | None |
Returns:
| Type | Description |
|---|---|
| RunResult | A run result containing all the inputs, guardrail results and the output of the last agent. Agents may perform handoffs, so we don't know the specific type of the output. |
Source code in src/agents/run.py
run_sync classmethod

run_sync(starting_agent: Agent[TContext], input: str | list[TResponseInputItem], *, context: TContext | None = None, max_turns: int = DEFAULT_MAX_TURNS, hooks: RunHooks[TContext] | None = None, run_config: RunConfig | None = None, previous_response_id: str | None = None, auto_previous_response_id: bool = False, conversation_id: str | None = None, session: Session | None = None) -> RunResult

Run a workflow synchronously, starting at the given agent.
Note
This just wraps the run method, so it will not work if there's already an event loop (e.g. inside an async function, or in a Jupyter notebook or async context like FastAPI). For those cases, use the run method instead.
The agent will run in a loop until a final output is generated. The loop runs like so:

- The agent is invoked with the given input.
- If there is a final output (i.e. the agent produces something of type agent.output_type), the loop terminates.
- If there's a handoff, we run the loop again with the new agent.
- Else, we run tool calls (if any) and re-run the loop.
In two cases, the agent may raise an exception:
- If the max_turns is exceeded, a MaxTurnsExceeded exception is raised.
- If a guardrail tripwire is triggered, a GuardrailTripwireTriggered exception is raised.
Note
Only the first agent's input guardrails are run.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| starting_agent | Agent[TContext] | The starting agent to run. | required |
| input | str \| list[TResponseInputItem] | The initial input to the agent. You can pass a single string for a user message, or a list of input items. | required |
| context | TContext \| None | The context to run the agent with. | None |
| max_turns | int | The maximum number of turns to run the agent for. A turn is defined as one AI invocation (including any tool calls that might occur). | DEFAULT_MAX_TURNS |
| hooks | RunHooks[TContext] \| None | An object that receives callbacks on various lifecycle events. | None |
| run_config | RunConfig \| None | Global settings for the entire agent run. | None |
| previous_response_id | str \| None | The ID of the previous response. If using OpenAI models via the Responses API, this allows you to skip passing in input from the previous turn. | None |
| conversation_id | str \| None | The ID of the stored conversation, if any. | None |
| session | Session \| None | A session for automatic conversation history management. | None |
Returns:
| Type | Description |
|---|---|
| RunResult | A run result containing all the inputs, guardrail results and the output of the last agent. Agents may perform handoffs, so we don't know the specific type of the output. |
Source code in src/agents/run.py
run_streamed classmethod

run_streamed(starting_agent: Agent[TContext], input: str | list[TResponseInputItem], context: TContext | None = None, max_turns: int = DEFAULT_MAX_TURNS, hooks: RunHooks[TContext] | None = None, run_config: RunConfig | None = None, previous_response_id: str | None = None, auto_previous_response_id: bool = False, conversation_id: str | None = None, session: Session | None = None) -> RunResultStreaming

Run a workflow starting at the given agent in streaming mode.
The returned result object contains a method you can use to stream semantic events as they are generated.
The agent will run in a loop until a final output is generated. The loop runs like so:
- The agent is invoked with the given input.
- If there is a final output (i.e. the agent produces something of type agent.output_type), the loop terminates.
- If there's a handoff, we run the loop again with the new agent.
- Else, we run tool calls (if any) and re-run the loop.
In two cases, the agent may raise an exception:
- If the max_turns is exceeded, a MaxTurnsExceeded exception is raised.
- If a guardrail tripwire is triggered, a GuardrailTripwireTriggered exception is raised.
Note
Only the first agent's input guardrails are run.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| starting_agent | Agent[TContext] | The starting agent to run. | required |
| input | str \| list[TResponseInputItem] | The initial input to the agent. You can pass a single string for a user message, or a list of input items. | required |
| context | TContext \| None | The context to run the agent with. | None |
| max_turns | int | The maximum number of turns to run the agent for. A turn is defined as one AI invocation (including any tool calls that might occur). | DEFAULT_MAX_TURNS |
| hooks | RunHooks[TContext] \| None | An object that receives callbacks on various lifecycle events. | None |
| run_config | RunConfig \| None | Global settings for the entire agent run. | None |
| previous_response_id | str \| None | The ID of the previous response. If using OpenAI models via the Responses API, this allows you to skip passing in input from the previous turn. | None |
| conversation_id | str \| None | The ID of the stored conversation, if any. | None |
| session | Session \| None | A session for automatic conversation history management. | None |
Returns:
| Type | Description |
|---|---|
| RunResultStreaming | A result object that contains data about the run, as well as a method to stream events. |
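A streaming sketch under the same assumptions (the `agents` SDK and an `OPENAI_API_KEY`). Note that run_streamed returns the RunResultStreaming object immediately, without awaiting; the run progresses as you consume stream_events(). Filtering on raw response events for text deltas is one common pattern; the event-type check shown here is an assumption based on typical SDK usage.

```python
import asyncio

from agents import Agent, Runner
from openai.types.responses import ResponseTextDeltaEvent


async def main() -> None:
    agent = Agent(name="Assistant", instructions="Answer concisely.")
    # Returns immediately; the agent loop runs as events are consumed.
    result = Runner.run_streamed(agent, "Tell me a joke.")
    async for event in result.stream_events():
        # Print model text deltas as they arrive; skip other semantic events.
        if event.type == "raw_response_event" and isinstance(event.data, ResponseTextDeltaEvent):
            print(event.data.delta, end="", flush=True)
    print()


asyncio.run(main())
```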
Source code in src/agents/run.py
RunConfig dataclass
Configures settings for the entire agent run.
Source code in src/agents/run.py
model class-attribute instance-attribute

model: str | Model | None = None

The model to use for the entire agent run. If set, will override the model set on every agent. The model_provider passed in below must be able to resolve this model name.
model_provider class-attribute instance-attribute

model_provider: ModelProvider = field(default_factory=MultiProvider)

The model provider to use when looking up string model names. Defaults to OpenAI.
model_settings class-attribute instance-attribute

model_settings: ModelSettings | None = None

Configure global model settings. Any non-null values will override the agent-specific model settings.
handoff_input_filter class-attribute instance-attribute

handoff_input_filter: HandoffInputFilter | None = None

A global input filter to apply to all handoffs. If Handoff.input_filter is set, then that will take precedence. The input filter allows you to edit the inputs that are sent to the new agent. See the documentation in Handoff.input_filter for more details.
nest_handoff_history class-attribute instance-attribute

Wrap prior run history in a single assistant message before handing off when no custom input filter is set. Set to False to preserve the raw transcript behavior from previous releases.
handoff_history_mapper class-attribute instance-attribute

handoff_history_mapper: HandoffHistoryMapper | None = None

Optional function that receives the normalized transcript (history + handoff items) and returns the input history that should be passed to the next agent. When left as None, the runner collapses the transcript into a single assistant message. This function only runs when nest_handoff_history is True.
input_guardrails class-attribute instance-attribute

input_guardrails: list[InputGuardrail[Any]] | None = None

A list of input guardrails to run on the initial run input.
output_guardrails class-attribute instance-attribute

output_guardrails: list[OutputGuardrail[Any]] | None = None

A list of output guardrails to run on the final output of the run.
tracing_disabled class-attribute instance-attribute
Whether tracing is disabled for the agent run. If disabled, we will not trace the agent run.
trace_include_sensitive_data class-attribute instance-attribute

Whether we include potentially sensitive data (for example: inputs/outputs of tool calls or LLM generations) in traces. If False, we'll still create spans for these events, but the sensitive data will not be included.
workflow_name class-attribute instance-attribute

The name of the run, used for tracing. Should be a logical name for the run, like "Code generation workflow" or "Customer support agent".
trace_id class-attribute instance-attribute
A custom trace ID to use for tracing. If not provided, we will generate a new trace ID.
group_id class-attribute instance-attribute

A grouping identifier to use for tracing, to link multiple traces from the same conversation or process. For example, you might use a chat thread ID.
trace_metadata class-attribute instance-attribute
An optional dictionary of additional metadata to include with the trace.
session_input_callback class-attribute instance-attribute

session_input_callback: SessionInputCallback | None = None

Defines how to handle session history when new input is provided.

- None (default): The new input is appended to the session history.
- SessionInputCallback: A custom function that receives the history and new input, and returns the desired combined list of items.
call_model_input_filter class-attribute instance-attribute

Optional callback that is invoked immediately before calling the model. It receives the current agent, context and the model input (instructions and input items), and must return a possibly modified ModelInputData to use for the model call.

This allows you to edit the input sent to the model, e.g. to stay within a token limit. For example, you can use this to add a system prompt to the input.
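Putting the attributes above together, a RunConfig can be constructed and passed to any of the run methods. This is a hedged sketch assuming the `agents` SDK and an `OPENAI_API_KEY`; the model name and workflow name are examples only.

```python
from agents import Agent, RunConfig, Runner

run_config = RunConfig(
    model="gpt-4o-mini",            # overrides the model set on every agent
    workflow_name="Docs QA agent",  # logical name used for tracing
    trace_include_sensitive_data=False,  # keep tool/LLM payloads out of trace spans
)

agent = Agent(name="Assistant", instructions="Answer concisely.")

# Global settings apply to the whole run, including any handed-off agents.
result = Runner.run_sync(agent, "Summarize what RunConfig does.", run_config=run_config)
print(result.final_output)
```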