
Runner

Runner


run async classmethod

```python
run(
    starting_agent: Agent[TContext],
    input: str | list[TResponseInputItem],
    *,
    context: TContext | None = None,
    max_turns: int = DEFAULT_MAX_TURNS,
    hooks: RunHooks[TContext] | None = None,
    run_config: RunConfig | None = None,
    previous_response_id: str | None = None,
    auto_previous_response_id: bool = False,
    conversation_id: str | None = None,
    session: Session | None = None,
) -> RunResult
```

Run a workflow starting at the given agent.

The agent will run in a loop until a final output is generated. The loop runs like so:

  1. The agent is invoked with the given input.
  2. If there is a final output (i.e. the agent produces something of type `agent.output_type`), the loop terminates.
  3. If there's a handoff, we run the loop again, with the new agent.
  4. Else, we run tool calls (if any), and re-run the loop.

In two cases, the agent may raise an exception:

  1. If the max_turns is exceeded, a MaxTurnsExceeded exception is raised.
  2. If a guardrail tripwire is triggered, a GuardrailTripwireTriggered exception is raised.
Note

Only the first agent's input guardrails are run.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `starting_agent` | `Agent[TContext]` | The starting agent to run. | *required* |
| `input` | `str \| list[TResponseInputItem]` | The initial input to the agent. You can pass a single string for a user message, or a list of input items. | *required* |
| `context` | `TContext \| None` | The context to run the agent with. | `None` |
| `max_turns` | `int` | The maximum number of turns to run the agent for. A turn is defined as one AI invocation (including any tool calls that might occur). | `DEFAULT_MAX_TURNS` |
| `hooks` | `RunHooks[TContext] \| None` | An object that receives callbacks on various lifecycle events. | `None` |
| `run_config` | `RunConfig \| None` | Global settings for the entire agent run. | `None` |
| `previous_response_id` | `str \| None` | The ID of the previous response. If using OpenAI models via the Responses API, this allows you to skip passing in input from the previous turn. | `None` |
| `conversation_id` | `str \| None` | The conversation ID (https://platform.openai.com/docs/guides/conversation-state?api-mode=responses). If provided, the conversation will be used to read and write items. Every agent will have access to the conversation history so far, and its output items will be written to the conversation. We recommend only using this if you are exclusively using OpenAI models; other model providers don't write to the Conversation object, so you'll end up having partial conversations stored. | `None` |
| `session` | `Session \| None` | A session for automatic conversation history management. | `None` |

Returns:

| Type | Description |
| --- | --- |
| `RunResult` | A run result containing all the inputs, guardrail results and the output of the last agent. Agents may perform handoffs, so we don't know the specific type of the output. |
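The loop described above can be sketched as a plain-Python simulation. This is illustrative only: the names (`TurnResult`, `run_loop`, `toy_agent`) are hypothetical stand-ins, not the SDK's actual types, and the real implementation lives in `src/agents/run.py`.

```python
from dataclasses import dataclass


class MaxTurnsExceeded(Exception):
    """Raised when the loop exceeds max_turns AI invocations."""


@dataclass
class TurnResult:
    final_output: object = None  # set when the agent produced agent.output_type
    handoff_to: object = None    # set when the agent handed off to another agent
    tool_results: list = None    # tool outputs to feed into the next turn


def run_loop(agent, input, *, max_turns=10):
    """Simulate the documented loop: invoke the agent, stop on a final output,
    follow handoffs, otherwise run tools and loop again."""
    current_agent, current_input = agent, input
    for _ in range(max_turns):
        result = current_agent(current_input)   # 1. invoke the agent
        if result.final_output is not None:     # 2. final output ends the loop
            return result.final_output
        if result.handoff_to is not None:       # 3. handoff: loop with the new agent
            current_agent = result.handoff_to
            continue
        current_input = result.tool_results     # 4. else feed tool results back
    raise MaxTurnsExceeded(f"no final output after {max_turns} turns")


# A two-step toy agent: the first turn requests a tool, the second turn finishes.
def toy_agent(input):
    if input == "hi":
        return TurnResult(tool_results=["tool output"])
    return TurnResult(final_output="done")
```

An agent that never produces a final output makes `run_loop` raise `MaxTurnsExceeded`, mirroring the exception behavior documented above.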

Source code in `src/agents/run.py`
```python
@classmethod
async def run(
    cls,
    starting_agent: Agent[TContext],
    input: str | list[TResponseInputItem],
    *,
    context: TContext | None = None,
    max_turns: int = DEFAULT_MAX_TURNS,
    hooks: RunHooks[TContext] | None = None,
    run_config: RunConfig | None = None,
    previous_response_id: str | None = None,
    auto_previous_response_id: bool = False,
    conversation_id: str | None = None,
    session: Session | None = None,
) -> RunResult:
    """
    Run a workflow starting at the given agent.

    The agent will run in a loop until a final output is generated. The loop runs like so:
      1. The agent is invoked with the given input.
      2. If there is a final output (i.e. the agent produces something of type
         `agent.output_type`), the loop terminates.
      3. If there's a handoff, we run the loop again, with the new agent.
      4. Else, we run tool calls (if any), and re-run the loop.

    In two cases, the agent may raise an exception:
      1. If the max_turns is exceeded, a MaxTurnsExceeded exception is raised.
      2. If a guardrail tripwire is triggered, a GuardrailTripwireTriggered
         exception is raised.

    Note:
        Only the first agent's input guardrails are run.

    Args:
        starting_agent: The starting agent to run.
        input: The initial input to the agent. You can pass a single string for a
            user message, or a list of input items.
        context: The context to run the agent with.
        max_turns: The maximum number of turns to run the agent for. A turn is
            defined as one AI invocation (including any tool calls that might occur).
        hooks: An object that receives callbacks on various lifecycle events.
        run_config: Global settings for the entire agent run.
        previous_response_id: The ID of the previous response. If using OpenAI
            models via the Responses API, this allows you to skip passing in input
            from the previous turn.
        conversation_id: The conversation ID
            (https://platform.openai.com/docs/guides/conversation-state?api-mode=responses).
            If provided, the conversation will be used to read and write items.
            Every agent will have access to the conversation history so far,
            and its output items will be written to the conversation.
            We recommend only using this if you are exclusively using OpenAI models;
            other model providers don't write to the Conversation object,
            so you'll end up having partial conversations stored.
        session: A session for automatic conversation history management.

    Returns:
        A run result containing all the inputs, guardrail results and the output of
        the last agent. Agents may perform handoffs, so we don't know the specific
        type of the output.
    """
    runner = DEFAULT_AGENT_RUNNER
    return await runner.run(
        starting_agent,
        input,
        context=context,
        max_turns=max_turns,
        hooks=hooks,
        run_config=run_config,
        previous_response_id=previous_response_id,
        auto_previous_response_id=auto_previous_response_id,
        conversation_id=conversation_id,
        session=session,
    )
```

run_sync classmethod

```python
run_sync(
    starting_agent: Agent[TContext],
    input: str | list[TResponseInputItem],
    *,
    context: TContext | None = None,
    max_turns: int = DEFAULT_MAX_TURNS,
    hooks: RunHooks[TContext] | None = None,
    run_config: RunConfig | None = None,
    previous_response_id: str | None = None,
    auto_previous_response_id: bool = False,
    conversation_id: str | None = None,
    session: Session | None = None,
) -> RunResult
```

Run a workflow synchronously, starting at the given agent.

Note

This just wraps the `run` method, so it will not work if there's already an event loop (e.g. inside an async function, or in a Jupyter notebook or async context like FastAPI). For those cases, use the `run` method instead.

The agent will run in a loop until a final output is generated. The loop runs:

  1. The agent is invoked with the given input.
  2. If there is a final output (i.e. the agent produces something of type `agent.output_type`), the loop terminates.
  3. If there's a handoff, we run the loop again, with the new agent.
  4. Else, we run tool calls (if any), and re-run the loop.

In two cases, the agent may raise an exception:

  1. If the max_turns is exceeded, a MaxTurnsExceeded exception is raised.
  2. If a guardrail tripwire is triggered, a GuardrailTripwireTriggered exception is raised.
Note

Only the first agent's input guardrails are run.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `starting_agent` | `Agent[TContext]` | The starting agent to run. | *required* |
| `input` | `str \| list[TResponseInputItem]` | The initial input to the agent. You can pass a single string for a user message, or a list of input items. | *required* |
| `context` | `TContext \| None` | The context to run the agent with. | `None` |
| `max_turns` | `int` | The maximum number of turns to run the agent for. A turn is defined as one AI invocation (including any tool calls that might occur). | `DEFAULT_MAX_TURNS` |
| `hooks` | `RunHooks[TContext] \| None` | An object that receives callbacks on various lifecycle events. | `None` |
| `run_config` | `RunConfig \| None` | Global settings for the entire agent run. | `None` |
| `previous_response_id` | `str \| None` | The ID of the previous response. If using OpenAI models via the Responses API, this allows you to skip passing in input from the previous turn. | `None` |
| `conversation_id` | `str \| None` | The ID of the stored conversation, if any. | `None` |
| `session` | `Session \| None` | A session for automatic conversation history management. | `None` |

Returns:

| Type | Description |
| --- | --- |
| `RunResult` | A run result containing all the inputs, guardrail results and the output of the last agent. Agents may perform handoffs, so we don't know the specific type of the output. |
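The event-loop caveat in the Note can be demonstrated with a minimal `run_sync`-style wrapper. This is a sketch, not the SDK's implementation: `run_sync_like` stands in for any sync wrapper that drives a coroutine with `asyncio.run()`.

```python
import asyncio


def run_sync_like(coro):
    """Minimal stand-in for a sync wrapper: asyncio.run() drives the coroutine
    to completion, but refuses to start inside an already-running loop."""
    return asyncio.run(coro)


async def workflow():
    return "final output"


async def inside_async_code():
    # Inside async code a loop is already running, so the wrapper raises
    # RuntimeError; here you would `await` the async `run` method instead.
    try:
        run_sync_like(workflow())
    except RuntimeError as exc:
        return f"refused: {exc}"


# Fine from plain synchronous code: no loop is running yet.
print(run_sync_like(workflow()))
# From within a running loop the same call is rejected.
print(asyncio.run(inside_async_code()))
```

The same failure mode appears in Jupyter notebooks and FastAPI handlers, where an event loop is already running on the calling thread.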

Source code in `src/agents/run.py`
```python
@classmethod
def run_sync(
    cls,
    starting_agent: Agent[TContext],
    input: str | list[TResponseInputItem],
    *,
    context: TContext | None = None,
    max_turns: int = DEFAULT_MAX_TURNS,
    hooks: RunHooks[TContext] | None = None,
    run_config: RunConfig | None = None,
    previous_response_id: str | None = None,
    auto_previous_response_id: bool = False,
    conversation_id: str | None = None,
    session: Session | None = None,
) -> RunResult:
    """
    Run a workflow synchronously, starting at the given agent.

    Note:
        This just wraps the `run` method, so it will not work if there's already an
        event loop (e.g. inside an async function, or in a Jupyter notebook or async
        context like FastAPI). For those cases, use the `run` method instead.

    The agent will run in a loop until a final output is generated. The loop runs:
      1. The agent is invoked with the given input.
      2. If there is a final output (i.e. the agent produces something of type
         `agent.output_type`), the loop terminates.
      3. If there's a handoff, we run the loop again, with the new agent.
      4. Else, we run tool calls (if any), and re-run the loop.

    In two cases, the agent may raise an exception:
      1. If the max_turns is exceeded, a MaxTurnsExceeded exception is raised.
      2. If a guardrail tripwire is triggered, a GuardrailTripwireTriggered
         exception is raised.

    Note:
        Only the first agent's input guardrails are run.

    Args:
        starting_agent: The starting agent to run.
        input: The initial input to the agent. You can pass a single string for a
            user message, or a list of input items.
        context: The context to run the agent with.
        max_turns: The maximum number of turns to run the agent for. A turn is
            defined as one AI invocation (including any tool calls that might occur).
        hooks: An object that receives callbacks on various lifecycle events.
        run_config: Global settings for the entire agent run.
        previous_response_id: The ID of the previous response. If using OpenAI
            models via the Responses API, this allows you to skip passing in input
            from the previous turn.
        conversation_id: The ID of the stored conversation, if any.
        session: A session for automatic conversation history management.

    Returns:
        A run result containing all the inputs, guardrail results and the output of
        the last agent. Agents may perform handoffs, so we don't know the specific
        type of the output.
    """
    runner = DEFAULT_AGENT_RUNNER
    return runner.run_sync(
        starting_agent,
        input,
        context=context,
        max_turns=max_turns,
        hooks=hooks,
        run_config=run_config,
        previous_response_id=previous_response_id,
        conversation_id=conversation_id,
        session=session,
        auto_previous_response_id=auto_previous_response_id,
    )
```

run_streamed classmethod

```python
run_streamed(
    starting_agent: Agent[TContext],
    input: str | list[TResponseInputItem],
    context: TContext | None = None,
    max_turns: int = DEFAULT_MAX_TURNS,
    hooks: RunHooks[TContext] | None = None,
    run_config: RunConfig | None = None,
    previous_response_id: str | None = None,
    auto_previous_response_id: bool = False,
    conversation_id: str | None = None,
    session: Session | None = None,
) -> RunResultStreaming
```

Run a workflow starting at the given agent in streaming mode.

The returned result object contains a method you can use to stream semantic events as they are generated.

The agent will run in a loop until a final output is generated. The loop runs like so:

  1. The agent is invoked with the given input.
  2. If there is a final output (i.e. the agent produces something of type `agent.output_type`), the loop terminates.
  3. If there's a handoff, we run the loop again, with the new agent.
  4. Else, we run tool calls (if any), and re-run the loop.

In two cases, the agent may raise an exception:

  1. If the max_turns is exceeded, a MaxTurnsExceeded exception is raised.
  2. If a guardrail tripwire is triggered, a GuardrailTripwireTriggered exception is raised.
Note

Only the first agent's input guardrails are run.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `starting_agent` | `Agent[TContext]` | The starting agent to run. | *required* |
| `input` | `str \| list[TResponseInputItem]` | The initial input to the agent. You can pass a single string for a user message, or a list of input items. | *required* |
| `context` | `TContext \| None` | The context to run the agent with. | `None` |
| `max_turns` | `int` | The maximum number of turns to run the agent for. A turn is defined as one AI invocation (including any tool calls that might occur). | `DEFAULT_MAX_TURNS` |
| `hooks` | `RunHooks[TContext] \| None` | An object that receives callbacks on various lifecycle events. | `None` |
| `run_config` | `RunConfig \| None` | Global settings for the entire agent run. | `None` |
| `previous_response_id` | `str \| None` | The ID of the previous response. If using OpenAI models via the Responses API, this allows you to skip passing in input from the previous turn. | `None` |
| `conversation_id` | `str \| None` | The ID of the stored conversation, if any. | `None` |
| `session` | `Session \| None` | A session for automatic conversation history management. | `None` |

Returns:

| Type | Description |
| --- | --- |
| `RunResultStreaming` | A result object that contains data about the run, as well as a method to stream events. |
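The consumption pattern for a streaming result can be sketched with a toy stand-in. Everything here (`StreamingResult`, `toy_run_streamed`, the event names) is hypothetical and only mirrors the shape of the real API: a result object whose events you drain with `async for` while the run progresses.

```python
import asyncio
from dataclasses import dataclass, field


@dataclass
class StreamingResult:
    """Toy stand-in for a streaming run result: events arrive on a queue while
    the run progresses, and stream_events() yields them as they are generated."""
    _queue: asyncio.Queue = field(default_factory=asyncio.Queue)

    async def stream_events(self):
        while True:
            event = await self._queue.get()
            if event is None:  # sentinel: the run has finished
                return
            yield event


async def toy_run_streamed():
    """Return immediately with a result object; a background task keeps
    producing events, as the real run would."""
    result = StreamingResult()

    async def produce():
        for event in ("agent_updated", "tool_called", "final_output"):
            await result._queue.put(event)
        await result._queue.put(None)

    asyncio.get_running_loop().create_task(produce())
    return result


async def main():
    result = await toy_run_streamed()
    return [event async for event in result.stream_events()]


print(asyncio.run(main()))  # ['agent_updated', 'tool_called', 'final_output']
```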

Source code in `src/agents/run.py`
```python
@classmethod
def run_streamed(
    cls,
    starting_agent: Agent[TContext],
    input: str | list[TResponseInputItem],
    context: TContext | None = None,
    max_turns: int = DEFAULT_MAX_TURNS,
    hooks: RunHooks[TContext] | None = None,
    run_config: RunConfig | None = None,
    previous_response_id: str | None = None,
    auto_previous_response_id: bool = False,
    conversation_id: str | None = None,
    session: Session | None = None,
) -> RunResultStreaming:
    """
    Run a workflow starting at the given agent in streaming mode.

    The returned result object contains a method you can use to stream semantic
    events as they are generated.

    The agent will run in a loop until a final output is generated. The loop runs like so:
      1. The agent is invoked with the given input.
      2. If there is a final output (i.e. the agent produces something of type
         `agent.output_type`), the loop terminates.
      3. If there's a handoff, we run the loop again, with the new agent.
      4. Else, we run tool calls (if any), and re-run the loop.

    In two cases, the agent may raise an exception:
      1. If the max_turns is exceeded, a MaxTurnsExceeded exception is raised.
      2. If a guardrail tripwire is triggered, a GuardrailTripwireTriggered
         exception is raised.

    Note:
        Only the first agent's input guardrails are run.

    Args:
        starting_agent: The starting agent to run.
        input: The initial input to the agent. You can pass a single string for a
            user message, or a list of input items.
        context: The context to run the agent with.
        max_turns: The maximum number of turns to run the agent for. A turn is
            defined as one AI invocation (including any tool calls that might occur).
        hooks: An object that receives callbacks on various lifecycle events.
        run_config: Global settings for the entire agent run.
        previous_response_id: The ID of the previous response. If using OpenAI
            models via the Responses API, this allows you to skip passing in input
            from the previous turn.
        conversation_id: The ID of the stored conversation, if any.
        session: A session for automatic conversation history management.

    Returns:
        A result object that contains data about the run, as well as a method to
        stream events.
    """
    runner = DEFAULT_AGENT_RUNNER
    return runner.run_streamed(
        starting_agent,
        input,
        context=context,
        max_turns=max_turns,
        hooks=hooks,
        run_config=run_config,
        previous_response_id=previous_response_id,
        auto_previous_response_id=auto_previous_response_id,
        conversation_id=conversation_id,
        session=session,
    )
```

RunConfig dataclass

Configures settings for the entire agent run.

Source code in `src/agents/run.py`
```python
@dataclass
class RunConfig:
    """Configures settings for the entire agent run."""

    model: str | Model | None = None
    """The model to use for the entire agent run. If set, will override the model set on every
    agent. The model_provider passed in below must be able to resolve this model name.
    """

    model_provider: ModelProvider = field(default_factory=MultiProvider)
    """The model provider to use when looking up string model names. Defaults to OpenAI."""

    model_settings: ModelSettings | None = None
    """Configure global model settings. Any non-null values will override the agent-specific
    model settings.
    """

    handoff_input_filter: HandoffInputFilter | None = None
    """A global input filter to apply to all handoffs. If `Handoff.input_filter` is set, then that
    will take precedence. The input filter allows you to edit the inputs that are sent to the new
    agent. See the documentation in `Handoff.input_filter` for more details.
    """

    nest_handoff_history: bool = True
    """Wrap prior run history in a single assistant message before handing off when no custom
    input filter is set. Set to False to preserve the raw transcript behavior from previous
    releases.
    """

    handoff_history_mapper: HandoffHistoryMapper | None = None
    """Optional function that receives the normalized transcript (history + handoff items) and
    returns the input history that should be passed to the next agent. When left as `None`, the
    runner collapses the transcript into a single assistant message. This function only runs when
    `nest_handoff_history` is True.
    """

    input_guardrails: list[InputGuardrail[Any]] | None = None
    """A list of input guardrails to run on the initial run input."""

    output_guardrails: list[OutputGuardrail[Any]] | None = None
    """A list of output guardrails to run on the final output of the run."""

    tracing_disabled: bool = False
    """Whether tracing is disabled for the agent run. If disabled, we will not trace the agent
    run.
    """

    trace_include_sensitive_data: bool = field(
        default_factory=_default_trace_include_sensitive_data
    )
    """Whether we include potentially sensitive data (for example: inputs/outputs of tool calls
    or LLM generations) in traces. If False, we'll still create spans for these events, but the
    sensitive data will not be included.
    """

    workflow_name: str = "Agent workflow"
    """The name of the run, used for tracing. Should be a logical name for the run, like
    "Code generation workflow" or "Customer support agent".
    """

    trace_id: str | None = None
    """A custom trace ID to use for tracing. If not provided, we will generate a new trace ID."""

    group_id: str | None = None
    """
    A grouping identifier to use for tracing, to link multiple traces from the same conversation
    or process. For example, you might use a chat thread ID.
    """

    trace_metadata: dict[str, Any] | None = None
    """
    An optional dictionary of additional metadata to include with the trace.
    """

    session_input_callback: SessionInputCallback | None = None
    """Defines how to handle session history when new input is provided.

    - `None` (default): The new input is appended to the session history.
    - `SessionInputCallback`: A custom function that receives the history and new input, and
      returns the desired combined list of items.
    """

    call_model_input_filter: CallModelInputFilter | None = None
    """
    Optional callback that is invoked immediately before calling the model. It receives the
    current agent, context and the model input (instructions and input items), and must return a
    possibly modified `ModelInputData` to use for the model call.

    This allows you to edit the input sent to the model e.g. to stay within a token limit.
    For example, you can use this to add a system prompt to the input.
    """
```

model class-attribute instance-attribute

```python
model: str | Model | None = None
```

The model to use for the entire agent run. If set, will override the model set on every agent. The model_provider passed in below must be able to resolve this model name.

model_provider class-attribute instance-attribute

```python
model_provider: ModelProvider = field(default_factory=MultiProvider)
```

The model provider to use when looking up string model names. Defaults to OpenAI.

model_settings class-attribute instance-attribute

```python
model_settings: ModelSettings | None = None
```

Configure global model settings. Any non-null values will override the agent-specific model settings.

handoff_input_filter class-attribute instance-attribute

```python
handoff_input_filter: HandoffInputFilter | None = None
```

A global input filter to apply to all handoffs. If `Handoff.input_filter` is set, then that will take precedence. The input filter allows you to edit the inputs that are sent to the new agent. See the documentation in `Handoff.input_filter` for more details.

nest_handoff_history class-attribute instance-attribute

```python
nest_handoff_history: bool = True
```

Wrap prior run history in a single assistant message before handing off when no custom input filter is set. Set to False to preserve the raw transcript behavior from previous releases.

handoff_history_mapper class-attribute instance-attribute

```python
handoff_history_mapper: HandoffHistoryMapper | None = None
```

Optional function that receives the normalized transcript (history + handoff items) and returns the input history that should be passed to the next agent. When left as `None`, the runner collapses the transcript into a single assistant message. This function only runs when `nest_handoff_history` is True.
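The default collapsing behavior can be sketched as a plain function with the same shape of job as a `handoff_history_mapper`. The item structure here (role/content dicts) is an assumption for the sketch, not the SDK's actual item type.

```python
def collapse_transcript(transcript):
    """Illustrative mapper: take the normalized transcript and return the input
    history the next agent should receive, collapsed into a single assistant
    message (mirroring the default described above)."""
    summary = "\n".join(
        f"{item['role']}: {item['content']}" for item in transcript
    )
    return [{"role": "assistant", "content": f"Previous conversation:\n{summary}"}]


history = [
    {"role": "user", "content": "Book a flight"},
    {"role": "assistant", "content": "Handing off to the booking agent"},
]
collapsed = collapse_transcript(history)
```

A custom mapper could instead return the transcript unchanged, drop tool chatter, or summarize it; whatever list it returns becomes the next agent's input history.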

input_guardrails class-attribute instance-attribute

```python
input_guardrails: list[InputGuardrail[Any]] | None = None
```

A list of input guardrails to run on the initial run input.

output_guardrails class-attribute instance-attribute

```python
output_guardrails: list[OutputGuardrail[Any]] | None = None
```

A list of output guardrails to run on the final output of the run.

tracing_disabled class-attribute instance-attribute

```python
tracing_disabled: bool = False
```

Whether tracing is disabled for the agent run. If disabled, we will not trace the agent run.

trace_include_sensitive_data class-attribute instance-attribute

```python
trace_include_sensitive_data: bool = field(
    default_factory=_default_trace_include_sensitive_data
)
```

Whether we include potentially sensitive data (for example: inputs/outputs of tool calls or LLM generations) in traces. If False, we'll still create spans for these events, but the sensitive data will not be included.

workflow_name class-attribute instance-attribute

```python
workflow_name: str = 'Agent workflow'
```

The name of the run, used for tracing. Should be a logical name for the run, like "Code generation workflow" or "Customer support agent".

trace_id class-attribute instance-attribute

```python
trace_id: str | None = None
```

A custom trace ID to use for tracing. If not provided, we will generate a new trace ID.

group_id class-attribute instance-attribute

```python
group_id: str | None = None
```

A grouping identifier to use for tracing, to link multiple traces from the same conversation or process. For example, you might use a chat thread ID.

trace_metadata class-attribute instance-attribute

```python
trace_metadata: dict[str, Any] | None = None
```

An optional dictionary of additional metadata to include with the trace.

session_input_callback class-attribute instance-attribute

```python
session_input_callback: SessionInputCallback | None = None
```

Defines how to handle session history when new input is provided.

- `None` (default): The new input is appended to the session history.
- `SessionInputCallback`: A custom function that receives the history and new input, and returns the desired combined list of items.
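A custom callback of this shape might, for example, keep only the most recent history items before appending the new input. This is an illustrative sketch: plain lists stand in for the SDK's item types, and `keep_recent_history` is a hypothetical name.

```python
def keep_recent_history(history, new_input, *, max_items=4):
    """Illustrative session-input callback: receive the stored history and the
    new input items, and return the combined list the run should use."""
    trimmed = history[-max_items:]  # drop the oldest items beyond the budget
    return trimmed + new_input


history = [f"item-{i}" for i in range(10)]
combined = keep_recent_history(history, ["new-item"])
```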

call_model_input_filter class-attribute instance-attribute

```python
call_model_input_filter: CallModelInputFilter | None = None
```

Optional callback that is invoked immediately before calling the model. It receives the current agent, context and the model input (instructions and input items), and must return a possibly modified `ModelInputData` to use for the model call.

This allows you to edit the input sent to the model, e.g. to stay within a token limit. For example, you can use this to add a system prompt to the input.
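The token-limit use case can be sketched as a pure function in the spirit of a `call_model_input_filter`. Assumptions: a character budget stands in for a real token count, plain strings stand in for the SDK's item types, and the function returns a `(instructions, items)` pair rather than the SDK's `ModelInputData`.

```python
def trim_model_input(instructions, input_items, *, max_chars=200):
    """Illustrative pre-model filter: inspect the instructions and input items
    right before the model call and return a possibly modified pair."""
    kept, used = [], 0
    # Walk newest-first so the most recent items survive the budget.
    for item in reversed(input_items):
        if used + len(item) > max_chars:
            break
        kept.append(item)
        used += len(item)
    return instructions, list(reversed(kept))


# The oldest 150-char item is dropped; the newer items fit the 200-char budget.
instructions, items = trim_model_input(
    "Be concise.", ["a" * 150, "b" * 100, "c" * 80]
)
```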

