Model settings

ModelSettings

Settings to use when calling an LLM.

This class holds optional model configuration parameters (e.g. temperature, top_p, penalties, truncation, etc.).

Not all models/providers support all of these parameters, so please check the API documentation for the specific model and provider you are using.

Source code in `src/agents/model_settings.py`

```python
@dataclass
class ModelSettings:
    """Settings to use when calling an LLM.

    This class holds optional model configuration parameters (e.g. temperature,
    top_p, penalties, truncation, etc.).

    Not all models/providers support all of these parameters, so please check the API documentation
    for the specific model and provider you are using.
    """

    temperature: float | None = None
    """The temperature to use when calling the model."""

    top_p: float | None = None
    """The top_p to use when calling the model."""

    frequency_penalty: float | None = None
    """The frequency penalty to use when calling the model."""

    presence_penalty: float | None = None
    """The presence penalty to use when calling the model."""

    tool_choice: ToolChoice | None = None
    """The tool choice to use when calling the model."""

    parallel_tool_calls: bool | None = None
    """Controls whether the model can make multiple parallel tool calls in a single turn.

    If not provided (i.e., set to None), this behavior defers to the underlying
    model provider's default. For most current providers (e.g., OpenAI), this typically
    means parallel tool calls are enabled (True).

    Set to True to explicitly enable parallel tool calls, or False to restrict the
    model to at most one tool call per turn.
    """

    truncation: Literal["auto", "disabled"] | None = None
    """The truncation strategy to use when calling the model.

    See [Responses API documentation](https://platform.openai.com/docs/api-reference/responses/create#responses_create-truncation)
    for more details.
    """

    max_tokens: int | None = None
    """The maximum number of output tokens to generate."""

    reasoning: Reasoning | None = None
    """Configuration options for
    [reasoning models](https://platform.openai.com/docs/guides/reasoning).
    """

    verbosity: Literal["low", "medium", "high"] | None = None
    """Constrains the verbosity of the model's response."""

    metadata: dict[str, str] | None = None
    """Metadata to include with the model response call."""

    store: bool | None = None
    """Whether to store the generated model response for later retrieval.

    For Responses API: automatically enabled when not specified.
    For Chat Completions API: disabled when not specified."""

    prompt_cache_retention: Literal["in_memory", "24h"] | None = None
    """The retention policy for the prompt cache. Set to `24h` to enable extended
    prompt caching, which keeps cached prefixes active for longer, up to a maximum
    of 24 hours.
    [Learn more](https://platform.openai.com/docs/guides/prompt-caching#prompt-cache-retention)."""

    include_usage: bool | None = None
    """Whether to include usage chunk.
    Only available for Chat Completions API."""

    # TODO: revisit ResponseIncludable | str if ResponseIncludable covers more cases
    # We've added str to support missing ones like
    # "web_search_call.action.sources" etc.
    response_include: list[ResponseIncludable | str] | None = None
    """Additional output data to include in the model response.
    [include parameter](https://platform.openai.com/docs/api-reference/responses/create#responses-create-include)"""

    top_logprobs: int | None = None
    """Number of top tokens to return logprobs for. Setting this will
    automatically include ``"message.output_text.logprobs"`` in the response."""

    extra_query: Query | None = None
    """Additional query fields to provide with the request.
    Defaults to None if not provided."""

    extra_body: Body | None = None
    """Additional body fields to provide with the request.
    Defaults to None if not provided."""

    extra_headers: Headers | None = None
    """Additional headers to provide with the request.
    Defaults to None if not provided."""

    extra_args: dict[str, Any] | None = None
    """Arbitrary keyword arguments to pass to the model API call.
    These will be passed directly to the underlying model provider's API.
    Use with caution as not all models support all parameters."""

    def resolve(self, override: ModelSettings | None) -> ModelSettings:
        """Produce a new ModelSettings by overlaying any non-None values from the
        override on top of this instance."""
        if override is None:
            return self

        changes = {
            field.name: getattr(override, field.name)
            for field in fields(self)
            if getattr(override, field.name) is not None
        }

        # Handle extra_args merging specially - merge dictionaries instead of replacing
        if self.extra_args is not None or override.extra_args is not None:
            merged_args = {}
            if self.extra_args:
                merged_args.update(self.extra_args)
            if override.extra_args:
                merged_args.update(override.extra_args)
            changes["extra_args"] = merged_args if merged_args else None

        return replace(self, **changes)

    def to_json_dict(self) -> dict[str, Any]:
        dataclass_dict = dataclasses.asdict(self)

        json_dict: dict[str, Any] = {}

        for field_name, value in dataclass_dict.items():
            if isinstance(value, BaseModel):
                json_dict[field_name] = value.model_dump(mode="json")
            else:
                json_dict[field_name] = value

        return json_dict
```

temperature

temperature: float | None = None

The temperature to use when calling the model.

top_p

top_p: float | None = None

The top_p to use when calling the model.

frequency_penalty

frequency_penalty: float | None = None

The frequency penalty to use when calling the model.

presence_penalty

presence_penalty: float | None = None

The presence penalty to use when calling the model.

tool_choice

tool_choice: ToolChoice | None = None

The tool choice to use when calling the model.

parallel_tool_calls

parallel_tool_calls: bool | None = None

Controls whether the model can make multiple parallel tool calls in a single turn.

If not provided (i.e., set to None), this behavior defers to the underlying model provider's default. For most current providers (e.g., OpenAI), this typically means parallel tool calls are enabled (True).

Set to True to explicitly enable parallel tool calls, or False to restrict the model to at most one tool call per turn.
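The None-defers-to-provider-default convention described here applies to every field on this class. As a minimal sketch of that tri-state pattern (using a hypothetical `MiniSettings` stand-in, not the real `ModelSettings`), only explicitly set fields would be forwarded to the API:

```python
from __future__ import annotations

from dataclasses import asdict, dataclass


@dataclass
class MiniSettings:
    """Hypothetical stand-in illustrating the tri-state convention."""

    parallel_tool_calls: bool | None = None
    temperature: float | None = None


def to_request_kwargs(settings: MiniSettings) -> dict:
    # Fields left as None are omitted entirely, deferring to the provider's
    # default; only explicitly set values are sent with the request.
    return {k: v for k, v in asdict(settings).items() if v is not None}


print(to_request_kwargs(MiniSettings(parallel_tool_calls=False)))
# {'parallel_tool_calls': False}
```

Note that `False` and `None` are distinct here: `False` is sent and restricts the model to one tool call per turn, while `None` is dropped and leaves the provider default in place.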

truncation

truncation: Literal['auto', 'disabled'] | None = None

The truncation strategy to use when calling the model. See the Responses API documentation for more details.

max_tokens

max_tokens: int | None = None

The maximum number of output tokens to generate.

reasoning

reasoning: Reasoning | None = None

Configuration options for reasoning models.

verbosity

verbosity: Literal['low', 'medium', 'high'] | None = None

Constrains the verbosity of the model's response.

metadata

metadata: dict[str, str] | None = None

Metadata to include with the model response call.

store

store: bool | None = None

Whether to store the generated model response for later retrieval.

For the Responses API: automatically enabled when not specified.
For the Chat Completions API: disabled when not specified.

prompt_cache_retention

prompt_cache_retention: Literal["in_memory", "24h"] | None = None

The retention policy for the prompt cache. Set to `24h` to enable extended prompt caching, which keeps cached prefixes active for longer, up to a maximum of 24 hours. Learn more.

include_usage

include_usage: bool | None = None

Whether to include a usage chunk. Only available for the Chat Completions API.

response_include

response_include: list[ResponseIncludable | str] | None = None

Additional output data to include in the model response. See the include parameter.

top_logprobs

top_logprobs: int | None = None

Number of top tokens to return logprobs for. Setting this will automatically include "message.output_text.logprobs" in the response.

extra_query

extra_query: Query | None = None

Additional query fields to provide with the request. Defaults to None if not provided.

extra_body

extra_body: Body | None = None

Additional body fields to provide with the request. Defaults to None if not provided.

extra_headers

extra_headers: Headers | None = None

Additional headers to provide with the request. Defaults to None if not provided.

extra_args

extra_args: dict[str, Any] | None = None

Arbitrary keyword arguments to pass to the model API call. These will be passed directly to the underlying model provider's API. Use with caution, as not all models support all parameters.

resolve

resolve(override: ModelSettings | None) -> ModelSettings

Produce a new ModelSettings by overlaying any non-None values from the override on top of this instance.

Source code in `src/agents/model_settings.py`

```python
def resolve(self, override: ModelSettings | None) -> ModelSettings:
    """Produce a new ModelSettings by overlaying any non-None values from the
    override on top of this instance."""
    if override is None:
        return self

    changes = {
        field.name: getattr(override, field.name)
        for field in fields(self)
        if getattr(override, field.name) is not None
    }

    # Handle extra_args merging specially - merge dictionaries instead of replacing
    if self.extra_args is not None or override.extra_args is not None:
        merged_args = {}
        if self.extra_args:
            merged_args.update(self.extra_args)
        if override.extra_args:
            merged_args.update(override.extra_args)
        changes["extra_args"] = merged_args if merged_args else None

    return replace(self, **changes)
```
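To see the overlay and extra_args-merge behavior in isolation, here is a self-contained sketch that mirrors the resolve logic on a reduced, hypothetical `Settings` stand-in (the real class has many more fields, but the mechanics are the same):

```python
from __future__ import annotations

from dataclasses import dataclass, fields, replace


@dataclass
class Settings:
    """Hypothetical reduced stand-in mirroring ModelSettings.resolve semantics."""

    temperature: float | None = None
    max_tokens: int | None = None
    extra_args: dict | None = None

    def resolve(self, override: Settings | None) -> Settings:
        if override is None:
            return self
        # Overlay: only non-None fields from the override replace base values.
        changes = {
            f.name: getattr(override, f.name)
            for f in fields(self)
            if getattr(override, f.name) is not None
        }
        # extra_args dictionaries are merged rather than replaced outright;
        # on key collisions the override's value wins.
        if self.extra_args is not None or override.extra_args is not None:
            merged = {**(self.extra_args or {}), **(override.extra_args or {})}
            changes["extra_args"] = merged or None
        return replace(self, **changes)


base = Settings(temperature=0.2, extra_args={"seed": 1})
override = Settings(max_tokens=100, extra_args={"user": "a"})
resolved = base.resolve(override)
# temperature is kept from base, max_tokens comes from the override,
# and extra_args is the union of both dictionaries.
print(resolved)
```

This is why layering a per-run override on top of agent-level defaults does not silently drop unrelated settings: unset (None) fields in the override pass through, and `extra_args` accumulates keys from both layers.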
