Output

"Output" refers to the final value returned fromrunning an agent. This can be either plain text,structured data, or the result of afunction called with arguments provided by the model.

The output is wrapped in AgentRunResult or StreamedRunResult so that you can access other data, like the usage of the run and the message history.

Both AgentRunResult and StreamedRunResult are generic in the data they wrap, so typing information about the data returned by the agent is preserved.

A run ends when the model responds with one of the structured output types, or, if no output type is specified or str is one of the allowed options, when a plain text response is received. A run can also be cancelled if usage limits are exceeded; see Usage Limits.
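For instance, here's a minimal sketch of capping the number of model requests in a run, assuming the UsageLimits class from pydantic_ai.usage described in the Usage Limits docs:

from pydantic_ai import Agent
from pydantic_ai.usage import UsageLimits

agent = Agent('google-gla:gemini-1.5-flash')

# Cap the number of model requests for this run; exceeding the limit
# cancels the run with a usage-limit error instead of producing an output.
result = agent.run_sync(
    'What is the capital of France?',
    usage_limits=UsageLimits(request_limit=3),
)
print(result.output)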

Here's an example using a Pydantic model as the output_type, forcing the model to respond with data matching our specification:

olympics.py
from pydantic import BaseModel

from pydantic_ai import Agent


class CityLocation(BaseModel):
    city: str
    country: str


agent = Agent('google-gla:gemini-1.5-flash', output_type=CityLocation)
result = agent.run_sync('Where were the olympics held in 2012?')
print(result.output)
#> city='London' country='United Kingdom'
print(result.usage())
#> Usage(requests=1, request_tokens=57, response_tokens=8, total_tokens=65)

(This example is complete, it can be run "as is")

Output data

The Agent class constructor takes an output_type argument that accepts one or more types or output functions. It supports simple scalar types, list and dict types (including TypedDicts and StructuredDicts), dataclasses and Pydantic models, as well as type unions -- generally everything supported as type hints in a Pydantic model. You can also pass a list of multiple choices.

By default, Pydantic AI leverages the model's tool calling capability to make it return structured data. When multiple output types are specified (in a union or list), each member is registered with the model as a separate output tool in order to reduce the complexity of the schema and maximise the chances a model will respond correctly. This has been shown to work well across a wide range of models. If you'd like to change the names of the output tools, use a model's native structured output feature, or pass the output schema to the model in its instructions, you can use an output mode marker class.

When no output type is specified, or when str is among the output types, any plain text response from the model will be used as the output data. If str is not among the output types, the model is forced to return structured data or call an output function.

If the output type schema is not of type "object" (e.g. it's int or list[int]), the output type is wrapped in a single-element object, so that the schemas of all tools registered with the model are object schemas.
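From your code's perspective this wrapping is transparent: the unwrapped value is what you get back. A minimal sketch, using the same model naming as the examples in this document:

from pydantic_ai import Agent

# list[int] is not an "object" schema, so it is wrapped in a single-element
# object before being registered as the output tool's parameters schema.
agent = Agent('openai:gpt-4o-mini', output_type=list[int])

result = agent.run_sync('Give me the first five prime numbers.')
print(result.output)
# e.g. [2, 3, 5, 7, 11] -- the unwrapped list, not the wrapper object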

Structured outputs (like tools) use Pydantic to build the JSON schema used for the tool, and to validate the data returned by the model.

Type checking considerations

The Agent class is generic in its output type, and this type is carried through to AgentRunResult.output and StreamedRunResult.output so that your IDE or static type checker can warn you when your code doesn't properly take into account all the possible values those outputs could have.

Static type checkers like pyright and mypy will do their best to infer the agent's output type from the output_type you've specified, but they're not always able to do so correctly when you provide functions or multiple types in a union or list, even though Pydantic AI will behave correctly. When this happens, your type checker will complain even when you're confident you've passed a valid output_type, and you'll need to help the type checker by explicitly specifying the generic parameters on the Agent constructor. This is shown in the second example below and the output functions example further down.

Specifically, there are three valid uses of output_type where you'll need to do this:

  1. When using a union of types, e.g. output_type=Foo | Bar, or in older Python, output_type=Union[Foo, Bar]. Until PEP-747 "Annotating Type Forms" lands in Python 3.15, type checkers do not consider these a valid value for output_type. In addition to the generic parameters on the Agent constructor, you'll need to add # type: ignore to the line that passes the union to output_type. Alternatively, you can use a list: output_type=[Foo, Bar].
  2. With mypy: when using a list, as a functionally equivalent alternative to a union, or because you're passing in output functions. Pyright does handle this correctly, and we've filed an issue with mypy to try and get this fixed.
  3. With mypy: when using an async output function. Pyright does handle this correctly, and we've filed an issue with mypy to try and get this fixed.

Here's an example of returning either text or structured data:

box_or_error.py
from pydantic import BaseModel

from pydantic_ai import Agent


class Box(BaseModel):
    width: int
    height: int
    depth: int
    units: str


agent = Agent(
    'openai:gpt-4o-mini',
    output_type=[Box, str],  # (1)!
    system_prompt=(
        "Extract me the dimensions of a box, "
        "if you can't extract all data, ask the user to try again."
    ),
)

result = agent.run_sync('The box is 10x20x30')
print(result.output)
#> Please provide the units for the dimensions (e.g., cm, in, m).

result = agent.run_sync('The box is 10x20x30 cm')
print(result.output)
#> width=10 height=20 depth=30 units='cm'
  1. This could also have been a union: output_type=Box | str (or in older Python, output_type=Union[Box, str]). However, as explained in the "Type checking considerations" section above, that would've required explicitly specifying the generic parameters on the Agent constructor and adding # type: ignore to this line in order to be type checked correctly.

(This example is complete, it can be run "as is")

Here's an example of using a union return type, which will register multiple output tools and wrap non-object schemas in an object:

colors_or_sizes.py
from typing import Union

from pydantic_ai import Agent

agent = Agent[None, Union[list[str], list[int]]](
    'openai:gpt-4o-mini',
    output_type=Union[list[str], list[int]],  # type: ignore # (1)!
    system_prompt='Extract either colors or sizes from the shapes provided.',
)

result = agent.run_sync('red square, blue circle, green triangle')
print(result.output)
#> ['red', 'blue', 'green']

result = agent.run_sync('square size 10, circle size 20, triangle size 30')
print(result.output)
#> [10, 20, 30]
  1. As explained in the "Type checking considerations" section above, using a union rather than a list requires explicitly specifying the generic parameters on the Agent constructor and adding # type: ignore to this line in order to be type checked correctly.

(This example is complete, it can be run "as is")

Output functions

Instead of plain text or structured data, you may want the output of your agent run to be the result of a function called with arguments provided by the model, for example to further process or validate the data provided through the arguments (with the option to tell the model to try again), or to hand off to another agent.

Output functions are similar to function tools, but the model is forced to call one of them, the call ends the agent run, and the result is not passed back to the model.

As with tool functions, output function arguments provided by the model are validated using Pydantic, they can optionally take RunContext as the first argument, and they can raise ModelRetry to ask the model to try again with modified arguments (or with a different output type).

To specify output functions, you set the agent's output_type to either a single function (or bound instance method), or a list of functions. The list can also contain other output types like simple scalars or entire Pydantic models. You typically do not want to also register your output function as a tool (using the @agent.tool decorator or tools argument), as this could confuse the model about which it should be calling.
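As a warm-up before the fuller example below, here's a minimal sketch of a single output function that raises ModelRetry when the model-provided argument isn't acceptable; the function name and validation rule are illustrative, not part of the library:

from pydantic_ai import Agent, ModelRetry


def parse_positive_number(value: int) -> int:
    """Return the number mentioned in the user's message."""
    if value <= 0:
        # Ask the model to call the output function again with different arguments.
        raise ModelRetry('Expected a positive number, please try again.')
    return value


agent = Agent('openai:gpt-4o-mini', output_type=parse_positive_number)

result = agent.run_sync('The temperature outside is 23 degrees.')
print(result.output)
# e.g. 23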

Here's an example of all of these features in action:

output_functions.py
import re
from typing import Union

from pydantic import BaseModel

from pydantic_ai import Agent, ModelRetry, RunContext
from pydantic_ai.exceptions import UnexpectedModelBehavior


class Row(BaseModel):
    name: str
    country: str


tables = {
    'capital_cities': [
        Row(name='Amsterdam', country='Netherlands'),
        Row(name='Mexico City', country='Mexico'),
    ]
}


class SQLFailure(BaseModel):
    """An unrecoverable failure. Only use this when you can't change the query to make it work."""

    explanation: str


def run_sql_query(query: str) -> list[Row]:
    """Run a SQL query on the database."""
    select_table = re.match(r'SELECT (.+) FROM (\w+)', query)
    if select_table:
        column_names = select_table.group(1)
        if column_names != '*':
            raise ModelRetry("Only 'SELECT *' is supported, you'll have to do column filtering manually.")

        table_name = select_table.group(2)
        if table_name not in tables:
            raise ModelRetry(
                f"Unknown table '{table_name}' in query '{query}'. Available tables: {', '.join(tables.keys())}."
            )

        return tables[table_name]

    raise ModelRetry(f"Unsupported query: '{query}'.")


sql_agent = Agent[None, Union[list[Row], SQLFailure]](
    'openai:gpt-4o',
    output_type=[run_sql_query, SQLFailure],
    instructions='You are a SQL agent that can run SQL queries on a database.',
)


async def hand_off_to_sql_agent(ctx: RunContext, query: str) -> list[Row]:
    """I take natural language queries, turn them into SQL, and run them on a database."""
    # Drop the final message with the output tool call, as it shouldn't be passed on to the SQL agent
    messages = ctx.messages[:-1]
    try:
        result = await sql_agent.run(query, message_history=messages)
        output = result.output
        if isinstance(output, SQLFailure):
            raise ModelRetry(f'SQL agent failed: {output.explanation}')
        return output
    except UnexpectedModelBehavior as e:
        # Bubble up potentially retryable errors to the router agent
        if (cause := e.__cause__) and hasattr(cause, 'tool_retry'):
            raise ModelRetry(f'SQL agent failed: {cause.tool_retry.content}') from e
        else:
            raise


class RouterFailure(BaseModel):
    """Use me when no appropriate agent is found or the used agent failed."""

    explanation: str


router_agent = Agent[None, Union[list[Row], RouterFailure]](
    'openai:gpt-4o',
    output_type=[hand_off_to_sql_agent, RouterFailure],
    instructions='You are a router to other agents. Never try to solve a problem yourself, just pass it on.',
)

result = router_agent.run_sync('Select the names and countries of all capitals')
print(result.output)
"""
[
    Row(name='Amsterdam', country='Netherlands'),
    Row(name='Mexico City', country='Mexico'),
]
"""

result = router_agent.run_sync('Select all pets')
print(repr(result.output))
"""
RouterFailure(explanation="The requested table 'pets' does not exist in the database. The only available table is 'capital_cities', which does not contain data about pets.")
"""

result = router_agent.run_sync('How do I fly from Amsterdam to Mexico City?')
print(repr(result.output))
"""
RouterFailure(explanation='I am not equipped to provide travel information, such as flights from Amsterdam to Mexico City.')
"""

Text output

If you provide an output function that takes a string, Pydantic AI will by default create an output tool like for any other output function. If instead you'd like the model to provide the string using plain text output, you can wrap the function in the TextOutput marker class. If desired, this marker class can be used alongside one or more ToolOutput marker classes (or unmarked types or functions) in a list provided to output_type.

text_output_function.py
from pydantic_ai import Agent, TextOutput


def split_into_words(text: str) -> list[str]:
    return text.split()


agent = Agent(
    'openai:gpt-4o',
    output_type=TextOutput(split_into_words),
)
result = agent.run_sync('Who was Albert Einstein?')
print(result.output)
#> ['Albert', 'Einstein', 'was', 'a', 'German-born', 'theoretical', 'physicist.']

(This example is complete, it can be run "as is")
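As mentioned above, the TextOutput marker can also sit alongside other output types in a list. Here's a sketch extending the example above, where the model may either answer in plain text (processed by the function) or call a structured output tool; the Summary model is illustrative:

from pydantic import BaseModel

from pydantic_ai import Agent, TextOutput


class Summary(BaseModel):
    title: str
    key_points: list[str]


def split_into_words(text: str) -> list[str]:
    return text.split()


# The model can respond with plain text, handled by split_into_words,
# or call the output tool generated for the Summary model.
agent = Agent(
    'openai:gpt-4o',
    output_type=[TextOutput(split_into_words), Summary],
)

result = agent.run_sync('Who was Albert Einstein?')
print(result.output)
# Either a list of words or a Summary instance, depending on how the model responds.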

Output modes

Pydantic AI implements three different methods to get a model to output structured data:

  1. Tool Output, where tool calls are used to produce the output.
  2. Native Output, where the model is required to produce text content compliant with a provided JSON schema.
  3. Prompted Output, where a prompt is injected into the model instructions including the desired JSON schema, and we attempt to parse the model's plain-text response as appropriate.

Tool Output

In the default Tool Output mode, the output JSON schema of each output type (or function) is provided to the model as the parameters schema of a special output tool. This is the default as it's supported by virtually all models and has been shown to work very well.

If you'd like to change the name of the output tool, pass a custom description to aid the model, or turn on or off strict mode, you can wrap the type(s) in the ToolOutput marker class and provide the appropriate arguments. Note that by default, the description is taken from the docstring specified on a Pydantic model or output function, so specifying it using the marker class is typically not necessary.

tool_output.py
from pydantic import BaseModel

from pydantic_ai import Agent, ToolOutput


class Fruit(BaseModel):
    name: str
    color: str


class Vehicle(BaseModel):
    name: str
    wheels: int


agent = Agent(
    'openai:gpt-4o',
    output_type=[  # (1)!
        ToolOutput(Fruit, name='return_fruit'),
        ToolOutput(Vehicle, name='return_vehicle'),
    ],
)
result = agent.run_sync('What is a banana?')
print(repr(result.output))
#> Fruit(name='banana', color='yellow')
  1. If we were passing just Fruit and Vehicle without custom tool names, we could have used a union: output_type=Fruit | Vehicle (or in older Python, output_type=Union[Fruit, Vehicle]). However, as ToolOutput is an object rather than a type, we have to use a list.

(This example is complete, it can be run "as is")
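The marker class can also carry a custom description and toggle strict mode, as mentioned above. A minimal sketch; the description text is illustrative:

from pydantic import BaseModel

from pydantic_ai import Agent, ToolOutput


class Fruit(BaseModel):
    name: str
    color: str


agent = Agent(
    'openai:gpt-4o',
    output_type=ToolOutput(
        Fruit,
        name='return_fruit',
        description='Return the fruit mentioned by the user.',
        strict=True,  # opt in to the provider's strict tool-calling mode, where supported
    ),
)
result = agent.run_sync('What is a banana?')
print(repr(result.output))
# e.g. Fruit(name='banana', color='yellow')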

Native Output

Native Output mode uses a model's native "Structured Outputs" feature (aka "JSON Schema response format"), where the model is forced to only output text matching the provided JSON schema. Note that this is not supported by all models, and sometimes comes with restrictions. For example, Anthropic does not support it at all, and Gemini cannot use tools at the same time as structured output; attempting to do so will result in an error.

To use this mode, you can wrap the output type(s) in the NativeOutput marker class, which also lets you specify a name and description if the name and docstring of the type or function are not sufficient.

native_output.py
from tool_output import Fruit, Vehicle

from pydantic_ai import Agent, NativeOutput

agent = Agent(
    'openai:gpt-4o',
    output_type=NativeOutput(
        [Fruit, Vehicle],  # (1)!
        name='Fruit or vehicle',
        description='Return a fruit or vehicle.',
    ),
)
result = agent.run_sync('What is a Ford Explorer?')
print(repr(result.output))
#> Vehicle(name='Ford Explorer', wheels=4)
  1. This could also have been a union: output_type=Fruit | Vehicle (or in older Python, output_type=Union[Fruit, Vehicle]). However, as explained in the "Type checking considerations" section above, that would've required explicitly specifying the generic parameters on the Agent constructor and adding # type: ignore to this line in order to be type checked correctly.

(This example is complete, it can be run "as is")

Prompted Output

In this mode, the model is prompted to output text matching the provided JSON schema through its instructions, and it's up to the model to interpret those instructions correctly. This is usable with all models, but is often the least reliable approach as the model is not forced to match the schema.

While we would generally suggest starting with tool or native output, in some cases this mode may result in higher quality outputs, and for models without native tool calling or structured output support it is the only option for producing structured outputs.

If the model API supports the "JSON Mode" feature (aka "JSON Object response format") to force the model to output valid JSON, this is enabled, but it's still up to the model to abide by the schema. Pydantic AI will validate the returned structured data and tell the model to try again if validation fails, but if the model is not intelligent enough this may not be sufficient.

To use this mode, you can wrap the output type(s) in the PromptedOutput marker class, which also lets you specify a name and description if the name and docstring of the type or function are not sufficient. Additionally, it supports a template argument that lets you specify a custom instructions template to be used instead of the default.

prompted_output.py
from pydantic import BaseModel
from tool_output import Vehicle

from pydantic_ai import Agent, PromptedOutput


class Device(BaseModel):
    name: str
    kind: str


agent = Agent(
    'openai:gpt-4o',
    output_type=PromptedOutput(
        [Vehicle, Device],  # (1)!
        name='Vehicle or device',
        description='Return a vehicle or device.',
    ),
)
result = agent.run_sync('What is a MacBook?')
print(repr(result.output))
#> Device(name='MacBook', kind='laptop')

agent = Agent(
    'openai:gpt-4o',
    output_type=PromptedOutput(
        [Vehicle, Device],
        template='Gimme some JSON: {schema}',
    ),
)
result = agent.run_sync('What is a Ford Explorer?')
print(repr(result.output))
#> Vehicle(name='Ford Explorer', wheels=4)
  1. This could also have been a union: output_type=Vehicle | Device (or in older Python, output_type=Union[Vehicle, Device]). However, as explained in the "Type checking considerations" section above, that would've required explicitly specifying the generic parameters on the Agent constructor and adding # type: ignore to this line in order to be type checked correctly.

(This example is complete, it can be run "as is")

Custom JSON schema

If it's not feasible to define your desired structured output object using a Pydantic BaseModel, dataclass, or TypedDict, for example when you get a JSON schema from an external source or generate it dynamically, you can use the StructuredDict() helper function to generate a dict[str, Any] subclass with a JSON schema attached that Pydantic AI will pass to the model.

Note that Pydantic AI will not perform any validation of the received JSON object and it's up to the model to correctly interpret the schema and any constraints expressed in it, like required fields or integer value ranges.

The output type will be a dict[str, Any] and it's up to your code to defensively read from it in case the model made a mistake. You can use an output validator to reflect validation errors back to the model and get it to try again.

Along with the JSON schema, you can optionally pass name and description arguments to provide additional context to the model:

from pydantic_ai import Agent, StructuredDict

HumanDict = StructuredDict(
    {
        "type": "object",
        "properties": {
            "name": {"type": "string"},
            "age": {"type": "integer"}
        },
        "required": ["name", "age"]
    },
    name="Human",
    description="A human with a name and age",
)

agent = Agent('openai:gpt-4o', output_type=HumanDict)
result = agent.run_sync("Create a person")
print(result.output)
#> {'name': 'John Doe', 'age': 30}
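Since the received dict is not validated by Pydantic, here's a sketch of reading it defensively and using an output validator (covered in more detail below) to bounce obviously wrong values back to the model; the age check is illustrative:

from typing import Any

from pydantic_ai import Agent, ModelRetry, RunContext, StructuredDict

HumanDict = StructuredDict(
    {
        "type": "object",
        "properties": {
            "name": {"type": "string"},
            "age": {"type": "integer"},
        },
        "required": ["name", "age"],
    },
    name="Human",
    description="A human with a name and age",
)

agent = Agent('openai:gpt-4o', output_type=HumanDict)


@agent.output_validator
def check_human(ctx: RunContext[None], output: dict[str, Any]) -> dict[str, Any]:
    # The model is not guaranteed to respect the schema, so check before trusting it.
    if not isinstance(output.get('age'), int) or not 0 <= output['age'] <= 130:
        raise ModelRetry('age must be an integer between 0 and 130, please try again.')
    return output


result = agent.run_sync("Create a person")
# Read defensively in case fields are missing or mistyped.
print(result.output.get('name', 'unknown'))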

Output validators

Some validation is inconvenient or impossible to do in Pydantic validators, in particular when the validation requires IO and is asynchronous. Pydantic AI provides a way to add validation functions via the agent.output_validator decorator.

If you want to implement separate validation logic for different output types, it's recommended to use output functions instead, to save you from having to do isinstance checks inside the output validator. If you want the model to output plain text, do your own processing or validation, and then have the agent's final output be the result of your function, it's recommended to use an output function with the TextOutput marker class.

Here's a simplified variant of the SQL Generation example:

sql_gen.py
from typing import Union

from fake_database import DatabaseConn, QueryError
from pydantic import BaseModel

from pydantic_ai import Agent, RunContext, ModelRetry


class Success(BaseModel):
    sql_query: str


class InvalidRequest(BaseModel):
    error_message: str


Output = Union[Success, InvalidRequest]

agent = Agent[DatabaseConn, Output](
    'google-gla:gemini-1.5-flash',
    output_type=Output,  # type: ignore
    deps_type=DatabaseConn,
    system_prompt='Generate PostgreSQL flavored SQL queries based on user input.',
)


@agent.output_validator
async def validate_sql(ctx: RunContext[DatabaseConn], output: Output) -> Output:
    if isinstance(output, InvalidRequest):
        return output
    try:
        await ctx.deps.execute(f'EXPLAIN {output.sql_query}')
    except QueryError as e:
        raise ModelRetry(f'Invalid query: {e}') from e
    else:
        return output


result = agent.run_sync(
    'get me users who were last active yesterday.', deps=DatabaseConn()
)
print(result.output)
#> sql_query='SELECT * FROM users WHERE last_active::date = today() - interval 1 day'

(This example is complete, it can be run "as is")

Streamed Results

There are two main challenges with streamed results:

  1. Validating structured responses before they're complete; this is achieved by "partial validation", which was recently added to Pydantic in pydantic/pydantic#10748.
  2. When receiving a response, we don't know if it's the final response without starting to stream it and peeking at the content. Pydantic AI streams just enough of the response to sniff out if it's a tool call or an output, then streams the whole thing and calls tools, or returns the stream as a StreamedRunResult.

Streaming Text

Example of streamed text output:

streamed_hello_world.py
from pydantic_ai import Agent

agent = Agent('google-gla:gemini-1.5-flash')  # (1)!


async def main():
    async with agent.run_stream('Where does "hello world" come from?') as result:  # (2)!
        async for message in result.stream_text():  # (3)!
            print(message)
            #> The first known
            #> The first known use of "hello,
            #> The first known use of "hello, world" was in
            #> The first known use of "hello, world" was in a 1974 textbook
            #> The first known use of "hello, world" was in a 1974 textbook about the C
            #> The first known use of "hello, world" was in a 1974 textbook about the C programming language.
  1. Streaming works with the standard Agent class, and doesn't require any special setup, just a model that supports streaming (currently all models support streaming).
  2. The Agent.run_stream() method is used to start a streamed run; it returns a context manager so the connection can be closed when the stream completes.
  3. Each item yielded by StreamedRunResult.stream_text() is the complete text response, extended as new data is received.

(This example is complete, it can be run "as is" — you'll need to add asyncio.run(main()) to run main)

We can also stream text as deltas rather than the entire text in each item:

streamed_delta_hello_world.py
from pydantic_ai import Agent

agent = Agent('google-gla:gemini-1.5-flash')


async def main():
    async with agent.run_stream('Where does "hello world" come from?') as result:
        async for message in result.stream_text(delta=True):  # (1)!
            print(message)
            #> The first known
            #> use of "hello,
            #> world" was in
            #> a 1974 textbook
            #> about the C
            #> programming language.
  1. stream_text will error if the response is not text.

(This example is complete, it can be run "as is" — you'll need to add asyncio.run(main()) to run main)

Output message not included in messages

The final output message will NOT be added to result messages if you use .stream_text(delta=True); see Messages and chat history for more information.

Streaming Structured Output

Not all types are supported with partial validation in Pydantic, see pydantic/pydantic#10748; generally, for model-like structures, it's currently best to use TypedDict.

Here's an example of streaming a user profile as it's built:

streamed_user_profile.py
from datetime import date

from typing_extensions import TypedDict

from pydantic_ai import Agent


class UserProfile(TypedDict, total=False):
    name: str
    dob: date
    bio: str


agent = Agent(
    'openai:gpt-4o',
    output_type=UserProfile,
    system_prompt='Extract a user profile from the input',
)


async def main():
    user_input = 'My name is Ben, I was born on January 28th 1990, I like the chain the dog and the pyramid.'
    async with agent.run_stream(user_input) as result:
        async for profile in result.stream():
            print(profile)
            #> {'name': 'Ben'}
            #> {'name': 'Ben'}
            #> {'name': 'Ben', 'dob': date(1990, 1, 28), 'bio': 'Likes'}
            #> {'name': 'Ben', 'dob': date(1990, 1, 28), 'bio': 'Likes the chain the '}
            #> {'name': 'Ben', 'dob': date(1990, 1, 28), 'bio': 'Likes the chain the dog and the pyr'}
            #> {'name': 'Ben', 'dob': date(1990, 1, 28), 'bio': 'Likes the chain the dog and the pyramid'}
            #> {'name': 'Ben', 'dob': date(1990, 1, 28), 'bio': 'Likes the chain the dog and the pyramid'}

(This example is complete, it can be run "as is" — you'll need to add asyncio.run(main()) to run main)

If you want fine-grained control of validation, particularly catching validation errors, you can use the following pattern:

streamed_user_profile.py
from datetime import date

from pydantic import ValidationError
from typing_extensions import TypedDict

from pydantic_ai import Agent


class UserProfile(TypedDict, total=False):
    name: str
    dob: date
    bio: str


agent = Agent('openai:gpt-4o', output_type=UserProfile)


async def main():
    user_input = 'My name is Ben, I was born on January 28th 1990, I like the chain the dog and the pyramid.'
    async with agent.run_stream(user_input) as result:
        async for message, last in result.stream_structured(debounce_by=0.01):  # (1)!
            try:
                profile = await result.validate_structured_output(  # (2)!
                    message,
                    allow_partial=not last,
                )
            except ValidationError:
                continue
            print(profile)
            #> {'name': 'Ben'}
            #> {'name': 'Ben'}
            #> {'name': 'Ben', 'dob': date(1990, 1, 28), 'bio': 'Likes'}
            #> {'name': 'Ben', 'dob': date(1990, 1, 28), 'bio': 'Likes the chain the '}
            #> {'name': 'Ben', 'dob': date(1990, 1, 28), 'bio': 'Likes the chain the dog and the pyr'}
            #> {'name': 'Ben', 'dob': date(1990, 1, 28), 'bio': 'Likes the chain the dog and the pyramid'}
            #> {'name': 'Ben', 'dob': date(1990, 1, 28), 'bio': 'Likes the chain the dog and the pyramid'}
  1. stream_structured streams the data as ModelResponse objects, thus iteration can't fail with a ValidationError.
  2. validate_structured_output validates the data; allow_partial=True enables Pydantic's experimental_allow_partial flag on TypeAdapter.

(This example is complete, it can be run "as is" — you'll need to add asyncio.run(main()) to run main)

Examples

The following examples demonstrate how to use streamed responses in Pydantic AI:

