Comparing changes
base repository: openai/openai-agents-python
base: v0.2.1
head repository: openai/openai-agents-python
compare: main
- 14 commits
- 46 files changed
- 11 contributors
Commits on Jul 16, 2025
Add a new GH Actions job to automatically update translated document pages (#598)
This pull request adds a new GitHub Actions job to automate the translation of document pages.
- Before this job can run, **OPENAI_API_KEY must be added to the project secrets.**
- It typically takes 8–10 minutes using the o3 model, so the job is configured to run only when there are changes under docs/ or in mkdocs.yml.
- The job commits and pushes the translated changes, but it does not deploy the documents to GitHub Pages. If we think it's better to deploy the latest changes automatically as well, I'm happy to update the workflow. (Personally, I don't think it's necessary, since the changes will be deployed with the next deployment job execution.)
Commits on Jul 17, 2025
Adjust #598 to only create a PR rather than pushing changes to the main branch (#1162)
This pull request resolves the execution error of the #598 CI job. The job pushes the changes directly to the main branch; however, our branch policies do not allow bypassing required checks, so it always fails. This pull request changes the job's behavior to only create a pull request and then ask humans to review (you don't actually need to check the translation results, though) and merge it.
Realtime: only update model settings from session (#1169)
### Summary:
Was running into bugs because the model settings were being set from both the runner and the session, and that was causing issues. Among other things, handoffs were broken because the runner wasn't reading them, and the session wasn't setting them in the connect() call.
### Test Plan:
Unit tests.
Update all translated document pages (#1173)
Automated update of translated documentation
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Commits on Jul 18, 2025
fix: ensure ResponseUsage token fields are int, not None (fixes #1179) (#1181)
### Problem
When using streaming responses, some models or API endpoints may return `usage` fields (`prompt_tokens`, `completion_tokens`, `total_tokens`) as `None` or omit them entirely. The current implementation passes these values directly to the `ResponseUsage` Pydantic model, which expects integers. This causes a validation error:

3 validation errors for ResponseUsage
input_tokens
  Input should be a valid integer [type=int_type, input_value=None, input_type=NoneType]
output_tokens
  Input should be a valid integer [type=int_type, input_value=None, input_type=NoneType]
total_tokens
  Input should be a valid integer [type=int_type, input_value=None, input_type=NoneType]

### Solution
This PR ensures that all token fields passed to `ResponseUsage` are always integers. If any of the fields are `None` or missing, they default to `0`. This is achieved by using `or 0` and explicit `is not None` checks for nested fields.

**Key changes:**
- All `input_tokens`, `output_tokens`, `total_tokens` fields use an `or 0` fallback.

### Impact
- Fixes Pydantic validation errors for streaming responses with missing/None usage fields.
- Improves compatibility with OpenAI and third-party models.
- No breaking changes; only adds robustness.

Fixes #1179
Co-authored-by: thomas <thomas@baichuan-inc.com>
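For illustration, here is a minimal sketch of the `or 0` fallback described above; `ChunkUsage`, `UsageModel`, and `to_usage_model` are hypothetical stand-ins rather than the SDK's real types:

```python
# Illustrative sketch only: coerce possibly-missing usage counts to ints
# before building a ResponseUsage-like Pydantic model.
from dataclasses import dataclass
from typing import Optional

from pydantic import BaseModel


@dataclass
class ChunkUsage:  # hypothetical stand-in for a provider's usage payload
    prompt_tokens: Optional[int] = None
    completion_tokens: Optional[int] = None
    total_tokens: Optional[int] = None


class UsageModel(BaseModel):  # simplified stand-in for ResponseUsage
    input_tokens: int
    output_tokens: int
    total_tokens: int


def to_usage_model(usage: Optional[ChunkUsage]) -> UsageModel:
    # `or 0` turns None (or missing) values into integers, so the Pydantic
    # model never sees a NoneType where an int is required.
    if usage is None:
        return UsageModel(input_tokens=0, output_tokens=0, total_tokens=0)
    return UsageModel(
        input_tokens=usage.prompt_tokens or 0,
        output_tokens=usage.completion_tokens or 0,
        total_tokens=usage.total_tokens or 0,
    )
```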
Add missing guardrail exception import to quickstart (#1161)
docs: add missing InputGuardrailTripwireTriggered import to the quickstart example. Fixes a NameError when handling guardrail exceptions by including the required import in the docs.
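For context, a hedged sketch of the quickstart pattern this import fixes, assuming `InputGuardrailTripwireTriggered` and `Runner` are exported from the top-level `agents` package as in the docs; the agent setup is elided:

```python
# Without the import below, the `except` clause raises a NameError instead
# of handling the tripwire.
from agents import InputGuardrailTripwireTriggered, Runner


async def ask(agent, question: str) -> None:
    try:
        result = await Runner.run(agent, question)
        print(result.final_output)
    except InputGuardrailTripwireTriggered:
        # The input guardrail tripped; report it instead of crashing.
        print("Guardrail blocked this input.")
```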
fix: fallback to function name for unnamed output_guardrail decorators (#1133)
**Overview:**
This PR improves the output_guardrail behavior by ensuring a valid name is always assigned to the guardrail, even when the decorator is used without parentheses or without explicitly providing a name.

**Problem:**
Previously, when the decorator @output_guardrail was used without a name (and without parentheses), the name attribute of the guardrail remained None. This caused issues at runtime: the guardrail name did not appear in result.input_guardrail_results, making it harder to trace or debug guardrail outputs.
While the OutputGuardrail.get_name() method correctly defaults to the function name when name is None, this method is not used inside the decorator. Hence, unless a name is provided explicitly, the OutputGuardrail instance holds None for its name internally.

**Solution:**
This PR updates the decorator logic to:
- Automatically fall back to the function name if the name parameter is not provided.
- Ensure that the guardrail always has a meaningful identifier, which improves downstream behavior such as logging, debugging, and result tracing.

**Example Behavior Before:**
@output_guardrail
def validate_output(...):
Name remains None.

**Example Behavior After:**
@output_guardrail
def validate_output(...):
Name becomes "validate_output" automatically.

**Why it matters:**
This small change avoids hidden bugs or inconsistencies in downstream systems (like guardrail_results) that rely on guardrail names being defined. It also brings consistent behavior whether or not parentheses are used in the decorator.
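A minimal sketch of the fallback behavior described above (not the SDK's actual decorator): the decorator works with or without parentheses and defaults the name to the wrapped function's `__name__`:

```python
# Simplified stand-ins: `Guardrail` and `output_guardrail` here only
# illustrate the name-fallback pattern, not the real OutputGuardrail API.
from dataclasses import dataclass
from typing import Callable, Optional, Union


@dataclass
class Guardrail:
    func: Callable
    name: str


def output_guardrail(
    func: Optional[Callable] = None, *, name: Optional[str] = None
) -> Union[Guardrail, Callable[[Callable], Guardrail]]:
    def wrap(f: Callable) -> Guardrail:
        # Fall back to the function name so the guardrail is never unnamed.
        return Guardrail(func=f, name=name or f.__name__)

    if func is not None:  # used as @output_guardrail (no parentheses)
        return wrap(func)
    return wrap  # used as @output_guardrail() or @output_guardrail(name="...")


@output_guardrail
def validate_output(output: str) -> bool:
    return "forbidden" not in output


assert validate_output.name == "validate_output"
```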
Mark some dataclasses as pydantic dataclasses (#1131)
This is the set of top-level types used by Temporal for serialization across activity boundaries. In order to ensure that the models contained in these dataclasses are built prior to use, the dataclasses need to be `pydantic.dataclasses.dataclass` rather than `dataclasses.dataclass`. This fixes issues where the types cannot be serialized if the contained types happen not to have been built, which happens particularly often when model logging is disabled, since logging happened to build the pydantic models as a side effect.
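As an illustration of the switch described above, here is a sketch using hypothetical types (`ToolCall`, `ActivityPayload`) rather than the SDK's real Temporal-facing dataclasses:

```python
# Using pydantic's dataclass decorator (instead of dataclasses.dataclass)
# gives the container a pydantic core schema, so nested models are built
# before use and the whole object can be (de)serialized.
from pydantic import BaseModel, TypeAdapter
from pydantic.dataclasses import dataclass as pydantic_dataclass


class ToolCall(BaseModel):  # hypothetical nested pydantic model
    name: str
    arguments: str


@pydantic_dataclass  # instead of @dataclasses.dataclass
class ActivityPayload:  # hypothetical container crossing an activity boundary
    run_id: str
    calls: list[ToolCall]


# TypeAdapter can now build a schema for the dataclass and serialize it,
# including the nested pydantic models.
adapter = TypeAdapter(ActivityPayload)
payload = ActivityPayload(run_id="r1", calls=[ToolCall(name="add", arguments="{}")])
print(adapter.dump_json(payload))
```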
fix: Apply strict JSON schema validation in FunctionTool constructor (#1041)
## Summary
Fixes an issue where directly created `FunctionTool` objects fail with OpenAI's Responses API due to missing `additionalProperties: false` in the JSON schema, while the `@function_tool` decorator works correctly.

## Problem
The documentation example for creating `FunctionTool` objects directly fails with:

```
Error code: 400 - {'error': {'message': "Invalid schema for function 'process_user': In context=(), 'additionalProperties' is required to be supplied and to be false.", 'type': 'invalid_request_error', 'param': 'tools[0].parameters', 'code': 'invalid_function_parameters'}}
```

This creates an inconsistency between `FunctionTool` and `@function_tool` behavior, both of which have `strict_json_schema=True` by default.

## Solution
- Added `__post_init__` method to `FunctionTool` dataclass
- Automatically applies `ensure_strict_json_schema()` when `strict_json_schema=True`
- Makes behavior consistent with `@function_tool` decorator
- Maintains backward compatibility

## Testing
The fix can be verified by running the reproduction case from the issue:

```python
from typing import Any

from pydantic import BaseModel

from agents import RunContextWrapper, FunctionTool, Agent, Runner


class FunctionArgs(BaseModel):
    username: str
    age: int


async def run_function(ctx: RunContextWrapper[Any], args: str) -> str:
    parsed = FunctionArgs.model_validate_json(args)
    return f"{parsed.username} is {parsed.age} years old"


# This now works without a manual ensure_strict_json_schema() call
tool = FunctionTool(
    name="process_user",
    description="Processes extracted user data",
    params_json_schema=FunctionArgs.model_json_schema(),
    on_invoke_tool=run_function,
)

agent = Agent(
    name="Test Agent",
    instructions="You are a test agent",
    tools=[tool],
)

result = Runner.run_sync(agent, "Process user data for John who is 30 years old")
```
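A minimal sketch of the `__post_init__` approach listed under Solution, using simplified stand-ins for `FunctionTool` and for the SDK's `ensure_strict_json_schema()` helper:

```python
# Simplified stand-ins, not the SDK's real implementation.
from dataclasses import dataclass
from typing import Any


def ensure_strict_json_schema(schema: dict[str, Any]) -> dict[str, Any]:
    # Stand-in for the SDK helper: a strict schema must forbid unknown keys
    # and mark every declared property as required.
    strict = dict(schema)
    strict["additionalProperties"] = False
    strict["required"] = list(strict.get("properties", {}).keys())
    return strict


@dataclass
class FunctionToolSketch:
    name: str
    params_json_schema: dict[str, Any]
    strict_json_schema: bool = True

    def __post_init__(self) -> None:
        # Apply the same strictness the @function_tool decorator applies,
        # so directly constructed tools pass Responses API validation.
        if self.strict_json_schema:
            self.params_json_schema = ensure_strict_json_schema(self.params_json_schema)
```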
Fix image_generator example error on Windows OS (#1180)
Due to a method name typo, the example does not work on Windows.
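For context, a hedged sketch of OS-specific image opening of the kind the example performs; the function name and branch details here are illustrative, not the repository's exact code:

```python
# Illustrative cross-platform "open a file with the default viewer" helper.
import os
import subprocess
import sys


def open_image(path: str) -> None:
    if sys.platform == "win32":
        # os.startfile is the Windows-only stdlib call for opening a file
        # with its associated application.
        os.startfile(path)  # type: ignore[attr-defined]
    elif sys.platform == "darwin":
        subprocess.run(["open", path], check=False)
    else:
        subprocess.run(["xdg-open", path], check=False)
```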
Update all translated document pages (#1184)
Automated update of translated documentation
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
You can see the full comparison locally by running: git diff v0.2.1...main