The official Python library for the OpenAI API


The OpenAI Python library provides convenient access to the OpenAI REST API from any Python 3.8+ application. The library includes type definitions for all request params and response fields, and offers both synchronous and asynchronous clients powered by httpx.

It is generated from our OpenAPI specification with Stainless.

Documentation

The REST API documentation can be found on platform.openai.com. The full API of this library can be found in api.md.

Installation

```sh
# install from PyPI
pip install openai
```

Usage

The full API of this library can be found in api.md.

The primary API for interacting with OpenAI models is the Responses API. You can generate text from the model with the code below.

```python
import os
from openai import OpenAI

client = OpenAI(
    # This is the default and can be omitted
    api_key=os.environ.get("OPENAI_API_KEY"),
)

response = client.responses.create(
    model="gpt-4o",
    instructions="You are a coding assistant that talks like a pirate.",
    input="How do I check if a Python object is an instance of a class?",
)

print(response.output_text)
```

The previous standard (supported indefinitely) for generating text is the Chat Completions API. You can use that API to generate text from the model with the code below.

```python
from openai import OpenAI

client = OpenAI()

completion = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "developer", "content": "Talk like a pirate."},
        {
            "role": "user",
            "content": "How do I check if a Python object is an instance of a class?",
        },
    ],
)

print(completion.choices[0].message.content)
```

While you can provide an api_key keyword argument, we recommend using python-dotenv to add OPENAI_API_KEY="My API Key" to your .env file so that your API key is not stored in source control. Get an API key here.
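
A minimal sketch of that setup, assuming python-dotenv is installed and your .env file contains OPENAI_API_KEY:

```python
from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()  # reads .env and populates os.environ

client = OpenAI()  # picks up OPENAI_API_KEY from the environment
```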

Vision

With an image URL:

prompt="What is in this image?"img_url="https://upload.wikimedia.org/wikipedia/commons/thumb/d/d5/2023_06_08_Raccoon1.jpg/1599px-2023_06_08_Raccoon1.jpg"response=client.responses.create(model="gpt-4o-mini",input=[        {"role":"user","content": [                {"type":"input_text","text":prompt},                {"type":"input_image","image_url":f"{img_url}"},            ],        }    ],)

With the image as a base64 encoded string:

```python
import base64
from openai import OpenAI

client = OpenAI()

prompt = "What is in this image?"
with open("path/to/image.png", "rb") as image_file:
    b64_image = base64.b64encode(image_file.read()).decode("utf-8")

response = client.responses.create(
    model="gpt-4o-mini",
    input=[
        {
            "role": "user",
            "content": [
                {"type": "input_text", "text": prompt},
                {"type": "input_image", "image_url": f"data:image/png;base64,{b64_image}"},
            ],
        }
    ],
)
```

Async usage

Simply import AsyncOpenAI instead of OpenAI and use await with each API call:

```python
import os
import asyncio
from openai import AsyncOpenAI

client = AsyncOpenAI(
    # This is the default and can be omitted
    api_key=os.environ.get("OPENAI_API_KEY"),
)


async def main() -> None:
    response = await client.responses.create(
        model="gpt-4o",
        input="Explain disestablishmentarianism to a smart five year old.",
    )
    print(response.output_text)


asyncio.run(main())
```

Functionality between the synchronous and asynchronous clients is otherwise identical.

Streaming responses

We provide support for streaming responses using Server-Sent Events (SSE).

```python
from openai import OpenAI

client = OpenAI()

stream = client.responses.create(
    model="gpt-4o",
    input="Write a one-sentence bedtime story about a unicorn.",
    stream=True,
)

for event in stream:
    print(event)
```

The async client uses the exact same interface.

```python
import asyncio
from openai import AsyncOpenAI

client = AsyncOpenAI()


async def main():
    stream = await client.responses.create(
        model="gpt-4o",
        input="Write a one-sentence bedtime story about a unicorn.",
        stream=True,
    )
    async for event in stream:
        print(event)


asyncio.run(main())
```

Realtime API beta

The Realtime API enables you to build low-latency, multi-modal conversational experiences. It currently supports text and audio as both input and output, as well as function calling, through a WebSocket connection.

Under the hood the SDK uses the websockets library to manage connections.

The Realtime API works through a combination of client-sent events and server-sent events. Clients can send events to do things like update session configuration or send text and audio inputs. Server events confirm when audio responses have completed, or when a text response from the model has been received. A full event reference can be found here and a guide can be found here.

Basic text based example:

```python
import asyncio
from openai import AsyncOpenAI


async def main():
    client = AsyncOpenAI()

    async with client.beta.realtime.connect(model="gpt-4o-realtime-preview") as connection:
        await connection.session.update(session={"modalities": ["text"]})

        await connection.conversation.item.create(
            item={
                "type": "message",
                "role": "user",
                "content": [{"type": "input_text", "text": "Say hello!"}],
            }
        )
        await connection.response.create()

        async for event in connection:
            if event.type == "response.text.delta":
                print(event.delta, flush=True, end="")
            elif event.type == "response.text.done":
                print()
            elif event.type == "response.done":
                break


asyncio.run(main())
```

However, the real magic of the Realtime API is in handling audio inputs and outputs; see this example TUI script for a fully fledged example.

Realtime error handling

Whenever an error occurs, the Realtime API will send an error event and the connection will stay open and remain usable. This means you need to handle it yourself, as no errors are raised directly by the SDK when an error event comes in.

```python
client = AsyncOpenAI()

async with client.beta.realtime.connect(model="gpt-4o-realtime-preview") as connection:
    ...
    async for event in connection:
        if event.type == "error":
            print(event.error.type)
            print(event.error.code)
            print(event.error.event_id)
            print(event.error.message)
```

Using types

Nested request parameters are TypedDicts. Responses are Pydantic models which also provide helper methods for things like:

  • Serializing back into JSON, model.to_json()
  • Converting to a dictionary, model.to_dict()

Typed requests and responses provide autocomplete and documentation within your editor. If you would like to see type errors in VS Code to help catch bugs earlier, set python.analysis.typeCheckingMode to basic.
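
For example, a short sketch using those helper methods on a response model:

```python
from openai import OpenAI

client = OpenAI()

response = client.responses.create(
    model="gpt-4o",
    input="Say hello.",
)

print(response.to_json())  # serialize the Pydantic model back into JSON
print(response.to_dict())  # convert the Pydantic model to a plain dictionary
```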

Pagination

List methods in the OpenAI API are paginated.

This library provides auto-paginating iterators with each list response, so you do not have to request successive pages manually:

```python
from openai import OpenAI

client = OpenAI()

all_jobs = []
# Automatically fetches more pages as needed.
for job in client.fine_tuning.jobs.list(
    limit=20,
):
    # Do something with job here
    all_jobs.append(job)
print(all_jobs)
```

Or, asynchronously:

```python
import asyncio
from openai import AsyncOpenAI

client = AsyncOpenAI()


async def main() -> None:
    all_jobs = []
    # Iterate through items across all pages, issuing requests as needed.
    async for job in client.fine_tuning.jobs.list(
        limit=20,
    ):
        all_jobs.append(job)
    print(all_jobs)


asyncio.run(main())
```

Alternatively, you can use the .has_next_page(), .next_page_info(), or .get_next_page() methods for more granular control when working with pages:

```python
first_page = await client.fine_tuning.jobs.list(
    limit=20,
)
if first_page.has_next_page():
    print(f"will fetch next page using these details: {first_page.next_page_info()}")
    next_page = await first_page.get_next_page()
    print(f"number of items we just fetched: {len(next_page.data)}")

# Remove `await` for non-async usage.
```

Or just work directly with the returned data:

```python
first_page = await client.fine_tuning.jobs.list(
    limit=20,
)

print(f"next page cursor: {first_page.after}")  # => "next page cursor: ..."
for job in first_page.data:
    print(job.id)

# Remove `await` for non-async usage.
```

Nested params

Nested parameters are dictionaries, typed using TypedDict, for example:

```python
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": "How much ?",
        }
    ],
    model="gpt-4o",
    response_format={"type": "json_object"},
)
```

File uploads

Request parameters that correspond to file uploads can be passed as bytes, a PathLike instance, or a tuple of (filename, contents, media type).

```python
from pathlib import Path
from openai import OpenAI

client = OpenAI()

client.files.create(
    file=Path("input.jsonl"),
    purpose="fine-tune",
)
```

The async client uses the exact same interface. If you pass a PathLike instance, the file contents will be read asynchronously automatically.
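
A sketch of the tuple form; the JSONL media type here is an assumption, since JSONL has no official registered type:

```python
from openai import OpenAI

client = OpenAI()

with open("input.jsonl", "rb") as f:
    contents = f.read()

client.files.create(
    # (filename, contents, media type)
    file=("input.jsonl", contents, "application/jsonl"),
    purpose="fine-tune",
)
```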

Handling errors

When the library is unable to connect to the API (for example, due to network connection problems or a timeout), a subclass of openai.APIConnectionError is raised.

When the API returns a non-success status code (that is, a 4xx or 5xx response), a subclass of openai.APIStatusError is raised, containing status_code and response properties.

All errors inherit from openai.APIError.

```python
import openai
from openai import OpenAI

client = OpenAI()

try:
    client.fine_tuning.jobs.create(
        model="gpt-4o",
        training_file="file-abc123",
    )
except openai.APIConnectionError as e:
    print("The server could not be reached")
    print(e.__cause__)  # an underlying Exception, likely raised within httpx.
except openai.RateLimitError as e:
    print("A 429 status code was received; we should back off a bit.")
except openai.APIStatusError as e:
    print("Another non-200-range status code was received")
    print(e.status_code)
    print(e.response)
```

Error codes are as follows:

| Status Code | Error Type               |
| ----------- | ------------------------ |
| 400         | BadRequestError          |
| 401         | AuthenticationError      |
| 403         | PermissionDeniedError    |
| 404         | NotFoundError            |
| 422         | UnprocessableEntityError |
| 429         | RateLimitError           |
| >=500       | InternalServerError      |
| N/A         | APIConnectionError       |

Request IDs

For more information on debugging requests, see these docs.

All object responses in the SDK provide a _request_id property which is added from the x-request-id response header, so that you can quickly log failing requests and report them back to OpenAI.

```python
response = await client.responses.create(
    model="gpt-4o-mini",
    input="Say 'this is a test'.",
)
print(response._request_id)  # req_123
```

Note that unlike other properties that use an _ prefix, the _request_id property is public. Unless documented otherwise, all other _ prefix properties, methods, and modules are private.

Important

If you need to access request IDs for failed requests, you must catch the APIStatusError exception:

```python
import openai

try:
    completion = await client.chat.completions.create(
        messages=[{"role": "user", "content": "Say this is a test"}], model="gpt-4"
    )
except openai.APIStatusError as exc:
    print(exc.request_id)  # req_123
    raise exc
```

Retries

Certain errors are automatically retried 2 times by default, with a short exponential backoff. Connection errors (for example, due to a network connectivity problem), 408 Request Timeout, 409 Conflict, 429 Rate Limit, and >=500 Internal errors are all retried by default.

You can use the max_retries option to configure or disable retry settings:

```python
from openai import OpenAI

# Configure the default for all requests:
client = OpenAI(
    # default is 2
    max_retries=0,
)

# Or, configure per-request:
client.with_options(max_retries=5).chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": "How can I get the name of the current day in JavaScript?",
        }
    ],
    model="gpt-4o",
)
```

Timeouts

By default, requests time out after 10 minutes. You can configure this with a timeout option, which accepts a float or an httpx.Timeout object:

```python
import httpx
from openai import OpenAI

# Configure the default for all requests:
client = OpenAI(
    # 20 seconds (default is 10 minutes)
    timeout=20.0,
)

# More granular control:
client = OpenAI(
    timeout=httpx.Timeout(60.0, read=5.0, write=10.0, connect=2.0),
)

# Override per-request:
client.with_options(timeout=5.0).chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": "How can I list all files in a directory using Python?",
        }
    ],
    model="gpt-4o",
)
```

On timeout, an APITimeoutError is thrown.

Note that requests that time out are retried twice by default.
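
Both options can be combined per request via with_options(); a sketch that raises the timeout and disables retries for a single long-running call:

```python
from openai import OpenAI

client = OpenAI()

# Allow up to 10 minutes and make a single attempt (no retries) for this request only.
client.with_options(timeout=600.0, max_retries=0).chat.completions.create(
    messages=[{"role": "user", "content": "Summarize a very long document."}],
    model="gpt-4o",
)
```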

Advanced

Logging

We use the standard library logging module.

You can enable logging by setting the environment variable OPENAI_LOG to info.

```sh
$ export OPENAI_LOG=info
```

Or set it to debug for more verbose logging:
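
```sh
$ export OPENAI_LOG=debug
```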

How to tell whether None means null or missing

In an API response, a field may be explicitly null, or missing entirely; in either case, its value is None in this library. You can differentiate the two cases with .model_fields_set:

```python
if response.my_field is None:
    if "my_field" not in response.model_fields_set:
        print('Got json like {}, without a "my_field" key present at all.')
    else:
        print('Got json like {"my_field": null}.')
```

Accessing raw response data (e.g. headers)

The "raw" Response object can be accessed by prefixing.with_raw_response. to any HTTP method call, e.g.,

```python
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.with_raw_response.create(
    messages=[
        {
            "role": "user",
            "content": "Say this is a test",
        }
    ],
    model="gpt-4o",
)
print(response.headers.get("X-My-Header"))

completion = response.parse()  # get the object that `chat.completions.create()` would have returned
print(completion)
```

These methods return a LegacyAPIResponse object. This is a legacy class, as we're changing it slightly in the next major version.

For the sync client this will mostly be the same, with the exception that content and text will be methods instead of properties. In the async client, all methods will be async.

A migration script will be provided and the migration in general should be smooth.

.with_streaming_response

The above interface eagerly reads the full response body when you make the request, which may not always be what you want.

To stream the response body, use .with_streaming_response instead, which requires a context manager and only reads the response body once you call .read(), .text(), .json(), .iter_bytes(), .iter_text(), .iter_lines(), or .parse(). In the async client, these are async methods.

As such, .with_streaming_response methods return a different APIResponse object, and the async client returns an AsyncAPIResponse object.

```python
with client.chat.completions.with_streaming_response.create(
    messages=[
        {
            "role": "user",
            "content": "Say this is a test",
        }
    ],
    model="gpt-4o",
) as response:
    print(response.headers.get("X-My-Header"))

    for line in response.iter_lines():
        print(line)
```

The context manager is required so that the response will reliably be closed.
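
A sketch of the async variant, where both the context manager and the iteration are awaited:

```python
import asyncio
from openai import AsyncOpenAI

client = AsyncOpenAI()


async def main() -> None:
    async with client.chat.completions.with_streaming_response.create(
        messages=[{"role": "user", "content": "Say this is a test"}],
        model="gpt-4o",
    ) as response:
        print(response.headers.get("X-My-Header"))

        async for line in response.iter_lines():
            print(line)


asyncio.run(main())
```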

Making custom/undocumented requests

This library is typed for convenient access to the documented API.

If you need to access undocumented endpoints, params, or response properties, the library can still be used.

Undocumented endpoints

To make requests to undocumented endpoints, you can make requests using client.get, client.post, and other HTTP verbs. Options on the client (such as retries) will be respected when making these requests.

```python
import httpx

response = client.post(
    "/foo",
    cast_to=httpx.Response,
    body={"my_param": True},
)

print(response.headers.get("x-foo"))
```

Undocumented request params

If you want to explicitly send an extra param, you can do so with the extra_query, extra_body, and extra_headers request options.
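
For example, a sketch sending one of each; the header, query param, and body param names are hypothetical values the server would need to understand:

```python
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    messages=[{"role": "user", "content": "Say this is a test"}],
    model="gpt-4o",
    # Hypothetical extras, shown for illustration only:
    extra_headers={"X-My-Header": "value"},
    extra_query={"my_query_param": "value"},
    extra_body={"my_body_param": "value"},
)
```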

Undocumented response properties

To access undocumented response properties, you can access the extra fields like response.unknown_prop. You can also get all the extra fields on the Pydantic model as a dict with response.model_extra.
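
A short sketch, where unknown_prop stands in for a hypothetical undocumented field returned by the server:

```python
from openai import OpenAI

client = OpenAI()

response = client.responses.create(
    model="gpt-4o",
    input="Say hello.",
)

# `unknown_prop` is a hypothetical undocumented field, for illustration only.
print(response.unknown_prop)
# All extra fields the server returned, as a plain dict:
print(response.model_extra)
```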

Configuring the HTTP client

You can directly override the httpx client to customize it for your use case, including support for proxies and custom transports:

```python
import httpx
from openai import OpenAI, DefaultHttpxClient

client = OpenAI(
    # Or use the `OPENAI_BASE_URL` env var
    base_url="http://my.test.server.example.com:8083/v1",
    http_client=DefaultHttpxClient(
        proxy="http://my.test.proxy.example.com",
        transport=httpx.HTTPTransport(local_address="0.0.0.0"),
    ),
)
```

You can also customize the client on a per-request basis by using with_options():

```python
client.with_options(http_client=DefaultHttpxClient(...))
```

Managing HTTP resources

By default, the library closes underlying HTTP connections whenever the client is garbage collected. You can manually close the client using the .close() method if desired, or with a context manager that closes when exiting.

```python
from openai import OpenAI

with OpenAI() as client:
    # make requests here
    ...

# HTTP client is now closed
```

Microsoft Azure OpenAI

To use this library with Azure OpenAI, use the AzureOpenAI class instead of the OpenAI class.

Important

The Azure API shape differs from the core API shape, which means that the static types for responses / params won't always be correct.

```python
from openai import AzureOpenAI

# gets the API Key from environment variable AZURE_OPENAI_API_KEY
client = AzureOpenAI(
    # https://learn.microsoft.com/azure/ai-services/openai/reference#rest-api-versioning
    api_version="2023-07-01-preview",
    # https://learn.microsoft.com/azure/cognitive-services/openai/how-to/create-resource?pivots=web-portal#create-a-resource
    azure_endpoint="https://example-endpoint.openai.azure.com",
)

completion = client.chat.completions.create(
    model="deployment-name",  # e.g. gpt-35-instant
    messages=[
        {
            "role": "user",
            "content": "How do I output all files in a directory using Python?",
        },
    ],
)
print(completion.to_json())
```

In addition to the options provided in the base OpenAI client, the following options are provided:

  • azure_endpoint (or the AZURE_OPENAI_ENDPOINT environment variable)
  • azure_deployment
  • api_version (or the OPENAI_API_VERSION environment variable)
  • azure_ad_token (or the AZURE_OPENAI_AD_TOKEN environment variable)
  • azure_ad_token_provider

An example of using the client with Microsoft Entra ID (formerly known as Azure Active Directory) can be found here.
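
A minimal sketch of the token-provider approach, assuming the azure-identity package is installed:

```python
from azure.identity import DefaultAzureCredential, get_bearer_token_provider
from openai import AzureOpenAI

# Fetches Microsoft Entra ID tokens and refreshes them as needed.
token_provider = get_bearer_token_provider(
    DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
)

client = AzureOpenAI(
    api_version="2023-07-01-preview",
    azure_endpoint="https://example-endpoint.openai.azure.com",
    azure_ad_token_provider=token_provider,
)
```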

Versioning

This package generally follows SemVer conventions, though certain backwards-incompatible changes may be released as minor versions:

  1. Changes that only affect static types, without breaking runtime behavior.
  2. Changes to library internals which are technically public but not intended or documented for external use. (Please open a GitHub issue to let us know if you are relying on such internals.)
  3. Changes that we do not expect to impact the vast majority of users in practice.

We take backwards-compatibility seriously and work hard to ensure you can rely on a smooth upgrade experience.

We are keen for your feedback; please open an issue with questions, bugs, or suggestions.

Determining the installed version

If you've upgraded to the latest version but aren't seeing any new features you were expecting, then your Python environment is likely still using an older version.

You can determine the version that is being used at runtime with:

```python
import openai

print(openai.__version__)
```

Requirements

Python 3.8 or higher.

Contributing

See the contributing documentation.
