# OpenAI Python API library

The official Python library for the OpenAI API
The OpenAI Python library provides convenient access to the OpenAI REST API from any Python 3.8+ application. The library includes type definitions for all request params and response fields, and offers both synchronous and asynchronous clients powered by httpx.
The REST API documentation can be found on platform.openai.com. The full API of this library can be found in api.md.
## Installation

```sh
# install from PyPI
pip install openai
```
## Usage

```python
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ.get("OPENAI_API_KEY"),  # This is the default and can be omitted
)

chat_completion = client.chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": "Say this is a test",
        }
    ],
    model="gpt-4o",
)
```
While you can provide an `api_key` keyword argument, we recommend using python-dotenv to add `OPENAI_API_KEY="My API Key"` to your `.env` file so that your API key is not stored in source control.
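For example, a minimal sketch of the python-dotenv approach (assuming a `.env` file next to your script):

```python
from dotenv import load_dotenv  # pip install python-dotenv
from openai import OpenAI

load_dotenv()  # loads OPENAI_API_KEY from .env into the environment

# The client reads OPENAI_API_KEY from the environment by default.
client = OpenAI()
```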
## Async usage

Simply import `AsyncOpenAI` instead of `OpenAI` and use `await` with each API call:
```python
import os
import asyncio

from openai import AsyncOpenAI

client = AsyncOpenAI(
    api_key=os.environ.get("OPENAI_API_KEY"),  # This is the default and can be omitted
)


async def main() -> None:
    chat_completion = await client.chat.completions.create(
        messages=[
            {
                "role": "user",
                "content": "Say this is a test",
            }
        ],
        model="gpt-4o",
    )


asyncio.run(main())
```
Functionality between the synchronous and asynchronous clients is otherwise identical.
## Streaming responses

We provide support for streaming responses using Server-Sent Events (SSE).
```python
from openai import OpenAI

client = OpenAI()

stream = client.chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": "Say this is a test",
        }
    ],
    model="gpt-4o",
    stream=True,
)
for chat_completion in stream:
    print(chat_completion)
```
The async client uses the exact same interface.
```python
import asyncio

from openai import AsyncOpenAI

client = AsyncOpenAI()


async def main() -> None:
    stream = await client.chat.completions.create(
        messages=[
            {
                "role": "user",
                "content": "Say this is a test",
            }
        ],
        model="gpt-4o",
        stream=True,
    )
    async for chat_completion in stream:
        print(chat_completion)


asyncio.run(main())
```
Nested request parameters are `TypedDict`s. Responses are Pydantic models which also provide helper methods for things like:

- Serializing back into JSON, `model.to_json()`
- Converting to a dictionary, `model.to_dict()`
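For instance, a quick sketch reusing the `chat_completion` from the first example above:

```python
# Serialize the full response to a JSON string:
print(chat_completion.to_json())

# Or convert it to a plain dict:
as_dict = chat_completion.to_dict()
print(as_dict["choices"][0]["message"]["content"])
```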
Typed requests and responses provide autocomplete and documentation within your editor. If you would like to see type errors in VS Code to help catch bugs earlier, set `python.analysis.typeCheckingMode` to `basic`.
## Pagination

List methods in the OpenAI API are paginated.
This library provides auto-paginating iterators with each list response, so you do not have to request successive pages manually:
```python
from openai import OpenAI

client = OpenAI()

all_jobs = []
# Automatically fetches more pages as needed.
for job in client.fine_tuning.jobs.list(
    limit=20,
):
    # Do something with job here
    all_jobs.append(job)
print(all_jobs)
```
Or, asynchronously:
```python
import asyncio

from openai import AsyncOpenAI

client = AsyncOpenAI()


async def main() -> None:
    all_jobs = []
    # Iterate through items across all pages, issuing requests as needed.
    async for job in client.fine_tuning.jobs.list(
        limit=20,
    ):
        all_jobs.append(job)
    print(all_jobs)


asyncio.run(main())
```
Alternatively, you can use the `.has_next_page()`, `.next_page_info()`, or `.get_next_page()` methods for more granular control when working with pages:
```python
first_page = await client.fine_tuning.jobs.list(
    limit=20,
)
if first_page.has_next_page():
    print(f"will fetch next page using these details: {first_page.next_page_info()}")
    next_page = await first_page.get_next_page()
    print(f"number of items we just fetched: {len(next_page.data)}")

# Remove `await` for non-async usage.
```
Or just work directly with the returned data:
```python
first_page = await client.fine_tuning.jobs.list(
    limit=20,
)
print(f"next page cursor: {first_page.after}")  # => "next page cursor: ..."
for job in first_page.data:
    print(job.id)

# Remove `await` for non-async usage.
```
## Nested params

Nested parameters are dictionaries, typed using `TypedDict`, for example:
```python
from openai import OpenAI

client = OpenAI()

completion = client.chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": "Can you generate an example json object describing a fruit?",
        }
    ],
    model="gpt-4o",
    response_format={"type": "json_object"},
)
```
## File uploads

Request parameters that correspond to file uploads can be passed as `bytes`, a `PathLike` instance or a tuple of `(filename, contents, media type)`.
```python
from pathlib import Path
from openai import OpenAI

client = OpenAI()

client.files.create(
    file=Path("input.jsonl"),
    purpose="fine-tune",
)
```
The async client uses the exact same interface. If you pass a `PathLike` instance, the file contents will be read asynchronously automatically.
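For example, a minimal async sketch of the same upload (assuming an `input.jsonl` file exists in the working directory):

```python
import asyncio
from pathlib import Path

from openai import AsyncOpenAI

client = AsyncOpenAI()


async def main() -> None:
    # The file contents are read asynchronously before upload.
    await client.files.create(
        file=Path("input.jsonl"),
        purpose="fine-tune",
    )


asyncio.run(main())
```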
## Handling errors

When the library is unable to connect to the API (for example, due to network connection problems or a timeout), a subclass of `openai.APIConnectionError` is raised.
When the API returns a non-success status code (that is, a 4xx or 5xx response), a subclass of `openai.APIStatusError` is raised, containing `status_code` and `response` properties.
All errors inherit from `openai.APIError`.
```python
import openai
from openai import OpenAI

client = OpenAI()

try:
    client.fine_tuning.jobs.create(
        model="gpt-4o",
        training_file="file-abc123",
    )
except openai.APIConnectionError as e:
    print("The server could not be reached")
    print(e.__cause__)  # an underlying Exception, likely raised within httpx.
except openai.RateLimitError as e:
    print("A 429 status code was received; we should back off a bit.")
except openai.APIStatusError as e:
    print("Another non-200-range status code was received")
    print(e.status_code)
    print(e.response)
```
Error codes are as follows:
| Status Code | Error Type |
|---|---|
| 400 | BadRequestError |
| 401 | AuthenticationError |
| 403 | PermissionDeniedError |
| 404 | NotFoundError |
| 422 | UnprocessableEntityError |
| 429 | RateLimitError |
| >=500 | InternalServerError |
| N/A | APIConnectionError |
### Retries

Certain errors are automatically retried 2 times by default, with a short exponential backoff. Connection errors (for example, due to a network connectivity problem), 408 Request Timeout, 409 Conflict, 429 Rate Limit, and >=500 Internal errors are all retried by default.
You can use the `max_retries` option to configure or disable retry settings:
```python
from openai import OpenAI

# Configure the default for all requests:
client = OpenAI(
    # default is 2
    max_retries=0,
)

# Or, configure per-request:
client.with_options(max_retries=5).chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": "How can I get the name of the current day in JavaScript?",
        }
    ],
    model="gpt-4o",
)
```
### Timeouts

By default requests time out after 10 minutes. You can configure this with a `timeout` option, which accepts a float or an `httpx.Timeout` object:
```python
import httpx
from openai import OpenAI

# Configure the default for all requests:
client = OpenAI(
    # 20 seconds (default is 10 minutes)
    timeout=20.0,
)

# More granular control:
client = OpenAI(
    timeout=httpx.Timeout(60.0, read=5.0, write=10.0, connect=2.0),
)

# Override per-request:
client.with_options(timeout=5.0).chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": "How can I list all files in a directory using Python?",
        }
    ],
    model="gpt-4o",
)
```
On timeout, an `APITimeoutError` is thrown.

Note that requests that time out are retried twice by default.
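For instance, a minimal sketch of catching the timeout once those retries are exhausted:

```python
import openai
from openai import OpenAI

client = OpenAI()

try:
    client.with_options(timeout=5.0).chat.completions.create(
        messages=[{"role": "user", "content": "Say this is a test"}],
        model="gpt-4o",
    )
except openai.APITimeoutError:
    # Raised only after the default retries have also timed out.
    print("The request timed out")
```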
## Logging

We use the standard library `logging` module.

You can enable logging by setting the environment variable `OPENAI_LOG` to `info`:

```sh
$ export OPENAI_LOG=info
```

Or to `debug` for more verbose logging.
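Since logging goes through the standard `logging` module, you can also attach handlers yourself; a hedged sketch, assuming the library's logger is named `"openai"`:

```python
import logging

# Route the library's log records through your own logging setup.
logging.basicConfig(level=logging.DEBUG)
logging.getLogger("openai").setLevel(logging.DEBUG)
```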
In an API response, a field may be explicitly `null`, or missing entirely; in either case, its value is `None` in this library. You can differentiate the two cases with `.model_fields_set`:
```python
if response.my_field is None:
    if 'my_field' not in response.model_fields_set:
        print('Got json like {}, without a "my_field" key present at all.')
    else:
        print('Got json like {"my_field": null}.')
```
The "raw" Response object can be accessed by prefixing.with_raw_response. to any HTTP method call, e.g.,
```python
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.with_raw_response.create(
    messages=[
        {
            "role": "user",
            "content": "Say this is a test",
        }
    ],
    model="gpt-4o",
)
print(response.headers.get('X-My-Header'))

completion = response.parse()  # get the object that `chat.completions.create()` would have returned
print(completion)
```
These methods return a `LegacyAPIResponse` object. This is a legacy class as we're changing it slightly in the next major version.

For the sync client this will mostly be the same, with the exception of `content` & `text`, which will be methods instead of properties. In the async client, all methods will be async.

A migration script will be provided & the migration in general should be smooth.
The above interface eagerly reads the full response body when you make the request, which may not always be what you want.
To stream the response body, use `.with_streaming_response` instead, which requires a context manager and only reads the response body once you call `.read()`, `.text()`, `.json()`, `.iter_bytes()`, `.iter_text()`, `.iter_lines()` or `.parse()`. In the async client, these are async methods.
As such, `.with_streaming_response` methods return a different `APIResponse` object, and the async client returns an `AsyncAPIResponse` object.
```python
with client.chat.completions.with_streaming_response.create(
    messages=[
        {
            "role": "user",
            "content": "Say this is a test",
        }
    ],
    model="gpt-4o",
) as response:
    print(response.headers.get("X-My-Header"))

    for line in response.iter_lines():
        print(line)
```
The context manager is required so that the response will reliably be closed.
## Making custom/undocumented requests

This library is typed for convenient access to the documented API.
If you need to access undocumented endpoints, params, or response properties, the library can still be used.
To make requests to undocumented endpoints, you can make requests using `client.get`, `client.post`, and other HTTP verbs. Options on the client will be respected (such as retries) when making this request.
```python
import httpx

response = client.post(
    "/foo",
    cast_to=httpx.Response,
    body={"my_param": True},
)
print(response.headers.get("x-foo"))
```
If you want to explicitly send an extra param, you can do so with the `extra_query`, `extra_body`, and `extra_headers` request options.
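For example, a brief sketch of sending an undocumented body param and header on an otherwise normal call (`my_param` and `X-My-Header` are made-up names for illustration):

```python
completion = client.chat.completions.create(
    messages=[{"role": "user", "content": "Say this is a test"}],
    model="gpt-4o",
    # Merged into the JSON request body and request headers respectively:
    extra_body={"my_param": True},
    extra_headers={"X-My-Header": "value"},
)
```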
To access undocumented response properties, you can access the extra fields like `response.unknown_prop`. You can also get all the extra fields on the Pydantic model as a dict with `response.model_extra`.
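Continuing the sketch above, a brief illustration (`unknown_prop` is a stand-in for whatever undocumented field the API actually returned):

```python
# An undocumented field can be read off the model directly, if the API sent it:
print(completion.unknown_prop)

# Or inspect every extra field at once via Pydantic's model_extra dict:
print(completion.model_extra)
```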
## Configuring the HTTP client

You can directly override the `httpx` client to customize it for your use case, including:

- Support for proxies
- Custom transports
- Additional advanced functionality
```python
import httpx
from openai import OpenAI, DefaultHttpxClient

client = OpenAI(
    # Or use the `OPENAI_BASE_URL` env var
    base_url="http://my.test.server.example.com:8083",
    http_client=DefaultHttpxClient(
        proxy="http://my.test.proxy.example.com",
        transport=httpx.HTTPTransport(local_address="0.0.0.0"),
    ),
)
```
You can also customize the client on a per-request basis by using `with_options()`:
```python
client.with_options(http_client=DefaultHttpxClient(...))
```
## Managing HTTP resources

By default the library closes underlying HTTP connections whenever the client is garbage collected. You can manually close the client using the `.close()` method if desired, or with a context manager that closes when exiting.
```python
from openai import OpenAI

with OpenAI() as client:
    # make requests here
    ...

# HTTP client is now closed
```
## Versioning

This package generally follows SemVer conventions, though certain backwards-incompatible changes may be released as minor versions:
- Changes that only affect static types, without breaking runtime behavior.
- Changes to library internals which are technically public but not intended or documented for external use. (Please open a GitHub issue to let us know if you are relying on such internals.)
- Changes that we do not expect to impact the vast majority of users in practice.
We take backwards-compatibility seriously and work hard to ensure you can rely on a smooth upgrade experience.
We are keen for your feedback; please open an issue with questions, bugs, or suggestions.
If you've upgraded to the latest version but aren't seeing any new features you were expecting, then your Python environment is likely still using an older version.
You can determine the version that is being used at runtime with:
```python
import openai

print(openai.__version__)
```
## Requirements

Python 3.8 or higher.