The official Python library for the OpenAI API
The OpenAI Python library provides convenient access to the OpenAI REST API from any Python 3.8+ application. The library includes type definitions for all request params and response fields, and offers both synchronous and asynchronous clients powered by httpx.
It is generated from our OpenAPI specification with Stainless.
The REST API documentation can be found on platform.openai.com. The full API of this library can be found in api.md.
> [!IMPORTANT]
> The SDK was rewritten in v1, which was released November 6th 2023. See the v1 migration guide, which includes scripts to automatically update your code.
```sh
# install from PyPI
pip install openai
```

The full API of this library can be found in api.md.
```python
import os
from openai import OpenAI

client = OpenAI(
    # This is the default and can be omitted
    api_key=os.environ.get("OPENAI_API_KEY"),
)

chat_completion = client.chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": "Say this is a test",
        }
    ],
    model="gpt-4o",
)
```
While you can provide an `api_key` keyword argument, we recommend using python-dotenv to add `OPENAI_API_KEY="My API Key"` to your `.env` file so that your API Key is not stored in source control.
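For example, a minimal sketch of that setup, assuming the separate python-dotenv package is installed and a `.env` file exists in the working directory:

```python
# Assumes `pip install python-dotenv` and a .env file containing
# OPENAI_API_KEY="My API Key" (both are setup steps, not part of this SDK).
from dotenv import load_dotenv

from openai import OpenAI

load_dotenv()  # reads .env and populates os.environ

client = OpenAI()  # picks up OPENAI_API_KEY from the environment
```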
You can also pass images as part of a chat message. With a hosted image:
```python
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {
                    "type": "image_url",
                    "image_url": {"url": f"{img_url}"},
                },
            ],
        }
    ],
)
```
With the image as a base64 encoded string:
```python
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:{img_type};base64,{img_b64_str}"},
                },
            ],
        }
    ],
)
```
When interacting with the API, some actions, such as starting a Run and adding files to vector stores, are asynchronous and take time to complete. The SDK includes helper functions which will poll the status until it reaches a terminal state and then return the resulting object. If an API method results in an action that could benefit from polling, there will be a corresponding version of the method ending in `_and_poll`.
For instance, to create a Run and poll until it reaches a terminal state, you can run:
```python
run = client.beta.threads.runs.create_and_poll(
    thread_id=thread.id,
    assistant_id=assistant.id,
)
```
More information on the lifecycle of a Run can be found in the Run Lifecycle Documentation.
When creating and interacting with vector stores, you can use polling helpers to monitor the status of operations. For convenience, we also provide a bulk upload helper that lets you upload several files at once.
```python
sample_files = [Path("sample-paper.pdf"), ...]

batch = await client.vector_stores.file_batches.upload_and_poll(
    store.id,
    files=sample_files,
)
```
The SDK also includes helpers to process streams and handle incoming events.
```python
with client.beta.threads.runs.stream(
    thread_id=thread.id,
    assistant_id=assistant.id,
    instructions="Please address the user as Jane Doe. The user has a premium account.",
) as stream:
    for event in stream:
        # Print the text from text delta events
        if event.type == "thread.message.delta" and event.data.delta.content:
            print(event.data.delta.content[0].text)
```
More information on streaming helpers can be found in the dedicated documentation: helpers.md
Simply import `AsyncOpenAI` instead of `OpenAI` and use `await` with each API call:
```python
import os
import asyncio
from openai import AsyncOpenAI

client = AsyncOpenAI(
    # This is the default and can be omitted
    api_key=os.environ.get("OPENAI_API_KEY"),
)


async def main() -> None:
    chat_completion = await client.chat.completions.create(
        messages=[
            {
                "role": "user",
                "content": "Say this is a test",
            }
        ],
        model="gpt-4o",
    )


asyncio.run(main())
```
Functionality between the synchronous and asynchronous clients is otherwise identical.
We provide support for streaming responses using Server-Sent Events (SSE).
```python
from openai import OpenAI

client = OpenAI()

stream = client.chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": "Say this is a test",
        }
    ],
    model="gpt-4o",
    stream=True,
)
for chunk in stream:
    print(chunk.choices[0].delta.content or "", end="")
```
The async client uses the exact same interface.
```python
import asyncio
from openai import AsyncOpenAI

client = AsyncOpenAI()


async def main():
    stream = await client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": "Say this is a test"}],
        stream=True,
    )
    async for chunk in stream:
        print(chunk.choices[0].delta.content or "", end="")


asyncio.run(main())
```
> [!IMPORTANT]
> We highly recommend instantiating client instances instead of relying on the global client.
We also expose a global client instance that is accessible in a similar fashion to versions prior to v1.
```python
import openai

# optional; defaults to `os.environ['OPENAI_API_KEY']`
openai.api_key = '...'

# all client options can be configured just like the `OpenAI` instantiation counterpart
openai.base_url = "https://..."
openai.default_headers = {"x-foo": "true"}

completion = openai.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": "How do I output all files in a directory using Python?",
        },
    ],
)
print(completion.choices[0].message.content)
```
The API is the exact same as the standard client instance-based API.
This is intended to be used within REPLs or notebooks for faster iteration, not in application code.
We recommend that you always instantiate a client (e.g., with `client = OpenAI()`) in application code because:
- It can be difficult to reason about where client options are configured
- It's not possible to change certain client options without potentially causing race conditions
- It's harder to mock for testing purposes
- It's not possible to control cleanup of network connections
Nested request parameters are TypedDicts. Responses are Pydantic models which also provide helper methods for things like:
- Serializing back into JSON, `model.to_json()`
- Converting to a dictionary, `model.to_dict()`
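For instance, a brief sketch of what those helpers look like in practice, reusing the `client` from the earlier examples:

```python
completion = client.chat.completions.create(
    messages=[{"role": "user", "content": "Say this is a test"}],
    model="gpt-4o",
)

print(completion.to_json())  # serialized JSON string
data = completion.to_dict()  # plain Python dict
print(data["choices"][0]["message"]["content"])
```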
Typed requests and responses provide autocomplete and documentation within your editor. If you would like to see type errors in VS Code to help catch bugs earlier, set `python.analysis.typeCheckingMode` to `basic`.
List methods in the OpenAI API are paginated.
This library provides auto-paginating iterators with each list response, so you do not have to request successive pages manually:
```python
from openai import OpenAI

client = OpenAI()

all_jobs = []
# Automatically fetches more pages as needed.
for job in client.fine_tuning.jobs.list(
    limit=20,
):
    # Do something with job here
    all_jobs.append(job)
print(all_jobs)
```
Or, asynchronously:
```python
import asyncio
from openai import AsyncOpenAI

client = AsyncOpenAI()


async def main() -> None:
    all_jobs = []
    # Iterate through items across all pages, issuing requests as needed.
    async for job in client.fine_tuning.jobs.list(
        limit=20,
    ):
        all_jobs.append(job)
    print(all_jobs)


asyncio.run(main())
```
Alternatively, you can use the `.has_next_page()`, `.next_page_info()`, or `.get_next_page()` methods for more granular control when working with pages:
```python
first_page = await client.fine_tuning.jobs.list(
    limit=20,
)
if first_page.has_next_page():
    print(f"will fetch next page using these details: {first_page.next_page_info()}")
    next_page = await first_page.get_next_page()
    print(f"number of items we just fetched: {len(next_page.data)}")

# Remove `await` for non-async usage.
```
Or just work directly with the returned data:
```python
first_page = await client.fine_tuning.jobs.list(
    limit=20,
)

print(f"next page cursor: {first_page.after}")  # => "next page cursor: ..."
for job in first_page.data:
    print(job.id)

# Remove `await` for non-async usage.
```
Nested parameters are dictionaries, typed using `TypedDict`, for example:
```python
from openai import OpenAI

client = OpenAI()

completion = client.chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": "Can you generate an example json object describing a fruit?",
        }
    ],
    model="gpt-4o",
    response_format={"type": "json_object"},
)
```
Request parameters that correspond to file uploads can be passed as `bytes`, a `PathLike` instance, or a tuple of `(filename, contents, media type)`.
```python
from pathlib import Path
from openai import OpenAI

client = OpenAI()

client.files.create(
    file=Path("input.jsonl"),
    purpose="fine-tune",
)
```
The async client uses the exact same interface. If you pass a `PathLike` instance, the file contents will be read asynchronously automatically.
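For instance, a minimal async variant of the upload above:

```python
import asyncio
from pathlib import Path

from openai import AsyncOpenAI

client = AsyncOpenAI()


async def main() -> None:
    # Because a PathLike is passed, the file contents are read asynchronously.
    await client.files.create(
        file=Path("input.jsonl"),
        purpose="fine-tune",
    )


asyncio.run(main())
```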
When the library is unable to connect to the API (for example, due to network connection problems or a timeout), a subclass of `openai.APIConnectionError` is raised.
When the API returns a non-success status code (that is, a 4xx or 5xx response), a subclass of `openai.APIStatusError` is raised, containing `status_code` and `response` properties.
All errors inherit from `openai.APIError`.
```python
import openai
from openai import OpenAI

client = OpenAI()

try:
    client.fine_tuning.jobs.create(
        model="gpt-4o",
        training_file="file-abc123",
    )
except openai.APIConnectionError as e:
    print("The server could not be reached")
    print(e.__cause__)  # an underlying Exception, likely raised within httpx.
except openai.RateLimitError as e:
    print("A 429 status code was received; we should back off a bit.")
except openai.APIStatusError as e:
    print("Another non-200-range status code was received")
    print(e.status_code)
    print(e.response)
```
Error codes are as follows:
| Status Code | Error Type |
|---|---|
| 400 | BadRequestError |
| 401 | AuthenticationError |
| 403 | PermissionDeniedError |
| 404 | NotFoundError |
| 422 | UnprocessableEntityError |
| 429 | RateLimitError |
| >=500 | InternalServerError |
| N/A | APIConnectionError |
For more information on debugging requests, see these docs.
All object responses in the SDK provide a `_request_id` property which is added from the `x-request-id` response header, so that you can quickly log failing requests and report them back to OpenAI.
```python
completion = await client.chat.completions.create(
    messages=[{"role": "user", "content": "Say this is a test"}], model="gpt-4"
)
print(completion._request_id)  # req_123
```
Note that unlike other properties that use an `_` prefix, the `_request_id` property is public. Unless documented otherwise, all other `_` prefix properties, methods and modules are private.
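When a request fails with a status error, the same request ID is what you typically want to log; a sketch, assuming `APIStatusError` exposes a `request_id` attribute, reusing the `client` from earlier examples:

```python
import openai

try:
    completion = client.chat.completions.create(
        messages=[{"role": "user", "content": "Say this is a test"}],
        model="gpt-4o",
    )
except openai.APIStatusError as e:
    # Assumption: status errors carry the request ID of the failing response,
    # which you can include when reporting the issue to OpenAI.
    print(e.request_id)
    raise
```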
Certain errors are automatically retried 2 times by default, with a short exponential backoff. Connection errors (for example, due to a network connectivity problem), 408 Request Timeout, 409 Conflict, 429 Rate Limit, and >=500 Internal errors are all retried by default.
You can use the `max_retries` option to configure or disable retry settings:
```python
from openai import OpenAI

# Configure the default for all requests:
client = OpenAI(
    # default is 2
    max_retries=0,
)

# Or, configure per-request:
client.with_options(max_retries=5).chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": "How can I get the name of the current day in JavaScript?",
        }
    ],
    model="gpt-4o",
)
```
By default requests time out after 10 minutes. You can configure this with a `timeout` option, which accepts a float or an `httpx.Timeout` object:
```python
import httpx
from openai import OpenAI

# Configure the default for all requests:
client = OpenAI(
    # 20 seconds (default is 10 minutes)
    timeout=20.0,
)

# More granular control:
client = OpenAI(
    timeout=httpx.Timeout(60.0, read=5.0, write=10.0, connect=2.0),
)

# Override per-request:
client.with_options(timeout=5.0).chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": "How can I list all files in a directory using Python?",
        }
    ],
    model="gpt-4o",
)
```
On timeout, an `APITimeoutError` is thrown.
Note that requests that time out are retried twice by default.
We use the standard library `logging` module.
You can enable logging by setting the environment variable `OPENAI_LOG` to `debug`:
```sh
$ export OPENAI_LOG=debug
```

In an API response, a field may be explicitly `null`, or missing entirely; in either case, its value is `None` in this library. You can differentiate the two cases with `.model_fields_set`:
```python
if response.my_field is None:
    if 'my_field' not in response.model_fields_set:
        print('Got json like {}, without a "my_field" key present at all.')
    else:
        print('Got json like {"my_field": null}.')
```
The "raw" Response object can be accessed by prefixing.with_raw_response. to any HTTP method call, e.g.,
```python
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.with_raw_response.create(
    messages=[{
        "role": "user",
        "content": "Say this is a test",
    }],
    model="gpt-4o",
)
print(response.headers.get('X-My-Header'))

completion = response.parse()  # get the object that `chat.completions.create()` would have returned
print(completion)
```
These methods return a `LegacyAPIResponse` object. This is a legacy class, as we're changing it slightly in the next major version.
For the sync client this will mostly be the same, with the exception that `content` and `text` will be methods instead of properties. In the async client, all methods will be async.
A migration script will be provided, and the migration in general should be smooth.
The above interface eagerly reads the full response body when you make the request, which may not always be what you want.
To stream the response body, use `.with_streaming_response` instead, which requires a context manager and only reads the response body once you call `.read()`, `.text()`, `.json()`, `.iter_bytes()`, `.iter_text()`, `.iter_lines()` or `.parse()`. In the async client, these are async methods.
As such, `.with_streaming_response` methods return a different `APIResponse` object, and the async client returns an `AsyncAPIResponse` object.
```python
with client.chat.completions.with_streaming_response.create(
    messages=[
        {
            "role": "user",
            "content": "Say this is a test",
        }
    ],
    model="gpt-4o",
) as response:
    print(response.headers.get("X-My-Header"))

    for line in response.iter_lines():
        print(line)
```
The context manager is required so that the response will reliably be closed.
This library is typed for convenient access to the documented API.
If you need to access undocumented endpoints, params, or response properties, the library can still be used.
To make requests to undocumented endpoints, you can make requests using `client.get`, `client.post`, and other HTTP verbs. Options on the client (such as retries) will be respected when making this request.
```python
import httpx

response = client.post(
    "/foo",
    cast_to=httpx.Response,
    body={"my_param": True},
)

print(response.headers.get("x-foo"))
```
If you want to explicitly send an extra param, you can do so with the `extra_query`, `extra_body`, and `extra_headers` request options.
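For example, a sketch of sending extra params on an otherwise ordinary call; the header and param names here are illustrative, not documented API fields:

```python
from openai import OpenAI

client = OpenAI()

completion = client.chat.completions.create(
    messages=[{"role": "user", "content": "Say this is a test"}],
    model="gpt-4o",
    # Illustrative names only; these are not documented API fields.
    extra_headers={"x-my-header": "true"},
    extra_query={"my_query_param": "value"},
    extra_body={"my_body_param": "value"},
)
```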
To access undocumented response properties, you can access the extra fields like `response.unknown_prop`. You can also get all the extra fields on the Pydantic model as a dict with `response.model_extra`.
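For instance, a sketch in which `unknown_prop` is a hypothetical undocumented field, reusing the `client` from earlier examples:

```python
completion = client.chat.completions.create(
    messages=[{"role": "user", "content": "Say this is a test"}],
    model="gpt-4o",
)

# `unknown_prop` is hypothetical; attribute access raises AttributeError
# if the API did not actually return such a field.
print(completion.unknown_prop)

# All extra fields as a dict (empty or None when the response had none):
print(completion.model_extra)
```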
You can directly override the `httpx` client to customize it for your use case, including:
- Support for proxies
- Custom transports
- Additional advanced functionality
```python
import httpx
from openai import OpenAI, DefaultHttpxClient

client = OpenAI(
    # Or use the `OPENAI_BASE_URL` env var
    base_url="http://my.test.server.example.com:8083/v1",
    http_client=DefaultHttpxClient(
        proxies="http://my.test.proxy.example.com",
        transport=httpx.HTTPTransport(local_address="0.0.0.0"),
    ),
)
```
You can also customize the client on a per-request basis by using `with_options()`:
```python
client.with_options(http_client=DefaultHttpxClient(...))
```
By default the library closes underlying HTTP connections whenever the client is garbage collected. You can manually close the client using the `.close()` method if desired, or with a context manager that closes when exiting.
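A sketch of both options, assuming the client supports the standard context-manager protocol as described above:

```python
from openai import OpenAI

# Close explicitly when you are done with the client:
client = OpenAI()
try:
    client.chat.completions.create(
        messages=[{"role": "user", "content": "Say this is a test"}],
        model="gpt-4o",
    )
finally:
    client.close()

# Or use a context manager, which closes the client on exit:
with OpenAI() as client:
    client.chat.completions.create(
        messages=[{"role": "user", "content": "Say this is a test"}],
        model="gpt-4o",
    )
```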
To use this library with Azure OpenAI, use the `AzureOpenAI` class instead of the `OpenAI` class.
> [!IMPORTANT]
> The Azure API shape differs from the core API shape, which means that the static types for responses / params won't always be correct.
```python
from openai import AzureOpenAI

# gets the API Key from environment variable AZURE_OPENAI_API_KEY
client = AzureOpenAI(
    # https://learn.microsoft.com/azure/ai-services/openai/reference#rest-api-versioning
    api_version="2023-07-01-preview",
    # https://learn.microsoft.com/azure/cognitive-services/openai/how-to/create-resource?pivots=web-portal#create-a-resource
    azure_endpoint="https://example-endpoint.openai.azure.com",
)

completion = client.chat.completions.create(
    model="deployment-name",  # e.g. gpt-35-instant
    messages=[
        {
            "role": "user",
            "content": "How do I output all files in a directory using Python?",
        },
    ],
)
print(completion.to_json())
```
In addition to the options provided in the base `OpenAI` client, the following options are provided:
- `azure_endpoint` (or the `AZURE_OPENAI_ENDPOINT` environment variable)
- `azure_deployment`
- `api_version` (or the `OPENAI_API_VERSION` environment variable)
- `azure_ad_token` (or the `AZURE_OPENAI_AD_TOKEN` environment variable)
- `azure_ad_token_provider`
An example of using the client with Microsoft Entra ID (formerly known as Azure Active Directory) can be found here.
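As a sketch of what such a setup can look like, assuming the separate azure-identity package is installed (the token scope shown is the conventional Cognitive Services scope, not something this library defines):

```python
# Assumes `pip install azure-identity`.
from azure.identity import DefaultAzureCredential, get_bearer_token_provider

from openai import AzureOpenAI

token_provider = get_bearer_token_provider(
    DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
)

client = AzureOpenAI(
    api_version="2023-07-01-preview",
    azure_endpoint="https://example-endpoint.openai.azure.com",
    # Tokens are fetched and refreshed via the provider instead of a static key.
    azure_ad_token_provider=token_provider,
)
```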
This package generally follows SemVer conventions, though certain backwards-incompatible changes may be released as minor versions:
- Changes that only affect static types, without breaking runtime behavior.
- Changes to library internals which are technically public but not intended or documented for external use. (Please open a GitHub issue to let us know if you are relying on such internals.)
- Changes that we do not expect to impact the vast majority of users in practice.
We take backwards-compatibility seriously and work hard to ensure you can rely on a smooth upgrade experience.
We are keen for your feedback; please open an issue with questions, bugs, or suggestions.
If you've upgraded to the latest version but aren't seeing any new features you were expecting, then your Python environment is likely still using an older version.
You can determine the version that is being used at runtime with:
```python
import openai
print(openai.__version__)
```
This library requires Python 3.8 or higher.