The OpenAI Python library provides convenient access to the OpenAI API from applications written in the Python language. It includes a pre-defined set of classes for API resources that initialize themselves dynamically from API responses, which makes it compatible with a wide range of versions of the OpenAI API.
You can find usage examples for the OpenAI Python library in our API reference and the OpenAI Cookbook.
You don't need this source code unless you want to modify the package. If you just want to use the package, run:

```sh
pip install --upgrade openai
```
Install from source with:

```sh
python setup.py install
```
Install dependencies for `openai.embeddings_utils`:

```sh
pip install openai[embeddings]
```
Install support for Weights & Biases:

```sh
pip install openai[wandb]
```
Data libraries like `numpy` and `pandas` are not installed by default due to their size. They're needed for some functionality of this library, but generally not for talking to the API. If you encounter a `MissingDependencyError`, install them with:

```sh
pip install openai[datalib]
```
The library needs to be configured with your account's secret key, which is available on the website. Either set it as the `OPENAI_API_KEY` environment variable before using the library:

```sh
export OPENAI_API_KEY='sk-...'
```
Or set `openai.api_key` to its value:

```python
import openai

openai.api_key = "sk-..."

# list models
models = openai.Model.list()

# print the first model's id
print(models.data[0].id)

# create a chat completion
chat_completion = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello world"}],
)

# print the chat completion
print(chat_completion.choices[0].message.content)
```
All endpoints have a `.create` method that supports a `request_timeout` param. This param takes a `Union[float, Tuple[float, float]]` and will raise an `openai.error.Timeout` error if the request exceeds that time in seconds (see: https://requests.readthedocs.io/en/latest/user/quickstart/#timeouts).
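As a rough illustration of the accepted types (the `as_connect_read` helper below is hypothetical and not part of the library), a single float is a budget for the whole request, while a tuple is interpreted as separate connect and read timeouts, mirroring the `requests` convention linked above:

```python
from typing import Tuple, Union

def as_connect_read(timeout: Union[float, Tuple[float, float]]) -> Tuple[float, float]:
    # Hypothetical helper, shown only to illustrate the accepted types:
    # normalize the Union[float, Tuple[float, float]] form of request_timeout
    # into an explicit (connect, read) pair.
    if isinstance(timeout, tuple):
        connect, read = timeout
        return (connect, read)
    # A single float applies to both the connect and the read phase.
    return (timeout, timeout)

print(as_connect_read(10.0))        # one budget for both phases
print(as_connect_read((3.05, 27)))  # separate connect/read budgets
```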
In order to use the library with Microsoft Azure endpoints, you need to set the `api_type`, `api_base` and `api_version` in addition to the `api_key`. The `api_type` must be set to 'azure' and the others correspond to the properties of your endpoint. In addition, the deployment name must be passed as the engine parameter.
```python
import openai

openai.api_type = "azure"
openai.api_key = "..."
openai.api_base = "https://example-endpoint.openai.azure.com"
openai.api_version = "2023-05-15"

# create a chat completion
chat_completion = openai.ChatCompletion.create(
    deployment_id="deployment-name",
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello world"}],
)

# print the completion
print(chat_completion.choices[0].message.content)
```
Please note that for the moment, the Microsoft Azure endpoints can only be used for completion, embedding, and fine-tuning operations. For a detailed example of how to use fine-tuning and other operations using Azure endpoints, please check out the following Jupyter notebooks:
In order to use Microsoft Active Directory to authenticate to your Azure endpoint, you need to set the `api_type` to "azure_ad" and pass the acquired credential token to `api_key`. The rest of the parameters need to be set as specified in the previous section.
```python
from azure.identity import DefaultAzureCredential
import openai

# Request credential
default_credential = DefaultAzureCredential()
token = default_credential.get_token("https://cognitiveservices.azure.com/.default")

# Setup parameters
openai.api_type = "azure_ad"
openai.api_key = token.token
openai.api_base = "https://example-endpoint.openai.azure.com/"
openai.api_version = "2023-05-15"

# ...
```
This library additionally provides an `openai` command-line utility which makes it easy to interact with the API from your terminal. Run `openai api -h` for usage.
```sh
# list models
openai api models.list

# create a chat completion (gpt-3.5-turbo, gpt-4, etc.)
openai api chat_completions.create -m gpt-3.5-turbo -g user "Hello world"

# create a completion (text-davinci-003, text-davinci-002, ada, babbage, curie, davinci, etc.)
openai api completions.create -m ada -p "Hello world"

# generate images via DALL·E API
openai api image.create -p "two dogs playing chess, cartoon" -n 1

# using openai through a proxy
openai --proxy=http://proxy.com api models.list
```
Examples of how to use this Python library to accomplish various tasks can be found in the OpenAI Cookbook. It contains code examples for:
- Classification using fine-tuning
- Clustering
- Code search
- Customizing embeddings
- Question answering from a corpus of documents
- Recommendations
- Visualization of embeddings
- And more
Prior to July 2022, this OpenAI Python library hosted code examples in its examples folder, but since then all examples have been migrated to the OpenAI Cookbook.
Conversational models such as `gpt-3.5-turbo` can be called using the chat completions endpoint.
```python
import openai

openai.api_key = "sk-..."  # supply your API key however you choose

completion = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello world"}],
)
print(completion.choices[0].message.content)
```
Text models such as `text-davinci-003`, `text-davinci-002` and earlier (`ada`, `babbage`, `curie`, `davinci`, etc.) can be called using the completions endpoint.
```python
import openai

openai.api_key = "sk-..."  # supply your API key however you choose

completion = openai.Completion.create(model="text-davinci-003", prompt="Hello world")
print(completion.choices[0].text)
```
In the OpenAI Python library, an embedding represents a text string as a fixed-length vector of floating point numbers. Embeddings are designed to measure the similarity or relevance between text strings.
To get an embedding for a text string, you can use the embeddings method as follows in Python:
```python
import openai

openai.api_key = "sk-..."  # supply your API key however you choose

# choose text to embed
text_string = "sample text"

# choose an embedding
model_id = "text-similarity-davinci-001"

# compute the embedding of the text
embedding = openai.Embedding.create(input=text_string, model=model_id)["data"][0]["embedding"]
```
An example of how to call the embeddings method is shown in this get embeddings notebook.
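Since embeddings are built to measure similarity between text strings, a common follow-up step is computing the cosine similarity between two embedding vectors. A minimal pure-Python sketch, with short toy vectors standing in for real embedding outputs:

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity: dot(a, b) / (||a|| * ||b||).
    # Values near 1.0 mean the vectors (and hence the texts) are similar.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional vectors standing in for real embedding outputs,
# which are much higher-dimensional.
v1 = [0.1, 0.2, 0.3]
v2 = [0.1, 0.2, 0.3]
v3 = [-0.3, 0.1, -0.2]

print(cosine_similarity(v1, v2))  # identical vectors -> similarity of 1
print(cosine_similarity(v1, v3))
```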
Examples of how to use embeddings are shared in the following Jupyter notebooks:
- Classification using embeddings
- Clustering using embeddings
- Code search using embeddings
- Semantic text search using embeddings
- User and product embeddings
- Zero-shot classification using embeddings
- Recommendation using embeddings
For more information on embeddings and the types of embeddings OpenAI offers, read the embeddings guide in the OpenAI documentation.
Fine-tuning a model on training data can both improve the results (by giving the model more examples to learn from) and reduce the cost/latency of API calls (chiefly through reducing the need to include training examples in prompts).
Examples of fine-tuning are shared in the following Jupyter notebooks:
- Classification with fine-tuning (a simple notebook that shows the steps required for fine-tuning)
- Fine-tuning a model that answers questions about the 2020 Olympics
Sync your fine-tunes to Weights & Biases to track experiments, models, and datasets in your central dashboard with:

```sh
openai wandb sync
```
For more information on fine-tuning, read the fine-tuning guide in the OpenAI documentation.
OpenAI provides a Moderation endpoint that can be used to check whether content complies with the OpenAI content policy.
```python
import openai

openai.api_key = "sk-..."  # supply your API key however you choose

moderation_resp = openai.Moderation.create(
    input="Here is some perfectly innocuous text that follows all OpenAI content policies."
)
```
See the moderation guide for more details.
```python
import openai

openai.api_key = "sk-..."  # supply your API key however you choose

image_resp = openai.Image.create(prompt="two dogs playing chess, oil painting", n=4, size="512x512")
```
```python
import openai

openai.api_key = "sk-..."  # supply your API key however you choose

f = open("path/to/file.mp3", "rb")
transcript = openai.Audio.transcribe("whisper-1", f)
```
Async support is available in the API by prepending `a` to a network-bound method:
```python
import openai

openai.api_key = "sk-..."  # supply your API key however you choose

async def create_chat_completion():
    chat_completion_resp = await openai.ChatCompletion.acreate(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Hello world"}],
    )
```
To make async requests more efficient, you can pass in your own `aiohttp.ClientSession`, but you must manually close the client session at the end of your program/event loop:
```python
import openai
from aiohttp import ClientSession

openai.aiosession.set(ClientSession())

# At the end of your program, close the http session
await openai.aiosession.get().close()
```
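The async methods compose with standard `asyncio` tooling; for example, several requests can run concurrently with `asyncio.gather`. A sketch of that pattern, with a stub coroutine standing in for `openai.ChatCompletion.acreate` so it runs without network access:

```python
import asyncio

async def fake_acreate(prompt: str) -> str:
    # Stub standing in for openai.ChatCompletion.acreate: pretend each
    # request takes some network time, then return a canned response.
    await asyncio.sleep(0.01)
    return f"response to {prompt!r}"

async def main():
    prompts = ["Hello world", "How are you?", "Goodbye"]
    # gather() schedules all coroutines concurrently and returns their
    # results in the same order as the inputs.
    results = await asyncio.gather(*(fake_acreate(p) for p in prompts))
    for r in results:
        print(r)
    return results

asyncio.run(main())
```

Because the stub requests sleep concurrently rather than one after another, the whole batch takes roughly as long as a single request.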
See the usage guide for more details.
- Python 3.7.1+
In general, we want to support the versions of Python that our customers are using. If you run into problems with any version issues, please let us know on our support page.
This library is forked from the Stripe Python Library.