The official Python library for the OpenAI API

The OpenAI Python library provides convenient access to the OpenAI API from applications written in the Python language. It includes a pre-defined set of classes for API resources that initialize themselves dynamically from API responses, which makes it compatible with a wide range of versions of the OpenAI API.

You can find usage examples for the OpenAI Python library in our API reference and the OpenAI Cookbook.

Beta Release

Important

We're preparing to release version 1.0 of the OpenAI Python library.

This new version will be a major release and will include breaking changes. We're releasing this beta version to give you a chance to try out the new features and provide feedback before the official release. You can install the beta version with:

pip install --pre openai

And follow along with the beta release notes.

Installation

To start, ensure you have Python 3.7.1 or newer. If you just want to use the package, run:

pip install --upgrade openai

After you have installed the package, import it at the top of a file:

import openai

To install this package from source to make modifications to it, run the following command from the root of the repository:

python setup.py install

Optional dependencies

Install dependencies for openai.embeddings_utils:

pip install openai[embeddings]

Install support for Weights & Biases, which can be used for fine-tuning:

pip install openai[wandb]

Data libraries like numpy and pandas are not installed by default due to their size. They’re needed for some functionality of this library, but generally not for talking to the API. If you encounter a MissingDependencyError, install them with:

pip install openai[datalib]

Usage

The library needs to be configured with your OpenAI account's private API key, which is available on our developer platform. Either set it as the OPENAI_API_KEY environment variable before using the library:

export OPENAI_API_KEY='sk-...'

Or set openai.api_key to its value:

openai.api_key = "sk-..."

Examples of how to use this library to accomplish various tasks can be found in the OpenAI Cookbook. It contains code examples for: classification using fine-tuning, clustering, code search, customizing embeddings, question answering from a corpus of documents, recommendations, visualization of embeddings, and more.

Most endpoints support a request_timeout param. This param takes a Union[float, Tuple[float, float]] and will raise an openai.error.Timeout error if the request exceeds that time in seconds (see: https://requests.readthedocs.io/en/latest/user/quickstart/#timeouts).
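
For example, a minimal sketch that sets separate connect and read timeouts on a completion request (the model name and timeout values here are illustrative, not recommendations):

# Raise openai.error.Timeout if connecting takes more than 5 seconds
# or reading the response takes more than 30 seconds.
# A single float would apply one limit to the whole request instead.
completion = openai.Completion.create(
    model="davinci-002",
    prompt="Hello world",
    request_timeout=(5, 30),
)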

Chat completions

Chat models such as gpt-3.5-turbo and gpt-4 can be called using the chat completions endpoint.

completion = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello world"}],
)
print(completion.choices[0].message.content)

You can learn more in our chat completions guide.

Completions

Text models such as babbage-002 or davinci-002 (and our legacy completions models) can be called using the completions endpoint.

completion = openai.Completion.create(model="davinci-002", prompt="Hello world")
print(completion.choices[0].text)

You can learn more in our completions guide.

Embeddings

Embeddings are designed to measure the similarity or relevance between text strings. To get an embedding for a text string, you can use the following:

text_string = "sample text"
model_id = "text-embedding-ada-002"
embedding = openai.Embedding.create(input=text_string, model=model_id)['data'][0]['embedding']

You can learn more in our embeddings guide.
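
Since the returned vectors are what carry the meaning, comparing two strings amounts to comparing their embeddings. Here is a minimal sketch using numpy (installable via the datalib extra above); the example strings are illustrative:

import numpy as np

model_id = "text-embedding-ada-002"
a = np.array(openai.Embedding.create(input="How do I bake bread?", model=model_id)['data'][0]['embedding'])
b = np.array(openai.Embedding.create(input="Bread baking instructions", model=model_id)['data'][0]['embedding'])

# Cosine similarity: values closer to 1.0 indicate more similar meaning
print(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))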

Fine-tuning

Fine-tuning a model on training data can both improve the results (by giving the model more examples to learn from) and lower the cost/latency of API calls by reducing the need to include training examples in prompts.

# Create a fine-tuning job with an already uploaded file
openai.FineTuningJob.create(training_file="file-abc123", model="gpt-3.5-turbo")

# List 10 fine-tuning jobs
openai.FineTuningJob.list(limit=10)

# Retrieve the state of a fine-tune
openai.FineTuningJob.retrieve("ft-abc123")

# Cancel a job
openai.FineTuningJob.cancel("ft-abc123")

# List up to 10 events from a fine-tuning job
openai.FineTuningJob.list_events(id="ft-abc123", limit=10)

# Delete a fine-tuned model (must be an owner of the org the model was created in)
openai.Model.delete("ft:gpt-3.5-turbo:acemeco:suffix:abc123")

You can learn more in our fine-tuning guide.

To log the training results from fine-tuning to Weights & Biases, use:

openai wandb sync

For more information, read the wandb documentation on Weights & Biases.

Moderation

OpenAI provides a free Moderation endpoint that can be used to check whether content complies with the OpenAI content policy.

moderation_resp = openai.Moderation.create(
    input="Here is some perfectly innocuous text that follows all OpenAI content policies."
)
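
The response indicates whether the input was flagged and which categories triggered it. A minimal sketch of inspecting the moderation_resp object from above:

# One result is returned per input string
result = moderation_resp["results"][0]
if result["flagged"]:
    print("Flagged categories:", result["categories"])
else:
    print("Content passed moderation")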

You can learn more in our moderation guide.

Image generation (DALL·E)

DALL·E is a generative image model that can create new images based on a prompt.

image_resp = openai.Image.create(prompt="two dogs playing chess, oil painting", n=4, size="512x512")
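
With the default response format, each generated image is returned as a URL. A minimal sketch of reading them from the image_resp object above:

# Print the URL of each of the four generated images
for image in image_resp["data"]:
    print(image["url"])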

You can learn more in our image generation guide.

Audio (Whisper)

The speech to text API provides two endpoints, transcriptions and translations, based on our state-of-the-art open source large-v2 Whisper model.

f = open("path/to/file.mp3", "rb")
transcript = openai.Audio.transcribe("whisper-1", f)
translation = openai.Audio.translate("whisper-1", f)

You can learn more in our speech to text guide.

Async API

Async support is available in the API by prepending a to a network-bound method:

async def create_chat_completion():
    chat_completion_resp = await openai.ChatCompletion.acreate(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Hello world"}],
    )
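
To run this coroutine outside an existing event loop, you can drive it with asyncio, for example:

import asyncio

asyncio.run(create_chat_completion())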

To make async requests more efficient, you can pass in your own aiohttp.ClientSession, but you must manually close the client session at the end of your program/event loop:

from aiohttp import ClientSession

openai.aiosession.set(ClientSession())

# At the end of your program, close the http session
await openai.aiosession.get().close()

Command-line interface

This library additionally provides an openai command-line utility which makes it easy to interact with the API from your terminal. Run openai api -h for usage.

# list models
openai api models.list

# create a chat completion (gpt-3.5-turbo, gpt-4, etc.)
openai api chat_completions.create -m gpt-3.5-turbo -g user "Hello world"

# create a completion (text-davinci-003, text-davinci-002, ada, babbage, curie, davinci, etc.)
openai api completions.create -m ada -p "Hello world"

# generate images via DALL·E API
openai api image.create -p "two dogs playing chess, cartoon" -n 1

# using openai through a proxy
openai --proxy=http://proxy.com api models.list

Microsoft Azure Endpoints

In order to use the library with Microsoft Azure endpoints, you need to set the api_type, api_base and api_version in addition to the api_key. The api_type must be set to 'azure' and the others correspond to the properties of your endpoint. In addition, the deployment name must be passed as the deployment_id parameter.

import openai

openai.api_type = "azure"
openai.api_key = "..."
openai.api_base = "https://example-endpoint.openai.azure.com"
openai.api_version = "2023-05-15"

# create a chat completion
chat_completion = openai.ChatCompletion.create(
    deployment_id="deployment-name",
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello world"}],
)

# print the completion
print(chat_completion.choices[0].message.content)

Please note that for the moment, the Microsoft Azure endpoints can only be used for completion, embedding, and fine-tuning operations. For a detailed example of how to use fine-tuning and other operations using Azure endpoints, please check out the following Jupyter notebooks:

Microsoft Azure Active Directory Authentication

In order to use Microsoft Active Directory to authenticate to your Azure endpoint, you need to set the api_type to "azure_ad" and pass the acquired credential token to api_key. The rest of the parameters need to be set as specified in the previous section.

from azure.identity import DefaultAzureCredential
import openai

# Request credential
default_credential = DefaultAzureCredential()
token = default_credential.get_token("https://cognitiveservices.azure.com/.default")

# Setup parameters
openai.api_type = "azure_ad"
openai.api_key = token.token
openai.api_base = "https://example-endpoint.openai.azure.com/"
openai.api_version = "2023-05-15"

Credit

This library is forked from the Stripe Python Library.

