gpt3
README
go-gpt3
An OpenAI GPT-3 API client enabling Go/Golang programs to interact with the gpt3 APIs.
Supports using the completion APIs with or without streaming.
Usage
Simple usage to call the main gpt-3 API, completion:
```go
client := gpt3.NewClient(apiKey)
resp, err := client.Completion(ctx, gpt3.CompletionRequest{
	Prompt: []string{"2, 3, 5, 7, 11,"},
})
fmt.Print(resp.Choices[0].Text)
// prints " 13, 17, 19, 23, 29, 31", etc.
```
Documentation
Check out the go docs for more detailed documentation on the types and methods provided: https://pkg.go.dev/github.com/PullRequestInc/go-gpt3
Full Examples
Try out any of these examples by putting the contents in a `main.go` and running `go run main.go`. I would recommend using Go modules, in which case you will also need to run `go mod init` within your test repo. Alternatively, you can clone this repo and run the test script with `go run cmd/test/main.go`.
You will also need to have a `.env` file that looks like this to use these examples:
API_KEY=<openAI API Key>
```go
package main

import (
	"context"
	"fmt"
	"log"
	"os"

	"github.com/PullRequestInc/go-gpt3"
	"github.com/joho/godotenv"
)

func main() {
	godotenv.Load()

	apiKey := os.Getenv("API_KEY")
	if apiKey == "" {
		log.Fatalln("Missing API KEY")
	}

	ctx := context.Background()
	client := gpt3.NewClient(apiKey)

	resp, err := client.Completion(ctx, gpt3.CompletionRequest{
		Prompt:    []string{"The first thing you should know about javascript is"},
		MaxTokens: gpt3.IntPtr(30),
		Stop:      []string{"."},
		Echo:      true,
	})
	if err != nil {
		log.Fatalln(err)
	}
	fmt.Println(resp.Choices[0].Text)
}
```
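The streaming variant delivers partial responses through a callback instead of returning a single response. A minimal sketch, using the same `.env` setup as above (the prompt text is made up for illustration):

```go
package main

import (
	"context"
	"fmt"
	"log"
	"os"

	"github.com/PullRequestInc/go-gpt3"
	"github.com/joho/godotenv"
)

func main() {
	godotenv.Load()

	apiKey := os.Getenv("API_KEY")
	if apiKey == "" {
		log.Fatalln("Missing API KEY")
	}

	ctx := context.Background()
	client := gpt3.NewClient(apiKey)

	// The onData callback is invoked once per chunk as tokens arrive,
	// so the text prints incrementally rather than all at once.
	err := client.CompletionStream(ctx, gpt3.CompletionRequest{
		Prompt:    []string{"One thing I like about Go is"},
		MaxTokens: gpt3.IntPtr(30),
	}, func(resp *gpt3.CompletionResponse) {
		fmt.Print(resp.Choices[0].Text)
	})
	if err != nil {
		log.Fatalln(err)
	}
}
```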
Support
- List Engines API
- Get Engine API
- Completion API (this is the main gpt-3 API)
- Streaming support for the Completion API
- Document Search API
- Overriding default url, user-agent, timeout, and other options
Documentation

Index
- Constants
- func Float32Ptr(f float32) *float32
- func IntPtr(i int) *int
- type APIError
- type APIErrorResponse
- type ChatCompletionFunctionParameters
- type ChatCompletionFunctions
- type ChatCompletionRequest
- type ChatCompletionRequestMessage
- type ChatCompletionResponse
- type ChatCompletionResponseChoice
- type ChatCompletionResponseMessage
- type ChatCompletionStreamResponse
- type ChatCompletionStreamResponseChoice
- type ChatCompletionsResponseUsage
- type Client
- type ClientOption
- type CompletionRequest
- type CompletionResponse
- type CompletionResponseChoice
- type CompletionResponseUsage
- type EditsRequest
- type EditsResponse
- type EditsResponseChoice
- type EditsResponseUsage
- type EmbeddingEngine
- type EmbeddingsRequest
- type EmbeddingsResponse
- type EmbeddingsResult
- type EmbeddingsUsage
- type EngineObject
- type EnginesResponse
- type Function
- type FunctionParameterPropertyMetadata
- type LogprobResult
- type ModerationCategoryResult
- type ModerationCategoryScores
- type ModerationRequest
- type ModerationResponse
- type ModerationResult
- type RateLimitHeaders
- type SearchData
- type SearchRequest
- type SearchResponse
Constants

```go
const (
	TextAda001Engine     = "text-ada-001"
	TextBabbage001Engine = "text-babbage-001"
	TextCurie001Engine   = "text-curie-001"
	TextDavinci001Engine = "text-davinci-001"
	TextDavinci002Engine = "text-davinci-002"
	TextDavinci003Engine = "text-davinci-003"
	AdaEngine            = "ada"
	BabbageEngine        = "babbage"
	CurieEngine          = "curie"
	DavinciEngine        = "davinci"
	DefaultEngine        = DavinciEngine
)
```

Engine Types

```go
const (
	GPT3Dot5Turbo     = "gpt-3.5-turbo"
	GPT3Dot5Turbo0301 = "gpt-3.5-turbo-0301"
	GPT3Dot5Turbo0613 = "gpt-3.5-turbo-0613"

	TextSimilarityAda001     = "text-similarity-ada-001"
	TextSimilarityBabbage001 = "text-similarity-babbage-001"
	TextSimilarityCurie001   = "text-similarity-curie-001"
	TextSimilarityDavinci001 = "text-similarity-davinci-001"

	TextSearchAdaDoc001       = "text-search-ada-doc-001"
	TextSearchAdaQuery001     = "text-search-ada-query-001"
	TextSearchBabbageDoc001   = "text-search-babbage-doc-001"
	TextSearchBabbageQuery001 = "text-search-babbage-query-001"
	TextSearchCurieDoc001     = "text-search-curie-doc-001"
	TextSearchCurieQuery001   = "text-search-curie-query-001"
	TextSearchDavinciDoc001   = "text-search-davinci-doc-001"
	TextSearchDavinciQuery001 = "text-search-davinci-query-001"

	CodeSearchAdaCode001     = "code-search-ada-code-001"
	CodeSearchAdaText001     = "code-search-ada-text-001"
	CodeSearchBabbageCode001 = "code-search-babbage-code-001"
	CodeSearchBabbageText001 = "code-search-babbage-text-001"

	TextEmbeddingAda002 = "text-embedding-ada-002"
)
```

```go
const (
	TextModerationLatest = "text-moderation-latest"
	TextModerationStable = "text-moderation-stable"
)
```
Variables

This section is empty.

Functions
func Float32Ptr (added in v1.1.5)

```go
func Float32Ptr(f float32) *float32
```

Float32Ptr converts a float32 to a *float32 as a convenience.

func IntPtr

```go
func IntPtr(i int) *int
```

IntPtr converts an int to a *int as a convenience.
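These helpers exist because optional request fields (such as `MaxTokens` or `Temperature` on `CompletionRequest`) are pointers, so that "unset" (nil) can be distinguished from a legitimate zero value. As a sketch, the local `intPtr`/`float32Ptr` below mirror what `gpt3.IntPtr` and `gpt3.Float32Ptr` plausibly do (take the address of a copy of the argument):

```go
package main

import "fmt"

// intPtr and float32Ptr mirror gpt3.IntPtr and gpt3.Float32Ptr: Go does not
// allow taking the address of a literal directly, so a helper that copies the
// argument into a local and returns its address is the idiomatic workaround.
func intPtr(i int) *int             { return &i }
func float32Ptr(f float32) *float32 { return &f }

func main() {
	// Fill in "optional" pointer fields without intermediate variables.
	maxTokens := intPtr(30)
	temperature := float32Ptr(0.5)

	fmt.Println(*maxTokens)   // 30
	fmt.Println(*temperature) // 0.5
}
```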
Types
type APIError (added in v1.1.2)

```go
type APIError struct {
	RateLimitHeaders RateLimitHeaders

	StatusCode int    `json:"status_code"`
	Message    string `json:"message"`
	Type       string `json:"type"`
}
```

APIError represents an error that occurred on an API.

type APIErrorResponse (added in v1.1.2)

```go
type APIErrorResponse struct {
	Error APIError `json:"error"`
}
```

APIErrorResponse is the full error response that has been returned by an API.
type ChatCompletionFunctionParameters (added in v1.1.16)

```go
type ChatCompletionFunctionParameters struct {
	Type        string                                       `json:"type"`
	Description string                                       `json:"description,omitempty"`
	Properties  map[string]FunctionParameterPropertyMetadata `json:"properties"`
	Required    []string                                     `json:"required"`
}
```

ChatCompletionFunctionParameters captures the metadata of the function parameter.

type ChatCompletionFunctions (added in v1.1.16)

```go
type ChatCompletionFunctions struct {
	Name        string                           `json:"name"`
	Description string                           `json:"description,omitempty"`
	Parameters  ChatCompletionFunctionParameters `json:"parameters"`
}
```

ChatCompletionFunctions represents the functions the model may generate JSON inputs for.
type ChatCompletionRequest (added in v1.1.12)

```go
type ChatCompletionRequest struct {
	// Model is the name of the model to use. If not specified, will default to gpt-3.5-turbo.
	Model string `json:"model"`

	// Messages is a list of messages to use as the context for the chat completion.
	Messages []ChatCompletionRequestMessage `json:"messages"`

	// Functions is a list of functions the model may generate JSON inputs for.
	Functions []ChatCompletionFunctions `json:"functions,omitempty"`

	// What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the
	// output more random, while lower values like 0.2 will make it more focused and deterministic.
	Temperature *float32 `json:"temperature,omitempty"`

	// An alternative to sampling with temperature, called nucleus sampling, where the model
	// considers the results of the tokens with top_p probability mass. So 0.1 means only the
	// tokens comprising the top 10% probability mass are considered.
	TopP float32 `json:"top_p,omitempty"`

	// Number of responses to generate.
	N int `json:"n,omitempty"`

	// Whether or not to stream responses back as they are generated.
	Stream bool `json:"stream,omitempty"`

	// Up to 4 sequences where the API will stop generating further tokens.
	Stop []string `json:"stop,omitempty"`

	// MaxTokens is the maximum number of tokens to return.
	MaxTokens int `json:"max_tokens,omitempty"`

	// (-2, 2) Penalize tokens that haven't appeared yet in the history.
	PresencePenalty float32 `json:"presence_penalty,omitempty"`

	// (-2, 2) Penalize tokens that appear too frequently in the history.
	FrequencyPenalty float32 `json:"frequency_penalty,omitempty"`

	// Modify the probability of specific tokens appearing in the completion.
	LogitBias map[string]float32 `json:"logit_bias,omitempty"`

	// Can be used to identify an end-user.
	User string `json:"user,omitempty"`
}
```

ChatCompletionRequest is a request for the chat completion API.

type ChatCompletionRequestMessage (added in v1.1.12)

```go
type ChatCompletionRequestMessage struct {
	// Role is the role of the message. Can be "system", "user", or "assistant".
	Role string `json:"role"`

	// Content is the content of the message.
	Content string `json:"content"`

	// FunctionCall is the name and arguments of a function that should be called, as generated by the model.
	FunctionCall *Function `json:"function_call,omitempty"`

	// Name is the name of the author of this message. `name` is required if role is `function`,
	// and it should be the name of the function whose response is in the `content`.
	Name string `json:"name,omitempty"`
}
```

ChatCompletionRequestMessage is a message to use as the context for the chat completion API.
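Putting the request and message types together, a minimal chat-completion call might look like the sketch below (the message contents are made up; the method and field names come from the Client interface and types documented on this page):

```go
package main

import (
	"context"
	"fmt"
	"log"
	"os"

	"github.com/PullRequestInc/go-gpt3"
)

func main() {
	apiKey := os.Getenv("API_KEY")
	if apiKey == "" {
		log.Fatalln("Missing API KEY")
	}

	client := gpt3.NewClient(apiKey)

	// Each message carries a role; the "system" message typically sets
	// overall behavior and "user" messages carry the actual prompt.
	resp, err := client.ChatCompletion(context.Background(), gpt3.ChatCompletionRequest{
		Model: gpt3.GPT3Dot5Turbo,
		Messages: []gpt3.ChatCompletionRequestMessage{
			{Role: "system", Content: "You are a helpful assistant."},
			{Role: "user", Content: "Name a prime number greater than 100."},
		},
	})
	if err != nil {
		log.Fatalln(err)
	}

	fmt.Println(resp.Choices[0].Message.Content)
}
```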
type ChatCompletionResponse (added in v1.1.12)

```go
type ChatCompletionResponse struct {
	RateLimitHeaders RateLimitHeaders

	ID      string                         `json:"id"`
	Object  string                         `json:"object"`
	Created int                            `json:"created"`
	Model   string                         `json:"model"`
	Choices []ChatCompletionResponseChoice `json:"choices"`
	Usage   ChatCompletionsResponseUsage   `json:"usage"`
}
```

ChatCompletionResponse is the full response from a request to the Chat Completions API.

type ChatCompletionResponseChoice (added in v1.1.12)

```go
type ChatCompletionResponseChoice struct {
	Index        int                           `json:"index"`
	FinishReason string                        `json:"finish_reason"`
	Message      ChatCompletionResponseMessage `json:"message"`
}
```

ChatCompletionResponseChoice is one of the choices returned in the response to the Chat Completions API.

type ChatCompletionResponseMessage (added in v1.1.12)

```go
type ChatCompletionResponseMessage struct {
	Role         string    `json:"role"`
	Content      string    `json:"content"`
	FunctionCall *Function `json:"function_call,omitempty"`
}
```

ChatCompletionResponseMessage is a message returned in the response to the Chat Completions API.
type ChatCompletionStreamResponse (added in v1.1.13)

```go
type ChatCompletionStreamResponse struct {
	ID      string                               `json:"id"`
	Object  string                               `json:"object"`
	Created int                                  `json:"created"`
	Model   string                               `json:"model"`
	Choices []ChatCompletionStreamResponseChoice `json:"choices"`
	Usage   ChatCompletionsResponseUsage         `json:"usage"`
}
```

type ChatCompletionStreamResponseChoice (added in v1.1.13)

```go
type ChatCompletionStreamResponseChoice struct {
	Index        int                           `json:"index"`
	FinishReason string                        `json:"finish_reason"`
	Delta        ChatCompletionResponseMessage `json:"delta"`
}
```

ChatCompletionStreamResponseChoice is one of the choices returned in a streaming response to the Chat Completions API.

type ChatCompletionsResponseUsage (added in v1.1.12)

```go
type ChatCompletionsResponseUsage struct {
	PromptTokens     int `json:"prompt_tokens"`
	CompletionTokens int `json:"completion_tokens"`
	TotalTokens      int `json:"total_tokens"`
}
```

ChatCompletionsResponseUsage is the object that returns how many tokens the completion's request used.
type Client

```go
type Client interface {
	// Engines lists the currently available engines, and provides basic information about each
	// option such as the owner and availability.
	Engines(ctx context.Context) (*EnginesResponse, error)

	// Engine retrieves an engine instance, providing basic information about the engine such
	// as the owner and availability.
	Engine(ctx context.Context, engine string) (*EngineObject, error)

	// ChatCompletion creates a completion with the Chat completion endpoint which
	// is what powers the ChatGPT experience.
	ChatCompletion(ctx context.Context, request ChatCompletionRequest) (*ChatCompletionResponse, error)

	// ChatCompletionStream creates a streaming completion with the Chat completion endpoint which
	// is what powers the ChatGPT experience.
	ChatCompletionStream(ctx context.Context, request ChatCompletionRequest, onData func(*ChatCompletionStreamResponse) error) error

	// Completion creates a completion with the default engine. This is the main endpoint of the API
	// which auto-completes based on the given prompt.
	Completion(ctx context.Context, request CompletionRequest) (*CompletionResponse, error)

	// CompletionStream creates a completion with the default engine and streams the results through
	// multiple calls to onData.
	CompletionStream(ctx context.Context, request CompletionRequest, onData func(*CompletionResponse)) error

	// CompletionWithEngine is the same as Completion except it allows overriding the default engine on the client.
	CompletionWithEngine(ctx context.Context, engine string, request CompletionRequest) (*CompletionResponse, error)

	// CompletionStreamWithEngine is the same as CompletionStream except it allows overriding the default engine on the client.
	CompletionStreamWithEngine(ctx context.Context, engine string, request CompletionRequest, onData func(*CompletionResponse)) error

	// Edits: given a prompt and an instruction, the model will return an edited version of the prompt.
	Edits(ctx context.Context, request EditsRequest) (*EditsResponse, error)

	// Search performs a semantic search over a list of documents with the default engine.
	Search(ctx context.Context, request SearchRequest) (*SearchResponse, error)

	// SearchWithEngine performs a semantic search over a list of documents with the specified engine.
	SearchWithEngine(ctx context.Context, engine string, request SearchRequest) (*SearchResponse, error)

	// Embeddings returns an embedding using the provided request.
	Embeddings(ctx context.Context, request EmbeddingsRequest) (*EmbeddingsResponse, error)

	// Moderation performs a moderation check on the given text against an OpenAI classifier to determine whether the
	// provided content complies with OpenAI's usage policies.
	Moderation(ctx context.Context, request ModerationRequest) (*ModerationResponse, error)
}
```

A Client is an API client to communicate with the OpenAI gpt-3 APIs.
func NewClient

```go
func NewClient(apiKey string, options ...ClientOption) Client
```

NewClient returns a new OpenAI GPT-3 API client. An apiKey is required to use the client.

type ClientOption

```go
type ClientOption func(*client) error
```

ClientOption are options that can be passed when creating a new client.

func WithBaseURL

```go
func WithBaseURL(baseURL string) ClientOption
```

WithBaseURL is a client option that allows you to override the default base URL of the client. The default base URL is "https://api.openai.com/v1".

func WithDefaultEngine

```go
func WithDefaultEngine(engine string) ClientOption
```

WithDefaultEngine is a client option that allows you to override the default engine of the client.

func WithHTTPClient (added in v1.1.2)

```go
func WithHTTPClient(httpClient *http.Client) ClientOption
```

WithHTTPClient allows you to override the internal http.Client used.

func WithOrg (added in v1.1.4)

```go
func WithOrg(id string) ClientOption
```

WithOrg is a client option that allows you to override the organization ID.

func WithTimeout

```go
func WithTimeout(timeout time.Duration) ClientOption
```

WithTimeout is a client option that allows you to override the default timeout duration of requests for the client. The default is 30 seconds. If you are overriding the http client as well, just include the timeout there.

func WithUserAgent

```go
func WithUserAgent(userAgent string) ClientOption
```

WithUserAgent is a client option that allows you to override the default user agent of the client.
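Several options can be combined when constructing a client. A sketch (the engine, timeout, and user-agent values below are arbitrary examples, not defaults):

```go
package main

import (
	"os"
	"time"

	"github.com/PullRequestInc/go-gpt3"
)

func main() {
	// Options are applied in order when the client is constructed.
	// Per the WithTimeout docs, prefer WithTimeout OR WithHTTPClient
	// (with its own Timeout), not both.
	client := gpt3.NewClient(
		os.Getenv("API_KEY"),
		gpt3.WithDefaultEngine(gpt3.TextDavinci003Engine),
		gpt3.WithTimeout(10*time.Second),
		gpt3.WithUserAgent("my-app/1.0"),
	)

	_ = client // use the client as in the earlier examples
}
```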
type CompletionRequest

```go
type CompletionRequest struct {
	// A list of string prompts to use.
	// TODO there are other prompt types here for using token integers that we could add support for.
	Prompt []string `json:"prompt"`

	// How many tokens to complete up to. Max of 512.
	MaxTokens *int `json:"max_tokens,omitempty"`

	// Sampling temperature to use.
	Temperature *float32 `json:"temperature,omitempty"`

	// Alternative to temperature for nucleus sampling.
	TopP *float32 `json:"top_p,omitempty"`

	// How many choices to create for each prompt.
	N *int `json:"n"`

	// Include the probabilities of most likely tokens.
	LogProbs *int `json:"logprobs"`

	// Echo back the prompt in addition to the completion.
	Echo bool `json:"echo"`

	// Up to 4 sequences where the API will stop generating tokens. Response will not contain the stop sequence.
	Stop []string `json:"stop,omitempty"`

	// PresencePenalty number between 0 and 1 that penalizes tokens that have already appeared in the text so far.
	PresencePenalty float32 `json:"presence_penalty"`

	// FrequencyPenalty number between 0 and 1 that penalizes tokens on existing frequency in the text so far.
	FrequencyPenalty float32 `json:"frequency_penalty"`

	// Whether to stream back results or not. Don't set this value in the request yourself
	// as it will be overridden depending on whether you use the CompletionStream or Completion methods.
	Stream bool `json:"stream,omitempty"`
}
```

CompletionRequest is a request for the completions API.
type CompletionResponse

```go
type CompletionResponse struct {
	RateLimitHeaders RateLimitHeaders

	ID      string                     `json:"id"`
	Object  string                     `json:"object"`
	Created int                        `json:"created"`
	Model   string                     `json:"model"`
	Choices []CompletionResponseChoice `json:"choices"`
	Usage   CompletionResponseUsage    `json:"usage"`
}
```

CompletionResponse is the full response from a request to the completions API.

type CompletionResponseChoice

```go
type CompletionResponseChoice struct {
	Text         string        `json:"text"`
	Index        int           `json:"index"`
	LogProbs     LogprobResult `json:"logprobs"`
	FinishReason string        `json:"finish_reason"`
}
```

CompletionResponseChoice is one of the choices returned in the response to the Completions API.

type CompletionResponseUsage (added in v1.1.10)

```go
type CompletionResponseUsage struct {
	PromptTokens     int `json:"prompt_tokens"`
	CompletionTokens int `json:"completion_tokens"`
	TotalTokens      int `json:"total_tokens"`
}
```

CompletionResponseUsage is the object that returns how many tokens the completion's request used.
type EditsRequest (added in v1.1.7)

```go
type EditsRequest struct {
	// ID of the model to use. You can use the List models API to see all of your available models,
	// or see our Model overview for descriptions of them.
	Model string `json:"model"`

	// The input text to use as a starting point for the edit.
	Input string `json:"input"`

	// The instruction that tells the model how to edit the prompt.
	Instruction string `json:"instruction"`

	// Sampling temperature to use.
	Temperature *float32 `json:"temperature,omitempty"`

	// Alternative to temperature for nucleus sampling.
	TopP *float32 `json:"top_p,omitempty"`

	// How many edits to generate for the input and instruction. Defaults to 1.
	N *int `json:"n"`
}
```

EditsRequest is a request for the edits API.

type EditsResponse (added in v1.1.7)

```go
type EditsResponse struct {
	Object  string                `json:"object"`
	Created int                   `json:"created"`
	Choices []EditsResponseChoice `json:"choices"`
	Usage   EditsResponseUsage    `json:"usage"`
}
```

EditsResponse is the full response from a request to the edits API.

type EditsResponseChoice (added in v1.1.7)

EditsResponseChoice is one of the choices returned in the response to the Edits API.

type EditsResponseUsage (added in v1.1.7)

```go
type EditsResponseUsage struct {
	PromptTokens     int `json:"prompt_tokens"`
	CompletionTokens int `json:"completion_tokens"`
	TotalTokens      int `json:"total_tokens"`
}
```

EditsResponseUsage is a structure used in the response from a request to the edits API.
type EmbeddingEngine (added in v1.1.8)

```go
type EmbeddingEngine string
```

type EmbeddingsRequest (added in v1.1.8)

```go
type EmbeddingsRequest struct {
	// Input text to get embeddings for, encoded as a string or array of tokens. To get embeddings
	// for multiple inputs in a single request, pass an array of strings or array of token arrays.
	// Each input must not exceed 2048 tokens in length.
	Input []string `json:"input"`

	// ID of the model to use.
	Model string `json:"model"`

	// The request user is an optional parameter meant to be used to trace abusive requests
	// back to the originating user. OpenAI states:
	// "The [user] IDs should be a string that uniquely identifies each user. We recommend hashing
	// their username or email address, in order to avoid sending us any identifying information.
	// If you offer a preview of your product to non-logged in users, you can send a session ID
	// instead."
	User string `json:"user,omitempty"`
}
```

EmbeddingsRequest is a request for the Embeddings API.

type EmbeddingsResponse (added in v1.1.8)

```go
type EmbeddingsResponse struct {
	Object string             `json:"object"`
	Data   []EmbeddingsResult `json:"data"`
	Usage  EmbeddingsUsage    `json:"usage"`
}
```

EmbeddingsResponse is the response from a create embeddings request.

See: https://beta.openai.com/docs/api-reference/embeddings/create

type EmbeddingsResult (added in v1.1.8)

```go
type EmbeddingsResult struct {
	// The type of object returned (e.g., "list", "object")
	Object string `json:"object"`

	// The embedding data for the input
	Embedding []float64 `json:"embedding"`

	Index int `json:"index"`
}
```

The inner result of a create embeddings request, containing the embeddings for a single input.

type EmbeddingsUsage (added in v1.1.8)

```go
type EmbeddingsUsage struct {
	// The number of tokens used by the prompt
	PromptTokens int `json:"prompt_tokens"`

	// The total tokens used
	TotalTokens int `json:"total_tokens"`
}
```

The usage stats for an embeddings response.
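A sketch of how the embeddings types fit together (the input string is an arbitrary example; `Embeddings` and the field names are from the Client interface and types documented above):

```go
package main

import (
	"context"
	"fmt"
	"log"
	"os"

	"github.com/PullRequestInc/go-gpt3"
)

func main() {
	client := gpt3.NewClient(os.Getenv("API_KEY"))

	resp, err := client.Embeddings(context.Background(), gpt3.EmbeddingsRequest{
		Input: []string{"The quick brown fox"},
		Model: gpt3.TextEmbeddingAda002,
	})
	if err != nil {
		log.Fatalln(err)
	}

	// One EmbeddingsResult is returned per input string.
	fmt.Println("inputs embedded:", len(resp.Data))
	fmt.Println("vector length:", len(resp.Data[0].Embedding))
	fmt.Println("tokens used:", resp.Usage.TotalTokens)
}
```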
type EngineObject

```go
type EngineObject struct {
	ID     string `json:"id"`
	Object string `json:"object"`
	Owner  string `json:"owner"`
	Ready  bool   `json:"ready"`
}
```

EngineObject is contained in an engine response.

type EnginesResponse

```go
type EnginesResponse struct {
	Data   []EngineObject `json:"data"`
	Object string         `json:"object"`
}
```

EnginesResponse is returned from the Engines API.
type FunctionParameterPropertyMetadata (added in v1.1.16)

```go
type FunctionParameterPropertyMetadata struct {
	Type        string   `json:"type"`
	Description string   `json:"description,omitempty"`
	Enum        []string `json:"enum,omitempty"`
}
```

FunctionParameterPropertyMetadata represents the metadata of the function parameter property.

type LogprobResult (added in v1.1.6)

```go
type LogprobResult struct {
	Tokens        []string             `json:"tokens"`
	TokenLogprobs []float32            `json:"token_logprobs"`
	TopLogprobs   []map[string]float32 `json:"top_logprobs"`
	TextOffset    []int                `json:"text_offset"`
}
```

LogprobResult represents the logprob result of a Choice.
type ModerationCategoryResult (added in v1.1.15)

```go
type ModerationCategoryResult struct {
	Hate            bool `json:"hate"`
	HateThreatening bool `json:"hate/threatening"`
	SelfHarm        bool `json:"self-harm"`
	Sexual          bool `json:"sexual"`
	SexualMinors    bool `json:"sexual/minors"`
	Violence        bool `json:"violence"`
	ViolenceGraphic bool `json:"violence/graphic"`
}
```

ModerationCategoryResult shows the categories that the moderation classifier flagged the input text for.

type ModerationCategoryScores (added in v1.1.15)

```go
type ModerationCategoryScores struct {
	Hate            float32 `json:"hate"`
	HateThreatening float32 `json:"hate/threatening"`
	SelfHarm        float32 `json:"self-harm"`
	Sexual          float32 `json:"sexual"`
	SexualMinors    float32 `json:"sexual/minors"`
	Violence        float32 `json:"violence"`
	ViolenceGraphic float32 `json:"violence/graphic"`
}
```

ModerationCategoryScores shows the classifier scores for each moderation category.

type ModerationRequest (added in v1.1.15)

```go
type ModerationRequest struct {
	// Input is the input text that should be classified. Required.
	Input string `json:"input"`

	// Model is the content moderation model to use. If not specified, will default to the
	// OpenAI API default, which is currently "text-moderation-latest".
	Model string `json:"model,omitempty"`
}
```

ModerationRequest is a request for the moderation API.

type ModerationResponse (added in v1.1.15)

```go
type ModerationResponse struct {
	ID      string             `json:"id"`
	Model   string             `json:"model"`
	Results []ModerationResult `json:"results"`
}
```

ModerationResponse is the full response from a request to the moderation API.

type ModerationResult (added in v1.1.15)

```go
type ModerationResult struct {
	Flagged        bool                     `json:"flagged"`
	Categories     ModerationCategoryResult `json:"categories"`
	CategoryScores ModerationCategoryScores `json:"category_scores"`
}
```

ModerationResult represents a single moderation classification result returned by the moderation API.
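A sketch of a moderation check using these types (the input string is made up; `Moderation` and the field names are from the Client interface and types above):

```go
package main

import (
	"context"
	"fmt"
	"log"
	"os"

	"github.com/PullRequestInc/go-gpt3"
)

func main() {
	client := gpt3.NewClient(os.Getenv("API_KEY"))

	resp, err := client.Moderation(context.Background(), gpt3.ModerationRequest{
		Input: "some user-provided text to screen",
		// Model is optional; the API defaults to "text-moderation-latest".
	})
	if err != nil {
		log.Fatalln(err)
	}

	// One ModerationResult is returned per input.
	result := resp.Results[0]
	fmt.Println("flagged:", result.Flagged)
	fmt.Println("hate score:", result.CategoryScores.Hate)
}
```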
type RateLimitHeaders (added in v1.1.18)

```go
type RateLimitHeaders struct {
	// x-ratelimit-limit-requests: The maximum number of requests that are permitted before exhausting the rate limit.
	LimitRequests int

	// x-ratelimit-limit-tokens: The maximum number of tokens that are permitted before exhausting the rate limit.
	LimitTokens int

	// x-ratelimit-remaining-requests: The remaining number of requests that are permitted before exhausting the rate limit.
	RemainingRequests int

	// x-ratelimit-remaining-tokens: The remaining number of tokens that are permitted before exhausting the rate limit.
	RemainingTokens int

	// x-ratelimit-reset-requests: The time until the rate limit (based on requests) resets to its initial state.
	ResetRequests time.Duration

	// x-ratelimit-reset-tokens: The time until the rate limit (based on tokens) resets to its initial state.
	ResetTokens time.Duration
}
```

RateLimitHeaders contain the HTTP response headers indicating rate limiting status.

func NewRateLimitHeadersFromResponse (added in v1.1.18)

```go
func NewRateLimitHeadersFromResponse(resp *http.Response) RateLimitHeaders
```

NewRateLimitHeadersFromResponse does a best effort to parse the rate limit information included in response headers.
type SearchData

```go
type SearchData struct {
	Document int     `json:"document"`
	Object   string  `json:"object"`
	Score    float64 `json:"score"`
}
```

SearchData is a single search result from the document search API.

type SearchRequest

SearchRequest is a request for the document search API.

type SearchResponse

```go
type SearchResponse struct {
	Data   []SearchData `json:"data"`
	Object string       `json:"object"`
}
```

SearchResponse is the full response from a request to the document search API.