MacPaw/OpenAI

Swift community driven package for OpenAI public API

This repository contains a community-maintained Swift implementation of the OpenAI public API.

What is OpenAI

OpenAI is an artificial intelligence research organization, founded as a non-profit in San Francisco, California in 2015. It was created to advance digital intelligence in ways that benefit humanity as a whole and promote societal progress. The organization strives to develop AI systems that can think, act, and adapt quickly on their own, autonomously. OpenAI's mission is to ensure the safe and responsible use of AI for civic good, economic growth, and other public benefits; this includes cutting-edge research into important topics such as general AI safety, natural language processing, applied reinforcement learning methods, and machine vision algorithms.

The OpenAI API can be applied to virtually any task that involves understanding or generating natural language or code. We offer a spectrum of models with different levels of power suitable for different tasks, as well as the ability to fine-tune your own custom models. These models can be used for everything from content generation to semantic search and classification.

Installation

OpenAI is available via the Swift Package Manager. The Swift Package Manager is a tool for automating the distribution of Swift code and is integrated into the Swift compiler. Once you have your Swift package set up, adding OpenAI as a dependency is as easy as adding it to the dependencies value of your Package.swift.

dependencies: [
    .package(url: "https://github.com/MacPaw/OpenAI.git", branch: "main")
]
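
If your package declares explicit targets, also add the product to the target's dependencies. A minimal Package.swift sketch (the package and target names are placeholders):

// swift-tools-version:5.7
import PackageDescription

let package = Package(
    name: "MyApp", // placeholder package name
    dependencies: [
        .package(url: "https://github.com/MacPaw/OpenAI.git", branch: "main")
    ],
    targets: [
        .target(
            name: "MyApp",
            dependencies: [.product(name: "OpenAI", package: "OpenAI")]
        )
    ]
)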

Usage

Initialization

To initialize an API instance, you need to obtain an API token from your OpenAI organization.

Remember that your API key is a secret! Do not share it with others or expose it in any client-side code (browsers, apps). Production requests must be routed through your own backend server where your API key can be securely loaded from an environment variable or key management service.


Once you have a token, you can initialize the OpenAI class, which is the entry point to the API.

⚠️ OpenAI strongly recommends that developers of client-side applications proxy requests through a separate backend service to keep their API keys safe. API keys can access and manipulate customer billing, usage, and organizational data, so exposing them is a significant risk.

let openAI = OpenAI(apiToken: "YOUR_TOKEN_HERE")

Optionally, you can initialize OpenAI with a token, an organization identifier, and a timeoutInterval.

let configuration = OpenAI.Configuration(token: "YOUR_TOKEN_HERE", organizationIdentifier: "YOUR_ORGANIZATION_ID_HERE", timeoutInterval: 60.0)
let openAI = OpenAI(configuration: configuration)

See OpenAI.Configuration for more values that can be passed on init for customization, like host, basePath, port, scheme and customHeaders.
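
For example, if you route requests through your own backend proxy (as recommended above), you can point the client at it through the configuration. A sketch, assuming the init exposes host with defaults for the other parameters, and using a hypothetical proxy host:

let configuration = OpenAI.Configuration(
    token: "YOUR_TOKEN_HERE",
    host: "api.example.com", // hypothetical backend proxy
    timeoutInterval: 60.0
)
let openAI = OpenAI(configuration: configuration)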

Once you possess the token and the instance is initialized, you are ready to make requests.

Chats

Using the OpenAI Chat API, you can build your own applications with gpt-3.5-turbo to do things like:

  • Draft an email or other piece of writing
  • Write Python code
  • Answer questions about a set of documents
  • Create conversational agents
  • Give your software a natural language interface
  • Tutor in a range of subjects
  • Translate languages
  • Simulate characters for video games and much more

Request

struct ChatQuery: Codable {
    /// ID of the model to use.
    public let model: Model
    /// An object specifying the format that the model must output.
    public let responseFormat: ResponseFormat?
    /// The messages to generate chat completions for.
    public let messages: [Message]
    /// A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for.
    public let tools: [Tool]?
    /// Controls how the model responds to tool calls. "none" means the model does not call a function, and responds to the end-user. "auto" means the model can pick between generating a message or calling a function. Specifying a particular function via `{"name": "my_function"}` forces the model to call that function. "none" is the default when no functions are present. "auto" is the default if functions are present.
    public let toolChoice: ToolChoice?
    /// What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both.
    public let temperature: Double?
    /// An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.
    public let topP: Double?
    /// How many chat completion choices to generate for each input message.
    public let n: Int?
    /// Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence.
    public let stop: [String]?
    /// The maximum number of tokens to generate in the completion.
    public let maxTokens: Int?
    /// Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.
    public let presencePenalty: Double?
    /// Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.
    public let frequencyPenalty: Double?
    /// Modify the likelihood of specified tokens appearing in the completion.
    public let logitBias: [String: Int]?
    /// A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse.
    public let user: String?
}

Response

struct ChatResult: Codable, Equatable {
    public struct Choice: Codable, Equatable {
        public let index: Int
        public let message: Chat
        public let finishReason: String
    }
    public struct Usage: Codable, Equatable {
        public let promptTokens: Int
        public let completionTokens: Int
        public let totalTokens: Int
    }
    public let id: String
    public let object: String
    public let created: TimeInterval
    public let model: Model
    public let choices: [Choice]
    public let usage: Usage
}

Example

let query = ChatQuery(model: .gpt3_5Turbo, messages: [.init(role: .user, content: "who are you")])
let result = try await openAI.chats(query: query)
(lldb) po result
▿ ChatResult
  - id : "chatcmpl-6pwjgxGV2iPP4QGdyOLXnTY0LE3F8"
  - object : "chat.completion"
  - created : 1677838528.0
  - model : "gpt-3.5-turbo-0301"
  ▿ choices : 1 element
    ▿ 0 : Choice
      - index : 0
      ▿ message : Chat
        - role : "assistant"
        - content : "\n\nI\'m an AI language model developed by OpenAI, created to provide assistance and support for various tasks such as answering questions, generating text, and providing recommendations. Nice to meet you!"
      - finish_reason : "stop"
  ▿ usage : Usage
    - prompt_tokens : 10
    - completion_tokens : 39
    - total_tokens : 49
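
The assistant's reply is then available on the first choice; a minimal sketch based on the ChatResult shape above:

if let content = result.choices.first?.message.content {
    print(content)
}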

Chats Streaming

Chats streaming is available by using the chatsStream function. Tokens will be sent one-by-one.

Closures

openAI.chatsStream(query: query) { partialResult in
    switch partialResult {
    case .success(let result):
        print(result.choices)
    case .failure(let error):
        // Handle chunk error here
        print(error)
    }
} completion: { error in
    // Handle streaming error here
}

Combine

openAI.chatsStream(query: query)
    .sink { completion in
        // Handle completion result here
    } receiveValue: { result in
        // Handle chunk here
    }
    .store(in: &cancellables)

Structured concurrency

for try await result in openAI.chatsStream(query: query) {
    // Handle result here
}

Function calls

let openAI = OpenAI(apiToken: "...")

// Declare functions which GPT-3 might decide to call.
let functions = [
    FunctionDeclaration(
        name: "get_current_weather",
        description: "Get the current weather in a given location",
        parameters: JSONSchema(
            type: .object,
            properties: [
                "location": .init(type: .string, description: "The city and state, e.g. San Francisco, CA"),
                "unit": .init(type: .string, enumValues: ["celsius", "fahrenheit"])
            ],
            required: ["location"]
        )
    )
]

let query = ChatQuery(
    model: "gpt-3.5-turbo-0613",  // 0613 is the earliest version with function calls support.
    messages: [Chat(role: .user, content: "What's the weather like in Boston?")],
    tools: functions.map { Tool.function($0) }
)
let result = try await openAI.chats(query: query)

Result will be (serialized as JSON here for readability):

{"id":"chatcmpl-1234","object":"chat.completion","created":1686000000,"model":"gpt-3.5-turbo-0613","choices": [    {"index":0,"message": {"role":"assistant","tool_calls": [          {"id":"call-0","type":"function","function": {"name":"get_current_weather","arguments":"{\n\"location\":\"Boston, MA\"\n}"            }          }        ]      },"finish_reason":"function_call"    }  ],"usage": {"total_tokens":100,"completion_tokens":18,"prompt_tokens":82 }}

Review Chat Documentation for more info.

Structured Output

JSON is one of the most widely used formats in the world for applications to exchange data.

Structured Outputs is a feature that ensures the model will always generate responses that adhere to your supplied JSON Schema, so you don't need to worry about the model omitting a required key, or hallucinating an invalid enum value.

Example

struct MovieInfo: StructuredOutput {
    let title: String
    let director: String
    let release: Date
    let genres: [MovieGenre]
    let cast: [String]

    static let example: Self = {
        .init(
            title: "Earth",
            director: "Alexander Dovzhenko",
            release: Calendar.current.date(from: DateComponents(year: 1930, month: 4, day: 1))!,
            genres: [.drama],
            cast: ["Stepan Shkurat", "Semyon Svashenko", "Yuliya Solntseva"]
        )
    }()
}

enum MovieGenre: String, Codable, StructuredOutputEnum {
    case action, drama, comedy, scifi

    var caseNames: [String] { Self.allCases.map { $0.rawValue } }
}

let query = ChatQuery(
    messages: [.system(.init(content: "Best Picture winner at the 2011 Oscars"))],
    model: .gpt4_o,
    responseFormat: .jsonSchema(name: "movie-info", type: MovieInfo.self)
)
let result = try await openAI.chats(query: query)
  • Use the jsonSchema(name:type:) response format when creating a ChatQuery
  • Provide a schema name and a type that conforms to ChatQuery.StructuredOutput and generates an instance as an example
  • Make sure all enum types within the provided type conform to ChatQuery.StructuredOutputEnum and generate an array of names for all cases
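
Because the response is guaranteed to match the supplied schema, the reply can be decoded straight into the provided type. A minimal sketch, assuming the message content is surfaced as a JSON String:

if let json = result.choices.first?.message.content {
    let decoder = JSONDecoder()
    // Assumption: pick a date strategy matching how `release` is encoded in your schema.
    decoder.dateDecodingStrategy = .iso8601
    let movie = try decoder.decode(MovieInfo.self, from: Data(json.utf8))
    print(movie.title, movie.director)
}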

Review Structured Output Documentation for more info.

Images

Given a prompt and/or an input image, the model will generate a new image.

As artificial intelligence continues to develop, so too does the intriguing concept of Dall-E. Developed by OpenAI, an artificial intelligence research lab, Dall-E has been classified as an AI system that can generate images based on descriptions provided by humans. With its potential applications spanning from animation and illustration to design and engineering, not to mention the endless possibilities in between, it's easy to see why there is such excitement over this new technology.

Create Image

Request

struct ImagesQuery: Codable {
    /// A text description of the desired image(s). The maximum length is 1000 characters.
    public let prompt: String
    /// The number of images to generate. Must be between 1 and 10.
    public let n: Int?
    /// The size of the generated images. Must be one of 256x256, 512x512, or 1024x1024.
    public let size: String?
}

Response

struct ImagesResult: Codable, Equatable {
    public struct URLResult: Codable, Equatable {
        public let url: String
    }
    public let created: TimeInterval
    public let data: [URLResult]
}

Example

let query = ImagesQuery(prompt: "White cat with heterochromia sitting on the kitchen table", n: 1, size: "1024x1024")
openAI.images(query: query) { result in
    // Handle result here
}
// or
let result = try await openAI.images(query: query)
(lldb) po result
▿ ImagesResult
  - created : 1671453505.0
  ▿ data : 1 element
    ▿ 0 : URLResult
      - url : "https://oaidalleapiprodscus.blob.core.windows.net/private/org-CWjU5cDIzgCcVjq10pp5yX5Q/user-GoBXgChvLBqLHdBiMJBUbPqF/img-WZVUK2dOD4HKbKwW1NeMJHBd.png?st=2022-12-19T11%3A38%3A25Z&se=2022-12-19T13%3A38%3A25Z&sp=r&sv=2021-08-06&sr=b&rscd=inline&rsct=image/png&skoid=6aaadede-4fb3-4698-a8f6-684d7786b067&sktid=a48cca56-e6da-484e-a814-9c849652bcb3&skt=2022-12-19T09%3A35%3A16Z&ske=2022-12-20T09%3A35%3A16Z&sks=b&skv=2021-08-06&sig=mh52rmtbQ8CXArv5bMaU6lhgZHFBZz/ePr4y%2BJwLKOc%3D"

Generated image

Create Image Edit

Creates an edited or extended image given an original image and a prompt.

Request

public struct ImageEditsQuery: Codable {
    /// The image to edit. Must be a valid PNG file, less than 4MB, and square. If mask is not provided, image must have transparency, which will be used as the mask.
    public let image: Data
    public let fileName: String
    /// An additional image whose fully transparent areas (e.g. where alpha is zero) indicate where image should be edited. Must be a valid PNG file, less than 4MB, and have the same dimensions as image.
    public let mask: Data?
    public let maskFileName: String?
    /// A text description of the desired image(s). The maximum length is 1000 characters.
    public let prompt: String
    /// The number of images to generate. Must be between 1 and 10.
    public let n: Int?
    /// The size of the generated images. Must be one of 256x256, 512x512, or 1024x1024.
    public let size: String?
}

Response

Uses the ImagesResult response similarly to ImagesQuery.

Example

let data = image.pngData()!
let query = ImageEditsQuery(image: data, fileName: "whitecat.png", prompt: "White cat with heterochromia sitting on the kitchen table with a bowl of food", n: 1, size: "1024x1024")
openAI.imageEdits(query: query) { result in
    // Handle result here
}
// or
let result = try await openAI.imageEdits(query: query)

Create Image Variation

Creates a variation of a given image.

Request

public struct ImageVariationsQuery: Codable {
    /// The image to use as the basis for the variation(s). Must be a valid PNG file, less than 4MB, and square.
    public let image: Data
    public let fileName: String
    /// The number of images to generate. Must be between 1 and 10.
    public let n: Int?
    /// The size of the generated images. Must be one of 256x256, 512x512, or 1024x1024.
    public let size: String?
}

Response

Uses the ImagesResult response similarly to ImagesQuery.

Example

let data = image.pngData()!
let query = ImageVariationsQuery(image: data, fileName: "whitecat.png", n: 1, size: "1024x1024")
openAI.imageVariations(query: query) { result in
    // Handle result here
}
// or
let result = try await openAI.imageVariations(query: query)

Review Images Documentation for more info.

Audio

The speech to text API provides two endpoints, transcriptions and translations, based on our state-of-the-art open source large-v2 Whisper model. They can be used to:

  • Transcribe audio into whatever language the audio is in.
  • Translate and transcribe the audio into English.

File uploads are currently limited to 25 MB and the following input file types are supported: mp3, mp4, mpeg, mpga, m4a, wav, and webm.

Audio Create Speech

This function sends an AudioSpeechQuery to the OpenAI API to create audio speech from text using a specific voice and format.

Learn more about voices.
Learn more about models.

Request:

public struct AudioSpeechQuery: Codable, Equatable {
    // ...
    public let model: Model // tts-1 or tts-1-hd
    public let input: String
    public let voice: AudioSpeechVoice
    public let responseFormat: AudioSpeechResponseFormat
    public let speed: String? // Initializes with Double?
    // ...
}

Response:

/// Audio data for one of the following formats: `mp3`, `opus`, `aac`, `flac`, `pcm`
public let audioData: Data?

Example:

let query = AudioSpeechQuery(model: .tts_1, input: "Hello, world!", voice: .alloy, responseFormat: .mp3, speed: 1.0)
openAI.audioCreateSpeech(query: query) { result in
    // Handle response here
}
// or
let result = try await openAI.audioCreateSpeech(query: query)
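
Because the result carries raw audio data in the requested format, it can be written straight to disk; a minimal sketch using the audioData property from the response shape above (the output path is illustrative):

if let audioData = result.audioData {
    let fileURL = URL(fileURLWithPath: "speech.mp3") // illustrative output path
    try audioData.write(to: fileURL)
}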

OpenAI Create Speech – Documentation

Audio Create Speech Streaming

Audio speech streaming is available by using the audioCreateSpeechStream function. Chunks of audio will be sent one-by-one.

Closures

openAI.audioCreateSpeechStream(query: query) { partialResult in
    switch partialResult {
    case .success(let result):
        print(result.audio)
    case .failure(let error):
        // Handle chunk error here
        print(error)
    }
} completion: { error in
    // Handle streaming error here
}

Combine

openAI.audioCreateSpeechStream(query: query)
    .sink { completion in
        // Handle completion result here
    } receiveValue: { result in
        // Handle chunk here
    }
    .store(in: &cancellables)

Structured concurrency

for try await result in openAI.audioCreateSpeechStream(query: query) {
    // Handle result here
}

Audio Transcriptions

Transcribes audio into the input language.

Request

public struct AudioTranscriptionQuery: Codable, Equatable {
    public let file: Data
    public let fileName: String
    public let model: Model
    public let prompt: String?
    public let temperature: Double?
    public let language: String?
}

Response

public struct AudioTranscriptionResult: Codable, Equatable {
    public let text: String
}

Example

let data = try Data(contentsOf: ...)
let query = AudioTranscriptionQuery(file: data, fileName: "audio.m4a", model: .whisper_1)

openAI.audioTranscriptions(query: query) { result in
    // Handle result here
}
// or
let result = try await openAI.audioTranscriptions(query: query)

Audio Translations

Translates audio into English.

Request

public struct AudioTranslationQuery: Codable, Equatable {
    public let file: Data
    public let fileName: String
    public let model: Model
    public let prompt: String?
    public let temperature: Double?
}

Response

public struct AudioTranslationResult: Codable, Equatable {
    public let text: String
}

Example

let data = try Data(contentsOf: ...)
let query = AudioTranslationQuery(file: data, fileName: "audio.m4a", model: .whisper_1)

openAI.audioTranslations(query: query) { result in
    // Handle result here
}
// or
let result = try await openAI.audioTranslations(query: query)

Review Audio Documentation for more info.

Embeddings

Get a vector representation of a given input that can be easily consumed by machine learning models and algorithms.

Request

struct EmbeddingsQuery: Codable {
    /// ID of the model to use.
    public let model: Model
    /// Input text to get embeddings for.
    public let input: String
}

Response

struct EmbeddingsResult: Codable, Equatable {
    public struct Embedding: Codable, Equatable {
        public let object: String
        public let embedding: [Double]
        public let index: Int
    }
    public let data: [Embedding]
    public let usage: Usage
}

Example

let query = EmbeddingsQuery(model: .textSearchBabbageDoc, input: "The food was delicious and the waiter...")
openAI.embeddings(query: query) { result in
    // Handle response here
}
// or
let result = try await openAI.embeddings(query: query)
(lldb) po result
▿ EmbeddingsResult
  ▿ data : 1 element
    ▿ 0 : Embedding
      - object : "embedding"
      ▿ embedding : 2048 elements
        - 0 : 0.0010535449
        - 1 : 0.024234328
        - 2 : -0.0084999
        - 3 : 0.008647452
        .......
        - 2044 : 0.017536353
        - 2045 : -0.005897616
        - 2046 : -0.026559394
        - 2047 : -0.016633155
      - index : 0
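
The returned vectors can be compared directly with the Vector utilities described in the Utilities section below, e.g.:

let firstVector = result.data[0].embedding
let secondVector = secondResult.data[0].embedding // a second EmbeddingsResult, shown for illustration
let similarity = Vector.cosineSimilarity(a: firstVector, b: secondVector)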

Review Embeddings Documentation for more info.

Models

Models are represented as a typealias: typealias Model = String.

public extension Model {
    static let gpt4_turbo_preview = "gpt-4-turbo-preview"
    static let gpt4_vision_preview = "gpt-4-vision-preview"
    static let gpt4_0125_preview = "gpt-4-0125-preview"
    static let gpt4_1106_preview = "gpt-4-1106-preview"
    static let gpt4 = "gpt-4"
    static let gpt4_0613 = "gpt-4-0613"
    static let gpt4_0314 = "gpt-4-0314"
    static let gpt4_32k = "gpt-4-32k"
    static let gpt4_32k_0613 = "gpt-4-32k-0613"
    static let gpt4_32k_0314 = "gpt-4-32k-0314"
    static let gpt3_5Turbo = "gpt-3.5-turbo"
    static let gpt3_5Turbo_0125 = "gpt-3.5-turbo-0125"
    static let gpt3_5Turbo_1106 = "gpt-3.5-turbo-1106"
    static let gpt3_5Turbo_0613 = "gpt-3.5-turbo-0613"
    static let gpt3_5Turbo_0301 = "gpt-3.5-turbo-0301"
    static let gpt3_5Turbo_16k = "gpt-3.5-turbo-16k"
    static let gpt3_5Turbo_16k_0613 = "gpt-3.5-turbo-16k-0613"
    static let textDavinci_003 = "text-davinci-003"
    static let textDavinci_002 = "text-davinci-002"
    static let textCurie = "text-curie-001"
    static let textBabbage = "text-babbage-001"
    static let textAda = "text-ada-001"
    static let textDavinci_001 = "text-davinci-001"
    static let codeDavinciEdit_001 = "code-davinci-edit-001"
    static let tts_1 = "tts-1"
    static let tts_1_hd = "tts-1-hd"
    static let whisper_1 = "whisper-1"
    static let dall_e_2 = "dall-e-2"
    static let dall_e_3 = "dall-e-3"
    static let davinci = "davinci"
    static let curie = "curie"
    static let babbage = "babbage"
    static let ada = "ada"
    static let textEmbeddingAda = "text-embedding-ada-002"
    static let textSearchAda = "text-search-ada-doc-001"
    static let textSearchBabbageDoc = "text-search-babbage-doc-001"
    static let textSearchBabbageQuery001 = "text-search-babbage-query-001"
    static let textEmbedding3 = "text-embedding-3-small"
    static let textEmbedding3Large = "text-embedding-3-large"
    static let textModerationStable = "text-moderation-stable"
    static let textModerationLatest = "text-moderation-latest"
    static let moderation = "text-moderation-007"
}

GPT-4 models are supported.

As an example: to use the gpt-4-turbo-preview model, pass .gpt4_turbo_preview as the parameter to the ChatQuery init.

let query = ChatQuery(model: .gpt4_turbo_preview, messages: [
    .init(role: .system, content: "You are Librarian-GPT. You know everything about the books."),
    .init(role: .user, content: "Who wrote Harry Potter?")
])
let result = try await openAI.chats(query: query)
XCTAssertFalse(result.choices.isEmpty)

You can also pass a custom string if you need to use a model that is not represented above.
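
For example, a model identifier can be passed as a plain string (the name below is illustrative):

let query = ChatQuery(model: "my-custom-model", messages: [.init(role: .user, content: "Hello")])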

List Models

Lists the currently available models.

Response

public struct ModelsResult: Codable, Equatable {
    public let data: [ModelResult]
    public let object: String
}

Example

openAI.models() { result in
    // Handle result here
}
// or
let result = try await openAI.models()

Retrieve Model

Retrieves a model instance, providing ownership information.

Request

public struct ModelQuery: Codable, Equatable {
    public let model: Model
}

Response

public struct ModelResult: Codable, Equatable {
    public let id: Model
    public let object: String
    public let ownedBy: String
}

Example

let query = ModelQuery(model: .gpt4)
openAI.model(query: query) { result in
    // Handle result here
}
// or
let result = try await openAI.model(query: query)

Review Models Documentation for more info.

Moderations

Given an input text, outputs whether the model classifies it as violating OpenAI's content policy.

Request

public struct ModerationsQuery: Codable {
    public let input: String
    public let model: Model?
}

Response

public struct ModerationsResult: Codable, Equatable {
    public let id: String
    public let model: Model
    public let results: [CategoryResult]
}

Example

let query = ModerationsQuery(input: "I want to kill them.")
openAI.moderations(query: query) { result in
    // Handle result here
}
// or
let result = try await openAI.moderations(query: query)

Review Moderations Documentation for more info.

Utilities

The component comes with several handy utility functions to work with vectors.

public struct Vector {
    /// Returns the similarity between two vectors.
    ///
    /// - Parameters:
    ///     - a: The first vector
    ///     - b: The second vector
    public static func cosineSimilarity(a: [Double], b: [Double]) -> Double {
        return dot(a, b) / (mag(a) * mag(b))
    }

    /// Returns the difference between two vectors. Cosine distance is defined as `1 - cosineSimilarity(a, b)`.
    ///
    /// - Parameters:
    ///     - a: The first vector
    ///     - b: The second vector
    public func cosineDifference(a: [Double], b: [Double]) -> Double {
        return 1 - Self.cosineSimilarity(a: a, b: b)
    }
}

Example

let vector1 = [0.213123, 0.3214124, 0.421412, 0.3214521251, 0.412412, 0.3214124, 0.1414124, 0.3214521251, 0.213123, 0.3214124, 0.1414124, 0.4214214, 0.213123, 0.3214124, 0.1414124, 0.3214521251, 0.213123, 0.3214124, 0.1414124, 0.3214521251]
let vector2 = [0.213123, 0.3214124, 0.1414124, 0.3214521251, 0.213123, 0.3214124, 0.1414124, 0.3214521251, 0.213123, 0.511515, 0.1414124, 0.3214521251, 0.213123, 0.3214124, 0.1414124, 0.3214521251, 0.213123, 0.3214124, 0.1414124, 0.3213213]
let similarity = Vector.cosineSimilarity(a: vector1, b: vector2)
print(similarity) // 0.9510201910206734

In data analysis, cosine similarity is a measure of similarity between two sequences of numbers.


Read more about Cosine Similarity here.

Combine Extensions

The library contains built-in Combine extensions.

func images(query: ImagesQuery) -> AnyPublisher<ImagesResult, Error>
func embeddings(query: EmbeddingsQuery) -> AnyPublisher<EmbeddingsResult, Error>
func chats(query: ChatQuery) -> AnyPublisher<ChatResult, Error>
func model(query: ModelQuery) -> AnyPublisher<ModelResult, Error>
func models() -> AnyPublisher<ModelsResult, Error>
func moderations(query: ModerationsQuery) -> AnyPublisher<ModerationsResult, Error>
func audioTranscriptions(query: AudioTranscriptionQuery) -> AnyPublisher<AudioTranscriptionResult, Error>
func audioTranslations(query: AudioTranslationQuery) -> AnyPublisher<AudioTranslationResult, Error>
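
These publishers compose like any other Combine publisher; a minimal sketch using the chats publisher:

import Combine

var cancellables = Set<AnyCancellable>()

openAI.chats(query: query)
    .sink(receiveCompletion: { completion in
        // Handle completion or failure here
    }, receiveValue: { chatResult in
        // Handle the ChatResult here
    })
    .store(in: &cancellables)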

Assistants

Review Assistants Documentation for more info.

Create Assistant

Example: Create Assistant

let query = AssistantsQuery(model: Model.gpt4_o_mini, name: name, description: description, instructions: instructions, tools: tools, toolResources: toolResources)
openAI.assistantCreate(query: query) { result in
    // Handle response here
}

Modify Assistant

Example: Modify Assistant

let query = AssistantsQuery(model: Model.gpt4_o_mini, name: name, description: description, instructions: instructions, tools: tools, toolResources: toolResources)
openAI.assistantModify(query: query, assistantId: "asst_1234") { result in
    // Handle response here
}

List Assistants

Example: List Assistants

openAI.assistants() { result in
    // Handle response here
}

Threads

Review Threads Documentation for more info.

Create Thread

Example: Create Thread

let threadsQuery = ThreadsQuery(messages: [Chat(role: message.role, content: message.content)])
openAI.threads(query: threadsQuery) { result in
    // Handle response here
}

Create and Run Thread

Example: Create and Run Thread

let threadsQuery = ThreadsQuery(messages: [Chat(role: message.role, content: message.content)])
let threadRunQuery = ThreadRunQuery(assistantId: "asst_1234", thread: threadsQuery)
openAI.threadRun(query: threadRunQuery) { result in
    // Handle response here
}

Get Threads Messages

Review Messages Documentation for more info.

Example: Get Threads Messages

openAI.threadsMessages(threadId: currentThreadId) { result in
    // Handle response here
}

Add Message to Thread

Example: Add Message to Thread

let query = MessageQuery(role: message.role.rawValue, content: message.content)
openAI.threadsAddMessage(threadId: currentThreadId, query: query) { result in
    // Handle response here
}

Runs

Review Runs Documentation for more info.

Create Run

Example: Create Run

let runsQuery = RunsQuery(assistantId: currentAssistantId)
openAI.runs(threadId: threadsResult.id, query: runsQuery) { result in
    // Handle response here
}

Retrieve Run

Example: Retrieve Run

openAI.runRetrieve(threadId: currentThreadId, runId: currentRunId) { result in
    // Handle response here
}

Retrieve Run Steps

Example: Retrieve Run Steps

openAI.runRetrieveSteps(threadId: currentThreadId, runId: currentRunId) { result in
    // Handle response here
}

Submit Tool Outputs for Run

Example: Submit Tool Outputs for Run

let output = RunToolOutputsQuery.ToolOutput(toolCallId: "call123", output: "Success")
let query = RunToolOutputsQuery(toolOutputs: [output])
openAI.runSubmitToolOutputs(threadId: currentThreadId, runId: currentRunId, query: query) { result in
    // Handle response here
}

Files

Review Files Documentation for more info.

Upload file

Example: Upload file

let query = FilesQuery(purpose: "assistants", file: fileData, fileName: url.lastPathComponent, contentType: "application/pdf")
openAI.files(query: query) { result in
    // Handle response here
}

Cancelling requests

Closure based API

When you call any of the closure-based API methods, it returns a discardable CancellableRequest. Hold a reference to it to be able to cancel the request later.

let cancellableRequest = object.chats(query: query, completion: { _ in })
// Cancel later when needed:
cancellableRequest.cancelRequest()

Swift Concurrency

For Swift Concurrency calls, you can simply cancel the calling task, and the corresponding URLSessionDataTask will be cancelled automatically.

let task = Task {
    do {
        let chatResult = try await openAIClient.chats(query: .init(messages: [], model: "asd"))
    } catch {
        // Handle cancellation or error
    }
}

task.cancel()

Combine

In Combine, use the default cancellation mechanism: just discard the reference to a subscription, or call cancel() on it.

let subscription = openAIClient.images(query: query)
    .sink(receiveCompletion: { completion in }, receiveValue: { imagesResult in })

subscription.cancel()

Support for other providers

This SDK has limited support for other providers like Gemini, Perplexity, etc.

The top priority of this SDK is OpenAI, and the main rule is that all the main types must be fully compatible with OpenAI's API Reference. If it says a field should be optional, it must be optional in the main subset of Query/Result types of this SDK. The same goes for other info declared in the reference, like default values.

That said, we still want to support other providers. For the time being, we'll cover such requests case by case.

Perplexity - Chat Completions Response

citations: added to ChatResult as an optional field to enable parsing of Perplexity responses.
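
Since the field is optional, parsing a Perplexity response is a simple optional unwrap; a sketch, assuming result is a ChatResult obtained from a Perplexity-backed request:

// `citations` is nil for providers that don't return it.
if let citations = result.citations {
    print(citations)
}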

Example Project

You can find an example iOS application in the Demo folder.


Contribution Guidelines

Make your Pull Requests clear and obvious to anyone viewing them.
Set main as your target branch.

Use Conventional Commits principles in naming PRs and branches:

  • Feat: ... for new features and new functionality implementations.
  • Bug: ... for bug fixes.
  • Fix: ... for fixing minor issues, like typos or inaccuracies in code.
  • Chore: ... for routine work like code polishing, refactoring, fixing deprecations, etc.

PR naming example: Feat: Add Threads API handling or Bug: Fix message result duplication

Branch naming example: feat/add-threads-API-handling or bug/fix-message-result-duplication

Write pull request descriptions in the following format:

  • What

    ...

  • Why

    ...

  • Affected Areas

    ...

  • More Info

    ...

We'd appreciate it if you include tests with your code where needed and possible. ❤️

Links

License

MIT License

Copyright (c) 2023 MacPaw Inc.

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
