MacPaw/OpenAI

Swift community-driven package for the OpenAI public API

This repository contains a community-maintained Swift implementation of the OpenAI public API.

Documentation

This library implements its types and methods in close accordance with the REST API documentation, which can be found on platform.openai.com.

Installation

Swift Package Manager

To integrate OpenAI into your Xcode project using Swift Package Manager:

  1. In Xcode, go to File > Add Package Dependencies...
  2. Enter the repository URL: https://github.com/MacPaw/OpenAI.git
  3. Choose your desired dependency rule (e.g., "Up to Next Major Version").

Alternatively, you can add it directly to your Package.swift file:

dependencies: [
    .package(url: "https://github.com/MacPaw/OpenAI.git", branch: "main")
]
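If you go the Package.swift route, remember to also list the product in your target. A minimal sketch of a complete manifest, assuming the package's product is named OpenAI (the name you import in code):

// swift-tools-version:5.9
// Minimal Package.swift sketch; the product name "OpenAI" is assumed here.
import PackageDescription

let package = Package(
    name: "MyApp",
    dependencies: [
        .package(url: "https://github.com/MacPaw/OpenAI.git", branch: "main")
    ],
    targets: [
        .executableTarget(
            name: "MyApp",
            dependencies: [.product(name: "OpenAI", package: "OpenAI")]
        )
    ]
)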

Usage

Initialization

To initialize an API instance, you need to obtain an API token from your OpenAI organization.

Remember that your API key is a secret! Do not share it with others or expose it in any client-side code (browsers, apps). Production requests must be routed through your own backend server where your API key can be securely loaded from an environment variable or key management service.
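For local development and tests, one way to avoid hardcoding the key is to read it from the process environment. The variable name OPENAI_API_KEY below is only a convention, not something this SDK requires:

import Foundation

// Read the API key from an environment variable (the name is a convention, not an SDK requirement).
// Suitable for local development; shipping apps should proxy through a backend instead.
guard let apiToken = ProcessInfo.processInfo.environment["OPENAI_API_KEY"] else {
    fatalError("Set OPENAI_API_KEY in the environment before running.")
}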


Once you have a token, you can initialize the OpenAI class, which is the entry point to the API.

⚠️ OpenAI strongly recommends developers of client-side applications proxy requests through a separate backend service to keep their API key safe. API keys can access and manipulate customer billing, usage, and organizational data, so it's a significant risk to expose them.

let openAI = OpenAI(apiToken: "YOUR_TOKEN_HERE")

Optionally, you can initialize OpenAI with a token, organization identifier, and timeoutInterval.

let configuration = OpenAI.Configuration(token: "YOUR_TOKEN_HERE", organizationIdentifier: "YOUR_ORGANIZATION_ID_HERE", timeoutInterval: 60.0)
let openAI = OpenAI(configuration: configuration)

See OpenAI.Configuration for more values that can be passed on init for customization, like host, basePath, port, scheme and customHeaders.
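As an illustration, a configuration pointed at an OpenAI-compatible server on a custom host might look like the sketch below. The host and port values are placeholders, and the parameter names are taken from the list above; check OpenAI.Configuration for the exact initializer:

// Hypothetical values for a self-hosted OpenAI-compatible endpoint.
let configuration = OpenAI.Configuration(
    token: "YOUR_TOKEN_HERE",
    host: "llm.example.com", // placeholder host
    port: 443,
    scheme: "https"
)
let openAI = OpenAI(configuration: configuration)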

Once you possess the token and the instance is initialized, you are ready to make requests.

Using the SDK with providers other than OpenAI

This SDK focuses primarily on the OpenAI Platform, but it also works with other providers that support an OpenAI-compatible API.

Use the .relaxed parsing option on Configuration, or see the Support for other providers section below for more details.

Cancelling requests

For Swift Concurrency calls, you can simply cancel the calling task, and the corresponding underlying URLSessionDataTask will be cancelled automatically.

let task = Task {
    do {
        let chatResult = try await openAIClient.chats(query: .init(messages: [], model: "asd"))
    } catch {
        // Handle cancellation or error
    }
}

task.cancel()
Cancelling closure-based API calls

When you call any of the closure-based API methods, it returns a discardable CancellableRequest. Hold a reference to it to be able to cancel the request later.

let cancellableRequest = object.chats(query: query, completion: { _ in })
cancellableRequest.cancelRequest()
Cancelling Combine subscriptions

In Combine, use the default cancellation mechanism: just discard the reference to a subscription, or call cancel() on it.
let subscription = openAIClient.images(query: query)
    .sink(receiveCompletion: { completion in }, receiveValue: { imagesResult in })

subscription.cancel()

Text and prompting

Responses

Use the responses variable on OpenAIProtocol to call Responses API methods.

public protocol OpenAIProtocol {
    // ...
    var responses: ResponsesEndpointProtocol { get }
    // ...
}

Specify params by passing CreateModelResponseQuery to a method. Get a ResponseObject or a stream of ResponseStreamEvent events in response.

Example: Generate text from a simple prompt

let client: OpenAIProtocol = /* client initialization code */

let query = CreateModelResponseQuery(
    input: .textInput("Write a one-sentence bedtime story about a unicorn."),
    model: .gpt4_1
)
let response: ResponseObject = try await client.responses.createResponse(query: query)
// ...
print(response)
ResponseObject(
  createdAt: 1752146109,
  error: nil,
  id: "resp_686fa0bd8f588198affbbf5a8089e2d208a5f6e2111e31f5",
  incompleteDetails: nil,
  instructions: nil,
  maxOutputTokens: nil,
  metadata: [:],
  model: "gpt-4.1-2025-04-14",
  object: "response",
  output: [
    OpenAI.OutputItem.outputMessage(
      OpenAI.Components.Schemas.OutputMessage(
        id: "msg_686fa0bee24881988a4d1588d7f65c0408a5f6e2111e31f5",
        _type: OpenAI.Components.Schemas.OutputMessage._TypePayload.message,
        role: OpenAI.Components.Schemas.OutputMessage.RolePayload.assistant,
        content: [
          OpenAI.Components.Schemas.OutputContent.OutputTextContent(
            OpenAI.Components.Schemas.OutputTextContent(
              _type: OpenAI.Components.Schemas.OutputTextContent._TypePayload.outputText,
              text: "Under a sky full of twinkling stars, a gentle unicorn named Luna danced through fields of stardust, spreading sweet dreams to every sleeping child.",
              annotations: [],
              logprobs: Optional([])
            )
          )
        ],
        status: OpenAI.Components.Schemas.OutputMessage.StatusPayload.completed
      )
    )
  ],
  parallelToolCalls: true,
  previousResponseId: nil,
  reasoning: Optional(
    OpenAI.Components.Schemas.Reasoning(effort: nil, summary: nil, generateSummary: nil)
  ),
  status: "completed",
  temperature: Optional(1.0),
  text: OpenAI.Components.Schemas.ResponseProperties.TextPayload(
    format: Optional(
      OpenAI.Components.Schemas.TextResponseFormatConfiguration.ResponseFormatText(
        OpenAI.Components.Schemas.ResponseFormatText(
          _type: OpenAI.Components.Schemas.ResponseFormatText._TypePayload.text
        )
      )
    )
  ),
  toolChoice: OpenAI.Components.Schemas.ResponseProperties.ToolChoicePayload.ToolChoiceOptions(
    OpenAI.Components.Schemas.ToolChoiceOptions.auto
  ),
  tools: [],
  topP: Optional(1.0),
  truncation: Optional("disabled"),
  usage: Optional(
    OpenAI.Components.Schemas.ResponseUsage(
      inputTokens: 18,
      inputTokensDetails: OpenAI.Components.Schemas.ResponseUsage.InputTokensDetailsPayload(cachedTokens: 0),
      outputTokens: 32,
      outputTokensDetails: OpenAI.Components.Schemas.ResponseUsage.OutputTokensDetailsPayload(reasoningTokens: 0),
      totalTokens: 50
    )
  ),
  user: nil
)

An array of content generated by the model is in the output property of the response.

[!NOTE] The output array often has more than one item in it! It can contain tool calls, data about reasoning tokens generated by reasoning models, and other items. It is not safe to assume that the model's text output is present at output[0].content[0].text.

Because of the note above, to safely and fully read the response, we need to switch over both the output items and their contents, like this:

// ...
for output in response.output {
    switch output {
    case .outputMessage(let outputMessage):
        for content in outputMessage.content {
            switch content {
            case .OutputTextContent(let textContent):
                print(textContent.text)
            case .RefusalContent(let refusalContent):
                print(refusalContent.refusal)
            }
        }
    default:
        // Unhandled output items. Handle or throw an error.
        break
    }
}

Chat Completions

Use ChatQuery with the func chats(query:) and func chatsStream(query:) methods on OpenAIProtocol to generate text using the Chat Completions API. Get a ChatResult or ChatStreamResult in response.

Example: Generate text from a simple prompt

let query = ChatQuery(
    messages: [.user(.init(content: .string("Who are you?")))],
    model: .gpt4_o
)
let result = try await openAI.chats(query: query)
print(result.choices.first?.message.content ?? "")
// printed to console:
// I'm an AI language model created by OpenAI, designed to assist with a wide range of questions and tasks. How can I help you today?
(lldb) po result
▿ ChatResult
  - id : "chatcmpl-BgWJTzbVczdJDusTqVpnR6AQ2w6Fd"
  - created : 1749473687
  - model : "gpt-4o-2024-08-06"
  - object : "chat.completion"
  ▿ serviceTier : Optional<ServiceTier>
    - some : OpenAI.ServiceTier.defaultTier
  ▿ systemFingerprint : Optional<String>
    - some : "fp_07871e2ad8"
  ▿ choices : 1 element
    ▿ 0 : Choice
      - index : 0
      - logprobs : nil
      ▿ message : Message
        ▿ content : Optional<String>
          - some : "I am an AI language model created by OpenAI, known as ChatGPT. I'm here to assist with answering questions, providing explanations, and engaging in conversation on a wide range of topics. If you have any questions or need assistance, feel free to ask!"
        - refusal : nil
        - role : "assistant"
        ▿ annotations : Optional<Array<Annotation>>
          - some : 0 elements
        - audio : nil
        - toolCalls : nil
        - _reasoning : nil
        - _reasoningContent : nil
      - finishReason : "stop"
  ▿ usage : Optional<CompletionUsage>
    ▿ some : CompletionUsage
      - completionTokens : 52
      - promptTokens : 11
      - totalTokens : 63
      ▿ promptTokensDetails : Optional<PromptTokensDetails>
        ▿ some : PromptTokensDetails
          - audioTokens : 0
          - cachedTokens : 0
  - citations : nil
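The same prompt can also be streamed with the chatsStream method mentioned above. A minimal sketch that simply prints each partial result as it arrives (the exact shape of the delta payload may vary between SDK versions, so inspect it in your version):

let streamQuery = ChatQuery(
    messages: [.user(.init(content: .string("Tell me a short story.")))],
    model: .gpt4_o
)

// chatsStream yields partial ChatStreamResult values as the model generates tokens.
for try await partialResult in openAI.chatsStream(query: streamQuery) {
    // Inspect choices[].delta for the newly generated content.
    print(partialResult.choices.first?.delta as Any)
}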

Function calling

See OpenAI Platform Guide: Function calling for more details.

Chat Completions API Examples

Function calling with get_weather function

let openAI = OpenAI(apiToken: "...")

// Declare functions which the model might decide to call.
let functions = [
    ChatQuery.ChatCompletionToolParam.FunctionDefinition(
        name: "get_weather",
        description: "Get current temperature for a given location.",
        parameters: .init(fields: [
            .type(.object),
            .properties([
                "location": .init(fields: [
                    .type(.string),
                    .description("City and country e.g. Bogotá, Colombia")
                ])
            ]),
            .required(["location"]),
            .additionalProperties(.boolean(false))
        ])
    )
]
let query = ChatQuery(
    messages: [.user(.init(content: .string("What is the weather like in Paris today?")))],
    model: .gpt4_1,
    tools: functions.map { .init(function: $0) }
)
let result = try await openAI.chats(query: query)
print(result.choices[0].message.toolCalls)

Result will be (serialized as JSON here for readability):

{"id":"chatcmpl-1234","object":"chat.completion","created":1686000000,"model":"gpt-3.5-turbo-0613","choices": [    {"index":0,"message": {"role":"assistant","tool_calls": [          {"id":"call-0","type":"function","function": {"name":"get_current_weather","arguments":"{\n\"location\":\"Boston, MA\"\n}"            }          }        ]      },"finish_reason":"function_call"    }  ],"usage": {"total_tokens":100,"completion_tokens":18,"prompt_tokens":82 }}

Images

Given a prompt and/or an input image, the model will generate a new image.

As artificial intelligence continues to develop, so too does the intriguing concept of DALL·E. Developed by OpenAI, an AI research lab, DALL·E is an AI system that can generate images based on descriptions provided by humans. With potential applications spanning animation, illustration, design, and engineering, it's easy to see why there is such excitement over this technology.

Create Image

Request

struct ImagesQuery: Codable {
    /// A text description of the desired image(s). The maximum length is 1000 characters.
    public let prompt: String
    /// The number of images to generate. Must be between 1 and 10.
    public let n: Int?
    /// The size of the generated images. Must be one of 256x256, 512x512, or 1024x1024.
    public let size: String?
}

Response

struct ImagesResult: Codable, Equatable {
    public struct URLResult: Codable, Equatable {
        public let url: String
    }
    public let created: TimeInterval
    public let data: [URLResult]
}

Example

let query = ImagesQuery(prompt: "White cat with heterochromia sitting on the kitchen table", n: 1, size: "1024x1024")
openAI.images(query: query) { result in
    // Handle result here
}
// or
let result = try await openAI.images(query: query)
(lldb) po result
▿ ImagesResult
  - created : 1671453505.0
  ▿ data : 1 element
    ▿ 0 : URLResult
      - url : "https://oaidalleapiprodscus.blob.core.windows.net/private/org-CWjU5cDIzgCcVjq10pp5yX5Q/user-GoBXgChvLBqLHdBiMJBUbPqF/img-WZVUK2dOD4HKbKwW1NeMJHBd.png?st=2022-12-19T11%3A38%3A25Z&se=2022-12-19T13%3A38%3A25Z&sp=r&sv=2021-08-06&sr=b&rscd=inline&rsct=image/png&skoid=6aaadede-4fb3-4698-a8f6-684d7786b067&sktid=a48cca56-e6da-484e-a814-9c849652bcb3&skt=2022-12-19T09%3A35%3A16Z&ske=2022-12-20T09%3A35%3A16Z&sks=b&skv=2021-08-06&sig=mh52rmtbQ8CXArv5bMaU6lhgZHFBZz/ePr4y%2BJwLKOc%3D"

Generated image

Create Image Edit

Creates an edited or extended image given an original image and a prompt.

Request

public struct ImageEditsQuery: Codable {
    /// The image to edit. Must be a valid PNG file, less than 4MB, and square. If mask is not provided, image must have transparency, which will be used as the mask.
    public let image: Data
    public let fileName: String
    /// An additional image whose fully transparent areas (e.g. where alpha is zero) indicate where image should be edited. Must be a valid PNG file, less than 4MB, and have the same dimensions as image.
    public let mask: Data?
    public let maskFileName: String?
    /// A text description of the desired image(s). The maximum length is 1000 characters.
    public let prompt: String
    /// The number of images to generate. Must be between 1 and 10.
    public let n: Int?
    /// The size of the generated images. Must be one of 256x256, 512x512, or 1024x1024.
    public let size: String?
}

Response

Uses the ImagesResult response similarly to ImagesQuery.

Example

let data = image.pngData()
let query = ImageEditsQuery(image: data, fileName: "whitecat.png", prompt: "White cat with heterochromia sitting on the kitchen table with a bowl of food", n: 1, size: "1024x1024")
openAI.imageEdits(query: query) { result in
    // Handle result here
}
// or
let result = try await openAI.imageEdits(query: query)

Create Image Variation

Creates a variation of a given image.

Request

public struct ImageVariationsQuery: Codable {
    /// The image to use as the basis for the variation(s). Must be a valid PNG file, less than 4MB, and square.
    public let image: Data
    public let fileName: String
    /// The number of images to generate. Must be between 1 and 10.
    public let n: Int?
    /// The size of the generated images. Must be one of 256x256, 512x512, or 1024x1024.
    public let size: String?
}

Response

Uses the ImagesResult response similarly to ImagesQuery.

Example

let data = image.pngData()
let query = ImageVariationsQuery(image: data, fileName: "whitecat.png", n: 1, size: "1024x1024")
openAI.imageVariations(query: query) { result in
    // Handle result here
}
// or
let result = try await openAI.imageVariations(query: query)

Review Images Documentation for more info.

Audio

The speech-to-text API provides two endpoints, transcriptions and translations, based on OpenAI's state-of-the-art open-source large-v2 Whisper model. They can be used to:

  • Transcribe audio into whatever language the audio is in.
  • Translate and transcribe the audio into English.

File uploads are currently limited to 25 MB and the following input file types are supported: mp3, mp4, mpeg, mpga, m4a, wav, and webm.

Audio Create Speech

This function sends an AudioSpeechQuery to the OpenAI API to create audio speech from text using a specific voice and format.

Learn more about voices.
Learn more about models.

Request:

public struct AudioSpeechQuery: Codable, Equatable {
    //...
    public let model: Model // tts-1 or tts-1-hd
    public let input: String
    public let voice: AudioSpeechVoice
    public let responseFormat: AudioSpeechResponseFormat
    public let speed: String? // Initializes with Double?
    //...
}

Response:

/// Audio data for one of the following formats: `mp3`, `opus`, `aac`, `flac`, `pcm`
public let audioData: Data?

Example:

let query = AudioSpeechQuery(model: .tts_1, input: "Hello, world!", voice: .alloy, responseFormat: .mp3, speed: 1.0)
openAI.audioCreateSpeech(query: query) { result in
    // Handle response here
}
// or
let result = try await openAI.audioCreateSpeech(query: query)

OpenAI Create Speech – Documentation

Audio Create Speech Streaming

Audio Create Speech is available in streaming form via the audioCreateSpeechStream function. Audio chunks will be sent one by one.

Closures

openAI.audioCreateSpeechStream(query: query) { partialResult in
    switch partialResult {
    case .success(let result):
        print(result.audio)
    case .failure(let error):
        // Handle chunk error here
    }
} completion: { error in
    // Handle streaming error here
}

Combine

openAI.audioCreateSpeechStream(query: query)
    .sink { completion in
        // Handle completion result here
    } receiveValue: { result in
        // Handle chunk here
    }
    .store(in: &cancellables)

Structured concurrency

for try await result in openAI.audioCreateSpeechStream(query: query) {
    // Handle result here
}

Audio Transcriptions

Transcribes audio into the input language.

Request

public struct AudioTranscriptionQuery: Codable, Equatable {
    public let file: Data
    public let fileName: String
    public let model: Model
    public let prompt: String?
    public let temperature: Double?
    public let language: String?
}

Response

public struct AudioTranscriptionResult: Codable, Equatable {
    public let text: String
}

Example

let data = try Data(contentsOf: ...)
let query = AudioTranscriptionQuery(file: data, fileName: "audio.m4a", model: .whisper_1)

openAI.audioTranscriptions(query: query) { result in
    // Handle result here
}
// or
let result = try await openAI.audioTranscriptions(query: query)

Audio Translations

Translates audio into English.

Request

public struct AudioTranslationQuery: Codable, Equatable {
    public let file: Data
    public let fileName: String
    public let model: Model
    public let prompt: String?
    public let temperature: Double?
}

Response

public struct AudioTranslationResult: Codable, Equatable {
    public let text: String
}

Example

let data = try Data(contentsOf: ...)
let query = AudioTranslationQuery(file: data, fileName: "audio.m4a", model: .whisper_1)

openAI.audioTranslations(query: query) { result in
    // Handle result here
}
// or
let result = try await openAI.audioTranslations(query: query)

Review Audio Documentation for more info.

Structured Outputs

[!NOTE] This section focuses on non-function-calling use cases in the Responses and Chat Completions APIs. To learn more about how to use Structured Outputs with function calling, check out the Function calling section.

To configure structured outputs, define a JSON Schema and pass it to a query.

This SDK supports multiple ways to define a schema; choose the one you prefer.

JSONSchemaDefinition.jsonSchema

Build a schema by specifying fields

This definition accepts a JSONSchema, which is either a boolean or an object JSON document.

Instead of providing the schema yourself, you can build one in a type-safe manner using initializers that accept [JSONSchemaField], as shown in the example below.

While this method of defining a schema is direct, it can be verbose. For alternative ways to define a schema, see the options below.

Example

let query = CreateModelResponseQuery(
    input: .textInput("Return structured output"),
    model: .gpt4_o,
    text: .jsonSchema(.init(
        name: "research_paper_extraction",
        schema: .jsonSchema(.init(
            .type(.object),
            .properties([
                "title": Schema.buildBlock(.type(.string)),
                "authors": .init(.type(.array), .items(.init(.type(.string)))),
                "abstract": .init(.type(.string)),
                "keywords": .init(.type(.array), .items(.init(.type(.string))))
            ]),
            .required(["title", "authors", "abstract", "keywords"]),
            .additionalProperties(.boolean(false))
        )),
        description: "desc",
        strict: false
    ))
)

let response = try await openAIClient.responses.createResponse(query: query)

for output in response.output {
    switch output {
    case .outputMessage(let message):
        for content in message.content {
            switch content {
            case .OutputTextContent(let textContent):
                print("json output structured by the schema:", textContent.text)
            case .RefusalContent(let refusal):
                // Handle refusal
                break
            }
        }
    default:
        // Handle other OutputItems
        break
    }
}
JSONSchemaDefinition.derivedJsonSchema

Implement a type that describes a schema

Define schemas in Pydantic or Zod fashion.

  • Use the derivedJsonSchema(_ type:) response format when creating a ChatQuery or CreateModelResponseQuery
  • Provide a type that conforms to JSONSchemaConvertible and generates an instance as an example
  • Make sure all enum types within the provided type conform to JSONSchemaEnumConvertible and generate an array of names for all cases

Example

struct MovieInfo: JSONSchemaConvertible {
    let title: String
    let director: String
    let release: Date
    let genres: [MovieGenre]
    let cast: [String]

    static let example: Self = {
        .init(
            title: "Earth",
            director: "Alexander Dovzhenko",
            release: Calendar.current.date(from: DateComponents(year: 1930, month: 4, day: 1))!,
            genres: [.drama],
            cast: ["Stepan Shkurat", "Semyon Svashenko", "Yuliya Solntseva"]
        )
    }()
}

enum MovieGenre: String, Codable, JSONSchemaEnumConvertible {
    case action, drama, comedy, scifi

    var caseNames: [String] { Self.allCases.map { $0.rawValue } }
}

let query = ChatQuery(
    messages: [.system(.init(content: "Best Picture winner at the 2011 Oscars"))],
    model: .gpt4_o,
    responseFormat: .jsonSchema(.derivedJsonSchema(name: "movie-info", type: MovieInfo.self))
)
let result = try await openAI.chats(query: query)
JSONSchemaDefinition.dynamicJsonSchema

Define a schema with an instance of any type that conforms to Encodable

Define your JSON schema using simple Dictionaries, or specify the JSON schema with a library like https://github.com/kevinhermawan/swift-json-schema.

Example

struct AnyEncodable: Encodable {
    private let _encode: (Encoder) throws -> Void

    public init<T: Encodable>(_ wrapped: T) {
        _encode = wrapped.encode
    }

    func encode(to encoder: Encoder) throws {
        try _encode(encoder)
    }
}

let schema = [
    "type": AnyEncodable("object"),
    "properties": AnyEncodable([
        "title": AnyEncodable(["type": "string"]),
        "director": AnyEncodable(["type": "string"]),
        "release": AnyEncodable(["type": "string"]),
        "genres": AnyEncodable([
            "type": AnyEncodable("array"),
            "items": AnyEncodable([
                "type": AnyEncodable("string"),
                "enum": AnyEncodable(["action", "drama", "comedy", "scifi"])
            ])
        ]),
        "cast": AnyEncodable([
            "type": AnyEncodable("array"),
            "items": AnyEncodable(["type": "string"])
        ])
    ]),
    "additionalProperties": AnyEncodable(false)
]

let query = ChatQuery(
    messages: [.system(.init(content: .textContent("Return a structured response.")))],
    model: .gpt4_o,
    responseFormat: .jsonSchema(.init(name: "movie-info", schema: .dynamicJsonSchema(schema)))
)
let result = try await openAI.chats(query: query)

Review Structured Output Documentation for more info.

Tools

Remote MCP (Model Context Protocol)

The Model Context Protocol (MCP) enables AI models to securely connect to external data sources and tools through standardized server connections. This OpenAI Swift library supports MCP integration, allowing you to extend model capabilities with remote tools and services.

You can use the MCP Swift library to connect to MCP servers and discover available tools, then integrate those tools with OpenAI's chat completions.

MCP Tool Integration

Request

// Create an MCP tool for connecting to a remote server
let mcpTool = Tool.mcpTool(
    .init(
        _type: .mcp,
        serverLabel: "GitHub_MCP_Server",
        serverUrl: "https://api.githubcopilot.com/mcp/",
        headers: .init(additionalProperties: ["Authorization": "Bearer YOUR_TOKEN_HERE"]),
        allowedTools: .case1(["search_repositories", "get_file_contents"]),
        requireApproval: .case2(.always)
    )
)

let query = ChatQuery(
    messages: [.user(.init(content: .string("Search for Swift repositories on GitHub")))],
    model: .gpt4_o,
    tools: [mcpTool]
)

MCP Tool Properties

  • serverLabel: A unique identifier for the MCP server
  • serverUrl: The URL endpoint of the MCP server
  • headers: Authentication headers and other HTTP headers required by the server
  • allowedTools: Specific tools to enable from the server (optional; if not specified, all tools are available)
  • requireApproval: Whether tool calls require user approval (.always, .never, or conditional)

Example with MCP Swift Library

import MCP
import OpenAI

// Connect to MCP server using the MCP Swift library
let mcpClient = MCP.Client(name: "MyApp", version: "1.0.0")
let transport = HTTPClientTransport(
    endpoint: URL(string: "https://api.githubcopilot.com/mcp/")!,
    configuration: URLSessionConfiguration.default
)
let result = try await mcpClient.connect(transport: transport)
let toolsResponse = try await mcpClient.listTools()

// Create OpenAI MCP tool with discovered tools
let enabledToolNames = toolsResponse.tools.map { $0.name }
let mcpTool = Tool.mcpTool(
    .init(
        _type: .mcp,
        serverLabel: "GitHub_MCP_Server",
        serverUrl: "https://api.githubcopilot.com/mcp/",
        headers: .init(additionalProperties: authHeaders),
        allowedTools: .case1(enabledToolNames),
        requireApproval: .case2(.always)
    )
)

// Use in chat completion
let query = ChatQuery(
    messages: [.user(.init(content: .string("Help me search GitHub repositories")))],
    model: .gpt4_o,
    tools: [mcpTool]
)
let chatResult = try await openAI.chats(query: query)

MCP Tool Call Handling

When using MCP tools, the model may generate tool calls that are executed on the remote MCP server. Handle MCP-specific output items in your response processing:

// Handle MCP tool calls in streaming responses
for try await result in openAI.chatsStream(query: query) {
    for choice in result.choices {
        if let outputItem = choice.delta.content {
            switch outputItem {
            case .mcpToolCall(let mcpCall):
                print("MCP tool call: \(mcpCall.name)")
                if let output = mcpCall.output {
                    print("Result: \(output)")
                }
            case .mcpApprovalRequest(let approvalRequest):
                // Handle approval request if requireApproval is enabled
                print("MCP tool requires approval: \(approvalRequest)")
            default:
                // Handle other output types
                break
            }
        }
    }
}

Specialized models

Embeddings

Get a vector representation of a given input that can be easily consumed by machine learning models and algorithms.

Request

struct EmbeddingsQuery: Codable {
    /// ID of the model to use.
    public let model: Model
    /// Input text to get embeddings for
    public let input: String
}

Response

struct EmbeddingsResult: Codable, Equatable {
    public struct Embedding: Codable, Equatable {
        public let object: String
        public let embedding: [Double]
        public let index: Int
    }
    public let data: [Embedding]
    public let usage: Usage
}

Example

let query = EmbeddingsQuery(model: .textSearchBabbageDoc, input: "The food was delicious and the waiter...")
openAI.embeddings(query: query) { result in
    // Handle response here
}
// or
let result = try await openAI.embeddings(query: query)
(lldb) po result
▿ EmbeddingsResult
  ▿ data : 1 element
    ▿ 0 : Embedding
      - object : "embedding"
      ▿ embedding : 2048 elements
        - 0 : 0.0010535449
        - 1 : 0.024234328
        - 2 : -0.0084999
        - 3 : 0.008647452
        .......
        - 2044 : 0.017536353
        - 2045 : -0.005897616
        - 2046 : -0.026559394
        - 2047 : -0.016633155
      - index : 0
(lldb)
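A common follow-up is comparing two embeddings. A sketch combining this endpoint with the Vector.cosineSimilarity utility described in the Utilities section below (model choice here is illustrative):

// Embed two texts and compare them with the Vector utility from the Utilities section.
let first = try await openAI.embeddings(query: .init(model: .textEmbedding3, input: "The food was delicious"))
let second = try await openAI.embeddings(query: .init(model: .textEmbedding3, input: "The meal tasted great"))

if let a = first.data.first?.embedding, let b = second.data.first?.embedding {
    let similarity = Vector.cosineSimilarity(a: a, b: b)
    print("cosine similarity:", similarity) // closer to 1.0 means more similar
}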

Review Embeddings Documentation for more info.

Moderations

Given an input text, outputs whether the model classifies it as violating OpenAI's content policy.

Request

public struct ModerationsQuery: Codable {
    public let input: String
    public let model: Model?
}

Response

public struct ModerationsResult: Codable, Equatable {
    public let id: String
    public let model: Model
    public let results: [CategoryResult]
}

Example

let query = ModerationsQuery(input: "I want to kill them.")
openAI.moderations(query: query) { result in
    // Handle result here
}
// or
let result = try await openAI.moderations(query: query)
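To act on the verdict, inspect the first CategoryResult. The sketch below assumes CategoryResult exposes a flagged Bool mirroring OpenAI's REST moderation response; check the type in your SDK version:

// Assumes CategoryResult mirrors the REST response and exposes a `flagged` Bool.
if let verdict = result.results.first {
    print(verdict.flagged ? "Input violates the content policy." : "Input looks fine.")
}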

Review Moderations Documentation for more info.

Other APIs

Models

Models are represented as a typealias: typealias Model = String.

public extension Model {
    static let gpt4_1 = "gpt-4.1"
    static let gpt4_1_mini = "gpt-4.1-mini"
    static let gpt4_1_nano = "gpt-4.1-nano"
    static let gpt4_turbo_preview = "gpt-4-turbo-preview"
    static let gpt4_vision_preview = "gpt-4-vision-preview"
    static let gpt4_0125_preview = "gpt-4-0125-preview"
    static let gpt4_1106_preview = "gpt-4-1106-preview"
    static let gpt4 = "gpt-4"
    static let gpt4_0613 = "gpt-4-0613"
    static let gpt4_0314 = "gpt-4-0314"
    static let gpt4_32k = "gpt-4-32k"
    static let gpt4_32k_0613 = "gpt-4-32k-0613"
    static let gpt4_32k_0314 = "gpt-4-32k-0314"
    static let gpt3_5Turbo = "gpt-3.5-turbo"
    static let gpt3_5Turbo_0125 = "gpt-3.5-turbo-0125"
    static let gpt3_5Turbo_1106 = "gpt-3.5-turbo-1106"
    static let gpt3_5Turbo_0613 = "gpt-3.5-turbo-0613"
    static let gpt3_5Turbo_0301 = "gpt-3.5-turbo-0301"
    static let gpt3_5Turbo_16k = "gpt-3.5-turbo-16k"
    static let gpt3_5Turbo_16k_0613 = "gpt-3.5-turbo-16k-0613"
    static let textDavinci_003 = "text-davinci-003"
    static let textDavinci_002 = "text-davinci-002"
    static let textCurie = "text-curie-001"
    static let textBabbage = "text-babbage-001"
    static let textAda = "text-ada-001"
    static let textDavinci_001 = "text-davinci-001"
    static let codeDavinciEdit_001 = "code-davinci-edit-001"
    static let tts_1 = "tts-1"
    static let tts_1_hd = "tts-1-hd"
    static let whisper_1 = "whisper-1"
    static let dall_e_2 = "dall-e-2"
    static let dall_e_3 = "dall-e-3"
    static let davinci = "davinci"
    static let curie = "curie"
    static let babbage = "babbage"
    static let ada = "ada"
    static let textEmbeddingAda = "text-embedding-ada-002"
    static let textSearchAda = "text-search-ada-doc-001"
    static let textSearchBabbageDoc = "text-search-babbage-doc-001"
    static let textSearchBabbageQuery001 = "text-search-babbage-query-001"
    static let textEmbedding3 = "text-embedding-3-small"
    static let textEmbedding3Large = "text-embedding-3-large"
    static let textModerationStable = "text-moderation-stable"
    static let textModerationLatest = "text-moderation-latest"
    static let moderation = "text-moderation-007"
}

GPT-4 models are supported.

As an example, to use the gpt-4-turbo-preview model, pass .gpt4_turbo_preview as the parameter to the ChatQuery init.

let query = ChatQuery(model: .gpt4_turbo_preview, messages: [
    .init(role: .system, content: "You are Librarian-GPT. You know everything about the books."),
    .init(role: .user, content: "Who wrote Harry Potter?")
])
let result = try await openAI.chats(query: query)
XCTAssertFalse(result.choices.isEmpty)

You can also pass a custom string if you need to use a model that is not listed above.
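Since Model is just a String typealias, any identifier works. The identifier below is a placeholder, not a real model name:

// "my-custom-model" is a placeholder identifier.
let query = ChatQuery(
    messages: [.user(.init(content: .string("Hello!")))],
    model: "my-custom-model"
)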

List Models

Lists the currently available models.

Response

public struct ModelsResult: Codable, Equatable {
    public let data: [ModelResult]
    public let object: String
}

Example

openAI.models() { result in
    // Handle result here
}
// or
let result = try await openAI.models()

Retrieve Model

Retrieves a model instance, providing ownership information.

Request

public struct ModelQuery: Codable, Equatable {
    public let model: Model
}

Response

public struct ModelResult: Codable, Equatable {
    public let id: Model
    public let object: String
    public let ownedBy: String
}

Example

let query = ModelQuery(model: .gpt4)
openAI.model(query: query) { result in
    // Handle result here
}
// or
let result = try await openAI.model(query: query)

Review Models Documentation for more info.

Utilities

The package comes with several handy utility functions for working with vectors.

public struct Vector {
    /// Returns the similarity between two vectors
    ///
    /// - Parameters:
    ///     - a: The first vector
    ///     - b: The second vector
    public static func cosineSimilarity(a: [Double], b: [Double]) -> Double {
        return dot(a, b) / (mag(a) * mag(b))
    }

    /// Returns the difference between two vectors. Cosine distance is defined as `1 - cosineSimilarity(a, b)`
    ///
    /// - Parameters:
    ///     - a: The first vector
    ///     - b: The second vector
    public func cosineDifference(a: [Double], b: [Double]) -> Double {
        return 1 - Self.cosineSimilarity(a: a, b: b)
    }
}

Example

let vector1 = [0.213123, 0.3214124, 0.421412, 0.3214521251, 0.412412, 0.3214124, 0.1414124, 0.3214521251, 0.213123, 0.3214124, 0.1414124, 0.4214214, 0.213123, 0.3214124, 0.1414124, 0.3214521251, 0.213123, 0.3214124, 0.1414124, 0.3214521251]
let vector2 = [0.213123, 0.3214124, 0.1414124, 0.3214521251, 0.213123, 0.3214124, 0.1414124, 0.3214521251, 0.213123, 0.511515, 0.1414124, 0.3214521251, 0.213123, 0.3214124, 0.1414124, 0.3214521251, 0.213123, 0.3214124, 0.1414124, 0.3213213]
let similarity = Vector.cosineSimilarity(a: vector1, b: vector2)
print(similarity) // 0.9510201910206734

In data analysis, cosine similarity is a measure of similarity between two sequences of numbers.
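In symbols, for vectors A and B it is the dot product normalized by the vector magnitudes, which is exactly what cosineSimilarity above computes:

\cos(\theta) = \frac{A \cdot B}{\|A\|\,\|B\|} = \frac{\sum_{i=1}^{n} A_i B_i}{\sqrt{\sum_{i=1}^{n} A_i^2}\,\sqrt{\sum_{i=1}^{n} B_i^2}}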


Read more about Cosine Similarity here.

Assistants

Review Assistants Documentation for more info.

Create Assistant

Example: Create Assistant

let query = AssistantsQuery(model: Model.gpt4_o_mini, name: name, description: description, instructions: instructions, tools: tools, toolResources: toolResources)
openAI.assistantCreate(query: query) { result in
    // Handle response here
}

Modify Assistant

Example: Modify Assistant

let query = AssistantsQuery(model: Model.gpt4_o_mini, name: name, description: description, instructions: instructions, tools: tools, toolResources: toolResources)
openAI.assistantModify(query: query, assistantId: "asst_1234") { result in
    // Handle response here
}

List Assistants

Example: List Assistants

openAI.assistants() { result in
    // Handle response here
}

Threads

Review Threads Documentation for more info.

Create Thread

Example: Create Thread

let threadsQuery = ThreadsQuery(messages: [Chat(role: message.role, content: message.content)])
openAI.threads(query: threadsQuery) { result in
    // Handle response here
}

Create and Run Thread

Example: Create and Run Thread

let threadsQuery = ThreadsQuery(messages: [Chat(role: message.role, content: message.content)])
let threadRunQuery = ThreadRunQuery(assistantId: "asst_1234", thread: threadsQuery)
openAI.threadRun(query: threadRunQuery) { result in
    // Handle response here
}

Get Threads Messages

Review Messages Documentation for more info.

Example: Get Threads Messages

openAI.threadsMessages(threadId: currentThreadId) { result in
    // Handle response here
}

Add Message to Thread

Example: Add Message to Thread

let query = MessageQuery(role: message.role.rawValue, content: message.content)
openAI.threadsAddMessage(threadId: currentThreadId, query: query) { result in
    // Handle response here
}

Runs

Review Runs Documentation for more info.

Create Run

Example: Create Run

let runsQuery = RunsQuery(assistantId: currentAssistantId)
openAI.runs(threadId: threadsResult.id, query: runsQuery) { result in
    // Handle response here
}

Retrieve Run

Example: Retrieve Run

openAI.runRetrieve(threadId: currentThreadId, runId: currentRunId) { result in
    // Handle response here
}

Retrieve Run Steps

Example: Retrieve Run Steps

openAI.runRetrieveSteps(threadId: currentThreadId, runId: currentRunId) { result in
    // Handle response here
}

Submit Tool Outputs for Run

Example: Submit Tool Outputs for Run

let output = RunToolOutputsQuery.ToolOutput(toolCallId: "call123", output: "Success")
let query = RunToolOutputsQuery(toolOutputs: [output])
openAI.runSubmitToolOutputs(threadId: currentThreadId, runId: currentRunId, query: query) { result in
    // Handle response here
}

Files

Review Files Documentation for more info.

Upload file

Example: Upload file

let query = FilesQuery(purpose: "assistants", file: fileData, fileName: url.lastPathComponent, contentType: "application/pdf")
openAI.files(query: query) { result in
    // Handle response here
}

Support for other providers

TL;DR: Use the .relaxed parsing option on Configuration.

This SDK has limited support for other providers like Gemini, Perplexity, etc.

The top priority of this SDK is OpenAI, and the main rule is for all the main types to be fully compatible with OpenAI's API Reference. If it says a field should be optional, it must be optional in the main subset of Query/Result types of this SDK. The same goes for other info declared in the reference, like default values.

That said, we still want to support other providers.

Option 1: Use the .relaxed parsing option

The .relaxed parsing option handles both missing and additional keys/values in responses. It should be sufficient for most use cases. Let us know if it doesn't cover a case you need.
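A minimal sketch of a relaxed configuration, following the same Configuration pattern used elsewhere in this README:

// .relaxed tolerates both missing and additional keys/values in provider responses.
let configuration = OpenAI.Configuration(
    token: "YOUR_TOKEN_HERE",
    parsingOptions: .relaxed
)
let openAI = OpenAI(configuration: configuration)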

Option 2: Specify parsing options separately

Handle missing keys in responses

Some providers return responses that don't completely satisfy OpenAI's schema. For example, Gemini's chat completion response omits the id field, which is required in OpenAI's API Reference.

In such cases, use the fillRequiredFieldIfKeyNotFound parsing option, like this:

let configuration = OpenAI.Configuration(token: "", parsingOptions: .fillRequiredFieldIfKeyNotFound)

Handle missing values in responses

Some fields are required to be present (non-optional) by OpenAI, but other providers may return null for them.

Use .fillRequiredFieldIfValueNotFound to handle missing values.
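Mirroring the example above, and assuming parsing options form an option set (an assumption; verify against your SDK version), the two options can likely be combined:

let configuration = OpenAI.Configuration(
    token: "",
    // Assumes parsing options form an OptionSet, so both can be applied together.
    parsingOptions: [.fillRequiredFieldIfKeyNotFound, .fillRequiredFieldIfValueNotFound]
)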

What if a provider returns additional fields?

Currently, we handle such cases by simply adding the additional fields to the main model set. This is possible because optional fields don't break or conflict with OpenAI's schema. At the moment, such additional fields are added to:

  • ChatResult
  • ChatResult.Choice.Message

Example Project

You can find an example iOS application in the Demo folder.


Contribution Guidelines

Make your Pull Requests clear and obvious to anyone viewing them.
Set main as your target branch.

Use Conventional Commits principles in naming PRs and branches:

  • Feat: ... for new features and new functionality implementations.
  • Bug: ... for bug fixes.
  • Fix: ... for fixing minor issues, like typos or inaccuracies in code.
  • Chore: ... for boring stuff like code polishing, refactoring, deprecation fixing, etc.

PR naming example: Feat: Add Threads API handling or Bug: Fix message result duplication

Branch naming example: feat/add-threads-API-handling or bug/fix-message-result-duplication

Write pull request descriptions in the following format:

  • What

    ...

  • Why

    ...

  • Affected Areas

    ...

  • More Info

    ...

We'd appreciate you including tests with your code where needed and possible. ❤️


License

MIT License

Copyright (c) 2023 MacPaw Inc.

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
