# OpenAI-API-dotnet

An unofficial C#/.NET SDK for accessing the OpenAI GPT-3 API
Microsoft reached out to me about transitioning this library into a new official C# OpenAI library, and now it's ready to go! Starting with v2.0.0-beta.3, the official library has full coverage and will stay up-to-date. More details in the blog post here: https://devblogs.microsoft.com/dotnet/openai-dotnet-library
This GitHub repo will remain here to document my original version of the library through version 1.11, which is still available on NuGet as well. 🎉
A simple C# .NET wrapper library to use with OpenAI's API. More context on my blog. This is my original unofficial wrapper library around the OpenAI API.
```csharp
var api = new OpenAI_API.OpenAIAPI("YOUR_API_KEY");
var result = await api.Chat.CreateChatCompletionAsync("Hello!");
Console.WriteLine(result); // should print something like "Hi! How can I help you?"
```
- Status
- Requirements
- Installation
- Authentication
- Chat API
- Completions API
- Audio
- Embeddings API
- Moderation API
- Files API
- Image APIs (DALL-E)
- Azure
- Additional Documentation
- License
## Status

Starting with v2.0.0-beta, this library has been adopted by Microsoft. The new official version of the library has full coverage and will stay fully up-to-date. More details in the blog post here: https://devblogs.microsoft.com/dotnet/openai-dotnet-library/

This GitHub repo will remain here to document my original version of the library through version 1.11, which is still available on NuGet as well.
## Requirements

This library is based on .NET Standard 2.0, so it should work across all versions of .NET, from the traditional .NET Framework >= 4.7.2 to .NET (Core) >= 3.0. It should work across console apps, WinForms, WPF, ASP.NET, Unity, Xamarin, etc. It should work across Windows, Linux, and Mac, and possibly even mobile. There are minimal dependencies, and it's licensed in the public domain.
## Installation

Install package OpenAI v1.11 from NuGet. Here's how via the command line:

```powershell
Install-Package OpenAI -Version 1.11.0
```
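Or via the .NET CLI (this is the standard `dotnet` tool, not something specific to this library):

```shell
dotnet add package OpenAI --version 1.11.0
```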
## Authentication

There are 3 ways to provide your API keys, in order of precedence:

- Pass keys directly to the `APIAuthentication(string key)` constructor
- Set an environment variable named `OPENAI_API_KEY` (or `OPENAI_KEY` for backwards compatibility)
- Include a config file in the local directory or in your user directory named `.openai` and containing the line: `OPENAI_API_KEY=sk-aaaabbbbbccccddddd`
You use the `APIAuthentication` when you initialize the API as shown:

```csharp
// for example
OpenAIAPI api = new OpenAIAPI("YOUR_API_KEY"); // shorthand
// or
OpenAIAPI api = new OpenAIAPI(new APIAuthentication("YOUR_API_KEY")); // create object manually
// or
OpenAIAPI api = new OpenAIAPI(APIAuthentication.LoadFromEnv()); // use env vars
// or
OpenAIAPI api = new OpenAIAPI(APIAuthentication.LoadFromPath()); // use config file (can optionally specify where to look)
// or
OpenAIAPI api = new OpenAIAPI(); // uses default, env, or config file
```
You may optionally include an `openAIOrganization` (`OPENAI_ORGANIZATION` in env or config file) specifying which organization is used for an API request. Usage from these API requests will count against the specified organization's subscription quota. Organization IDs can be found on your Organization settings page.
```csharp
// for example
OpenAIAPI api = new OpenAIAPI(new APIAuthentication("YOUR_API_KEY", "org-yourOrgHere"));
```
## Chat API

The Chat API is accessed via `OpenAIAPI.Chat`. There are two ways to use the Chat Endpoint, either via simplified conversations or with the full Request/Response methods.
The Conversation Class allows you to easily interact with ChatGPT by adding messages to a chat and asking ChatGPT to reply.
```csharp
var chat = api.Chat.CreateConversation();
chat.Model = Model.GPT4_Turbo;
chat.RequestParameters.Temperature = 0;

// give instruction as System
chat.AppendSystemMessage("You are a teacher who helps children understand if things are animals or not. If the user tells you an animal, you say \"yes\". If the user tells you something that is not an animal, you say \"no\". You only ever respond with \"yes\" or \"no\". You do not say anything else.");

// give a few examples as user and assistant
chat.AppendUserInput("Is this an animal? Cat");
chat.AppendExampleChatbotOutput("Yes");
chat.AppendUserInput("Is this an animal? House");
chat.AppendExampleChatbotOutput("No");

// now let's ask it a question
chat.AppendUserInput("Is this an animal? Dog");
// and get the response
string response = await chat.GetResponseFromChatbotAsync();
Console.WriteLine(response); // "Yes"

// and continue the conversation by asking another
chat.AppendUserInput("Is this an animal? Chair");
// and get another response
response = await chat.GetResponseFromChatbotAsync();
Console.WriteLine(response); // "No"

// the entire chat history is available in chat.Messages
foreach (ChatMessage msg in chat.Messages)
{
    Console.WriteLine($"{msg.Role}: {msg.Content}");
}
```
Streaming allows you to get results as they are generated, which can help your application feel more responsive.
Using the new C# 8.0 async iterators:
```csharp
var chat = api.Chat.CreateConversation();
chat.AppendUserInput("How to make a hamburger?");
await foreach (var res in chat.StreamResponseEnumerableFromChatbotAsync())
{
    Console.Write(res);
}
```
Or if using classic .NET Framework or C# <8.0:
```csharp
var chat = api.Chat.CreateConversation();
chat.AppendUserInput("How to make a hamburger?");
await chat.StreamResponseFromChatbotAsync(res =>
{
    Console.Write(res);
});
```
You can send images to the chat to use the new GPT-4 Vision model. This only works with the `Model.GPT4_Vision` model. Please see https://platform.openai.com/docs/guides/vision for more information and limitations.
```csharp
// the simplest form
var result = await api.Chat.CreateChatCompletionAsync("What is the primary non-white color in this logo?", ImageInput.FromFile("path/to/logo.png"));

// or in a conversation
var chat = api.Chat.CreateConversation();
chat.Model = Model.GPT4_Vision;
chat.AppendSystemMessage("You are a graphic design assistant who helps identify colors.");
chat.AppendUserInput("What are the primary non-white colors in this logo?", ImageInput.FromFile("path/to/logo.png"));
string response = await chat.GetResponseFromChatbotAsync();
Console.WriteLine(response); // "Blue and purple"

chat.AppendUserInput("What are the primary non-white colors in this logo?", ImageInput.FromImageUrl("https://rogerpincombe.com/templates/rp/center-aligned-no-shadow-small.png"));
response = await chat.GetResponseFromChatbotAsync();
Console.WriteLine(response); // "Blue, red, and yellow"

// or when manually creating the ChatMessage
ChatMessage messageWithImage = new ChatMessage(ChatMessageRole.User, "What colors do these logos have in common?");
messageWithImage.images.Add(ImageInput.FromFile("path/to/logo.png"));
messageWithImage.images.Add(ImageInput.FromImageUrl("https://rogerpincombe.com/templates/rp/center-aligned-no-shadow-small.png"));

// you can specify multiple images at once
chat.AppendUserInput("What colors do these logos have in common?", ImageInput.FromFile("path/to/logo.png"), ImageInput.FromImageUrl("https://rogerpincombe.com/templates/rp/center-aligned-no-shadow-small.png"));
```
If the chat conversation history gets too long, it may not fit into the context length of the model. By default, the earliest non-system message(s) will be removed from the chat history and the API call will be retried. You may disable this by setting `chat.AutoTruncateOnContextLengthExceeded = false`, or you can override the truncation algorithm like this:
```csharp
chat.OnTruncationNeeded += (sender, args) =>
{
    // args is a List<ChatMessage> with the current chat history. Remove or edit as necessary.
    // replace this with more sophisticated logic for your use-case, such as summarizing the chat history
    for (int i = 0; i < args.Count; i++)
    {
        if (args[i].Role != ChatMessageRole.System)
        {
            args.RemoveAt(i);
            return;
        }
    }
};
```
You may also wish to use a newer model with a larger context length. You can do this by setting `chat.Model = Model.GPT4_Turbo` or `chat.Model = Model.ChatGPTTurbo_16k`, etc.
You can see token usage via `chat.MostRecentApiResult.Usage.PromptTokens` and related properties.
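For example, here's a minimal sketch of checking usage after a reply. The `CompletionTokens` and `TotalTokens` properties are my assumptions, mirroring the fields of the OpenAI usage object alongside the documented `PromptTokens`:

```csharp
var chat = api.Chat.CreateConversation();
chat.AppendUserInput("Is this an animal? Dog");
string response = await chat.GetResponseFromChatbotAsync();

// inspect the usage metadata from the most recent API result
var usage = chat.MostRecentApiResult.Usage;
Console.WriteLine($"Prompt tokens: {usage.PromptTokens}");
Console.WriteLine($"Completion tokens: {usage.CompletionTokens}"); // assumed property, mirroring the API's usage object
Console.WriteLine($"Total tokens: {usage.TotalTokens}");           // assumed property
```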
You can access full control of the Chat API by using the `OpenAIAPI.Chat.CreateChatCompletionAsync()` and related methods.
```csharp
async Task<ChatResult> CreateChatCompletionAsync(ChatRequest request);

// for example
var result = await api.Chat.CreateChatCompletionAsync(new ChatRequest()
{
    Model = Model.ChatGPTTurbo,
    Temperature = 0.1,
    MaxTokens = 50,
    Messages = new ChatMessage[]
    {
        new ChatMessage(ChatMessageRole.User, "Hello!")
    }
});
// or
var result = await api.Chat.CreateChatCompletionAsync("Hello!");

var reply = result.Choices[0].Message;
Console.WriteLine($"{reply.Role}: {reply.Content.Trim()}");
// or
Console.WriteLine(result);
```
It returns a `ChatResult` which is mostly metadata, so use its `.ToString()` method to get the text if all you want is the assistant's reply text.
There's also an async streaming API which works similarly to the Completions endpoint streaming results.
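A minimal sketch, assuming a `StreamChatEnumerableAsync(ChatRequest)` method as the Request/Response counterpart of the conversation streaming shown earlier:

```csharp
var chatRequest = new ChatRequest()
{
    Model = Model.ChatGPTTurbo,
    Messages = new ChatMessage[]
    {
        new ChatMessage(ChatMessageRole.User, "How do I make a hamburger?")
    }
};

// stream partial results as they arrive (method name assumed; check IntelliSense)
await foreach (var partial in api.Chat.StreamChatEnumerableAsync(chatRequest))
{
    Console.Write(partial); // each streamed ChatResult's ToString() should contain the incremental text
}
```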
With the new `Model.GPT4_Turbo` or `gpt-3.5-turbo-1106` models, you can set the `ChatRequest.ResponseFormat` to `ChatRequest.ResponseFormats.JsonObject` to enable JSON mode. When JSON mode is enabled, the model is constrained to only generate strings that parse into a valid JSON object. See https://platform.openai.com/docs/guides/text-generation/json-mode for more details.
```csharp
ChatRequest chatRequest = new ChatRequest()
{
    Model = model,
    Temperature = 0.0,
    MaxTokens = 500,
    ResponseFormat = ChatRequest.ResponseFormats.JsonObject,
    Messages = new ChatMessage[]
    {
        new ChatMessage(ChatMessageRole.System, "You are a helpful assistant designed to output JSON."),
        new ChatMessage(ChatMessageRole.User, "Who won the world series in 2020? Return JSON of a 'wins' dictionary with the year as the numeric key and the winning team as the string value.")
    }
};
var results = await api.Chat.CreateChatCompletionAsync(chatRequest);
Console.WriteLine(results);
/* prints:
{ "wins": { 2020: "Los Angeles Dodgers" } }
*/
```
## Completions API

Completions are considered legacy by OpenAI. The Completion API is accessed via `OpenAIAPI.Completions`:
```csharp
async Task<CompletionResult> CreateCompletionAsync(CompletionRequest request);

// for example
var result = await api.Completions.CreateCompletionAsync(new CompletionRequest("One Two Three One Two", model: Model.CurieText, temperature: 0.1));
// or
var result = await api.Completions.CreateCompletionAsync("One Two Three One Two", temperature: 0.1);
// or other convenience overloads
```
You can create your `CompletionRequest` ahead of time or use one of the helper overloads for convenience. It returns a `CompletionResult` which is mostly metadata, so use its `.ToString()` method to get the text if all you want is the completion.
Streaming allows you to get results as they are generated, which can help your application feel more responsive, especially on slow models like Davinci.
Using the new C# 8.0 async iterators:
```csharp
IAsyncEnumerable<CompletionResult> StreamCompletionEnumerableAsync(CompletionRequest request);

// for example
await foreach (var token in api.Completions.StreamCompletionEnumerableAsync(
    new CompletionRequest("My name is Roger and I am a principal software engineer at Salesforce. This is my resume:", Model.DavinciText, 200, 0.5, presencePenalty: 0.1, frequencyPenalty: 0.1)))
{
    Console.Write(token);
}
```
Or if using classic .NET Framework or C# <8.0:
```csharp
async Task StreamCompletionAsync(CompletionRequest request, Action<CompletionResult> resultHandler);

// for example
await api.Completions.StreamCompletionAsync(
    new CompletionRequest("My name is Roger and I am a principal software engineer at Salesforce. This is my resume:", Model.DavinciText, 200, 0.5, presencePenalty: 0.1, frequencyPenalty: 0.1),
    res => ResumeTextbox.Text += res.ToString());
```
## Audio

The Audio APIs are Text to Speech, Transcription (speech to text), and Translation (non-English speech to English text).
The TTS API is accessed via `OpenAIAPI.TextToSpeech`:
```csharp
await api.TextToSpeech.SaveSpeechToFileAsync("Hello, brave new world! This is a test.", outputPath);
// You can open it in the default audio player like this:
Process.Start(outputPath);
```
You can also specify all of the request parameters with a `TextToSpeechRequest` object:
```csharp
var request = new TextToSpeechRequest()
{
    Input = "Hello, brave new world! This is a test.",
    ResponseFormat = ResponseFormats.AAC,
    Model = Model.TTS_HD,
    Voice = Voices.Nova,
    Speed = 0.9
};
await api.TextToSpeech.SaveSpeechToFileAsync(request, "test.aac");
```
Instead of saving to a file, you can get the audio byte stream with `api.TextToSpeech.GetSpeechAsStreamAsync(request)`:
```csharp
using (Stream result = await api.TextToSpeech.GetSpeechAsStreamAsync("Hello, brave new world!", Voices.Fable))
using (StreamReader reader = new StreamReader(result))
{
    // do something with the audio stream here
}
```
The Audio Transcription API allows you to generate text from audio, in any of the supported languages. It is accessed via `OpenAIAPI.Transcriptions`:
```csharp
string resultText = await api.Transcriptions.GetTextAsync("path/to/file.mp3");
```
You can ask for verbose results, which will give you segment and token-level information, as well as the standard OpenAI metadata such as processing time:
```csharp
AudioResultVerbose result = await api.Transcriptions.GetWithDetailsAsync("path/to/file.m4a");
Console.WriteLine(result.ProcessingTime.TotalMilliseconds); // 496ms
Console.WriteLine(result.text);                             // "Hello, this is a test of the transcription function."
Console.WriteLine(result.language);                         // "english"
Console.WriteLine(result.segments[0].no_speech_prob);       // 0.03712
// etc
```
You can also ask for results in SRT or VTT format, which is useful for generating subtitles for videos:
```csharp
string result = await api.Transcriptions.GetAsFormatAsync("path/to/file.m4a", AudioRequest.ResponseFormats.SRT);
```
Additional parameters such as temperature, prompt, language, etc. can be specified either per-request or as a default:
```csharp
// inline
result = await api.Transcriptions.GetTextAsync("conversation.mp3", "en", "This is a transcript of a conversation between a medical doctor and her patient: ", 0.3);

// set defaults
api.Transcriptions.DefaultTranscriptionRequestArgs.Language = "en";
```
Instead of providing a local file on disk, you can provide a stream of audio bytes. This can be useful for streaming audio from the microphone or another source without having to first write to disk. Please note that you must specify a filename, which does not have to exist, but which must have an accurate extension for the type of audio that you are sending. OpenAI uses the filename extension to determine what format your audio stream is in.
```csharp
using (var audioStream = File.OpenRead("path-here.mp3"))
{
    return await api.Transcriptions.GetTextAsync(audioStream, "file.mp3");
}
```
Translations allow you to transcribe audio in any of the supported languages into English text. OpenAI does not support translating into any other language, only English. It is accessed via `OpenAIAPI.Translations`. It supports all of the same functionality as the Transcriptions.
```csharp
string result = await api.Translations.GetTextAsync("chinese-example.m4a");
```
## Embeddings API

The Embedding API is accessed via `OpenAIAPI.Embeddings`:
```csharp
async Task<EmbeddingResult> CreateEmbeddingAsync(EmbeddingRequest request);

// for example
var result = await api.Embeddings.CreateEmbeddingAsync(new EmbeddingRequest("A test text for embedding", model: Model.AdaTextEmbedding));
// or
var result = await api.Embeddings.CreateEmbeddingAsync("A test text for embedding");
```
The embedding result contains a lot of metadata; the actual vector of floats is in `result.Data[].Embedding`.

For simplicity, you can directly ask for the vector of floats and discard the extra metadata with `api.Embeddings.GetEmbeddingsAsync("test text here")`.
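A minimal sketch of both approaches, assuming `GetEmbeddingsAsync` returns a `float[]`, which matches how it is described above:

```csharp
// full result with metadata
var result = await api.Embeddings.CreateEmbeddingAsync("A test text for embedding");
float[] vectorFromResult = result.Data[0].Embedding;

// shortcut: just the vector, no metadata
float[] vector = await api.Embeddings.GetEmbeddingsAsync("A test text for embedding");
Console.WriteLine($"Embedding dimensions: {vector.Length}");
```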
## Moderation API

The Moderation API is accessed via `OpenAIAPI.Moderation`:
```csharp
async Task<ModerationResult> CallModerationAsync(ModerationRequest request);

// for example
var result = await api.Moderation.CallModerationAsync(new ModerationRequest("A test text for moderating", Model.TextModerationLatest));
// or
var result = await api.Moderation.CallModerationAsync("A test text for moderating");
Console.WriteLine(result.results[0].MainContentFlag);
```
The results are in `.results[0]` and have nice helper methods like `FlaggedCategories` and `MainContentFlag`.
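For instance, a minimal sketch of reading those helpers (I'm assuming `FlaggedCategories` enumerates the names of flagged categories, as its name suggests):

```csharp
var result = await api.Moderation.CallModerationAsync("A test text for moderating");
var moderation = result.results[0];

Console.WriteLine(moderation.MainContentFlag); // the primary flagged category, per the helper's name

// list every category the text was flagged for (assumed to be an enumerable of category names)
foreach (var category in moderation.FlaggedCategories)
{
    Console.WriteLine($"Flagged: {category}");
}
```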
## Files API

The Files API endpoint is accessed via `OpenAIAPI.Files`:
```csharp
// uploading
async Task<File> UploadFileAsync(string filePath, string purpose = "fine-tune");

// for example
var response = await api.Files.UploadFileAsync("fine-tuning-data.jsonl");
Console.Write(response.Id); // the id of the uploaded file

// listing
async Task<List<File>> GetFilesAsync();

// for example
var response = await api.Files.GetFilesAsync();
foreach (var file in response)
{
    Console.WriteLine(file.Name);
}
```
There are also methods to get file contents, delete a file, etc.
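For example, a minimal sketch (the method names `GetFileContentAsStringAsync` and `DeleteFileAsync` are my assumptions about this library's naming, and the file id is a placeholder; check IntelliSense for the exact signatures):

```csharp
// fetch the contents of an uploaded file (assumed method name)
string contents = await api.Files.GetFileContentAsStringAsync("file-abc123");
Console.WriteLine(contents);

// delete a file by id (assumed method name)
await api.Files.DeleteFileAsync("file-abc123");
```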
The fine-tuning endpoint itself has not yet been implemented, but will be added soon.
## Image APIs (DALL-E)

The DALL-E Image Generation API is accessed via `OpenAIAPI.ImageGenerations`:
```csharp
async Task<ImageResult> CreateImageAsync(ImageGenerationRequest request);

// for example
var result = await api.ImageGenerations.CreateImageAsync(new ImageGenerationRequest("A drawing of a computer writing a test", 1, ImageSize._512));
// or
var result = await api.ImageGenerations.CreateImageAsync("A drawing of a computer writing a test");
Console.WriteLine(result.Data[0].Url);
```
The image result contains a URL for an online image or a base64-encoded image, depending on the `ImageGenerationRequest.ResponseFormat` (`url` is the default).
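A minimal sketch of requesting base64 output instead of a URL; the `ImageResponseFormat.B64_json` and `Base64Data` names are my assumptions about this library's naming, so check `ImageGenerationRequest.ResponseFormat` in IntelliSense:

```csharp
var request = new ImageGenerationRequest("A drawing of a computer writing a test", 1, ImageSize._512);
request.ResponseFormat = ImageResponseFormat.B64_json; // assumed name for the base64 option

var result = await api.ImageGenerations.CreateImageAsync(request);
byte[] imageBytes = Convert.FromBase64String(result.Data[0].Base64Data); // assumed property holding the base64 payload
File.WriteAllBytes("test.png", imageBytes);
```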
Use DALL-E 3 like this:
```csharp
async Task<ImageResult> CreateImageAsync(ImageGenerationRequest request);

// for example
var result = await api.ImageGenerations.CreateImageAsync(new ImageGenerationRequest("A drawing of a computer writing a test", OpenAI_API.Models.Model.DALLE3, ImageSize._1024x1792, "hd"));
// or
var result = await api.ImageGenerations.CreateImageAsync("A drawing of a computer writing a test", OpenAI_API.Models.Model.DALLE3);
Console.WriteLine(result.Data[0].Url);
```
## Azure

For using the Azure OpenAI Service, you need to specify the name of your Azure OpenAI resource as well as your model deployment id.
I do not have access to the Microsoft Azure OpenAI service, so I am unable to test this functionality. If you have access and can test, please submit an issue describing your results. A PR with integration tests would also be greatly appreciated. Specifically, it is unclear to me that specifying models works the same way with Azure.
Refer to the Azure OpenAI documentation and the detailed screenshots in #64 for further information.
Configuration should look something like this for the Azure service:
```csharp
OpenAIAPI api = OpenAIAPI.ForAzure("YourResourceName", "deploymentId", "api-key");
api.ApiVersion = "2023-03-15-preview"; // needed to access chat endpoint on Azure
```
You may then use the `api` object like normal. You may also specify the `APIAuthentication` in any of the other ways listed in the Authentication section above. Currently this library only supports the api-key flow, not the AD-Flow.
As of April 2, 2023, you need to manually select API version `2023-03-15-preview` as shown above to access the chat endpoint on Azure. Once this is out of preview, I will update the default.
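For instance, here's a minimal sketch of a chat call against an Azure-configured client, simply combining the configuration above with the Chat API shown earlier:

```csharp
OpenAIAPI api = OpenAIAPI.ForAzure("YourResourceName", "deploymentId", "api-key");
api.ApiVersion = "2023-03-15-preview"; // needed to access the chat endpoint on Azure

// the api object now works like the standard OpenAI client
var chat = api.Chat.CreateConversation();
chat.AppendUserInput("Hello from Azure!");
Console.WriteLine(await chat.GetResponseFromChatbotAsync());
```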
You may specify an `IHttpClientFactory` to be used for HTTP requests, which allows for tweaking HTTP request properties, connection pooling, and mocking. Details in #103.
```csharp
OpenAIAPI api = new OpenAIAPI();
api.HttpClientFactory = myIHttpClientFactoryObject;
```
## Additional Documentation

Every single class, method, and property has extensive XML documentation, so it should show up automatically in IntelliSense. That combined with the official OpenAI documentation should be enough to get started. Feel free to open an issue here if you have any questions. Better documentation may come later.
## License

CC-0 Public Domain

This library is licensed CC-0, in the public domain. You can use it for whatever you want, publicly or privately, without worrying about permission or licensing or whatever. It's just a wrapper around the OpenAI API, so you still need to get access to OpenAI from them directly. I am not affiliated with OpenAI and this library is not endorsed by them; I just have beta access and wanted to make a C# library to access it more easily. Hopefully others find this useful as well. Feel free to open a PR if there's anything you want to contribute.