openai/openai-dotnet

The official .NET library for the OpenAI API

The OpenAI .NET library provides convenient access to the OpenAI REST API from .NET applications.

It is generated from our OpenAPI specification in collaboration with Microsoft.


Getting started

Prerequisites

To call the OpenAI REST API, you will need an API key. To obtain one, first create a new OpenAI account or log in. Next, navigate to the API key page and select "Create new secret key", optionally naming the key. Make sure to save your API key somewhere safe and do not share it with anyone.

Install the NuGet package

Add the client library to your .NET project by installing the NuGet package via your IDE or by running the following command in the .NET CLI:

```shell
dotnet add package OpenAI
```

If you would like to try the latest preview version, remember to append the `--prerelease` option.
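For example, to install the latest preview:

```shell
dotnet add package OpenAI --prerelease
```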

Note that the code examples included below were written using .NET 8. The OpenAI .NET library is compatible with all .NET Standard 2.0 applications, but the syntax used in some of the code examples in this document may depend on newer language features.

Using the client library

The full API of this library can be found in the `OpenAI.netstandard2.0.cs` file, and there are many code examples to help. For instance, the following snippet illustrates the basic use of the chat completions API:

```csharp
using OpenAI.Chat;

ChatClient client = new(model: "gpt-4o", apiKey: Environment.GetEnvironmentVariable("OPENAI_API_KEY"));

ChatCompletion completion = client.CompleteChat("Say 'this is a test.'");

Console.WriteLine($"[ASSISTANT]: {completion.Content[0].Text}");
```

While you can pass your API key directly as a string, it is highly recommended that you keep it in a secure location and instead access it via an environment variable or configuration file as shown above to avoid storing it in source control.
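For instance, one way to keep the key out of source code is to read it once at startup and fail fast when it is missing (an illustrative sketch, not part of the library):

```csharp
// Illustrative sketch: read the API key from the environment and fail fast when absent,
// rather than embedding the secret in source control.
string apiKey = Environment.GetEnvironmentVariable("OPENAI_API_KEY")
    ?? throw new InvalidOperationException("Set the OPENAI_API_KEY environment variable first.");

ChatClient client = new(model: "gpt-4o", apiKey: apiKey);
```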

Using a custom base URL and API key

If you need to connect to an alternative API endpoint (for example, a proxy or self-hosted OpenAI-compatible LLM), you can specify a custom base URL and API key using `ApiKeyCredential` and `OpenAIClientOptions`:

```csharp
using OpenAI;
using OpenAI.Chat;

ChatClient client = new(
    model: "MODEL_NAME",
    credential: new ApiKeyCredential(Environment.GetEnvironmentVariable("OPENAI_API_KEY")),
    options: new OpenAIClientOptions()
    {
        Endpoint = new Uri("BASE_URL")
    });
```

Replace `MODEL_NAME` with your model name and `BASE_URL` with your endpoint URI. This is useful when working with OpenAI-compatible APIs or custom deployments.

Namespace organization

The library is organized into namespaces by feature areas in the OpenAI REST API. Each namespace contains a corresponding client class.

| Namespace | Client class |
|-----------|--------------|
| `OpenAI.Assistants` | `AssistantClient` |
| `OpenAI.Audio` | `AudioClient` |
| `OpenAI.Batch` | `BatchClient` |
| `OpenAI.Chat` | `ChatClient` |
| `OpenAI.Embeddings` | `EmbeddingClient` |
| `OpenAI.Evals` | `EvaluationClient` |
| `OpenAI.FineTuning` | `FineTuningClient` |
| `OpenAI.Files` | `OpenAIFileClient` |
| `OpenAI.Images` | `ImageClient` |
| `OpenAI.Models` | `OpenAIModelClient` |
| `OpenAI.Moderations` | `ModerationClient` |
| `OpenAI.Realtime` | `RealtimeClient` |
| `OpenAI.Responses` | `OpenAIResponseClient` |
| `OpenAI.VectorStores` | `VectorStoreClient` |

Using the async API

Every client method that performs a synchronous API call has an asynchronous variant in the same client class. For instance, the asynchronous variant of the `ChatClient`'s `CompleteChat` method is `CompleteChatAsync`. To rewrite the call above using the asynchronous counterpart, simply `await` the call to the corresponding async variant:

```csharp
ChatCompletion completion = await client.CompleteChatAsync("Say 'this is a test.'");
```

Using the `OpenAIClient` class

In addition to the namespaces mentioned above, there is also the parent `OpenAI` namespace itself:

```csharp
using OpenAI;
```

This namespace contains the `OpenAIClient` class, which offers certain conveniences when you need to work with multiple feature area clients. Specifically, you can use an instance of this class to create instances of the other clients and have them share the same implementation details, which might be more efficient.

You can create an `OpenAIClient` by specifying the API key that all clients will use for authentication:

```csharp
OpenAIClient client = new(Environment.GetEnvironmentVariable("OPENAI_API_KEY"));
```

Next, to create an instance of an `AudioClient`, for example, you can call the `OpenAIClient`'s `GetAudioClient` method, passing the OpenAI model that the `AudioClient` will use, just as if you were using the `AudioClient` constructor directly. If necessary, you can create additional clients of the same type to target different models.

```csharp
AudioClient ttsClient = client.GetAudioClient("tts-1");
AudioClient whisperClient = client.GetAudioClient("whisper-1");
```

How to use dependency injection

The OpenAI clients are thread-safe and can be safely registered as singletons in ASP.NET Core's dependency injection container. This maximizes resource efficiency and HTTP connection reuse.

Register the `ChatClient` as a singleton in your `Program.cs`:

```csharp
builder.Services.AddSingleton<ChatClient>(serviceProvider =>
{
    var apiKey = Environment.GetEnvironmentVariable("OPENAI_API_KEY");
    var model = "gpt-4o";
    return new ChatClient(model, apiKey);
});
```

Then inject and use the client in your controllers or services:

```csharp
[ApiController]
[Route("api/[controller]")]
public class ChatController : ControllerBase
{
    private readonly ChatClient _chatClient;

    public ChatController(ChatClient chatClient)
    {
        _chatClient = chatClient;
    }

    [HttpPost("complete")]
    public async Task<IActionResult> CompleteChat([FromBody] string message)
    {
        ChatCompletion completion = await _chatClient.CompleteChatAsync(message);
        return Ok(new { response = completion.Content[0].Text });
    }
}
```

How to use chat completions with streaming

When you request a chat completion, the default behavior is for the server to generate it in its entirety before sending it back in a single response. Consequently, long chat completions can require waiting for several seconds before hearing back from the server. To mitigate this, the OpenAI REST API supports the ability to stream partial results back as they are being generated, allowing you to start processing the beginning of the completion before it is finished.

The client library offers a convenient approach to working with streaming chat completions. If you wanted to rewrite the example from the previous section using streaming, rather than calling the `ChatClient`'s `CompleteChat` method, you would call its `CompleteChatStreaming` method instead:

```csharp
CollectionResult<StreamingChatCompletionUpdate> completionUpdates = client.CompleteChatStreaming("Say 'this is a test.'");
```

Notice that the returned value is a `CollectionResult<StreamingChatCompletionUpdate>` instance, which can be enumerated to process the streaming response chunks as they arrive:

```csharp
Console.Write($"[ASSISTANT]: ");
foreach (StreamingChatCompletionUpdate completionUpdate in completionUpdates)
{
    if (completionUpdate.ContentUpdate.Count > 0)
    {
        Console.Write(completionUpdate.ContentUpdate[0].Text);
    }
}
```

Alternatively, you can do this asynchronously by calling the `CompleteChatStreamingAsync` method to get an `AsyncCollectionResult<StreamingChatCompletionUpdate>` and enumerate it using `await foreach`:

```csharp
AsyncCollectionResult<StreamingChatCompletionUpdate> completionUpdates = client.CompleteChatStreamingAsync("Say 'this is a test.'");

Console.Write($"[ASSISTANT]: ");
await foreach (StreamingChatCompletionUpdate completionUpdate in completionUpdates)
{
    if (completionUpdate.ContentUpdate.Count > 0)
    {
        Console.Write(completionUpdate.ContentUpdate[0].Text);
    }
}
```

How to use chat completions with tools and function calling

In this example, you have two functions. The first function can retrieve a user's current geographic location (e.g., by polling the location service APIs of the user's device), while the second function can query the weather in a given location (e.g., by making an API call to some third-party weather service). You want the model to be able to call these functions if it deems it necessary to have this information in order to respond to a user request as part of generating a chat completion. For illustrative purposes, consider the following:

```csharp
private static string GetCurrentLocation()
{
    // Call the location API here.
    return "San Francisco";
}

private static string GetCurrentWeather(string location, string unit = "celsius")
{
    // Call the weather API here.
    return $"31 {unit}";
}
```

Start by creating two `ChatTool` instances using the static `CreateFunctionTool` method to describe each function:

```csharp
private static readonly ChatTool getCurrentLocationTool = ChatTool.CreateFunctionTool(
    functionName: nameof(GetCurrentLocation),
    functionDescription: "Get the user's current location");

private static readonly ChatTool getCurrentWeatherTool = ChatTool.CreateFunctionTool(
    functionName: nameof(GetCurrentWeather),
    functionDescription: "Get the current weather in a given location",
    functionParameters: BinaryData.FromBytes("""
        {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "The city and state, e.g. Boston, MA"
                },
                "unit": {
                    "type": "string",
                    "enum": [ "celsius", "fahrenheit" ],
                    "description": "The temperature unit to use. Infer this from the specified location."
                }
            },
            "required": [ "location" ]
        }
        """u8.ToArray()));
```

Next, create a `ChatCompletionOptions` instance and add both tools to its `Tools` property. You will pass the `ChatCompletionOptions` as an argument in your calls to the `ChatClient`'s `CompleteChat` method.

```csharp
List<ChatMessage> messages =
[
    new UserChatMessage("What's the weather like today?"),
];

ChatCompletionOptions options = new()
{
    Tools = { getCurrentLocationTool, getCurrentWeatherTool },
};
```

When the resulting `ChatCompletion` has a `FinishReason` property equal to `ChatFinishReason.ToolCalls`, it means that the model has determined that one or more tools must be called before the assistant can respond appropriately. In those cases, you must first call the function specified in the `ChatCompletion`'s `ToolCalls` and then call the `ChatClient`'s `CompleteChat` method again while passing the function's result as an additional `ToolChatMessage`. Repeat this process as needed.

```csharp
bool requiresAction;

do
{
    requiresAction = false;
    ChatCompletion completion = client.CompleteChat(messages, options);

    switch (completion.FinishReason)
    {
        case ChatFinishReason.Stop:
            {
                // Add the assistant message to the conversation history.
                messages.Add(new AssistantChatMessage(completion));
                break;
            }

        case ChatFinishReason.ToolCalls:
            {
                // First, add the assistant message with tool calls to the conversation history.
                messages.Add(new AssistantChatMessage(completion));

                // Then, add a new tool message for each tool call that is resolved.
                foreach (ChatToolCall toolCall in completion.ToolCalls)
                {
                    switch (toolCall.FunctionName)
                    {
                        case nameof(GetCurrentLocation):
                            {
                                string toolResult = GetCurrentLocation();
                                messages.Add(new ToolChatMessage(toolCall.Id, toolResult));
                                break;
                            }

                        case nameof(GetCurrentWeather):
                            {
                                // The arguments that the model wants to use to call the function are specified as a
                                // stringified JSON object based on the schema defined in the tool definition. Note that
                                // the model may hallucinate arguments too. Consequently, it is important to do the
                                // appropriate parsing and validation before calling the function.
                                using JsonDocument argumentsJson = JsonDocument.Parse(toolCall.FunctionArguments);
                                bool hasLocation = argumentsJson.RootElement.TryGetProperty("location", out JsonElement location);
                                bool hasUnit = argumentsJson.RootElement.TryGetProperty("unit", out JsonElement unit);

                                if (!hasLocation)
                                {
                                    throw new ArgumentNullException(nameof(location), "The location argument is required.");
                                }

                                string toolResult = hasUnit
                                    ? GetCurrentWeather(location.GetString(), unit.GetString())
                                    : GetCurrentWeather(location.GetString());
                                messages.Add(new ToolChatMessage(toolCall.Id, toolResult));
                                break;
                            }

                        default:
                            {
                                // Handle other unexpected calls.
                                throw new NotImplementedException();
                            }
                    }
                }

                requiresAction = true;
                break;
            }

        case ChatFinishReason.Length:
            throw new NotImplementedException("Incomplete model output due to MaxTokens parameter or token limit exceeded.");

        case ChatFinishReason.ContentFilter:
            throw new NotImplementedException("Omitted content due to a content filter flag.");

        case ChatFinishReason.FunctionCall:
            throw new NotImplementedException("Deprecated in favor of tool calls.");

        default:
            throw new NotImplementedException(completion.FinishReason.ToString());
    }
} while (requiresAction);
```

How to use chat completions with structured outputs

Beginning with the `gpt-4o-mini`, `gpt-4o-mini-2024-07-18`, and `gpt-4o-2024-08-06` model snapshots, structured outputs are available for both top-level response content and tool calls in the chat completion and assistants APIs. For information about the feature, see the Structured Outputs guide.

To use structured outputs to constrain chat completion content, set an appropriate `ChatResponseFormat` as in the following example:

```csharp
List<ChatMessage> messages =
[
    new UserChatMessage("How can I solve 8x + 7 = -23?"),
];

ChatCompletionOptions options = new()
{
    ResponseFormat = ChatResponseFormat.CreateJsonSchemaFormat(
        jsonSchemaFormatName: "math_reasoning",
        jsonSchema: BinaryData.FromBytes("""
            {
                "type": "object",
                "properties": {
                    "steps": {
                        "type": "array",
                        "items": {
                            "type": "object",
                            "properties": {
                                "explanation": { "type": "string" },
                                "output": { "type": "string" }
                            },
                            "required": ["explanation", "output"],
                            "additionalProperties": false
                        }
                    },
                    "final_answer": { "type": "string" }
                },
                "required": ["steps", "final_answer"],
                "additionalProperties": false
            }
            """u8.ToArray()),
        jsonSchemaIsStrict: true)
};

ChatCompletion completion = client.CompleteChat(messages, options);

using JsonDocument structuredJson = JsonDocument.Parse(completion.Content[0].Text);

Console.WriteLine($"Final answer: {structuredJson.RootElement.GetProperty("final_answer")}");
Console.WriteLine("Reasoning steps:");

foreach (JsonElement stepElement in structuredJson.RootElement.GetProperty("steps").EnumerateArray())
{
    Console.WriteLine($"  - Explanation: {stepElement.GetProperty("explanation")}");
    Console.WriteLine($"    Output: {stepElement.GetProperty("output")}");
}
```

How to use chat completions with audio

Starting with the `gpt-4o-audio-preview` model, chat completions can process audio input and output.

This example demonstrates:

  1. Configuring the client with the supported `gpt-4o-audio-preview` model
  2. Supplying user audio input on a chat completion request
  3. Requesting model audio output from the chat completion operation
  4. Retrieving audio output from a `ChatCompletion` instance
  5. Using past audio output as `ChatMessage` conversation history

```csharp
// Chat audio input and output is only supported on specific models, beginning with gpt-4o-audio-preview
ChatClient client = new("gpt-4o-audio-preview", Environment.GetEnvironmentVariable("OPENAI_API_KEY"));

// Input audio is provided to a request by adding an audio content part to a user message
string audioFilePath = Path.Combine("Assets", "realtime_whats_the_weather_pcm16_24khz_mono.wav");
byte[] audioFileRawBytes = File.ReadAllBytes(audioFilePath);
BinaryData audioData = BinaryData.FromBytes(audioFileRawBytes);
List<ChatMessage> messages =
[
    new UserChatMessage(ChatMessageContentPart.CreateInputAudioPart(audioData, ChatInputAudioFormat.Wav)),
];

// Output audio is requested by configuring ChatCompletionOptions to include the appropriate
// ResponseModalities values and corresponding AudioOptions.
ChatCompletionOptions options = new()
{
    ResponseModalities = ChatResponseModalities.Text | ChatResponseModalities.Audio,
    AudioOptions = new(ChatOutputAudioVoice.Alloy, ChatOutputAudioFormat.Mp3),
};

ChatCompletion completion = client.CompleteChat(messages, options);

void PrintAudioContent()
{
    if (completion.OutputAudio is ChatOutputAudio outputAudio)
    {
        Console.WriteLine($"Response audio transcript: {outputAudio.Transcript}");

        string outputFilePath = $"{outputAudio.Id}.mp3";
        using (FileStream outputFileStream = File.OpenWrite(outputFilePath))
        {
            outputFileStream.Write(outputAudio.AudioBytes);
        }
        Console.WriteLine($"Response audio written to file: {outputFilePath}");
        Console.WriteLine($"Valid on followup requests until: {outputAudio.ExpiresAt}");
    }
}

PrintAudioContent();

// To refer to past audio output, create an assistant message from the earlier ChatCompletion, use the earlier
// response content part, or use ChatMessageContentPart.CreateAudioPart(string) to manually instantiate a part.
messages.Add(new AssistantChatMessage(completion));
messages.Add("Can you say that like a pirate?");
completion = client.CompleteChat(messages, options);

PrintAudioContent();
```

Streaming closely parallels the non-streaming case: `StreamingChatCompletionUpdate` instances can include an `OutputAudioUpdate` that may contain any of:

  • The `Id` of the streamed audio content, which can be referenced by subsequent `AssistantChatMessage` instances via `ChatAudioReference` once the streaming response is complete; this may appear across multiple `StreamingChatCompletionUpdate` instances but will always be the same value when present
  • The `ExpiresAt` value that describes when the `Id` will no longer be valid for use with `ChatAudioReference` in subsequent requests; this typically appears once and only once, in the final `StreamingOutputAudioUpdate`
  • Incremental `TranscriptUpdate` and/or `AudioBytesUpdate` values, which can be incrementally consumed and, when concatenated, form the complete audio transcript and audio output for the overall response; many of these typically appear
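Consuming these updates might look like the following rough sketch; the member names (`OutputAudioUpdate`, `TranscriptUpdate`, `AudioBytesUpdate`) are assumptions taken from the descriptions above and should be verified against `OpenAI.netstandard2.0.cs`:

```csharp
// Sketch only: accumulate the streamed transcript and audio bytes from each update.
StringBuilder transcript = new();
using MemoryStream audioStream = new();

foreach (StreamingChatCompletionUpdate update in client.CompleteChatStreaming(messages, options))
{
    if (update.OutputAudioUpdate is { } audioUpdate)
    {
        // TranscriptUpdate and AudioBytesUpdate arrive incrementally; concatenate them.
        transcript.Append(audioUpdate.TranscriptUpdate);
        if (audioUpdate.AudioBytesUpdate is not null)
        {
            audioStream.Write(audioUpdate.AudioBytesUpdate.ToMemory().Span);
        }
    }
}
```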

How to use responses with streaming and reasoning

```csharp
OpenAIResponseClient client = new(
    model: "o3-mini",
    apiKey: Environment.GetEnvironmentVariable("OPENAI_API_KEY"));

OpenAIResponse response = await client.CreateResponseAsync(
    userInputText: "What's the optimal strategy to win at poker?",
    new ResponseCreationOptions()
    {
        ReasoningOptions = new ResponseReasoningOptions()
        {
            ReasoningEffortLevel = ResponseReasoningEffortLevel.High,
        },
    });

await foreach (StreamingResponseUpdate update in client.CreateResponseStreamingAsync(
    userInputText: "What's the optimal strategy to win at poker?",
    new ResponseCreationOptions()
    {
        ReasoningOptions = new ResponseReasoningOptions()
        {
            ReasoningEffortLevel = ResponseReasoningEffortLevel.High,
        },
    }))
{
    if (update is StreamingResponseOutputItemAddedUpdate itemUpdate
        && itemUpdate.Item is ReasoningResponseItem reasoningItem)
    {
        Console.WriteLine($"[Reasoning] ({reasoningItem.Status})");
    }
    else if (update is StreamingResponseOutputItemDoneUpdate itemDone
        && itemDone.Item is ReasoningResponseItem reasoningDone)
    {
        Console.WriteLine($"[Reasoning DONE] ({reasoningDone.Status})");
    }
    else if (update is StreamingResponseOutputTextDeltaUpdate delta)
    {
        Console.Write(delta.Delta);
    }
}
```

How to use responses with file search

```csharp
OpenAIResponseClient client = new(
    model: "gpt-4o-mini",
    apiKey: Environment.GetEnvironmentVariable("OPENAI_API_KEY"));

ResponseTool fileSearchTool = ResponseTool.CreateFileSearchTool(vectorStoreIds: [ExistingVectorStoreForTest.Id]);
OpenAIResponse response = await client.CreateResponseAsync(
    userInputText: "According to available files, what's the secret number?",
    new ResponseCreationOptions()
    {
        Tools = { fileSearchTool }
    });

foreach (ResponseItem outputItem in response.OutputItems)
{
    if (outputItem is FileSearchCallResponseItem fileSearchCall)
    {
        Console.WriteLine($"[file_search] ({fileSearchCall.Status}): {fileSearchCall.Id}");
        foreach (string query in fileSearchCall.Queries)
        {
            Console.WriteLine($"  - {query}");
        }
    }
    else if (outputItem is MessageResponseItem message)
    {
        Console.WriteLine($"[{message.Role}] {message.Content.FirstOrDefault()?.Text}");
    }
}
```

How to use responses with web search

```csharp
OpenAIResponseClient client = new(
    model: "gpt-4o-mini",
    apiKey: Environment.GetEnvironmentVariable("OPENAI_API_KEY"));

OpenAIResponse response = await client.CreateResponseAsync(
    userInputText: "What's a happy news headline from today?",
    new ResponseCreationOptions()
    {
        Tools = { ResponseTool.CreateWebSearchTool() },
    });

foreach (ResponseItem item in response.OutputItems)
{
    if (item is WebSearchCallResponseItem webSearchCall)
    {
        Console.WriteLine($"[Web search invoked] ({webSearchCall.Status}) {webSearchCall.Id}");
    }
    else if (item is MessageResponseItem message)
    {
        Console.WriteLine($"[{message.Role}] {message.Content?.FirstOrDefault()?.Text}");
    }
}
```

How to generate text embeddings

In this example, you want to create a trip-planning website that allows customers to write a prompt describing the kind of hotel that they are looking for and then offers hotel recommendations that closely match this description. To achieve this, it is possible to use text embeddings to measure the relatedness of text strings. In summary, you can get embeddings of the hotel descriptions, store them in a vector database, and use them to build a search index that you can query using the embedding of a given customer's prompt.

To generate a text embedding, use `EmbeddingClient` from the `OpenAI.Embeddings` namespace:

```csharp
using OpenAI.Embeddings;

EmbeddingClient client = new("text-embedding-3-small", Environment.GetEnvironmentVariable("OPENAI_API_KEY"));

string description = "Best hotel in town if you like luxury hotels. They have an amazing infinity pool, a spa,"
    + " and a really helpful concierge. The location is perfect -- right downtown, close to all the tourist"
    + " attractions. We highly recommend this hotel.";

OpenAIEmbedding embedding = client.GenerateEmbedding(description);
ReadOnlyMemory<float> vector = embedding.ToFloats();
```

Notice that the resulting embedding is a list (also called a vector) of floating point numbers represented as an instance of `ReadOnlyMemory<float>`. By default, the length of the embedding vector will be 1536 when using the `text-embedding-3-small` model or 3072 when using the `text-embedding-3-large` model. Generally, larger embeddings perform better, but using them also tends to cost more in terms of compute, memory, and storage. You can reduce the dimensions of the embedding by creating an instance of the `EmbeddingGenerationOptions` class, setting the `Dimensions` property, and passing it as an argument in your call to the `GenerateEmbedding` method:

```csharp
EmbeddingGenerationOptions options = new() { Dimensions = 512 };

OpenAIEmbedding embedding = client.GenerateEmbedding(description, options);
```
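To measure the relatedness of two embeddings, as in the hotel-search scenario above, cosine similarity is a common choice. The helper below is an illustrative sketch, not part of the library:

```csharp
// Illustrative helper, not part of the OpenAI library: cosine similarity between two
// embedding vectors of equal length; values near 1 indicate closely related text.
static float CosineSimilarity(ReadOnlyMemory<float> a, ReadOnlyMemory<float> b)
{
    ReadOnlySpan<float> x = a.Span, y = b.Span;
    float dot = 0f, normX = 0f, normY = 0f;
    for (int i = 0; i < x.Length; i++)
    {
        dot += x[i] * y[i];
        normX += x[i] * x[i];
        normY += y[i] * y[i];
    }
    return dot / (MathF.Sqrt(normX) * MathF.Sqrt(normY));
}
```

You could rank stored hotel-description embeddings by their similarity to the embedding of a customer's prompt and return the top matches.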

How to generate images

In this example, you want to build an app to help interior designers prototype new ideas based on the latest design trends. As part of the creative process, an interior designer can use this app to generate images for inspiration simply by describing the scene in their head as a prompt. As expected, high-quality, strikingly dramatic images with finer details deliver the best results for this application.

To generate an image, use `ImageClient` from the `OpenAI.Images` namespace:

```csharp
using OpenAI.Images;

ImageClient client = new("dall-e-3", Environment.GetEnvironmentVariable("OPENAI_API_KEY"));
```

Generating an image always requires a `prompt` that describes what should be generated. To further tailor the image generation to your specific needs, you can create an instance of the `ImageGenerationOptions` class and set the `Quality`, `Size`, and `Style` properties accordingly. Note that you can also set the `ResponseFormat` property of `ImageGenerationOptions` to `GeneratedImageFormat.Bytes` in order to receive the resulting PNG as `BinaryData` (instead of the default remote `Uri`) if this is convenient for your use case.

```csharp
string prompt = "The concept for a living room that blends Scandinavian simplicity with Japanese minimalism for"
    + " a serene and cozy atmosphere. It's a space that invites relaxation and mindfulness, with natural light"
    + " and fresh air. Using neutral tones, including colors like white, beige, gray, and black, that create a"
    + " sense of harmony. Featuring sleek wood furniture with clean lines and subtle curves to add warmth and"
    + " elegance. Plants and flowers in ceramic pots adding color and life to a space. They can serve as focal"
    + " points, creating a connection with nature. Soft textiles and cushions in organic fabrics adding comfort"
    + " and softness to a space. They can serve as accents, adding contrast and texture.";

ImageGenerationOptions options = new()
{
    Quality = GeneratedImageQuality.High,
    Size = GeneratedImageSize.W1792xH1024,
    Style = GeneratedImageStyle.Vivid,
    ResponseFormat = GeneratedImageFormat.Bytes
};
```

Finally, call the `ImageClient`'s `GenerateImage` method, passing the prompt and the `ImageGenerationOptions` instance as arguments:

```csharp
GeneratedImage image = client.GenerateImage(prompt, options);
BinaryData bytes = image.ImageBytes;
```

For illustrative purposes, you could then save the generated image to local storage:

```csharp
using FileStream stream = File.OpenWrite($"{Guid.NewGuid()}.png");
bytes.ToStream().CopyTo(stream);
```

How to transcribe audio

In this example, an audio file is transcribed using the Whisper speech-to-text model, including both word- and audio-segment-level timestamp information.

```csharp
using OpenAI.Audio;

AudioClient client = new("whisper-1", Environment.GetEnvironmentVariable("OPENAI_API_KEY"));

string audioFilePath = Path.Combine("Assets", "audio_houseplant_care.mp3");

AudioTranscriptionOptions options = new()
{
    ResponseFormat = AudioTranscriptionFormat.Verbose,
    TimestampGranularities = AudioTimestampGranularities.Word | AudioTimestampGranularities.Segment,
};

AudioTranscription transcription = client.TranscribeAudio(audioFilePath, options);

Console.WriteLine("Transcription:");
Console.WriteLine($"{transcription.Text}");

Console.WriteLine();
Console.WriteLine($"Words:");
foreach (TranscribedWord word in transcription.Words)
{
    Console.WriteLine($"  {word.Word,15} : {word.StartTime.TotalMilliseconds,5:0} - {word.EndTime.TotalMilliseconds,5:0}");
}

Console.WriteLine();
Console.WriteLine($"Segments:");
foreach (TranscribedSegment segment in transcription.Segments)
{
    Console.WriteLine($"  {segment.Text,90} : {segment.StartTime.TotalMilliseconds,5:0} - {segment.EndTime.TotalMilliseconds,5:0}");
}
```

How to use assistants with retrieval augmented generation (RAG)

In this example, you have a JSON document with the monthly sales information of different products, and you want to build an assistant capable of analyzing it and answering questions about it.

To achieve this, use both `OpenAIFileClient` from the `OpenAI.Files` namespace and `AssistantClient` from the `OpenAI.Assistants` namespace.

Important: The Assistants REST API is currently in beta. As such, the details are subject to change, and the `AssistantClient` is correspondingly attributed as `[Experimental]`. To use it, you must first suppress the `OPENAI001` warning.

```csharp
using OpenAI.Assistants;
using OpenAI.Files;

OpenAIClient openAIClient = new(Environment.GetEnvironmentVariable("OPENAI_API_KEY"));
OpenAIFileClient fileClient = openAIClient.GetOpenAIFileClient();
AssistantClient assistantClient = openAIClient.GetAssistantClient();
```

Here is an example of what the JSON document might look like:

```csharp
using Stream document = BinaryData.FromBytes("""
    {
        "description": "This document contains the sale history data for Contoso products.",
        "sales": [
            {
                "month": "January",
                "by_product": {
                    "113043": 15,
                    "113045": 12,
                    "113049": 2
                }
            },
            {
                "month": "February",
                "by_product": {
                    "113045": 22
                }
            },
            {
                "month": "March",
                "by_product": {
                    "113045": 16,
                    "113055": 5
                }
            }
        ]
    }
    """u8.ToArray()).ToStream();
```

Upload this document to OpenAI using the `OpenAIFileClient`'s `UploadFile` method, ensuring that you use `FileUploadPurpose.Assistants` to allow your assistant to access it later:

```csharp
OpenAIFile salesFile = fileClient.UploadFile(
    document,
    "monthly_sales.json",
    FileUploadPurpose.Assistants);
```

Create a new assistant using an instance of the `AssistantCreationOptions` class to customize it. Here, we use:

  • A friendly `Name` for the assistant, as it will display in the Playground
  • Tool definition instances for the tools that the assistant should have access to; here, we use `FileSearchToolDefinition` to process the sales document we just uploaded and `CodeInterpreterToolDefinition` so we can analyze and visualize the numeric data
  • Resources for the assistant to use with its tools, here using the `VectorStoreCreationHelper` type to automatically make a new vector store that indexes the sales file; alternatively, you could use `VectorStoreClient` to manage the vector store separately

```csharp
AssistantCreationOptions assistantOptions = new()
{
    Name = "Example: Contoso sales RAG",
    Instructions =
        "You are an assistant that looks up sales data and helps visualize the information based"
        + " on user queries. When asked to generate a graph, chart, or other visualization, use"
        + " the code interpreter tool to do so.",
    Tools =
    {
        new FileSearchToolDefinition(),
        new CodeInterpreterToolDefinition(),
    },
    ToolResources = new()
    {
        FileSearch = new()
        {
            NewVectorStores =
            {
                new VectorStoreCreationHelper([salesFile.Id]),
            }
        }
    },
};

Assistant assistant = assistantClient.CreateAssistant("gpt-4o", assistantOptions);
```

Next, create a new thread. For illustrative purposes, you could include an initial user message asking about the sales information of a given product and then use the `AssistantClient`'s `CreateThreadAndRun` method to get it started:

```csharp
ThreadCreationOptions threadOptions = new()
{
    InitialMessages = { "How well did product 113045 sell in February? Graph its trend over time." }
};

ThreadRun threadRun = assistantClient.CreateThreadAndRun(assistant.Id, threadOptions);
```

Poll the status of the run until it is no longer queued or in progress:

```csharp
do
{
    Thread.Sleep(TimeSpan.FromSeconds(1));
    threadRun = assistantClient.GetRun(threadRun.ThreadId, threadRun.Id);
} while (!threadRun.Status.IsTerminal);
```

If everything went well, the terminal status of the run will be `RunStatus.Completed`.

Finally, you can use the `AssistantClient`'s `GetMessages` method to retrieve the messages associated with this thread, which now include the responses from the assistant to the initial user message.

For illustrative purposes, you could print the messages to the console and also save any images produced by the assistant to local storage:

```csharp
CollectionResult<ThreadMessage> messages = assistantClient.GetMessages(
    threadRun.ThreadId,
    new MessageCollectionOptions() { Order = MessageCollectionOrder.Ascending });

foreach (ThreadMessage message in messages)
{
    Console.Write($"[{message.Role.ToString().ToUpper()}]: ");
    foreach (MessageContent contentItem in message.Content)
    {
        if (!string.IsNullOrEmpty(contentItem.Text))
        {
            Console.WriteLine($"{contentItem.Text}");

            if (contentItem.TextAnnotations.Count > 0)
            {
                Console.WriteLine();
            }

            // Include annotations, if any.
            foreach (TextAnnotation annotation in contentItem.TextAnnotations)
            {
                if (!string.IsNullOrEmpty(annotation.InputFileId))
                {
                    Console.WriteLine($"* File citation, file ID: {annotation.InputFileId}");
                }
                if (!string.IsNullOrEmpty(annotation.OutputFileId))
                {
                    Console.WriteLine($"* File output, new file ID: {annotation.OutputFileId}");
                }
            }
        }
        if (!string.IsNullOrEmpty(contentItem.ImageFileId))
        {
            OpenAIFile imageInfo = fileClient.GetFile(contentItem.ImageFileId);
            BinaryData imageBytes = fileClient.DownloadFile(contentItem.ImageFileId);
            using FileStream stream = File.OpenWrite($"{imageInfo.Filename}.png");
            imageBytes.ToStream().CopyTo(stream);

            Console.WriteLine($"<image: {imageInfo.Filename}.png>");
        }
    }
    Console.WriteLine();
}
```

And it would yield something like this:

```
[USER]: How well did product 113045 sell in February? Graph its trend over time.

[ASSISTANT]: Product 113045 sold 22 units in February【4:0†monthly_sales.json】.

Now, I will generate a graph to show its sales trend over time.
* File citation, file ID: file-hGOiwGNftMgOsjbynBpMCPFn

[ASSISTANT]: <image: 015d8e43-17fe-47de-af40-280f25452280.png>

The sales trend for Product 113045 over the past three months shows that:

- In January, 12 units were sold.
- In February, 22 units were sold, indicating significant growth.
- In March, sales dropped slightly to 16 units.

The graph above visualizes this trend, showing a peak in sales during February.
```

How to use assistants with streaming and vision

This example shows how to use the v2 Assistants API to provide image data to an assistant and then stream the run's response.

As before, you will use an `OpenAIFileClient` and an `AssistantClient`:

```csharp
OpenAIClient openAIClient = new(Environment.GetEnvironmentVariable("OPENAI_API_KEY"));
OpenAIFileClient fileClient = openAIClient.GetOpenAIFileClient();
AssistantClient assistantClient = openAIClient.GetAssistantClient();
```

For this example, we will use both image data from a local file and an image located at a URL. For the local data, we upload the file with the `Vision` upload purpose, which also allows it to be downloaded and retrieved later.

```csharp
OpenAIFile pictureOfAppleFile = fileClient.UploadFile(
    Path.Combine("Assets", "images_apple.png"),
    FileUploadPurpose.Vision);
Uri linkToPictureOfOrange = new("https://raw.githubusercontent.com/openai/openai-dotnet/refs/heads/main/examples/Assets/images_orange.png");
```

Next, create a new assistant with a vision-capable model like `gpt-4o` and a thread that references the image information:

```csharp
Assistant assistant = assistantClient.CreateAssistant(
    "gpt-4o",
    new AssistantCreationOptions()
    {
        Instructions = "When asked a question, attempt to answer very concisely. "
            + "Prefer one-sentence answers whenever feasible."
    });

AssistantThread thread = assistantClient.CreateThread(new ThreadCreationOptions()
{
    InitialMessages =
    {
        new ThreadInitializationMessage(
            MessageRole.User,
            [
                "Hello, assistant! Please compare these two images for me:",
                MessageContent.FromImageFileId(pictureOfAppleFile.Id),
                MessageContent.FromImageUri(linkToPictureOfOrange),
            ]),
    }
});
```

With the assistant and thread prepared, use the `CreateRunStreaming` method to get an enumerable `CollectionResult<StreamingUpdate>`. You can then iterate over this collection with `foreach`. For async calling patterns, use `CreateRunStreamingAsync` and iterate over the `AsyncCollectionResult<StreamingUpdate>` with `await foreach` instead. Note that streaming variants also exist for `CreateThreadAndRunStreaming` and `SubmitToolOutputsToRunStreaming`.

```csharp
CollectionResult<StreamingUpdate> streamingUpdates = assistantClient.CreateRunStreaming(
    thread.Id,
    assistant.Id,
    new RunCreationOptions()
    {
        AdditionalInstructions = "When possible, try to sneak in puns if you're asked to compare things.",
    });
```

Finally, to handle the `StreamingUpdate`s as they arrive, you can use the `UpdateKind` property on the base `StreamingUpdate` and/or downcast to a specifically desired update type, like `MessageContentUpdate` for `thread.message.delta` events or `RequiredActionUpdate` for streaming tool calls.

```csharp
foreach (StreamingUpdate streamingUpdate in streamingUpdates)
{
    if (streamingUpdate.UpdateKind == StreamingUpdateReason.RunCreated)
    {
        Console.WriteLine($"--- Run started! ---");
    }
    if (streamingUpdate is MessageContentUpdate contentUpdate)
    {
        Console.Write(contentUpdate.Text);
    }
}
```
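The async calling pattern mentioned above looks much the same. As a sketch (reusing the same thread, assistant, and run options as the synchronous example; exact overloads may vary by library version):

```csharp
// Async variant: CreateRunStreamingAsync returns an
// AsyncCollectionResult<StreamingUpdate> that is consumed with await foreach.
AsyncCollectionResult<StreamingUpdate> asyncUpdates = assistantClient.CreateRunStreamingAsync(
    thread.Id,
    assistant.Id,
    new RunCreationOptions()
    {
        AdditionalInstructions = "When possible, try to sneak in puns if you're asked to compare things.",
    });

await foreach (StreamingUpdate streamingUpdate in asyncUpdates)
{
    if (streamingUpdate is MessageContentUpdate contentUpdate)
    {
        Console.Write(contentUpdate.Text);
    }
}
```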

This will yield streamed output from the run like the following:

```
--- Run started! ---
The first image depicts a multicolored apple with a blend of red and green hues, while the second image shows an orange with a bright, textured orange peel; one might say it’s comparing apples to oranges!
```

How to work with Azure OpenAI

For Azure OpenAI scenarios, use the Azure SDK and, more specifically, the Azure OpenAI client library for .NET.

The Azure OpenAI client library for .NET is a companion to this library, and all common capabilities between OpenAI and Azure OpenAI share the same scenario clients, methods, and request/response types. It is designed to make Azure-specific scenarios straightforward, with extensions for Azure-specific concepts like Responsible AI content filter results and On Your Data integration.

```csharp
AzureOpenAIClient azureClient = new(
    new Uri("https://your-azure-openai-resource.com"),
    new DefaultAzureCredential());
ChatClient chatClient = azureClient.GetChatClient("my-gpt-35-turbo-deployment");

ChatCompletion completion = chatClient.CompleteChat(
    [
        // System messages represent instructions or other guidance about how the assistant should behave
        new SystemChatMessage("You are a helpful assistant that talks like a pirate."),
        // User messages represent user input, whether historical or the most recent input
        new UserChatMessage("Hi, can you help me?"),
        // Assistant messages in a request represent conversation history for responses
        new AssistantChatMessage("Arrr! Of course, me hearty! What can I do for ye?"),
        new UserChatMessage("What's the best way to train a parrot?"),
    ]);

Console.WriteLine($"{completion.Role}: {completion.Content[0].Text}");
```

Advanced scenarios

Using protocol methods

In addition to the client methods that use strongly-typed request and response objects, the .NET library also provides protocol methods that enable more direct access to the REST API. Protocol methods are "binary in, binary out," accepting `BinaryContent` as request bodies and providing `BinaryData` as response bodies.

For example, to use the protocol method variant of the `ChatClient`'s `CompleteChat` method, pass the request body as `BinaryContent`:

```csharp
ChatClient client = new("gpt-4o", Environment.GetEnvironmentVariable("OPENAI_API_KEY"));

BinaryData input = BinaryData.FromBytes("""
    {
        "model": "gpt-4o",
        "messages": [
            {
                "role": "user",
                "content": "Say 'this is a test.'"
            }
        ]
    }
    """u8.ToArray());

using BinaryContent content = BinaryContent.Create(input);
ClientResult result = client.CompleteChat(content);
BinaryData output = result.GetRawResponse().Content;

using JsonDocument outputAsJson = JsonDocument.Parse(output.ToString());
string message = outputAsJson.RootElement
    .GetProperty("choices"u8)[0]
    .GetProperty("message"u8)
    .GetProperty("content"u8)
    .GetString();

Console.WriteLine($"[ASSISTANT]: {message}");
```

Notice how you can then call the resulting `ClientResult`'s `GetRawResponse` method and retrieve the response body as `BinaryData` via the `PipelineResponse`'s `Content` property.

Mock a client for testing

The OpenAI .NET library has been designed to support mocking, providing key features such as:

  • Client methods made virtual to allow overriding.
  • Model factories to assist in instantiating API output models that lack public constructors.

To illustrate how mocking works, suppose you want to validate the behavior of the following method using the Moq library. Given the path to an audio file, it determines whether the audio contains a specified secret word:

```csharp
public bool ContainsSecretWord(AudioClient client, string audioFilePath, string secretWord)
{
    AudioTranscription transcription = client.TranscribeAudio(audioFilePath);
    return transcription.Text.Contains(secretWord);
}
```

Create mocks of `AudioClient` and `ClientResult<AudioTranscription>`, set up the methods and properties that will be invoked, then test the behavior of the `ContainsSecretWord` method. Since the `AudioTranscription` class does not provide public constructors, it must be instantiated via the `OpenAIAudioModelFactory` static class:

```csharp
// Instantiate mocks and the AudioTranscription object.
Mock<AudioClient> mockClient = new();
Mock<ClientResult<AudioTranscription>> mockResult = new(null, Mock.Of<PipelineResponse>());
AudioTranscription transcription = OpenAIAudioModelFactory.AudioTranscription(text: "I swear I saw an apple flying yesterday!");

// Set up mocks' properties and methods.
mockResult.SetupGet(result => result.Value).Returns(transcription);
mockClient.Setup(client => client.TranscribeAudio(It.IsAny<string>(), It.IsAny<AudioTranscriptionOptions>()))
    .Returns(mockResult.Object);

// Perform validation.
AudioClient client = mockClient.Object;
bool containsSecretWord = ContainsSecretWord(client, "<audioFilePath>", "apple");
Assert.That(containsSecretWord, Is.True);
```

All namespaces have a corresponding model factory to support mocking, with the exception of the `OpenAI.Assistants` and `OpenAI.VectorStores` namespaces, for which model factories are coming soon.

Automatically retrying errors

By default, the client classes will automatically retry the following errors up to three additional times using exponential backoff:

  • 408 Request Timeout
  • 429 Too Many Requests
  • 500 Internal Server Error
  • 502 Bad Gateway
  • 503 Service Unavailable
  • 504 Gateway Timeout
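
If the defaults do not fit your scenario, the retry behavior can be adjusted when constructing a client. As a sketch, this assumes `OpenAIClientOptions` derives from `System.ClientModel`'s `ClientPipelineOptions`, whose `RetryPolicy` property accepts a `ClientRetryPolicy` (verify the exact types against the `OpenAI.netstandard2.0.cs` API listing for your library version):

```csharp
using System.ClientModel;
using System.ClientModel.Primitives;
using OpenAI.Chat;

// Assumption: a custom ClientRetryPolicy can be supplied through
// OpenAIClientOptions.RetryPolicy to change the retry cap.
OpenAIClientOptions options = new()
{
    // Allow up to 5 additional attempts instead of the default 3.
    RetryPolicy = new ClientRetryPolicy(maxRetries: 5),
};

ChatClient client = new(
    model: "gpt-4o",
    credential: new ApiKeyCredential(Environment.GetEnvironmentVariable("OPENAI_API_KEY")),
    options: options);
```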

Observability

The OpenAI .NET library supports experimental distributed tracing and metrics with OpenTelemetry. Check out Observability with OpenTelemetry for more details.
