OllamaSharp 🦙

OllamaSharp provides .NET bindings for the Ollama API, simplifying interactions with Ollama both locally and remotely.

🏆 Recommended by Microsoft


Usage

OllamaSharp wraps each Ollama API endpoint in awaitable methods that fully support response streaming.

The following list shows a few simple code examples.

Try our full-featured demo application that's included in this repository.

Initializing

```csharp
// set up the client
var uri = new Uri("http://localhost:11434");
var ollama = new OllamaApiClient(uri);

// select a model which should be used for further operations
ollama.SelectedModel = "qwen3:4b";
```

Native AOT Support

For .NET Native AOT scenarios, create a custom JsonSerializerContext with your types and pass it into the constructor.

```csharp
[JsonSerializable(typeof(MyCustomType))]
public partial class MyJsonContext : JsonSerializerContext { }

// pass the custom serializer context into the constructor for Native AOT
var ollama = new OllamaApiClient(uri, "qwen3:4b", MyJsonContext.Default);
```

See the Native AOT documentation for detailed guidance.

Listing all models that are available locally

```csharp
var models = await ollama.ListLocalModelsAsync();
```
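The result can be enumerated like any other collection. A minimal follow-up sketch; Name and Size are assumptions about the returned model type, so check the Model class in your OllamaSharp version:

```csharp
// print the name and size of every locally available model
// (Name and Size are assumed property names; verify them on the Model class)
foreach (var model in models)
    Console.WriteLine($"{model.Name} ({model.Size} bytes)");
```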

Pulling a model and reporting progress

```csharp
await foreach (var status in ollama.PullModelAsync("qwen3:32b"))
    Console.WriteLine($"{status.Percent}% {status.Status}");
```

Generating a completion directly into the console

```csharp
await foreach (var stream in ollama.GenerateAsync("How are you today?"))
    Console.Write(stream.Response);
```

Building interactive chats

```csharp
// messages including their roles and tool calls will automatically be tracked within the chat object
// and are accessible via the Messages property
var chat = new Chat(ollama);

while (true)
{
    var message = Console.ReadLine();

    await foreach (var answerToken in chat.SendAsync(message))
        Console.Write(answerToken);
}
```
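To steer the model's behavior, a system prompt can be supplied when the chat is created. A short sketch, assuming the Chat constructor accepts an optional instruction string (check the overloads in your version):

```csharp
// start a chat with a system prompt and inspect the tracked history afterwards
// (the second constructor parameter is an assumption; verify it in your version)
var chat = new Chat(ollama, "You are a concise assistant.");

await foreach (var answerToken in chat.SendAsync("Why is the sky blue?"))
    Console.Write(answerToken);

// the Messages property holds the conversation, including roles and tool calls
Console.WriteLine($"\n{chat.Messages.Count} messages tracked so far.");
```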

Usage with Microsoft.Extensions.AI

Microsoft built an abstraction library to streamline the usage of different AI providers. This is a really interesting concept if you plan to build apps that might use different providers, like ChatGPT, Claude and local models with Ollama.

I encourage you to read their announcement Introducing Microsoft.Extensions.AI Preview – Unified AI Building Blocks for .NET.

OllamaSharp is the first full implementation of their IChatClient and IEmbeddingGenerator, which makes it possible to use Ollama just like any other chat provider.

To do this, simply use the OllamaApiClient as IChatClient instead of IOllamaApiClient.

```csharp
// install package Microsoft.Extensions.AI.Abstractions
private static IChatClient CreateChatClient(Arguments arguments)
{
    if (arguments.Provider.Equals("ollama", StringComparison.OrdinalIgnoreCase))
        return new OllamaApiClient(arguments.Uri, arguments.Model);
    else
        return new OpenAIChatClient(new OpenAI.OpenAIClient(arguments.ApiKey), arguments.Model); // ChatGPT or compatible
}
```

The OllamaApiClient implements both interfaces from Microsoft.Extensions.AI; you just need to cast it accordingly, as the sketch after this list shows:

  • IChatClient for model inference
  • IEmbeddingGenerator<string, Embedding<float>> for embedding generation
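A minimal sketch of both casts. Note that the Microsoft.Extensions.AI method names have changed between preview releases, so GetResponseAsync and GenerateAsync are assumptions about a recent version of the abstractions package:

```csharp
using Microsoft.Extensions.AI;

var ollama = new OllamaApiClient(new Uri("http://localhost:11434"), "qwen3:4b");

// model inference through the abstraction
IChatClient chatClient = ollama;
var response = await chatClient.GetResponseAsync("Write a haiku about llamas.");
Console.WriteLine(response.Text);

// embedding generation through the abstraction
IEmbeddingGenerator<string, Embedding<float>> generator = ollama;
var embeddings = await generator.GenerateAsync(new[] { "Hello, llama!" });
Console.WriteLine($"Vector length: {embeddings[0].Vector.Length}");
```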

Cloud models aka Ollama Turbo

OllamaSharp can be used with Ollama cloud models as well. Use the constructor that takes an HttpClient and set it up to send the API key as a default request header.

```csharp
var client = new HttpClient();
client.BaseAddress = new Uri("http://localhost:11434");
client.DefaultRequestHeaders.Add("Authorization", "Bearer <your api key here>"); // replace with your actual API key

var ollama = new OllamaApiClient(client);
```
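From there the client is used exactly like a local one, for example (the model name below is a hypothetical example of a cloud-hosted model, not taken from this document):

```csharp
// cloud-backed clients expose the same API surface as local ones
// ("gpt-oss:120b" is a hypothetical cloud-hosted model name)
ollama.SelectedModel = "gpt-oss:120b";

await foreach (var stream in ollama.GenerateAsync("Why is the sky blue?"))
    Console.Write(stream.Response);
```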

OllamaSharp vs. Microsoft.Extensions.AI vs. Semantic Kernel

It can be confusing to decide which AI library to use in C#. The following overview should help you choose which library to start with.

Prefer OllamaSharp if ...

  • you plan to use Ollama models only
  • you want to use the native Ollama API, not only chats and embeddings but model management, usage information and more

Prefer Microsoft.Extensions.AI if ...

  • you only need chat and embedding functionality
  • you want to be able to use different providers like Ollama, OpenAI, Hugging Face, etc.

Prefer Semantic Kernel if ...

  • you need the highest flexibility with different providers, plugins, middleware, caching, memory and more
  • you need advanced prompt techniques like variable substitution and templating
  • you want to build agentic systems

No matter which one you choose, OllamaSharp should always be the bridge to Ollama behind the scenes, as recommended by Microsoft (1) (2) (3).

Thanks

I would like to thank all the contributors who take the time to improve OllamaSharp. First and foremost mili-tan, who always keeps OllamaSharp in sync with the Ollama API.

The icon and name were reused from the amazing Ollama project.

