
Custom LLM providers

How to integrate with other Flutter features.

The protocol connecting an LLM and the LlmChatView is expressed in the LlmProvider interface:

dart
abstract class LlmProvider implements Listenable {
  Stream<String> generateStream(
    String prompt, {
    Iterable<Attachment> attachments,
  });

  Stream<String> sendMessageStream(
    String prompt, {
    Iterable<Attachment> attachments,
  });

  Iterable<ChatMessage> get history;
  set history(Iterable<ChatMessage> history);
}

The LLM can be in the cloud or local, hosted on Google Cloud Platform or another cloud provider, and proprietary or open source. Any LLM or LLM-like endpoint can be plugged into the chat view by implementing this interface. The AI Toolkit comes with two providers out of the box, both of which implement the LlmProvider interface required to plug a provider into the chat view.
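As an illustration of that plug-in point, here's a minimal sketch of handing a provider to the chat view. It assumes the flutter_ai_toolkit package; `myProvider` stands in for any LlmProvider implementation of your own.

```dart
// A minimal sketch, assuming the flutter_ai_toolkit package.
// `myProvider` stands in for any LlmProvider implementation of your own.
import 'package:flutter/material.dart';
import 'package:flutter_ai_toolkit/flutter_ai_toolkit.dart';

class ChatPage extends StatelessWidget {
  const ChatPage({super.key, required this.myProvider});

  final LlmProvider myProvider;

  @override
  Widget build(BuildContext context) => Scaffold(
        // LlmChatView accepts any LlmProvider; this is the plug-in point.
        body: LlmChatView(provider: myProvider),
      );
}
```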

Implementation


To build your own provider, implement the LlmProvider interface with these things in mind:

  1. Providing for full configuration support

  2. Handling history

  3. Translating messages and attachments to the underlying LLM

  4. Calling the underlying LLM

  1. Configuration

To support full configurability in your custom provider, allow the user to create the underlying model and pass it in as a parameter, as the Firebase provider does:

dart
class FirebaseProvider extends LlmProvider ... {
  @immutable
  FirebaseProvider({
    required GenerativeModel model,
    ...
  })  : _model = model,
        ...;

  final GenerativeModel _model;
  ...
}

In this way, no matter what changes come to the underlying model in the future, all of its configuration knobs remain available to users of your custom provider.

  2. History

History is a big part of any provider: not only does the provider need to allow history to be manipulated directly, it must also notify listeners as the history changes. In addition, to support serialization and changing provider parameters, it must support receiving history as part of the construction process.

The Firebase provider handles this as shown:

dart
class FirebaseProvider extends LlmProvider with ChangeNotifier {
  @immutable
  FirebaseProvider({
    required GenerativeModel model,
    Iterable<ChatMessage>? history,
    ...
  })  : _model = model,
        _history = history?.toList() ?? [],
        ... {
    ...
  }

  final GenerativeModel _model;
  final List<ChatMessage> _history;
  ...

  @override
  Stream<String> sendMessageStream(
    String prompt, {
    Iterable<Attachment> attachments = const [],
  }) async* {
    final userMessage = ChatMessage.user(prompt, attachments);
    final llmMessage = ChatMessage.llm();
    _history.addAll([userMessage, llmMessage]);

    final response = _generateStream(
      prompt: prompt,
      attachments: attachments,
      contentStreamGenerator: _chat!.sendMessageStream,
    );

    yield* response.map((chunk) {
      llmMessage.append(chunk);
      return chunk;
    });

    notifyListeners();
  }

  @override
  Iterable<ChatMessage> get history => _history;

  @override
  set history(Iterable<ChatMessage> history) {
    _history.clear();
    _history.addAll(history);
    _chat = _startChat(history);
    notifyListeners();
  }

  ...
}

You'll notice several things in this code:

  • The use of ChangeNotifier to implement the Listenable method requirements of the LlmProvider interface
  • The ability to pass initial history in as a constructor parameter
  • Notifying listeners when there's a new user prompt/LLM response pair
  • Notifying listeners when the history is changed manually
  • Creating a new chat when the history changes, using the new history

Essentially, a custom provider manages the history for a single chat session with the underlying LLM. As the history changes, the underlying chat either needs to be kept up to date automatically (as the Firebase provider does when you call the underlying chat-specific methods) or manually recreated (as the Firebase provider does whenever the history is set manually).
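Because the interface implements Listenable, code outside the provider can also observe those history changes. A sketch of one use, autosaving history; the `save` callback is hypothetical, not part of the toolkit:

```dart
// A sketch: subscribe to a provider's history changes, e.g. to autosave.
// `autosaveHistory` and `save` are hypothetical; only LlmProvider,
// ChatMessage, and the Listenable contract come from the toolkit.
void autosaveHistory(
  LlmProvider provider,
  void Function(List<ChatMessage> history) save,
) {
  provider.addListener(() => save(provider.history.toList()));
}
```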

  3. Messages and attachments

Messages and attachments must be mapped from the standard ChatMessage class exposed by the LlmProvider type to whatever the underlying LLM handles. For example, the Firebase provider maps the AI Toolkit's ChatMessage class to the Content type provided by the Firebase AI Logic SDK, as shown in the following example:

dart
import 'package:firebase_ai/firebase_ai.dart';
...

class FirebaseProvider extends LlmProvider with ChangeNotifier {
  ...

  static Part _partFrom(Attachment attachment) => switch (attachment) {
        (final FileAttachment a) => DataPart(a.mimeType, a.bytes),
        (final LinkAttachment a) => FilePart(a.url),
      };

  static Content _contentFrom(ChatMessage message) => Content(
        message.origin.isUser ? 'user' : 'model',
        [
          TextPart(message.text ?? ''),
          ...message.attachments.map(_partFrom),
        ],
      );
}

The _contentFrom method is called whenever a user prompt needs to be sent to the underlying LLM. Every provider needs to supply its own mapping.
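For a provider targeting some other backend, the same mapping step might produce a JSON wire format instead. A sketch with purely illustrative field names (not any real API):

```dart
// A sketch of mapping a ChatMessage to a hypothetical JSON-based endpoint.
// The 'role'/'content'/'attachments' field names are illustrative only.
Map<String, dynamic> toWireFormat(ChatMessage message) => {
      'role': message.origin.isUser ? 'user' : 'assistant',
      'content': message.text ?? '',
      'attachments': [
        for (final attachment in message.attachments)
          if (attachment is LinkAttachment)
            {'type': 'link', 'url': attachment.url.toString()},
      ],
    };
```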

  4. Calling the LLM

How you call the underlying LLM to implement the generateStream and sendMessageStream methods depends on the protocol it exposes. The Firebase provider in the AI Toolkit handles configuration and history itself, but calls to generateStream and sendMessageStream each end in a call to an API from the Firebase AI Logic SDK:

dart
class FirebaseProvider extends LlmProvider with ChangeNotifier {
  ...

  @override
  Stream<String> generateStream(
    String prompt, {
    Iterable<Attachment> attachments = const [],
  }) =>
      _generateStream(
        prompt: prompt,
        attachments: attachments,
        contentStreamGenerator: (c) => _model.generateContentStream([c]),
      );

  @override
  Stream<String> sendMessageStream(
    String prompt, {
    Iterable<Attachment> attachments = const [],
  }) async* {
    final userMessage = ChatMessage.user(prompt, attachments);
    final llmMessage = ChatMessage.llm();
    _history.addAll([userMessage, llmMessage]);

    final response = _generateStream(
      prompt: prompt,
      attachments: attachments,
      contentStreamGenerator: _chat!.sendMessageStream,
    );

    yield* response.map((chunk) {
      llmMessage.append(chunk);
      return chunk;
    });

    notifyListeners();
  }

  Stream<String> _generateStream({
    required String prompt,
    required Iterable<Attachment> attachments,
    required Stream<GenerateContentResponse> Function(Content)
        contentStreamGenerator,
  }) async* {
    final content = Content('user', [
      TextPart(prompt),
      ...attachments.map(_partFrom),
    ]);

    final response = contentStreamGenerator(content);
    yield* response
        .map((chunk) => chunk.text)
        .where((text) => text != null)
        .cast<String>();
  }

  @override
  Iterable<ChatMessage> get history => _history;

  @override
  set history(Iterable<ChatMessage> history) {
    _history.clear();
    _history.addAll(history);
    _chat = _startChat(history);
    notifyListeners();
  }
}

Examples


The Firebase provider implementation provides a good starting point for your own custom provider. If you'd like to see an example provider with all of the calls to the underlying LLM stripped away, check out the Echo example app, which simply formats the user's prompt and attachments as Markdown and sends it back as the response.
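To make the shape of a complete minimal provider concrete, here's a sketch along the same lines as the Echo example. It is written for illustration here, not taken from the actual Echo source; see the example app for the real implementation.

```dart
import 'package:flutter/foundation.dart';
import 'package:flutter_ai_toolkit/flutter_ai_toolkit.dart';

/// A sketch of a trivial provider that echoes the prompt back as Markdown.
/// Illustrative only; not the toolkit's actual Echo example.
class MyEchoProvider extends LlmProvider with ChangeNotifier {
  MyEchoProvider({Iterable<ChatMessage>? history})
      : _history = history?.toList() ?? [];

  final List<ChatMessage> _history;

  @override
  Stream<String> generateStream(
    String prompt, {
    Iterable<Attachment> attachments = const [],
  }) async* {
    // Echo the prompt back, formatted as Markdown.
    yield '# Echo\n$prompt';
  }

  @override
  Stream<String> sendMessageStream(
    String prompt, {
    Iterable<Attachment> attachments = const [],
  }) async* {
    // Record the user/LLM message pair, stream the response into the
    // LLM message, and notify listeners when the exchange completes.
    final userMessage = ChatMessage.user(prompt, attachments);
    final llmMessage = ChatMessage.llm();
    _history.addAll([userMessage, llmMessage]);

    await for (final chunk
        in generateStream(prompt, attachments: attachments)) {
      llmMessage.append(chunk);
      yield chunk;
    }
    notifyListeners();
  }

  @override
  Iterable<ChatMessage> get history => _history;

  @override
  set history(Iterable<ChatMessage> history) {
    _history
      ..clear()
      ..addAll(history);
    notifyListeners();
  }
}
```

Since this provider has no underlying chat session, the history setter only replaces the stored list and notifies listeners; a real provider would also recreate its chat session here, as the Firebase provider does.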


Unless stated otherwise, the documentation on this site reflects Flutter 3.38.6. Page last updated on 2026-01-07. View source or report an issue.

