DEV Community
Giorgio Boa for This is Angular
Firebase hybrid on-device with Angular

Some of you asked me to create an example of integration between Angular and one of Firebase's latest features: "hybrid on-device".

The core idea is to leverage the power of both cloud-based AI models and on-device (local) AI processing within a single application. The Firebase AI SDK provides this capability, allowing you to prioritize running AI tasks directly on the user's device whenever possible, while seamlessly falling back to cloud processing if the on-device model is unavailable or insufficient.

The Advantages

  • Reduced Latency: On-device processing eliminates network latency, leading to faster response times and a more responsive user experience. Imagine a situation where the user has a slow or unstable internet connection. On-device AI ensures the application can still function and provide value.

  • Offline Functionality: When the application is entirely offline, on-device AI models can still operate, providing a core set of AI features.

  • Privacy: Processing data locally minimizes data transfer to the cloud, which can be beneficial in applications dealing with sensitive information.

  • Cost Savings: Offloading processing to the device can reduce cloud costs associated with AI inference.

Let's jump to the code

This is an example of an Angular service that will allow us to use the Firebase API.

Important: Replace the placeholder configuration with your actual Firebase project credentials.

```typescript
import { Injectable } from '@angular/core';
import { initializeApp } from 'firebase/app';
import { getAI, getGenerativeModel, GoogleAIBackend } from 'firebase/ai';

@Injectable({
  providedIn: 'root',
})
export class AiService {
  private model: any;

  constructor() {
    const firebaseConfig = {
      // your Firebase config here
    };
    const firebaseApp = initializeApp(firebaseConfig);
    const ai = getAI(firebaseApp, { backend: new GoogleAIBackend() });
    this.model = getGenerativeModel(ai, {
      mode: 'prefer_on_device',
      model: 'gemini-2.5-flash',
    });
  }

  async generateTextFromImage(prompt: string, file: File): Promise<string> {
    try {
      const imagePart = await this.fileToGenerativePart(file);
      const result = await this.model.generateContentStream([
        prompt,
        imagePart,
      ]);
      let aggregatedResponse = '';
      for await (const chunk of result.stream) {
        aggregatedResponse += chunk.text();
      }
      return aggregatedResponse;
    } catch (err: any) {
      console.error(err.name, err.message);
      throw err;
    }
  }

  private async fileToGenerativePart(file: File): Promise<any> {
    const base64EncodedDataPromise = new Promise<string>((resolve) => {
      const reader = new FileReader();
      reader.onloadend = () =>
        resolve((reader.result as string).split(',')[1] || '');
      reader.readAsDataURL(file);
    });
    return {
      inlineData: { data: await base64EncodedDataPromise, mimeType: file.type },
    };
  }
}
```

Usage

Here is the code to use the AI service.

```html
[...]
<input type="file" (change)="imageRecognition($event)" />
[...]
```
```typescript
[...]
async imageRecognition(event: any) {
  this.imageResponse = '';
  const file: File = event.target.files[0];
  if (file) {
    try {
      this.imageResponse = await this.aiService.generateTextFromImage(
        'Can you describe this image?',
        file
      );
    } catch (error: any) {
      this.imageResponse = `Error: ${error.message}`;
    }
  }
}
[...]
```

The service initializes Firebase with your project's configuration. `getAI()` and `getGenerativeModel()` are the entry points to the Firebase AI functionality: `getAI()` initializes the AI service with options to specify the backend, and `getGenerativeModel()` creates a model instance. The `mode: 'prefer_on_device'` configuration is crucial: it instructs the SDK to use the on-device model whenever it is available.
`generateTextFromImage()` wraps the call to the generative model (Gemini), sending a text prompt together with an image. It handles the core AI processing logic and aggregates the streaming response from the model.
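The `inlineData` payload that `fileToGenerativePart()` builds is simply the base64 portion of a data URL plus the file's MIME type. The string manipulation at its core can be sketched as a pure function; note that `dataUrlToInlineData` is an illustrative helper of mine, not part of the Firebase SDK:

```typescript
// Hypothetical helper (not part of the Firebase SDK): extracts the base64
// payload from a data URL and wraps it in the inlineData shape the model
// expects for image parts.
function dataUrlToInlineData(
  dataUrl: string,
  mimeType: string
): { inlineData: { data: string; mimeType: string } } {
  // A data URL looks like "data:image/png;base64,iVBORw..."; the base64
  // payload is everything after the first comma.
  const data = dataUrl.split(',')[1] ?? '';
  return { inlineData: { data, mimeType } };
}

// Example with a shortened PNG data URL:
const part = dataUrlToInlineData('data:image/png;base64,iVBORw0KGgo=', 'image/png');
console.log(part.inlineData.data); // "iVBORw0KGgo="
```

`FileReader.readAsDataURL()` in the service produces exactly such a data URL, which is why the code splits on the comma before handing the payload to the model.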

How it Works in Practice

When `imageRecognition()` is called, the `AiService` attempts to use the on-device Gemini model first. If the model is available and meets the processing requirements, the AI generation happens locally. If the on-device model is not available (e.g., due to device limitations, the model not yet being downloaded, or the feature being unavailable), the SDK automatically falls back to the cloud-based Gemini model.
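The routing decision can be pictured with a small sketch. The mode names mirror the SDK's `'prefer_on_device'` option, but `resolveBackend`, its parameters, and its return values are illustrative simplifications, not the SDK's internals:

```typescript
// Illustrative only: a simplified model of how a hybrid SDK might route a
// request between the local model and the cloud.
type InferenceMode = 'prefer_on_device' | 'only_on_device' | 'only_in_cloud';

function resolveBackend(
  mode: InferenceMode,
  onDeviceAvailable: boolean
): 'on-device' | 'cloud' {
  switch (mode) {
    case 'only_in_cloud':
      // Always go to the cloud, regardless of local availability.
      return 'cloud';
    case 'only_on_device':
      // Never fall back: fail if the local model cannot serve the request.
      if (!onDeviceAvailable) throw new Error('On-device model unavailable');
      return 'on-device';
    case 'prefer_on_device':
      // The hybrid case: run locally when possible, otherwise fall back.
      return onDeviceAvailable ? 'on-device' : 'cloud';
  }
}

console.log(resolveBackend('prefer_on_device', false)); // "cloud"
```

In the service above this decision is made for you by the SDK; the sketch only makes explicit why `'prefer_on_device'` gives you both offline resilience and a cloud safety net.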


This example provides a foundational understanding of hybrid on-device AI with Firebase and Angular. By combining the strengths of both cloud and on-device processing, you can create more responsive, robust, and user-friendly applications. Remember to handle API keys securely, implement thorough error handling, and monitor resource usage to optimize performance and costs.


You can follow me on GitHub, where I'm creating cool projects.

I hope you enjoyed this article, don't forget to give ❤️.
Until next time 👋
