Detect objects in images with an AutoML-trained model on Android
After you train your own model using AutoML Vision Edge, you can use it in your app to detect objects in images.

Note: Firebase ML's AutoML Vision Edge features are deprecated. Consider using Vertex AI to automatically train ML models, which you can either export as TensorFlow Lite models for on-device use or deploy for cloud-based inference.

There are two ways to integrate models trained with AutoML Vision Edge: you can bundle the model by putting it inside your app's asset folder, or you can dynamically download it from Firebase.
Model bundling options:

- Bundled in your app
- Hosted with Firebase
Before you begin
If you want to download a model, make sure you add Firebase to your Android project, if you have not already done so. This is not required when you bundle the model.
Add the dependencies for the TensorFlow Lite Task library to your module's app-level Gradle file, which is usually `app/build.gradle`.

For bundling a model with your app:

```groovy
dependencies {
    // ...
    // Object detection with a bundled AutoML model
    implementation 'org.tensorflow:tensorflow-lite-task-vision:0.0.0-nightly-SNAPSHOT'
}
```

For dynamically downloading a model from Firebase, also add the Firebase ML dependency:

```groovy
dependencies {
    // ...
    // Object detection with an AutoML model deployed to Firebase
    implementation platform('com.google.firebase:firebase-bom:26.1.1')
    implementation 'com.google.firebase:firebase-ml-model-interpreter'
    implementation 'org.tensorflow:tensorflow-lite-task-vision:0.0.0-nightly'
}
```
1. Load the model
Configure a local model source
To bundle the model with your app:
- Extract the model from the zip archive you downloaded from the Google Cloud console.
- Include your model in your app package:
  - If you don't have an assets folder in your project, create one by right-clicking the `app/` folder, then clicking New > Folder > Assets Folder.
  - Copy your `.tflite` model file with embedded metadata to the assets folder.
Add the following to your app's `build.gradle` file to ensure Gradle doesn't compress the model file when building the app:

```groovy
android {
    // ...
    aaptOptions {
        noCompress "tflite"
    }
}
```

The model file will be included in the app package and available as a raw asset.

Note: starting from version 4.1 of the Android Gradle plugin, `.tflite` will be added to the `noCompress` list by default, and the above is no longer needed.
Configure a Firebase-hosted model source
To use the remotely-hosted model, create a `FirebaseCustomRemoteModel` object, specifying the name you assigned the model when you published it:

Java

```java
// Specify the name you assigned when you deployed the model.
FirebaseCustomRemoteModel remoteModel =
        new FirebaseCustomRemoteModel.Builder("your_model").build();
```

Kotlin

```kotlin
// Specify the name you assigned when you deployed the model.
val remoteModel = FirebaseCustomRemoteModel.Builder("your_model").build()
```

Then, start the model download task, specifying the conditions under which you want to allow downloading. If the model isn't on the device, or if a newer version of the model is available, the task asynchronously downloads the model from Firebase:

Java

```java
DownloadConditions downloadConditions = new DownloadConditions.Builder()
        .requireWifi()
        .build();
RemoteModelManager.getInstance().download(remoteModel, downloadConditions)
        .addOnSuccessListener(new OnSuccessListener<Void>() {
            @Override
            public void onSuccess(Void unused) {
                // Success.
            }
        });
```

Kotlin

```kotlin
val downloadConditions = DownloadConditions.Builder()
    .requireWifi()
    .build()
RemoteModelManager.getInstance().download(remoteModel, downloadConditions)
    .addOnSuccessListener {
        // Success.
    }
```

Many apps start the download task in their initialization code, but you can do so at any point before you need to use the model.
Create an object detector from your model

After you configure your model sources, create an `ObjectDetector` object from one of them.

If you only have a locally-bundled model, just create an object detector from your model file and configure the confidence score threshold you want to require (see Evaluate your model):

Java

```java
// Initialization
ObjectDetectorOptions options = ObjectDetectorOptions.builder()
        .setScoreThreshold(0)  // Evaluate your model in the Google Cloud console
                               // to determine an appropriate value.
        .build();
ObjectDetector objectDetector =
        ObjectDetector.createFromFileAndOptions(context, modelFile, options);
```

Kotlin

```kotlin
// Initialization
val options = ObjectDetectorOptions.builder()
    .setScoreThreshold(0f)  // Evaluate your model in the Google Cloud console
                            // to determine an appropriate value.
    .build()
val objectDetector =
    ObjectDetector.createFromFileAndOptions(context, modelFile, options)
```

If you have a remotely-hosted model, you will have to check that it has been downloaded before you run it. You can check the status of the model download task using the model manager's `isModelDownloaded()` method.
Although you only have to confirm this before running the object detector, if youhave both a remotely-hosted model and a locally-bundled model, it might makesense to perform this check when instantiating the object detector: create anobject detector from the remote model if it's been downloaded, and from the localmodel otherwise.
Java

```java
FirebaseModelManager.getInstance().isModelDownloaded(remoteModel)
        .addOnSuccessListener(new OnSuccessListener<Boolean>() {
            @Override
            public void onSuccess(Boolean isDownloaded) {
                // Use isDownloaded to decide which model source to load.
            }
        });
```

Kotlin

```kotlin
FirebaseModelManager.getInstance().isModelDownloaded(remoteModel)
    .addOnSuccessListener { isDownloaded ->
        // Use isDownloaded to decide which model source to load.
    }
```

If you only have a remotely-hosted model, you should disable model-related functionality (for example, grey out or hide part of your UI) until you confirm the model has been downloaded. You can do so by attaching a listener to the model manager's `download()` method.
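The remote-or-local fallback described above can be sketched as a small helper. This is a hypothetical illustration (the class and method names below are not part of the Firebase or TensorFlow Lite APIs); in a real app, the download status would come from the `isModelDownloaded()` listener:

```java
// Hypothetical helper illustrating the fallback: prefer the downloaded
// remote model, otherwise fall back to the bundled asset.
public class ModelSource {
    /**
     * @param remoteDownloaded result reported by isModelDownloaded()
     * @param remoteModelPath  path of the downloaded remote model file
     * @param bundledAssetPath path of the model bundled with the app
     * @return the model path to pass to ObjectDetector.createFromFileAndOptions
     */
    public static String choose(boolean remoteDownloaded,
                                String remoteModelPath,
                                String bundledAssetPath) {
        return remoteDownloaded ? remoteModelPath : bundledAssetPath;
    }
}
```

Centralizing the decision in one place keeps the detector-creation code identical for both sources; only the file path differs.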
Once you know your model has been downloaded, create an object detector from themodel file:
Java
FirebaseModelManager.getInstance().getLatestModelFile(remoteModel).addOnCompleteListener(newOnCompleteListener<File>(){@OverridepublicvoidonComplete(@NonNullTask<File>task){FilemodelFile=task.getResult();if(modelFile!=null){ObjectDetectorOptionsoptions=ObjectDetectorOptions.builder().setScoreThreshold(0).build();objectDetector=ObjectDetector.createFromFileAndOptions(getApplicationContext(),modelFile.getPath(),options);}}});Kotlin
FirebaseModelManager.getInstance().getLatestModelFile(remoteModel).addOnSuccessListener{modelFile->valoptions=ObjectDetectorOptions.builder().setScoreThreshold(0f).build()objectDetector=ObjectDetector.createFromFileAndOptions(applicationContext,modelFile.path,options)}2. Prepare the input image
Then, for each image you want to label, create a `TensorImage` object from your image. You can create a `TensorImage` object from a `Bitmap` using the `fromBitmap` method:
Java
```java
TensorImage image = TensorImage.fromBitmap(bitmap);
```

Kotlin

```kotlin
val image = TensorImage.fromBitmap(bitmap)
```

If your image data isn't in a `Bitmap`, you can load a pixel array as shown in the TensorFlow Lite docs.
3. Run the object detector
To detect objects in an image, pass the `TensorImage` object to the `ObjectDetector`'s `detect()` method.
Java
```java
List<Detection> results = objectDetector.detect(image);
```

Kotlin

```kotlin
val results = objectDetector.detect(image)
```

4. Get information about labeled objects
If the object detection operation succeeds, it returns a list of `Detection` objects. Each `Detection` object represents something that was detected in the image. You can get each object's bounding box and its labels.
For example:
Java
```java
for (Detection result : results) {
    RectF bounds = result.getBoundingBox();
    List<Category> labels = result.getCategories();
}
```

Kotlin

```kotlin
for (result in results) {
    val bounds = result.boundingBox
    val labels = result.categories
}
```

Tips to improve real-time performance
If you want to label images in a real-time application, follow theseguidelines to achieve the best framerates:
- Throttle calls to the image labeler. If a new video frame becomes available while the image labeler is running, drop the frame. See the `VisionProcessorBase` class in the quickstart sample app for an example.
- If you are using the output of the image labeler to overlay graphics on the input image, first get the result, then render the image and overlay in a single step. By doing so, you render to the display surface only once for each input frame. See the `CameraSourcePreview` and `GraphicOverlay` classes in the quickstart sample app for an example.
- If you use the Camera2 API, capture images in `ImageFormat.YUV_420_888` format. If you use the older Camera API, capture images in `ImageFormat.NV21` format.
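The first tip, dropping frames while the detector is busy, can be sketched with a simple atomic flag. This is a minimal illustration of the pattern, not the quickstart's `VisionProcessorBase`; the `Runnable` stands in for the actual `objectDetector.detect(image)` call:

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of frame throttling: process at most one frame at a time and
// drop any frame that arrives while the detector is still running.
public class FrameThrottler {
    private final AtomicBoolean busy = new AtomicBoolean(false);
    final AtomicInteger processed = new AtomicInteger();
    final AtomicInteger dropped = new AtomicInteger();

    /** Called for every incoming camera frame. */
    public void onFrame(Runnable detectorCall) {
        // If a detection is already in flight, drop this frame.
        if (!busy.compareAndSet(false, true)) {
            dropped.incrementAndGet();
            return;
        }
        try {
            detectorCall.run();  // stand-in for objectDetector.detect(image)
            processed.incrementAndGet();
        } finally {
            busy.set(false);     // ready for the next frame
        }
    }
}
```

Because stale frames are dropped rather than queued, detection latency stays bounded at roughly one inference time instead of growing with a backlog.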
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2025-12-17 UTC.