Migrate from the legacy custom model API
Version 22.0.2 of the firebase-ml-model-interpreter library introduces a new getLatestModelFile() method, which gets the location on the device of custom models. You can use this method to directly instantiate a TensorFlow Lite Interpreter object, which you can use instead of the FirebaseModelInterpreter wrapper.
Going forward, this is the preferred approach. Because the TensorFlow Lite interpreter version is no longer coupled with the Firebase library version, you have more flexibility to upgrade to new versions of TensorFlow Lite when you want, or to more easily use custom TensorFlow Lite builds.
This page shows how you can migrate from using FirebaseModelInterpreter to the TensorFlow Lite Interpreter.
1. Update project dependencies
Update your project's dependencies to include version 22.0.2 of the firebase-ml-model-interpreter library (or newer) and the tensorflow-lite library:
Before
implementation("com.google.firebase:firebase-ml-model-interpreter:22.0.1")After
implementation("com.google.firebase:firebase-ml-model-interpreter:22.0.2")implementation("org.tensorflow:tensorflow-lite:2.0.0")2. Create a TensorFlow Lite interpreter instead of a FirebaseModelInterpreter
Instead of creating a FirebaseModelInterpreter, get the model's location on device with getLatestModelFile() and use it to create a TensorFlow Lite Interpreter.
Before
Kotlin
```kotlin
val remoteModel = FirebaseCustomRemoteModel.Builder("your_model").build()
val options = FirebaseModelInterpreterOptions.Builder(remoteModel).build()
val interpreter = FirebaseModelInterpreter.getInstance(options)
```

Java
```java
FirebaseCustomRemoteModel remoteModel =
        new FirebaseCustomRemoteModel.Builder("your_model").build();
FirebaseModelInterpreterOptions options =
        new FirebaseModelInterpreterOptions.Builder(remoteModel).build();
FirebaseModelInterpreter interpreter = FirebaseModelInterpreter.getInstance(options);
```

After
Kotlin
```kotlin
val remoteModel = FirebaseCustomRemoteModel.Builder("your_model").build()
FirebaseModelManager.getInstance().getLatestModelFile(remoteModel)
    .addOnCompleteListener { task ->
        val modelFile = task.getResult()
        if (modelFile != null) {
            // Instantiate an org.tensorflow.lite.Interpreter object.
            interpreter = Interpreter(modelFile)
        }
    }
```

Java
```java
FirebaseCustomRemoteModel remoteModel =
        new FirebaseCustomRemoteModel.Builder("your_model").build();
FirebaseModelManager.getInstance().getLatestModelFile(remoteModel)
        .addOnCompleteListener(new OnCompleteListener<File>() {
            @Override
            public void onComplete(@NonNull Task<File> task) {
                File modelFile = task.getResult();
                if (modelFile != null) {
                    // Instantiate an org.tensorflow.lite.Interpreter object.
                    Interpreter interpreter = new Interpreter(modelFile);
                }
            }
        });
```
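Note that getLatestModelFile() returns a file only once the model has been downloaded to the device at least once. The following is a minimal Kotlin sketch, continuing from the example above and assuming the legacy FirebaseModelManager download API (download() with FirebaseModelDownloadConditions); the Wi-Fi-only condition is just an example choice:

```kotlin
// Sketch: ensure the model is downloaded before asking for its file.
// download() and FirebaseModelDownloadConditions are assumed from the
// legacy Firebase ML custom model API; requireWifi() is an example choice.
val conditions = FirebaseModelDownloadConditions.Builder()
    .requireWifi()
    .build()
FirebaseModelManager.getInstance().download(remoteModel, conditions)
    .addOnSuccessListener {
        FirebaseModelManager.getInstance().getLatestModelFile(remoteModel)
            .addOnSuccessListener { modelFile ->
                // Wrap the downloaded model file in a TensorFlow Lite Interpreter.
                val interpreter = Interpreter(modelFile)
            }
    }
```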
3. Update input and output preparation code
With FirebaseModelInterpreter, you specify the model's input and output shapes by passing a FirebaseModelInputOutputOptions object to the interpreter when you run it.
For the TensorFlow Lite interpreter, you instead allocate ByteBuffer objects with the right size for your model's input and output.
For example, if your model has an input shape of [1 224 224 3] float values and an output shape of [1 1000] float values, make these changes (a sketch after these examples shows how to derive the sizes from the model itself):
Before
Kotlin
```kotlin
val inputOutputOptions = FirebaseModelInputOutputOptions.Builder()
    .setInputFormat(0, FirebaseModelDataType.FLOAT32, intArrayOf(1, 224, 224, 3))
    .setOutputFormat(0, FirebaseModelDataType.FLOAT32, intArrayOf(1, 1000))
    .build()

val input = ByteBuffer.allocateDirect(224 * 224 * 3 * 4).order(ByteOrder.nativeOrder())
// Then populate with input data.

val inputs = FirebaseModelInputs.Builder()
    .add(input)
    .build()

interpreter.run(inputs, inputOutputOptions)
    .addOnSuccessListener { outputs ->
        // ...
    }
    .addOnFailureListener {
        // Task failed with an exception.
        // ...
    }
```

Java
```java
FirebaseModelInputOutputOptions inputOutputOptions =
        new FirebaseModelInputOutputOptions.Builder()
                .setInputFormat(0, FirebaseModelDataType.FLOAT32, new int[]{1, 224, 224, 3})
                .setOutputFormat(0, FirebaseModelDataType.FLOAT32, new int[]{1, 1000})
                .build();

float[][][][] input = new float[1][224][224][3];
// Then populate with input data.

FirebaseModelInputs inputs = new FirebaseModelInputs.Builder()
        .add(input)
        .build();

interpreter.run(inputs, inputOutputOptions)
        .addOnSuccessListener(new OnSuccessListener<FirebaseModelOutputs>() {
            @Override
            public void onSuccess(FirebaseModelOutputs result) {
                // ...
            }
        })
        .addOnFailureListener(new OnFailureListener() {
            @Override
            public void onFailure(@NonNull Exception e) {
                // Task failed with an exception
                // ...
            }
        });
```

After
Kotlin
```kotlin
val inBufferSize = 1 * 224 * 224 * 3 * java.lang.Float.SIZE / java.lang.Byte.SIZE
val inputBuffer = ByteBuffer.allocateDirect(inBufferSize).order(ByteOrder.nativeOrder())
// Then populate with input data.

val outBufferSize = 1 * 1000 * java.lang.Float.SIZE / java.lang.Byte.SIZE
val outputBuffer = ByteBuffer.allocateDirect(outBufferSize).order(ByteOrder.nativeOrder())

interpreter.run(inputBuffer, outputBuffer)
```

Java
```java
int inBufferSize = 1 * 224 * 224 * 3 * java.lang.Float.SIZE / java.lang.Byte.SIZE;
ByteBuffer inputBuffer = ByteBuffer.allocateDirect(inBufferSize).order(ByteOrder.nativeOrder());
// Then populate with input data.

int outBufferSize = 1 * 1000 * java.lang.Float.SIZE / java.lang.Byte.SIZE;
ByteBuffer outputBuffer = ByteBuffer.allocateDirect(outBufferSize).order(ByteOrder.nativeOrder());

interpreter.run(inputBuffer, outputBuffer);
```
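If you'd rather not hard-code sizes like 1 * 224 * 224 * 3 * 4, you can derive them from the interpreter's tensor metadata. Here is a minimal Kotlin sketch, assuming a TensorFlow Lite version that exposes getInputTensor(), getOutputTensor(), and Tensor.numBytes():

```kotlin
import java.nio.ByteBuffer
import java.nio.ByteOrder
import org.tensorflow.lite.Interpreter

// Sketch: size the input and output buffers from the model's own tensor
// metadata. Tensor.numBytes() reports the tensor's data size in bytes.
fun allocateBuffers(interpreter: Interpreter): Pair<ByteBuffer, ByteBuffer> {
    val inputBuffer = ByteBuffer
        .allocateDirect(interpreter.getInputTensor(0).numBytes())
        .order(ByteOrder.nativeOrder())
    val outputBuffer = ByteBuffer
        .allocateDirect(interpreter.getOutputTensor(0).numBytes())
        .order(ByteOrder.nativeOrder())
    return Pair(inputBuffer, outputBuffer)
}
```

This keeps buffer allocation in sync with the model file you actually downloaded, rather than with shapes copied into your source.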
4. Update output handling code
Finally, instead of getting the model's output with the FirebaseModelOutputs object's getOutput() method, convert the ByteBuffer output to whatever structure is convenient for your use case.
For example, if you're doing classification, you might make changes like the following:
Before
Kotlin
```kotlin
val output = result.getOutput<Array<FloatArray>>(0)
val probabilities = output[0]
try {
    val reader = BufferedReader(InputStreamReader(assets.open("custom_labels.txt")))
    for (probability in probabilities) {
        val label: String = reader.readLine()
        println("$label: $probability")
    }
} catch (e: IOException) {
    // File not found?
}
```

Java
```java
float[][] output = result.getOutput(0);
float[] probabilities = output[0];
try {
    BufferedReader reader = new BufferedReader(
            new InputStreamReader(getAssets().open("custom_labels.txt")));
    for (float probability : probabilities) {
        String label = reader.readLine();
        Log.i(TAG, String.format("%s: %1.4f", label, probability));
    }
} catch (IOException e) {
    // File not found?
}
```

After
Kotlin
```kotlin
modelOutput.rewind()
val probabilities = modelOutput.asFloatBuffer()
try {
    val reader = BufferedReader(InputStreamReader(assets.open("custom_labels.txt")))
    for (i in 0 until probabilities.capacity()) {
        val label: String = reader.readLine()
        val probability = probabilities.get(i)
        println("$label: $probability")
    }
} catch (e: IOException) {
    // File not found?
}
```

Java
```java
modelOutput.rewind();
FloatBuffer probabilities = modelOutput.asFloatBuffer();
try {
    BufferedReader reader = new BufferedReader(
            new InputStreamReader(getAssets().open("custom_labels.txt")));
    for (int i = 0; i < probabilities.capacity(); i++) {
        String label = reader.readLine();
        float probability = probabilities.get(i);
        Log.i(TAG, String.format("%s: %1.4f", label, probability));
    }
} catch (IOException e) {
    // File not found?
}
```
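If you need only the most likely classes rather than every score, you can pair labels with scores and sort. A minimal Kotlin sketch, where topK is a hypothetical helper name and labels is assumed to hold the lines of custom_labels.txt in model output order:

```kotlin
import java.nio.FloatBuffer

// Sketch: pair each label with its probability and keep the k most likely.
// Assumes `labels` holds the lines of custom_labels.txt in the same order
// as the model's output vector; `topK` is a hypothetical helper name.
fun topK(modelOutput: FloatBuffer, labels: List<String>, k: Int = 5): List<Pair<String, Float>> {
    modelOutput.rewind()
    return labels.mapIndexed { i, label -> label to modelOutput.get(i) }
        .sortedByDescending { it.second }
        .take(k)
}
```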