Recognize Landmarks with ML Kit on Android
This page is about an old version of the Landmark Recognition API, which was part of ML Kit for Firebase. For the latest docs, see the latest version in the Firebase ML section.
You can use ML Kit to recognize well-known landmarks in an image.
Use of ML Kit to access Cloud ML functionality is subject to the Google Cloud Platform License Agreement and Service Specific Terms, and billed accordingly. For billing information, see the Firebase Pricing page.

Before you begin
- If you haven't already, add Firebase to your Android project.
- Add the dependencies for the ML Kit Android libraries to your module (app-level) Gradle file (usually app/build.gradle):

apply plugin: 'com.android.application'
apply plugin: 'com.google.gms.google-services'

dependencies {
  // ...

  implementation 'com.google.firebase:firebase-ml-vision:24.0.3'
}
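If your module uses the Gradle Kotlin DSL (app/build.gradle.kts) instead, the equivalent declaration would look roughly like the following sketch; the plugin ids and artifact coordinate are taken from the Groovy snippet above, everything else is an assumption about your build setup:

```kotlin
plugins {
    id("com.android.application")
    id("com.google.gms.google-services")
}

dependencies {
    // ...
    implementation("com.google.firebase:firebase-ml-vision:24.0.3")
}
```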
If you have not already enabled Cloud-based APIs for your project, do so now:
- Open the ML Kit APIs page of the Firebase console.
- If you have not already upgraded your project to a Blaze pricing plan, click Upgrade to do so. (You will be prompted to upgrade only if your project isn't on the Blaze plan.) Only Blaze-level projects can use Cloud-based APIs.
- If Cloud-based APIs aren't already enabled, click Enable Cloud-based APIs.
Configure the landmark detector
By default, the Cloud detector uses the STABLE version of the model and returns up to 10 results. If you want to change either of these settings, specify them with a FirebaseVisionCloudDetectorOptions object.

For example, to change both of the default settings, build a FirebaseVisionCloudDetectorOptions object as in the following example:
Java
FirebaseVisionCloudDetectorOptions options = new FirebaseVisionCloudDetectorOptions.Builder()
        .setModelType(FirebaseVisionCloudDetectorOptions.LATEST_MODEL)
        .setMaxResults(15)
        .build();
Kotlin
val options = FirebaseVisionCloudDetectorOptions.Builder()
        .setModelType(FirebaseVisionCloudDetectorOptions.LATEST_MODEL)
        .setMaxResults(15)
        .build()
To use the default settings, you can use FirebaseVisionCloudDetectorOptions.DEFAULT in the next step.
Run the landmark detector
To recognize landmarks in an image, create a FirebaseVisionImage object from either a Bitmap, media.Image, ByteBuffer, byte array, or a file on the device. Then, pass the FirebaseVisionImage object to the FirebaseVisionCloudLandmarkDetector's detectInImage method.

Create a FirebaseVisionImage object from your image.

- To create a FirebaseVisionImage object from a media.Image object, such as when capturing an image from a device's camera, pass the media.Image object and the image's rotation to FirebaseVisionImage.fromMediaImage().

If you use the CameraX library, the OnImageCapturedListener and ImageAnalysis.Analyzer classes calculate the rotation value for you, so you just need to convert the rotation to one of ML Kit's ROTATION_ constants before calling FirebaseVisionImage.fromMediaImage():

Java
private class YourAnalyzer implements ImageAnalysis.Analyzer {

    private int degreesToFirebaseRotation(int degrees) {
        switch (degrees) {
            case 0:
                return FirebaseVisionImageMetadata.ROTATION_0;
            case 90:
                return FirebaseVisionImageMetadata.ROTATION_90;
            case 180:
                return FirebaseVisionImageMetadata.ROTATION_180;
            case 270:
                return FirebaseVisionImageMetadata.ROTATION_270;
            default:
                throw new IllegalArgumentException("Rotation must be 0, 90, 180, or 270.");
        }
    }

    @Override
    public void analyze(ImageProxy imageProxy, int degrees) {
        if (imageProxy == null || imageProxy.getImage() == null) {
            return;
        }
        Image mediaImage = imageProxy.getImage();
        int rotation = degreesToFirebaseRotation(degrees);
        FirebaseVisionImage image = FirebaseVisionImage.fromMediaImage(mediaImage, rotation);
        // Pass image to an ML Kit Vision API
        // ...
    }
}
Kotlin
private class YourImageAnalyzer : ImageAnalysis.Analyzer {

    private fun degreesToFirebaseRotation(degrees: Int): Int = when (degrees) {
        0 -> FirebaseVisionImageMetadata.ROTATION_0
        90 -> FirebaseVisionImageMetadata.ROTATION_90
        180 -> FirebaseVisionImageMetadata.ROTATION_180
        270 -> FirebaseVisionImageMetadata.ROTATION_270
        else -> throw Exception("Rotation must be 0, 90, 180, or 270.")
    }

    override fun analyze(imageProxy: ImageProxy?, degrees: Int) {
        val mediaImage = imageProxy?.image
        val imageRotation = degreesToFirebaseRotation(degrees)
        if (mediaImage != null) {
            val image = FirebaseVisionImage.fromMediaImage(mediaImage, imageRotation)
            // Pass image to an ML Kit Vision API
            // ...
        }
    }
}
If you don't use a camera library that gives you the image's rotation, you can calculate it from the device's rotation and the orientation of the camera sensor in the device:
Java
private static final SparseIntArray ORIENTATIONS = new SparseIntArray();
static {
    ORIENTATIONS.append(Surface.ROTATION_0, 90);
    ORIENTATIONS.append(Surface.ROTATION_90, 0);
    ORIENTATIONS.append(Surface.ROTATION_180, 270);
    ORIENTATIONS.append(Surface.ROTATION_270, 180);
}

/**
 * Get the angle by which an image must be rotated given the device's current
 * orientation.
 */
@RequiresApi(api = Build.VERSION_CODES.LOLLIPOP)
private int getRotationCompensation(String cameraId, Activity activity, Context context)
        throws CameraAccessException {
    // Get the device's current rotation relative to its "native" orientation.
    // Then, from the ORIENTATIONS table, look up the angle the image must be
    // rotated to compensate for the device's rotation.
    int deviceRotation = activity.getWindowManager().getDefaultDisplay().getRotation();
    int rotationCompensation = ORIENTATIONS.get(deviceRotation);

    // On most devices, the sensor orientation is 90 degrees, but for some
    // devices it is 270 degrees. For devices with a sensor orientation of
    // 270, rotate the image an additional 180 ((270 + 270) % 360) degrees.
    CameraManager cameraManager = (CameraManager) context.getSystemService(CAMERA_SERVICE);
    int sensorOrientation = cameraManager
            .getCameraCharacteristics(cameraId)
            .get(CameraCharacteristics.SENSOR_ORIENTATION);
    rotationCompensation = (rotationCompensation + sensorOrientation + 270) % 360;

    // Return the corresponding FirebaseVisionImageMetadata rotation value.
    int result;
    switch (rotationCompensation) {
        case 0:
            result = FirebaseVisionImageMetadata.ROTATION_0;
            break;
        case 90:
            result = FirebaseVisionImageMetadata.ROTATION_90;
            break;
        case 180:
            result = FirebaseVisionImageMetadata.ROTATION_180;
            break;
        case 270:
            result = FirebaseVisionImageMetadata.ROTATION_270;
            break;
        default:
            result = FirebaseVisionImageMetadata.ROTATION_0;
            Log.e(TAG, "Bad rotation value: " + rotationCompensation);
    }
    return result;
}
Kotlin
private val ORIENTATIONS = SparseIntArray()

init {
    ORIENTATIONS.append(Surface.ROTATION_0, 90)
    ORIENTATIONS.append(Surface.ROTATION_90, 0)
    ORIENTATIONS.append(Surface.ROTATION_180, 270)
    ORIENTATIONS.append(Surface.ROTATION_270, 180)
}

/**
 * Get the angle by which an image must be rotated given the device's current
 * orientation.
 */
@RequiresApi(api = Build.VERSION_CODES.LOLLIPOP)
@Throws(CameraAccessException::class)
private fun getRotationCompensation(cameraId: String, activity: Activity, context: Context): Int {
    // Get the device's current rotation relative to its "native" orientation.
    // Then, from the ORIENTATIONS table, look up the angle the image must be
    // rotated to compensate for the device's rotation.
    val deviceRotation = activity.windowManager.defaultDisplay.rotation
    var rotationCompensation = ORIENTATIONS.get(deviceRotation)

    // On most devices, the sensor orientation is 90 degrees, but for some
    // devices it is 270 degrees. For devices with a sensor orientation of
    // 270, rotate the image an additional 180 ((270 + 270) % 360) degrees.
    val cameraManager = context.getSystemService(CAMERA_SERVICE) as CameraManager
    val sensorOrientation = cameraManager
            .getCameraCharacteristics(cameraId)
            .get(CameraCharacteristics.SENSOR_ORIENTATION)!!
    rotationCompensation = (rotationCompensation + sensorOrientation + 270) % 360

    // Return the corresponding FirebaseVisionImageMetadata rotation value.
    val result: Int
    when (rotationCompensation) {
        0 -> result = FirebaseVisionImageMetadata.ROTATION_0
        90 -> result = FirebaseVisionImageMetadata.ROTATION_90
        180 -> result = FirebaseVisionImageMetadata.ROTATION_180
        270 -> result = FirebaseVisionImageMetadata.ROTATION_270
        else -> {
            result = FirebaseVisionImageMetadata.ROTATION_0
            Log.e(TAG, "Bad rotation value: $rotationCompensation")
        }
    }
    return result
}
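The compensation arithmetic above can be checked in isolation. The following sketch restates it in plain Kotlin, with Ints 0 through 3 standing in for the Surface.ROTATION_0 through ROTATION_270 constants (an assumption for illustration; the real code uses the framework constants and the camera's reported SENSOR_ORIENTATION):

```kotlin
// Hypothetical, self-contained restatement of the rotation compensation
// arithmetic. surfaceRotation is an index 0..3 standing in for
// Surface.ROTATION_0..ROTATION_270; sensorOrientation is in degrees.
fun rotationCompensationDegrees(surfaceRotation: Int, sensorOrientation: Int): Int {
    // Same lookup table as ORIENTATIONS above: rotation index -> degrees.
    val orientations = mapOf(0 to 90, 1 to 0, 2 to 270, 3 to 180)
    val deviceCompensation = orientations.getValue(surfaceRotation)
    return (deviceCompensation + sensorOrientation + 270) % 360
}
```

For example, a portrait device (ROTATION_0) with the common 90-degree sensor yields 90, while a 270-degree sensor yields 270, i.e. an additional 180 degrees, matching the comment in the code above.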
Then, pass the media.Image object and the rotation value to FirebaseVisionImage.fromMediaImage():

Java
FirebaseVisionImage image = FirebaseVisionImage.fromMediaImage(mediaImage, rotation);
Kotlin
val image = FirebaseVisionImage.fromMediaImage(mediaImage, rotation)
- To create a FirebaseVisionImage object from a file URI, pass the app context and file URI to FirebaseVisionImage.fromFilePath(). This is useful when you use an ACTION_GET_CONTENT intent to prompt the user to select an image from their gallery app.

Java
FirebaseVisionImage image;
try {
    image = FirebaseVisionImage.fromFilePath(context, uri);
} catch (IOException e) {
    e.printStackTrace();
}
Kotlin
val image: FirebaseVisionImage
try {
    image = FirebaseVisionImage.fromFilePath(context, uri)
} catch (e: IOException) {
    e.printStackTrace()
}
- To create a FirebaseVisionImage object from a ByteBuffer or a byte array, first calculate the image rotation as described above for media.Image input.

Then, create a FirebaseVisionImageMetadata object that contains the image's height, width, color encoding format, and rotation:

Java
FirebaseVisionImageMetadata metadata = new FirebaseVisionImageMetadata.Builder()
        .setWidth(480)   // 480x360 is typically sufficient for
        .setHeight(360)  // image recognition
        .setFormat(FirebaseVisionImageMetadata.IMAGE_FORMAT_NV21)
        .setRotation(rotation)
        .build();
Kotlin
val metadata = FirebaseVisionImageMetadata.Builder()
        .setWidth(480)   // 480x360 is typically sufficient for
        .setHeight(360)  // image recognition
        .setFormat(FirebaseVisionImageMetadata.IMAGE_FORMAT_NV21)
        .setRotation(rotation)
        .build()
Use the buffer or array, and the metadata object, to create a FirebaseVisionImage object:

Java
FirebaseVisionImage image = FirebaseVisionImage.fromByteBuffer(buffer, metadata);
// Or: FirebaseVisionImage image = FirebaseVisionImage.fromByteArray(byteArray, metadata);
Kotlin
val image = FirebaseVisionImage.fromByteBuffer(buffer, metadata)
// Or: val image = FirebaseVisionImage.fromByteArray(byteArray, metadata)
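For IMAGE_FORMAT_NV21, the buffer you supply must hold a full-resolution luma (Y) plane followed by interleaved chroma (VU) samples at quarter resolution, so its total size is width × height × 3/2 bytes. A quick sanity check of that arithmetic in plain Kotlin (independent of the Android APIs):

```kotlin
// Expected byte size of an NV21 buffer: a full-resolution Y plane (w*h bytes)
// plus an interleaved VU plane at quarter resolution (w*h/2 bytes).
fun nv21BufferSize(width: Int, height: Int): Int = width * height * 3 / 2
```

For the 480x360 metadata above, that comes to 259200 bytes; a mismatched buffer size is a common source of garbled or failed detections.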
- To create a FirebaseVisionImage object from a Bitmap object:

Java

FirebaseVisionImage image = FirebaseVisionImage.fromBitmap(bitmap);

Kotlin

val image = FirebaseVisionImage.fromBitmap(bitmap)

The image represented by the Bitmap object must be upright, with no additional rotation required.
Get an instance of FirebaseVisionCloudLandmarkDetector:

Java
FirebaseVisionCloudLandmarkDetector detector = FirebaseVision.getInstance()
        .getVisionCloudLandmarkDetector();
// Or, to change the default settings:
// FirebaseVisionCloudLandmarkDetector detector = FirebaseVision.getInstance()
//         .getVisionCloudLandmarkDetector(options);
Kotlin
val detector = FirebaseVision.getInstance()
        .visionCloudLandmarkDetector
// Or, to change the default settings:
// val detector = FirebaseVision.getInstance()
//         .getVisionCloudLandmarkDetector(options)
Finally, pass the image to the detectInImage method:

Java
Task<List<FirebaseVisionCloudLandmark>> result = detector.detectInImage(image)
        .addOnSuccessListener(new OnSuccessListener<List<FirebaseVisionCloudLandmark>>() {
            @Override
            public void onSuccess(List<FirebaseVisionCloudLandmark> firebaseVisionCloudLandmarks) {
                // Task completed successfully
                // ...
            }
        })
        .addOnFailureListener(new OnFailureListener() {
            @Override
            public void onFailure(@NonNull Exception e) {
                // Task failed with an exception
                // ...
            }
        });
Kotlin
val result = detector.detectInImage(image)
        .addOnSuccessListener { firebaseVisionCloudLandmarks ->
            // Task completed successfully
            // ...
        }
        .addOnFailureListener { e ->
            // Task failed with an exception
            // ...
        }
Get information about the recognized landmarks
If the landmark recognition operation succeeds, a list of FirebaseVisionCloudLandmark objects will be passed to the success listener. Each FirebaseVisionCloudLandmark object represents a landmark that was recognized in the image. For each landmark, you can get its bounding coordinates in the input image, the landmark's name, its latitude and longitude, its Knowledge Graph entity ID (if available), and the confidence score of the match. For example:

Java
for (FirebaseVisionCloudLandmark landmark : firebaseVisionCloudLandmarks) {
    Rect bounds = landmark.getBoundingBox();
    String landmarkName = landmark.getLandmark();
    String entityId = landmark.getEntityId();
    float confidence = landmark.getConfidence();

    // Multiple locations are possible, e.g., the location of the depicted
    // landmark and the location the picture was taken.
    for (FirebaseVisionLatLng loc : landmark.getLocations()) {
        double latitude = loc.getLatitude();
        double longitude = loc.getLongitude();
    }
}
Kotlin
for (landmark in firebaseVisionCloudLandmarks) {
    val bounds = landmark.boundingBox
    val landmarkName = landmark.landmark
    val entityId = landmark.entityId
    val confidence = landmark.confidence

    // Multiple locations are possible, e.g., the location of the depicted
    // landmark and the location the picture was taken.
    for (loc in landmark.locations) {
        val latitude = loc.latitude
        val longitude = loc.longitude
    }
}
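Since the detector can return several candidates, apps often keep only the highest-confidence match. This sketch shows that selection logic on its own, using a hypothetical RecognizedLandmark data class as a stand-in for FirebaseVisionCloudLandmark (both names and fields here are illustrative, not part of the API):

```kotlin
// Hypothetical stand-in for FirebaseVisionCloudLandmark, for illustration only.
data class RecognizedLandmark(val name: String, val confidence: Float)

// Returns the most confident match, or null when the list is empty.
fun bestLandmark(landmarks: List<RecognizedLandmark>): RecognizedLandmark? =
    landmarks.maxByOrNull { it.confidence }
```

In your success listener you would apply the same idea directly to the FirebaseVisionCloudLandmark list, comparing on getConfidence().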
Next steps
- Before you deploy an app that uses a Cloud API to production, you should take some additional steps to prevent and mitigate the effect of unauthorized API access.
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2026-02-18 UTC.