Detect Faces with ML Kit on Android

This page describes an old version of the Face Detection API, which was part of ML Kit for Firebase. Development of this API has been moved to the standalone ML Kit SDK, which you can use with or without Firebase. Learn more.

See Detect faces with ML Kit on Android for the latest documentation.

You can use ML Kit to detect faces in images and video.

Before you begin

  1. If you haven't already, add Firebase to your Android project.
  2. Add the dependencies for the ML Kit Android libraries to your module (app-level) Gradle file (usually app/build.gradle):

    apply plugin: 'com.android.application'
    apply plugin: 'com.google.gms.google-services'

    dependencies {
      // ...

      implementation 'com.google.firebase:firebase-ml-vision:24.0.3'
      // If you want to detect face contours (landmark detection and classification
      // don't require this additional model):
      implementation 'com.google.firebase:firebase-ml-vision-face-model:20.0.1'
    }
  3. Optional but recommended: Configure your app to automatically download the ML model to the device after your app is installed from the Play Store.

    To do so, add the following declaration to your app's AndroidManifest.xml file:

    <application ...>
      ...
      <meta-data
          android:name="com.google.firebase.ml.vision.DEPENDENCIES"
          android:value="face" />
      <!-- To use multiple models: android:value="face,model2,model3" -->
    </application>
    If you do not enable install-time model downloads, the model will be downloaded the first time you run the detector. Requests you make before the download has completed will produce no results.

Input image guidelines

For ML Kit to accurately detect faces, input images must contain faces that are represented by sufficient pixel data. In general, each face you want to detect in an image should be at least 100x100 pixels. If you want to detect the contours of faces, ML Kit requires higher resolution input: each face should be at least 200x200 pixels.
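For instance, here is a minimal sketch of a pre-check against these minimums (the helper itself is hypothetical, not part of ML Kit; the 100 px and 200 px thresholds are the guidelines above, not values the API enforces):

Kotlin

import android.graphics.Bitmap

// Hypothetical pre-check: an image smaller than the smallest face you want
// to detect can't contain such a face.
fun meetsSizeGuideline(bitmap: Bitmap, detectContours: Boolean): Boolean {
    val minFaceSide = if (detectContours) 200 else 100
    return bitmap.width >= minFaceSide && bitmap.height >= minFaceSide
}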

If you are detecting faces in a real-time application, you might also want to consider the overall dimensions of the input images. Smaller images can be processed faster, so to reduce latency, capture images at lower resolutions (keeping in mind the above accuracy requirements) and ensure that the subject's face occupies as much of the image as possible. Also see Tips to improve real-time performance.

Poor image focus can hurt accuracy. If you aren't getting acceptable results, try asking the user to recapture the image.

The orientation of a face relative to the camera can also affect what facial features ML Kit detects. See Face Detection Concepts.

1. Configure the face detector

Before you apply face detection to an image, if you want to change any of the face detector's default settings, specify those settings with a FirebaseVisionFaceDetectorOptions object. You can change the following settings:

Settings

Performance mode: FAST (default) | ACCURATE
  Favor speed or accuracy when detecting faces.

Detect landmarks: NO_LANDMARKS (default) | ALL_LANDMARKS
  Whether to attempt to identify facial "landmarks": eyes, ears, nose, cheeks, mouth, and so on.

Detect contours: NO_CONTOURS (default) | ALL_CONTOURS
  Whether to detect the contours of facial features. Contours are detected for only the most prominent face in an image.

Classify faces: NO_CLASSIFICATIONS (default) | ALL_CLASSIFICATIONS
  Whether or not to classify faces into categories such as "smiling" and "eyes open".

Minimum face size: float (default: 0.1f)
  The minimum size, relative to the image, of faces to detect.

Enable face tracking: false (default) | true
  Whether or not to assign faces an ID, which can be used to track faces across images.

Note that when contour detection is enabled, only one face is detected, so face tracking doesn't produce useful results. For this reason, and to improve detection speed, don't enable both contour detection and face tracking.

For example:

Java

// High-accuracy landmark detection and face classification
FirebaseVisionFaceDetectorOptions highAccuracyOpts =
        new FirebaseVisionFaceDetectorOptions.Builder()
                .setPerformanceMode(FirebaseVisionFaceDetectorOptions.ACCURATE)
                .setLandmarkMode(FirebaseVisionFaceDetectorOptions.ALL_LANDMARKS)
                .setClassificationMode(FirebaseVisionFaceDetectorOptions.ALL_CLASSIFICATIONS)
                .build();

// Real-time contour detection (contours are detected for only the most
// prominent face in the image)
FirebaseVisionFaceDetectorOptions realTimeOpts =
        new FirebaseVisionFaceDetectorOptions.Builder()
                .setContourMode(FirebaseVisionFaceDetectorOptions.ALL_CONTOURS)
                .build();

Kotlin

// High-accuracy landmark detection and face classification
val highAccuracyOpts = FirebaseVisionFaceDetectorOptions.Builder()
        .setPerformanceMode(FirebaseVisionFaceDetectorOptions.ACCURATE)
        .setLandmarkMode(FirebaseVisionFaceDetectorOptions.ALL_LANDMARKS)
        .setClassificationMode(FirebaseVisionFaceDetectorOptions.ALL_CLASSIFICATIONS)
        .build()

// Real-time contour detection (contours are detected for only the most
// prominent face in the image)
val realTimeOpts = FirebaseVisionFaceDetectorOptions.Builder()
        .setContourMode(FirebaseVisionFaceDetectorOptions.ALL_CONTOURS)
        .build()
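As a further illustration, here is a minimal sketch of a tracking-oriented configuration; it assumes the Builder's setMinFaceSize and enableTracking methods from this API:

Kotlin

// A sketch of a configuration for tracking faces across frames. Don't
// combine tracking with contour detection (see the note above).
val trackingOpts = FirebaseVisionFaceDetectorOptions.Builder()
        .setMinFaceSize(0.15f)  // ignore faces smaller than 15% of the image width
        .enableTracking()       // assign IDs usable across successive images
        .build()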

2. Run the face detector

To detect faces in an image, create a FirebaseVisionImage object from either a Bitmap, media.Image, ByteBuffer, byte array, or a file on the device. Then, pass the FirebaseVisionImage object to the FirebaseVisionFaceDetector's detectInImage method.

For face detection, you should use an image with dimensions of at least 480x360 pixels. If you are detecting faces in real time, capturing frames at this minimum resolution can help reduce latency.

  1. Create a FirebaseVisionImage object from your image.

    • To create a FirebaseVisionImage object from a media.Image object, such as when capturing an image from a device's camera, pass the media.Image object and the image's rotation to FirebaseVisionImage.fromMediaImage().

      If you use the CameraX library, the OnImageCapturedListener and ImageAnalysis.Analyzer classes calculate the rotation value for you, so you just need to convert the rotation to one of ML Kit's ROTATION_ constants before calling FirebaseVisionImage.fromMediaImage():

      Java

      private class YourAnalyzer implements ImageAnalysis.Analyzer {

          private int degreesToFirebaseRotation(int degrees) {
              switch (degrees) {
                  case 0:
                      return FirebaseVisionImageMetadata.ROTATION_0;
                  case 90:
                      return FirebaseVisionImageMetadata.ROTATION_90;
                  case 180:
                      return FirebaseVisionImageMetadata.ROTATION_180;
                  case 270:
                      return FirebaseVisionImageMetadata.ROTATION_270;
                  default:
                      throw new IllegalArgumentException(
                              "Rotation must be 0, 90, 180, or 270.");
              }
          }

          @Override
          public void analyze(ImageProxy imageProxy, int degrees) {
              if (imageProxy == null || imageProxy.getImage() == null) {
                  return;
              }
              Image mediaImage = imageProxy.getImage();
              int rotation = degreesToFirebaseRotation(degrees);
              FirebaseVisionImage image =
                      FirebaseVisionImage.fromMediaImage(mediaImage, rotation);
              // Pass image to an ML Kit Vision API
              // ...
          }
      }

      Kotlin

      private class YourImageAnalyzer : ImageAnalysis.Analyzer {

          private fun degreesToFirebaseRotation(degrees: Int): Int = when (degrees) {
              0 -> FirebaseVisionImageMetadata.ROTATION_0
              90 -> FirebaseVisionImageMetadata.ROTATION_90
              180 -> FirebaseVisionImageMetadata.ROTATION_180
              270 -> FirebaseVisionImageMetadata.ROTATION_270
              else -> throw Exception("Rotation must be 0, 90, 180, or 270.")
          }

          override fun analyze(imageProxy: ImageProxy?, degrees: Int) {
              val mediaImage = imageProxy?.image
              val imageRotation = degreesToFirebaseRotation(degrees)
              if (mediaImage != null) {
                  val image = FirebaseVisionImage.fromMediaImage(mediaImage, imageRotation)
                  // Pass image to an ML Kit Vision API
                  // ...
              }
          }
      }

      If you don't use a camera library that gives you the image's rotation, you can calculate it from the device's rotation and the orientation of the camera sensor in the device:

      Java

      private static final SparseIntArray ORIENTATIONS = new SparseIntArray();
      static {
          ORIENTATIONS.append(Surface.ROTATION_0, 90);
          ORIENTATIONS.append(Surface.ROTATION_90, 0);
          ORIENTATIONS.append(Surface.ROTATION_180, 270);
          ORIENTATIONS.append(Surface.ROTATION_270, 180);
      }

      /**
       * Get the angle by which an image must be rotated given the device's current
       * orientation.
       */
      @RequiresApi(api = Build.VERSION_CODES.LOLLIPOP)
      private int getRotationCompensation(String cameraId, Activity activity, Context context)
              throws CameraAccessException {
          // Get the device's current rotation relative to its "native" orientation.
          // Then, from the ORIENTATIONS table, look up the angle the image must be
          // rotated to compensate for the device's rotation.
          int deviceRotation = activity.getWindowManager().getDefaultDisplay().getRotation();
          int rotationCompensation = ORIENTATIONS.get(deviceRotation);

          // On most devices, the sensor orientation is 90 degrees, but for some
          // devices it is 270 degrees. For devices with a sensor orientation of
          // 270, rotate the image an additional 180 ((270 + 270) % 360) degrees.
          CameraManager cameraManager = (CameraManager) context.getSystemService(CAMERA_SERVICE);
          int sensorOrientation = cameraManager
                  .getCameraCharacteristics(cameraId)
                  .get(CameraCharacteristics.SENSOR_ORIENTATION);
          rotationCompensation = (rotationCompensation + sensorOrientation + 270) % 360;

          // Return the corresponding FirebaseVisionImageMetadata rotation value.
          int result;
          switch (rotationCompensation) {
              case 0:
                  result = FirebaseVisionImageMetadata.ROTATION_0;
                  break;
              case 90:
                  result = FirebaseVisionImageMetadata.ROTATION_90;
                  break;
              case 180:
                  result = FirebaseVisionImageMetadata.ROTATION_180;
                  break;
              case 270:
                  result = FirebaseVisionImageMetadata.ROTATION_270;
                  break;
              default:
                  result = FirebaseVisionImageMetadata.ROTATION_0;
                  Log.e(TAG, "Bad rotation value: " + rotationCompensation);
          }
          return result;
      }

      Kotlin

      private val ORIENTATIONS = SparseIntArray()

      init {
          ORIENTATIONS.append(Surface.ROTATION_0, 90)
          ORIENTATIONS.append(Surface.ROTATION_90, 0)
          ORIENTATIONS.append(Surface.ROTATION_180, 270)
          ORIENTATIONS.append(Surface.ROTATION_270, 180)
      }

      /**
       * Get the angle by which an image must be rotated given the device's current
       * orientation.
       */
      @RequiresApi(api = Build.VERSION_CODES.LOLLIPOP)
      @Throws(CameraAccessException::class)
      private fun getRotationCompensation(cameraId: String, activity: Activity, context: Context): Int {
          // Get the device's current rotation relative to its "native" orientation.
          // Then, from the ORIENTATIONS table, look up the angle the image must be
          // rotated to compensate for the device's rotation.
          val deviceRotation = activity.windowManager.defaultDisplay.rotation
          var rotationCompensation = ORIENTATIONS.get(deviceRotation)

          // On most devices, the sensor orientation is 90 degrees, but for some
          // devices it is 270 degrees. For devices with a sensor orientation of
          // 270, rotate the image an additional 180 ((270 + 270) % 360) degrees.
          val cameraManager = context.getSystemService(CAMERA_SERVICE) as CameraManager
          val sensorOrientation = cameraManager
                  .getCameraCharacteristics(cameraId)
                  .get(CameraCharacteristics.SENSOR_ORIENTATION)!!
          rotationCompensation = (rotationCompensation + sensorOrientation + 270) % 360

          // Return the corresponding FirebaseVisionImageMetadata rotation value.
          val result: Int
          when (rotationCompensation) {
              0 -> result = FirebaseVisionImageMetadata.ROTATION_0
              90 -> result = FirebaseVisionImageMetadata.ROTATION_90
              180 -> result = FirebaseVisionImageMetadata.ROTATION_180
              270 -> result = FirebaseVisionImageMetadata.ROTATION_270
              else -> {
                  result = FirebaseVisionImageMetadata.ROTATION_0
                  Log.e(TAG, "Bad rotation value: $rotationCompensation")
              }
          }
          return result
      }

      Then, pass the media.Image object and the rotation value to FirebaseVisionImage.fromMediaImage():

      Java

      FirebaseVisionImage image = FirebaseVisionImage.fromMediaImage(mediaImage, rotation);

      Kotlin

      val image = FirebaseVisionImage.fromMediaImage(mediaImage, rotation)
    • To create a FirebaseVisionImage object from a file URI, pass the app context and file URI to FirebaseVisionImage.fromFilePath(). This is useful when you use an ACTION_GET_CONTENT intent to prompt the user to select an image from their gallery app.

      Java

      FirebaseVisionImage image;
      try {
          image = FirebaseVisionImage.fromFilePath(context, uri);
      } catch (IOException e) {
          e.printStackTrace();
      }

      Kotlin

      val image: FirebaseVisionImage
      try {
          image = FirebaseVisionImage.fromFilePath(context, uri)
      } catch (e: IOException) {
          e.printStackTrace()
      }
    • To create a FirebaseVisionImage object from a ByteBuffer or a byte array, first calculate the image rotation as described above for media.Image input.

      Then, create a FirebaseVisionImageMetadata object that contains the image's height, width, color encoding format, and rotation:

      Java

      FirebaseVisionImageMetadata metadata = new FirebaseVisionImageMetadata.Builder()
              .setWidth(480)   // 480x360 is typically sufficient for
              .setHeight(360)  // image recognition
              .setFormat(FirebaseVisionImageMetadata.IMAGE_FORMAT_NV21)
              .setRotation(rotation)
              .build();

      Kotlin

      val metadata = FirebaseVisionImageMetadata.Builder()
              .setWidth(480)   // 480x360 is typically sufficient for
              .setHeight(360)  // image recognition
              .setFormat(FirebaseVisionImageMetadata.IMAGE_FORMAT_NV21)
              .setRotation(rotation)
              .build()

      Use the buffer or array, and the metadata object, to create a FirebaseVisionImage object:

      Java

      FirebaseVisionImage image = FirebaseVisionImage.fromByteBuffer(buffer, metadata);
      // Or: FirebaseVisionImage image = FirebaseVisionImage.fromByteArray(byteArray, metadata);

      Kotlin

      val image = FirebaseVisionImage.fromByteBuffer(buffer, metadata)
      // Or: val image = FirebaseVisionImage.fromByteArray(byteArray, metadata)
    • To create a FirebaseVisionImage object from a Bitmap object:

      Java

      FirebaseVisionImage image = FirebaseVisionImage.fromBitmap(bitmap);

      Kotlin

      val image = FirebaseVisionImage.fromBitmap(bitmap)
      The image represented by the Bitmap object must be upright, with no additional rotation required.
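      If your Bitmap is rotated, straighten it first. Here is a minimal sketch using standard android.graphics APIs (the helper itself is hypothetical, not part of ML Kit):

      Kotlin

      import android.graphics.Bitmap
      import android.graphics.Matrix

      // Hypothetical helper: returns an upright copy of `source`.
      // `rotationDegrees` is whatever rotation your capture pipeline reports.
      fun uprightBitmap(source: Bitmap, rotationDegrees: Float): Bitmap {
          if (rotationDegrees == 0f) return source
          val matrix = Matrix().apply { postRotate(rotationDegrees) }
          return Bitmap.createBitmap(source, 0, 0, source.width, source.height, matrix, true)
      }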
  2. Get an instance of FirebaseVisionFaceDetector:

    Java

    FirebaseVisionFaceDetector detector = FirebaseVision.getInstance()
            .getVisionFaceDetector(options);

    Kotlin

    val detector = FirebaseVision.getInstance().getVisionFaceDetector(options)
  3. Finally, pass the image to the detectInImage method:

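    A minimal sketch of the call, assuming detectInImage returns a Google Play services Task whose success listener receives the detected faces (addOnSuccessListener and addOnFailureListener are the standard Tasks API):

    Kotlin

    val result = detector.detectInImage(image)
            .addOnSuccessListener { faces ->
                // Detection completed successfully; `faces` is the list of
                // FirebaseVisionFace objects described in the next section.
                // ...
            }
            .addOnFailureListener { e ->
                // Detection failed with an exception.
                // ...
            }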

3. Get information about detected faces

If the face detection operation succeeds, a list of FirebaseVisionFace objects will be passed to the success listener. Each FirebaseVisionFace object represents a face that was detected in the image. For each face, you can get its bounding coordinates in the input image, as well as any other information you configured the face detector to find. For example:

Java

for (FirebaseVisionFace face : faces) {
    Rect bounds = face.getBoundingBox();
    float rotY = face.getHeadEulerAngleY();  // Head is rotated to the right rotY degrees
    float rotZ = face.getHeadEulerAngleZ();  // Head is tilted sideways rotZ degrees

    // If landmark detection was enabled (mouth, ears, eyes, cheeks, and
    // nose available):
    FirebaseVisionFaceLandmark leftEar = face.getLandmark(FirebaseVisionFaceLandmark.LEFT_EAR);
    if (leftEar != null) {
        FirebaseVisionPoint leftEarPos = leftEar.getPosition();
    }

    // If contour detection was enabled:
    List<FirebaseVisionPoint> leftEyeContour =
            face.getContour(FirebaseVisionFaceContour.LEFT_EYE).getPoints();
    List<FirebaseVisionPoint> upperLipBottomContour =
            face.getContour(FirebaseVisionFaceContour.UPPER_LIP_BOTTOM).getPoints();

    // If classification was enabled:
    if (face.getSmilingProbability() != FirebaseVisionFace.UNCOMPUTED_PROBABILITY) {
        float smileProb = face.getSmilingProbability();
    }
    if (face.getRightEyeOpenProbability() != FirebaseVisionFace.UNCOMPUTED_PROBABILITY) {
        float rightEyeOpenProb = face.getRightEyeOpenProbability();
    }

    // If face tracking was enabled:
    if (face.getTrackingId() != FirebaseVisionFace.INVALID_ID) {
        int id = face.getTrackingId();
    }
}

Kotlin

for (face in faces) {
    val bounds = face.boundingBox
    val rotY = face.headEulerAngleY  // Head is rotated to the right rotY degrees
    val rotZ = face.headEulerAngleZ  // Head is tilted sideways rotZ degrees

    // If landmark detection was enabled (mouth, ears, eyes, cheeks, and
    // nose available):
    val leftEar = face.getLandmark(FirebaseVisionFaceLandmark.LEFT_EAR)
    leftEar?.let {
        val leftEarPos = leftEar.position
    }

    // If contour detection was enabled:
    val leftEyeContour = face.getContour(FirebaseVisionFaceContour.LEFT_EYE).points
    val upperLipBottomContour = face.getContour(FirebaseVisionFaceContour.UPPER_LIP_BOTTOM).points

    // If classification was enabled:
    if (face.smilingProbability != FirebaseVisionFace.UNCOMPUTED_PROBABILITY) {
        val smileProb = face.smilingProbability
    }
    if (face.rightEyeOpenProbability != FirebaseVisionFace.UNCOMPUTED_PROBABILITY) {
        val rightEyeOpenProb = face.rightEyeOpenProbability
    }

    // If face tracking was enabled:
    if (face.trackingId != FirebaseVisionFace.INVALID_ID) {
        val id = face.trackingId
    }
}

Example of face contours

When you have face contour detection enabled, you get a list of points for each facial feature that was detected. These points represent the shape of the feature. See the Face Detection Concepts Overview for details about how contours are represented.

The original page includes an image illustrating how these points map to a face.
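As an illustration of consuming these points, here is a minimal sketch that draws a detected contour onto a Canvas (the overlay helper is hypothetical and uses standard android.graphics APIs; it assumes the canvas and the input image share a coordinate space):

Kotlin

import android.graphics.Canvas
import android.graphics.Color
import android.graphics.Paint
import com.google.firebase.ml.vision.face.FirebaseVisionFace
import com.google.firebase.ml.vision.face.FirebaseVisionFaceContour

// Hypothetical helper: draws the FACE contour of a detected face as dots.
fun drawFaceContour(canvas: Canvas, face: FirebaseVisionFace) {
    val paint = Paint().apply {
        color = Color.GREEN
        style = Paint.Style.FILL
    }
    for (point in face.getContour(FirebaseVisionFaceContour.FACE).points) {
        canvas.drawCircle(point.x, point.y, /* radius= */ 4f, paint)
    }
}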

Real-time face detection

If you want to use face detection in a real-time application, follow these guidelines to achieve the best frame rates:

  • Configure the face detector to use either face contour detection or classification and landmark detection, but not both:

    Recommended:
      Contour detection
      Landmark detection
      Classification
      Landmark detection and classification

    Not recommended (combines contour detection with other modes):
      Contour detection and landmark detection
      Contour detection and classification
      Contour detection, landmark detection, and classification

  • Enable FAST mode (enabled by default).

  • Consider capturing images at a lower resolution. However, also keep in mind this API's image dimension requirements.

  • Throttle calls to the detector. If a new video frame becomes available while the detector is running, drop the frame; a sketch of this pattern follows this list.
  • If you are using the output of the detector to overlay graphics on the input image, first get the result from ML Kit, then render the image and overlay in a single step. By doing so, you render to the display surface only once for each input frame.
  • If you use the Camera2 API, capture images in ImageFormat.YUV_420_888 format.

    If you use the older Camera API, capture images in ImageFormat.NV21 format.
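To illustrate the throttling guideline, here is a minimal sketch of one way to drop frames (hypothetical code: onFrameAvailable stands in for your camera callback, and detector is the instance from step 2 above):

Kotlin

import java.util.concurrent.atomic.AtomicBoolean

// Hypothetical throttler: at most one frame is in flight at a time;
// frames that arrive while the detector is busy are dropped.
private val isProcessing = AtomicBoolean(false)

fun onFrameAvailable(image: FirebaseVisionImage) {
    // Drop this frame if the previous one is still being processed.
    if (!isProcessing.compareAndSet(false, true)) return

    detector.detectInImage(image)
            .addOnCompleteListener {
                // Detection finished (success or failure); accept new frames.
                isProcessing.set(false)
            }
}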
