Scan Barcodes with ML Kit on Android
This page describes an old version of the Barcode Scanning API, which was part of ML Kit for Firebase. Development of this API has been moved to the standalone ML Kit SDK, which you can use with or without Firebase. Learn more.
See Scan Barcodes with ML Kit on Android for the latest documentation.
You can use ML Kit to recognize and decode barcodes.
Version 24.0.0 of firebase-ml-vision introduces a new barcode scanning model, which comes with significant improvements in both latency and accuracy over the older model. In addition, with the latest API, you can now access the raw bytes for non-UTF-8 encoded barcode data.
Be sure to add the new firebase-ml-vision-barcode-model module to your project dependencies to use the new model.
Before you begin
- If you haven't already, add Firebase to your Android project.
- Add the dependencies for the ML Kit Android libraries to your module (app-level) Gradle file (usually app/build.gradle):

apply plugin: 'com.android.application'
apply plugin: 'com.google.gms.google-services'

dependencies {
  // ...

  implementation 'com.google.firebase:firebase-ml-vision:24.0.3'
  implementation 'com.google.firebase:firebase-ml-vision-barcode-model:16.0.1'
}
Input image guidelines
For ML Kit to accurately read barcodes, input images must contain barcodes that are represented by sufficient pixel data.
The specific pixel data requirements are dependent on both the type of barcode and the amount of data that is encoded in it (since most barcodes support a variable length payload). In general, the smallest meaningful unit of the barcode should be at least 2 pixels wide (and for 2-dimensional codes, 2 pixels tall).
For example, EAN-13 barcodes are made up of bars and spaces that are 1, 2, 3, or 4 units wide, so an EAN-13 barcode image ideally has bars and spaces that are at least 2, 4, 6, and 8 pixels wide. Because an EAN-13 barcode is 95 units wide in total, the barcode should be at least 190 pixels wide.
Denser formats, such as PDF417, need greater pixel dimensions for ML Kit to reliably read them. For example, a PDF417 code can have up to 34 17-unit wide "words" in a single row, which would ideally be at least 1156 pixels wide.
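For illustration, the following is a minimal sketch of the arithmetic behind those two examples. The 2-pixels-per-unit factor and the module counts come from the guidance above; the constant and function names are only hypothetical helpers, not part of ML Kit:

Kotlin
// Minimum pixels per smallest barcode unit ("module"), per the guidance above.
const val MIN_PIXELS_PER_MODULE = 2

// Hypothetical helper: minimum image width in pixels for a code that spans
// totalModules of its smallest unit.
fun minimumWidthPx(totalModules: Int): Int = totalModules * MIN_PIXELS_PER_MODULE

fun main() {
    // EAN-13 is 95 modules wide, so it needs at least 190 px.
    println("EAN-13 minimum width: ${minimumWidthPx(95)} px")

    // A PDF417 row with 34 words of 17 modules each spans 578 modules,
    // so it needs at least 1156 px.
    println("PDF417 minimum width: ${minimumWidthPx(34 * 17)} px")
}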
Poor image focus can hurt scanning accuracy. If you aren't getting acceptable results, try asking the user to recapture the image.
For typical applications, it is recommended to provide a higher resolution image (such as 1280x720 or 1920x1080), which makes barcodes detectable at a greater distance from the camera.
However, in applications where latency is critical, you can improve performance by capturing images at a lower resolution, but requiring that the barcode make up the majority of the input image. Also see Tips to improve real-time performance.
1. Configure the barcode detector
If you know which barcode formats you expect to read, you can improve the speed of the barcode detector by configuring it to detect only those formats. For example, to detect only Aztec codes and QR codes, build a FirebaseVisionBarcodeDetectorOptions object as in the following example:
Java
FirebaseVisionBarcodeDetectorOptions options =
        new FirebaseVisionBarcodeDetectorOptions.Builder()
        .setBarcodeFormats(
                FirebaseVisionBarcode.FORMAT_QR_CODE,
                FirebaseVisionBarcode.FORMAT_AZTEC)
        .build();
Kotlin
val options = FirebaseVisionBarcodeDetectorOptions.Builder()
        .setBarcodeFormats(
                FirebaseVisionBarcode.FORMAT_QR_CODE,
                FirebaseVisionBarcode.FORMAT_AZTEC)
        .build()
The following formats are supported:
- Code 128 (FORMAT_CODE_128)
- Code 39 (FORMAT_CODE_39)
- Code 93 (FORMAT_CODE_93)
- Codabar (FORMAT_CODABAR)
- EAN-13 (FORMAT_EAN_13)
- EAN-8 (FORMAT_EAN_8)
- ITF (FORMAT_ITF)
- UPC-A (FORMAT_UPC_A)
- UPC-E (FORMAT_UPC_E)
- QR Code (FORMAT_QR_CODE)
- PDF417 (FORMAT_PDF417)
- Aztec (FORMAT_AZTEC)
- Data Matrix (FORMAT_DATA_MATRIX)
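You can pass any combination of these constants to setBarcodeFormats. As a further illustration, the following sketch restricts detection to several common retail formats; the retailOptions name and the particular choice of formats are only examples:

Kotlin
// Sketch: restrict detection to common retail (1D) formats only. Any
// combination of the FORMAT_ constants listed above can be passed.
val retailOptions = FirebaseVisionBarcodeDetectorOptions.Builder()
        .setBarcodeFormats(
                FirebaseVisionBarcode.FORMAT_EAN_13,
                FirebaseVisionBarcode.FORMAT_EAN_8,
                FirebaseVisionBarcode.FORMAT_UPC_A,
                FirebaseVisionBarcode.FORMAT_UPC_E)
        .build()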
2. Run the barcode detector
To recognize barcodes in an image, create a FirebaseVisionImage object from either a Bitmap, media.Image, ByteBuffer, byte array, or a file on the device. Then, pass the FirebaseVisionImage object to the FirebaseVisionBarcodeDetector's detectInImage method.

Create a FirebaseVisionImage object from your image.

- To create a FirebaseVisionImage object from a media.Image object, such as when capturing an image from a device's camera, pass the media.Image object and the image's rotation to FirebaseVisionImage.fromMediaImage().
If you use the CameraX library, the OnImageCapturedListener and ImageAnalysis.Analyzer classes calculate the rotation value for you, so you just need to convert the rotation to one of ML Kit's ROTATION_ constants before calling FirebaseVisionImage.fromMediaImage():
Java
private class YourAnalyzer implements ImageAnalysis.Analyzer {

    private int degreesToFirebaseRotation(int degrees) {
        switch (degrees) {
            case 0:
                return FirebaseVisionImageMetadata.ROTATION_0;
            case 90:
                return FirebaseVisionImageMetadata.ROTATION_90;
            case 180:
                return FirebaseVisionImageMetadata.ROTATION_180;
            case 270:
                return FirebaseVisionImageMetadata.ROTATION_270;
            default:
                throw new IllegalArgumentException(
                        "Rotation must be 0, 90, 180, or 270.");
        }
    }

    @Override
    public void analyze(ImageProxy imageProxy, int degrees) {
        if (imageProxy == null || imageProxy.getImage() == null) {
            return;
        }
        Image mediaImage = imageProxy.getImage();
        int rotation = degreesToFirebaseRotation(degrees);
        FirebaseVisionImage image =
                FirebaseVisionImage.fromMediaImage(mediaImage, rotation);
        // Pass image to an ML Kit Vision API
        // ...
    }
}
Kotlin
private class YourImageAnalyzer : ImageAnalysis.Analyzer {

    private fun degreesToFirebaseRotation(degrees: Int): Int = when (degrees) {
        0 -> FirebaseVisionImageMetadata.ROTATION_0
        90 -> FirebaseVisionImageMetadata.ROTATION_90
        180 -> FirebaseVisionImageMetadata.ROTATION_180
        270 -> FirebaseVisionImageMetadata.ROTATION_270
        else -> throw Exception("Rotation must be 0, 90, 180, or 270.")
    }

    override fun analyze(imageProxy: ImageProxy?, degrees: Int) {
        val mediaImage = imageProxy?.image
        val imageRotation = degreesToFirebaseRotation(degrees)
        if (mediaImage != null) {
            val image = FirebaseVisionImage.fromMediaImage(mediaImage, imageRotation)
            // Pass image to an ML Kit Vision API
            // ...
        }
    }
}
If you don't use a camera library that gives you the image's rotation, you can calculate it from the device's rotation and the orientation of the camera sensor in the device:
Java
private static final SparseIntArray ORIENTATIONS = new SparseIntArray();
static {
    ORIENTATIONS.append(Surface.ROTATION_0, 90);
    ORIENTATIONS.append(Surface.ROTATION_90, 0);
    ORIENTATIONS.append(Surface.ROTATION_180, 270);
    ORIENTATIONS.append(Surface.ROTATION_270, 180);
}

/**
 * Get the angle by which an image must be rotated given the device's current
 * orientation.
 */
@RequiresApi(api = Build.VERSION_CODES.LOLLIPOP)
private int getRotationCompensation(String cameraId, Activity activity, Context context)
        throws CameraAccessException {
    // Get the device's current rotation relative to its "native" orientation.
    // Then, from the ORIENTATIONS table, look up the angle the image must be
    // rotated to compensate for the device's rotation.
    int deviceRotation = activity.getWindowManager().getDefaultDisplay().getRotation();
    int rotationCompensation = ORIENTATIONS.get(deviceRotation);

    // On most devices, the sensor orientation is 90 degrees, but for some
    // devices it is 270 degrees. For devices with a sensor orientation of
    // 270, rotate the image an additional 180 ((270 + 270) % 360) degrees.
    CameraManager cameraManager = (CameraManager) context.getSystemService(CAMERA_SERVICE);
    int sensorOrientation = cameraManager
            .getCameraCharacteristics(cameraId)
            .get(CameraCharacteristics.SENSOR_ORIENTATION);
    rotationCompensation = (rotationCompensation + sensorOrientation + 270) % 360;

    // Return the corresponding FirebaseVisionImageMetadata rotation value.
    int result;
    switch (rotationCompensation) {
        case 0:
            result = FirebaseVisionImageMetadata.ROTATION_0;
            break;
        case 90:
            result = FirebaseVisionImageMetadata.ROTATION_90;
            break;
        case 180:
            result = FirebaseVisionImageMetadata.ROTATION_180;
            break;
        case 270:
            result = FirebaseVisionImageMetadata.ROTATION_270;
            break;
        default:
            result = FirebaseVisionImageMetadata.ROTATION_0;
            Log.e(TAG, "Bad rotation value: " + rotationCompensation);
    }
    return result;
}
Kotlin
private val ORIENTATIONS = SparseIntArray()

init {
    ORIENTATIONS.append(Surface.ROTATION_0, 90)
    ORIENTATIONS.append(Surface.ROTATION_90, 0)
    ORIENTATIONS.append(Surface.ROTATION_180, 270)
    ORIENTATIONS.append(Surface.ROTATION_270, 180)
}

/**
 * Get the angle by which an image must be rotated given the device's current
 * orientation.
 */
@RequiresApi(api = Build.VERSION_CODES.LOLLIPOP)
@Throws(CameraAccessException::class)
private fun getRotationCompensation(cameraId: String, activity: Activity, context: Context): Int {
    // Get the device's current rotation relative to its "native" orientation.
    // Then, from the ORIENTATIONS table, look up the angle the image must be
    // rotated to compensate for the device's rotation.
    val deviceRotation = activity.windowManager.defaultDisplay.rotation
    var rotationCompensation = ORIENTATIONS.get(deviceRotation)

    // On most devices, the sensor orientation is 90 degrees, but for some
    // devices it is 270 degrees. For devices with a sensor orientation of
    // 270, rotate the image an additional 180 ((270 + 270) % 360) degrees.
    val cameraManager = context.getSystemService(CAMERA_SERVICE) as CameraManager
    val sensorOrientation = cameraManager
            .getCameraCharacteristics(cameraId)
            .get(CameraCharacteristics.SENSOR_ORIENTATION)!!
    rotationCompensation = (rotationCompensation + sensorOrientation + 270) % 360

    // Return the corresponding FirebaseVisionImageMetadata rotation value.
    val result: Int
    when (rotationCompensation) {
        0 -> result = FirebaseVisionImageMetadata.ROTATION_0
        90 -> result = FirebaseVisionImageMetadata.ROTATION_90
        180 -> result = FirebaseVisionImageMetadata.ROTATION_180
        270 -> result = FirebaseVisionImageMetadata.ROTATION_270
        else -> {
            result = FirebaseVisionImageMetadata.ROTATION_0
            Log.e(TAG, "Bad rotation value: $rotationCompensation")
        }
    }
    return result
}
Then, pass the media.Image object and the rotation value to FirebaseVisionImage.fromMediaImage():
Java
FirebaseVisionImage image = FirebaseVisionImage.fromMediaImage(mediaImage, rotation);
Kotlin
val image = FirebaseVisionImage.fromMediaImage(mediaImage, rotation)
- To create a FirebaseVisionImage object from a file URI, pass the app context and file URI to FirebaseVisionImage.fromFilePath(). This is useful when you use an ACTION_GET_CONTENT intent to prompt the user to select an image from their gallery app (see the picker sketch after this list).
Java
FirebaseVisionImage image;
try {
    image = FirebaseVisionImage.fromFilePath(context, uri);
} catch (IOException e) {
    e.printStackTrace();
}
Kotlin
val image: FirebaseVisionImage
try {
    image = FirebaseVisionImage.fromFilePath(context, uri)
} catch (e: IOException) {
    e.printStackTrace()
}
- To create a FirebaseVisionImage object from a ByteBuffer or a byte array, first calculate the image rotation as described above for media.Image input.
Then, create a FirebaseVisionImageMetadata object that contains the image's height, width, color encoding format, and rotation:
Java
FirebaseVisionImageMetadata metadata = new FirebaseVisionImageMetadata.Builder()
        .setWidth(480)   // 480x360 is typically sufficient for
        .setHeight(360)  // image recognition
        .setFormat(FirebaseVisionImageMetadata.IMAGE_FORMAT_NV21)
        .setRotation(rotation)
        .build();
Kotlin
val metadata = FirebaseVisionImageMetadata.Builder()
        .setWidth(480)   // 480x360 is typically sufficient for
        .setHeight(360)  // image recognition
        .setFormat(FirebaseVisionImageMetadata.IMAGE_FORMAT_NV21)
        .setRotation(rotation)
        .build()
Use the buffer or array, and the metadata object, to create a FirebaseVisionImage object:
Java
FirebaseVisionImage image = FirebaseVisionImage.fromByteBuffer(buffer, metadata);
// Or: FirebaseVisionImage image = FirebaseVisionImage.fromByteArray(byteArray, metadata);
Kotlin
val image = FirebaseVisionImage.fromByteBuffer(buffer, metadata)
// Or: val image = FirebaseVisionImage.fromByteArray(byteArray, metadata)
- To create a FirebaseVisionImage object from a Bitmap object:
Java
FirebaseVisionImage image = FirebaseVisionImage.fromBitmap(bitmap);
Kotlin
val image = FirebaseVisionImage.fromBitmap(bitmap)
The image represented by the Bitmap object must be upright, with no additional rotation required.
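As a complement to the file URI option above, the following is a minimal sketch of prompting the user for an image with an ACTION_GET_CONTENT intent and passing the returned URI to FirebaseVisionImage.fromFilePath(). The PickImageActivity class, the request code, and the error handling are only illustrative, not part of the ML Kit API:

Kotlin
import android.app.Activity
import android.content.Intent
import com.google.firebase.ml.vision.common.FirebaseVisionImage
import java.io.IOException

// Sketch only: an Activity that prompts the user to pick an image and builds a
// FirebaseVisionImage from the returned content URI.
class PickImageActivity : Activity() {

    private fun pickImage() {
        val intent = Intent(Intent.ACTION_GET_CONTENT).apply { type = "image/*" }
        startActivityForResult(intent, REQUEST_PICK_IMAGE)
    }

    override fun onActivityResult(requestCode: Int, resultCode: Int, data: Intent?) {
        super.onActivityResult(requestCode, resultCode, data)
        val uri = data?.data
        if (requestCode == REQUEST_PICK_IMAGE && resultCode == RESULT_OK && uri != null) {
            try {
                val image = FirebaseVisionImage.fromFilePath(this, uri)
                // Pass image to the barcode detector as shown in the next steps
            } catch (e: IOException) {
                e.printStackTrace()
            }
        }
    }

    companion object {
        private const val REQUEST_PICK_IMAGE = 1001 // arbitrary request code
    }
}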
Get an instance of FirebaseVisionBarcodeDetector:
Java
FirebaseVisionBarcodeDetector detector = FirebaseVision.getInstance()
        .getVisionBarcodeDetector();
// Or, to specify the formats to recognize:
// FirebaseVisionBarcodeDetector detector = FirebaseVision.getInstance()
//         .getVisionBarcodeDetector(options);
Kotlin
val detector = FirebaseVision.getInstance().visionBarcodeDetector
// Or, to specify the formats to recognize:
// val detector = FirebaseVision.getInstance()
//         .getVisionBarcodeDetector(options)
Finally, pass the image to the detectInImage method:
Java
Task<List<FirebaseVisionBarcode>> result = detector.detectInImage(image)
        .addOnSuccessListener(new OnSuccessListener<List<FirebaseVisionBarcode>>() {
            @Override
            public void onSuccess(List<FirebaseVisionBarcode> barcodes) {
                // Task completed successfully
                // ...
            }
        })
        .addOnFailureListener(new OnFailureListener() {
            @Override
            public void onFailure(@NonNull Exception e) {
                // Task failed with an exception
                // ...
            }
        });
Kotlin
val result = detector.detectInImage(image)
        .addOnSuccessListener { barcodes ->
            // Task completed successfully
            // ...
        }
        .addOnFailureListener {
            // Task failed with an exception
            // ...
        }
3. Get information from barcodes
If the barcode recognition operation succeeds, a list of FirebaseVisionBarcode objects will be passed to the success listener. Each FirebaseVisionBarcode object represents a barcode that was detected in the image. For each barcode, you can get its bounding coordinates in the input image, as well as the raw data encoded by the barcode. Also, if the barcode detector was able to determine the type of data encoded by the barcode, you can get an object containing parsed data.

For example:
Java
for (FirebaseVisionBarcode barcode: barcodes) {
    Rect bounds = barcode.getBoundingBox();
    Point[] corners = barcode.getCornerPoints();

    String rawValue = barcode.getRawValue();

    int valueType = barcode.getValueType();
    // See API reference for complete list of supported types
    switch (valueType) {
        case FirebaseVisionBarcode.TYPE_WIFI:
            String ssid = barcode.getWifi().getSsid();
            String password = barcode.getWifi().getPassword();
            int type = barcode.getWifi().getEncryptionType();
            break;
        case FirebaseVisionBarcode.TYPE_URL:
            String title = barcode.getUrl().getTitle();
            String url = barcode.getUrl().getUrl();
            break;
    }
}
Kotlin
for (barcode in barcodes) {
    val bounds = barcode.boundingBox
    val corners = barcode.cornerPoints

    val rawValue = barcode.rawValue

    val valueType = barcode.valueType
    // See API reference for complete list of supported types
    when (valueType) {
        FirebaseVisionBarcode.TYPE_WIFI -> {
            val ssid = barcode.wifi!!.ssid
            val password = barcode.wifi!!.password
            val type = barcode.wifi!!.encryptionType
        }
        FirebaseVisionBarcode.TYPE_URL -> {
            val title = barcode.url!!.title
            val url = barcode.url!!.url
        }
    }
}
Tips to improve real-time performance
If you want to scan barcodes in a real-time application, follow these guidelines to achieve the best frame rates:
- Don't capture input at the camera's native resolution. On some devices, capturing input at the native resolution produces extremely large (10+ megapixels) images, which results in very poor latency with no benefit to accuracy. Instead, only request the size from the camera that is required for barcode detection: usually no more than 2 megapixels.
If scanning speed is important, you can further lower the image capture resolution. However, bear in mind the minimum barcode size requirements outlined above.
- Throttle calls to the detector. If a new video frame becomes available while the detector is running, drop the frame (a minimal sketch of one way to do this appears after this list).
- If you are using the output of the detector to overlay graphics on the input image, first get the result from ML Kit, then render the image and overlay in a single step. By doing so, you render to the display surface only once for each input frame.
- If you use the Camera2 API, capture images in ImageFormat.YUV_420_888 format.
- If you use the older Camera API, capture images in ImageFormat.NV21 format.
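One way to implement the frame dropping suggested in the throttling tip above is to guard the detector call with a simple busy flag. This is a minimal sketch under that assumption; ThrottledBarcodeScanner, onFrame, and isBusy are illustrative names, not ML Kit API:

Kotlin
import com.google.firebase.ml.vision.barcode.FirebaseVisionBarcodeDetector
import com.google.firebase.ml.vision.common.FirebaseVisionImage
import java.util.concurrent.atomic.AtomicBoolean

// Sketch: drop incoming frames while a detection is still in flight.
class ThrottledBarcodeScanner(private val detector: FirebaseVisionBarcodeDetector) {

    private val isBusy = AtomicBoolean(false)

    fun onFrame(image: FirebaseVisionImage) {
        // If the detector is still processing the previous frame, drop this one.
        if (!isBusy.compareAndSet(false, true)) return

        detector.detectInImage(image)
                .addOnCompleteListener { task ->
                    isBusy.set(false) // ready for the next frame
                    if (task.isSuccessful) {
                        val barcodes = task.result
                        // Render the camera frame and the barcode overlay
                        // together here, in a single pass per input frame.
                    }
                }
    }
}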