The most advanced and complete NativeScript Camera plugin

◻ Fully iOS and Android integration

◻ VueJS Plugin

◻ Modern Android Camera API [Camera X](https://developer.android.com/training/camerax)

◻ Camera preview (Front & Back)

◻ [PyTorch](https://pytorch.org/mobile/home/) integration (Android only)

◻ Computer vision pipeline

◻ Face detection with ROI, capture and crop, automagically

◻ Face features understanding, face analysis, smiling, head position, blinking eyes

◻ Image quality control

◻ Full frame capture

◻ Continuous image capture, frame by frame with time interval

◻ QR Code scanning

◻ Torch control

Everything shown above comes from the project's demo app; the module itself only renders the camera stream into the view.

More about...

The plugin's core is the native layer: every change in the native layer is reflected here. The Yoonit Camera plugin is, in effect, an aggregation of several Yoonit native libraries:

All of these native libraries can be used independently. Help us improve them!

Sponsors

Platinum

Table Of Contents

Installation

```bash
npm i -s @yoonit/nativescript-camera
```

Usage

All of the functionality that `@yoonit/nativescript-camera` provides is accessed through the `YoonitCamera` component, which includes the camera preview. Below is the basic usage code; for more details, see the Methods, Events, or the Demo Vue.

VueJS Plugin

main.js

```javascript
import Vue from 'nativescript-vue'
import YoonitCamera from '@yoonit/nativescript-camera/vue'

Vue.use(YoonitCamera)
```

After that, you can access the camera object anywhere in your project using `this.$yoo.camera`.
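For example, once the element is registered, any component method can drive the camera through `this.$yoo.camera`. A minimal sketch (the handler names are illustrative; only methods documented in the Methods section below are used):

```javascript
export default {
  methods: {
    onFlipCamera() {
      // Switch between the "front" and "back" lenses and log the result.
      this.$yoo.camera.toggleLens()
      console.log('[YooCamera] current lens:', this.$yoo.camera.getLens())
    },

    onStopScanning() {
      // Stop whatever capture type is currently running.
      this.$yoo.camera.stopCapture()
    }
  }
}
```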

Vue Component

App.vue

```vue
<template>
  <Page @loaded="onLoaded">
    <YoonitCamera
      ref="yooCamera"
      lens="front"
      captureType="face"
      imageCapture=true
      imageCaptureAmount=10
      imageCaptureInterval=500
      detectionBox=true
      @faceDetected="doFaceDetected"
      @imageCaptured="doImageCaptured"
      @endCapture="doEndCapture"
      @qrCodeContent="doQRCodeContent"
      @status="doStatus"
      @permissionDenied="doPermissionDenied"
    />
  </Page>
</template>

<script>
export default {
  data: () => ({}),

  methods: {
    async onLoaded() {
      console.log('[YooCamera] Getting Camera view')
      this.$yoo.camera.registerElement(this.$refs.yooCamera)

      console.log('[YooCamera] Getting permission')
      if (await this.$yoo.camera.requestPermission()) {
        console.log('[YooCamera] Permission granted, start preview')
        this.$yoo.camera.preview()
      }
    },

    doFaceDetected({
      x,
      y,
      width,
      height,
      leftEyeOpenProbability,
      rightEyeOpenProbability,
      smilingProbability,
      headEulerAngleX,
      headEulerAngleY,
      headEulerAngleZ
    }) {
      console.log(
        '[YooCamera] doFaceDetected',
        `
        x: ${x}
        y: ${y}
        width: ${width}
        height: ${height}
        leftEyeOpenProbability: ${leftEyeOpenProbability}
        rightEyeOpenProbability: ${rightEyeOpenProbability}
        smilingProbability: ${smilingProbability}
        headEulerAngleX: ${headEulerAngleX}
        headEulerAngleY: ${headEulerAngleY}
        headEulerAngleZ: ${headEulerAngleZ}
        `
      )

      if (!x || !y || !width || !height) {
        this.imagePath = null
      }
    },

    doImageCaptured({
      type,
      count,
      total,
      image: {
        path,
        source
      },
      inferences,
      darkness,
      lightness,
      sharpness
    }) {
      if (total === 0) {
        console.log('[YooCamera] doImageCreated', `${type}: [${count}] ${path}`)
        this.imageCreated = `${count}`
      } else {
        console.log('[YooCamera] doImageCreated', `${type}: [${count}] of [${total}] - ${path}`)
        this.imageCreated = `${count} de ${total}`
      }

      console.log('[YooCamera] Mask Pytorch', inferences)
      console.log('[YooCamera] Image Quality Darkness:', darkness)
      console.log('[YooCamera] Image Quality Lightness', lightness)
      console.log('[YooCamera] Image Quality Sharpness', sharpness)

      this.imagePath = source
    },

    doEndCapture() {
      console.log('[YooCamera] doEndCapture')
    },

    doQRCodeContent({ content }) {
      console.log('[YooCamera] doQRCodeContent', content)
    },

    doStatus({ status }) {
      console.log('[YooCamera] doStatus', status)
    },

    doPermissionDenied() {
      console.log('[YooCamera] doPermissionDenied')
    }
  }
}
</script>
```

Angular, React, Svelte or any other framework

Currently we don't offer integrations for frameworks on top of NativeScript other than VueJS, but you are welcome to create one and send us a PR. That said, this is a pure NativeScript plugin, so if you know how to work with your preferred framework you should be able to include it in your project.

API

Props

| Props | Input/Format | Default value | Description |
| - | - | - | - |
| lens | `"front"` or `"back"` | `"front"` | The camera lens to use, "front" or "back". |
| captureType | `"none"`, `"face"`, `"frame"` or `"qrcode"` | `"none"` | The capture type of the camera. |
| imageCapture | `boolean` | `false` | Enable/disable saving image captures. |
| imageCaptureAmount | `number` | `0` | The image capture amount goal. |
| imageCaptureInterval | `number` | `1000` | The image capture time interval in milliseconds. |
| imageCaptureWidth | `"NNpx"` | `"200px"` | The image capture width in pixels. |
| imageCaptureHeight | `"NNpx"` | `"200px"` | The image capture height in pixels. |
| colorEncoding | `"RGB"` or `"YUV"` | `"RGB"` | Android only. The image capture color encoding type: `"RGB"` or `"YUV"`. |
| detectionBox | `boolean` | `false` | Show/hide the face detection box. |
| detectionBoxColor | `string` | `#ffffff` | Set the detection box color. |
| detectionMinSize | `"NN%"` | `"0%"` | The minimum face size percentage required to capture. |
| detectionMaxSize | `"NN%"` | `"100%"` | The maximum face size percentage allowed to capture. |
| detectionTopSize | `"NN%"` | `"100%"` | Percentage applied to the top side of the detection box: a positive value enlarges it, a negative value reduces it. Use `detectionBox` to see the result. |
| detectionRightSize | `"NN%"` | `"100%"` | Percentage applied to the right side of the detection box: a positive value enlarges it, a negative value reduces it. Use `detectionBox` to see the result. |
| detectionBottomSize | `"NN%"` | `"100%"` | Percentage applied to the bottom side of the detection box: a positive value enlarges it, a negative value reduces it. Use `detectionBox` to see the result. |
| detectionLeftSize | `"NN%"` | `"100%"` | Percentage applied to the left side of the detection box: a positive value enlarges it, a negative value reduces it. Use `detectionBox` to see the result. |
| roi | `boolean` | `false` | Enable/disable the region of interest capture. |
| roiTopOffset | `"NN%"` | `"0%"` | Distance, as a percentage, between the top of the face bounding box and the top of the camera preview. |
| roiRightOffset | `"NN%"` | `"0%"` | Distance, as a percentage, between the right of the face bounding box and the right of the camera preview. |
| roiBottomOffset | `"NN%"` | `"0%"` | Distance, as a percentage, between the bottom of the face bounding box and the bottom of the camera preview. |
| roiLeftOffset | `"NN%"` | `"0%"` | Distance, as a percentage, between the left of the face bounding box and the left of the camera preview. |
| roiAreaOffset | `boolean` | `false` | Enable/disable display of the region of interest area offset. |
| roiAreaOffsetColor | `string` | `'#ffffff73'` | Set the region of interest area offset color. |
| faceContours | `boolean` | `false` | Enable/disable display of the contour points on a detected face. |
| faceContoursColor | `string` | `'#FFFFFF'` | Set the face contours color. |
| computerVision (Android only) | `boolean` | `false` | Enable/disable the computer vision model. |
| torch | `boolean` | `false` | Enable/disable the device torch. Available only with the `"back"` camera lens. |
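As a quick illustration of how these props combine, the snippet below (following the attribute style of the App.vue example above) narrows the detection size range and enables the ROI overlay; the percentage values are arbitrary examples, not recommended settings:

```html
<YoonitCamera
  ref="yooCamera"
  lens="front"
  captureType="face"
  detectionBox=true
  detectionBoxColor="#ff0000"
  detectionMinSize="20%"
  detectionMaxSize="80%"
  roi=true
  roiTopOffset="10%"
  roiBottomOffset="10%"
  roiAreaOffset=true
/>
```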

Methods

| Function | Parameters | Valid values | Return Type | Description |
| - | - | - | - | - |
| requestPermission | - | - | promise | Ask the user for permission to access the camera. |
| hasPermission | - | - | boolean | Return whether the application has camera permission. |
| preview | - | - | void | Start the camera preview if permission has been granted. |
| startCapture | `type: string` | `"none"`, `"face"`, `"qrcode"` or `"frame"` | void | Set the capture type to "none", "face", "qrcode" or "frame". Default value is `"none"`. |
| stopCapture | - | - | void | Stop any type of capture. |
| destroy | - | - | void | Destroy the camera preview. |
| toggleLens | - | - | void | Toggle the camera lens facing, "front"/"back". |
| setCameraLens | `lens: string` | `"front"` or `"back"` | void | Set the camera to use the "front" or "back" lens. Default value is `"front"`. |
| getLens | - | - | string | Return "front" or "back". |
| setImageCapture | `enable: boolean` | `true` or `false` | void | Enable/disable saving image captures. Default value is `false`. |
| setImageCaptureAmount | `amount: Int` | Any positive `Int` value | void | Number of images to save; `0` saves images indefinitely. When the capture amount is reached, the `endCapture` event is triggered. Default value is `0`. |
| setImageCaptureInterval | `interval: number` | Any positive number representing time in milliseconds | void | Set the image capture time interval in milliseconds. |
| setImageCaptureWidth | `width: string` | Value format must be `NNpx` | void | Set the image capture width in pixels. |
| setImageCaptureHeight | `height: string` | Value format must be `NNpx` | void | Set the image capture height in pixels. |
| setImageCaptureColorEncoding | `colorEncoding: string` | `"YUV"` or `"RGB"` | void | Android only. Set the image capture color encoding type: `"RGB"` or `"YUV"`. |
| setDetectionBox | `enable: boolean` | `true` or `false` | void | Show/hide the face detection box. |
| setDetectionBoxColor | `color: string` | Hexadecimal color | void | Set the detection box color. |
| setDetectionMinSize | `percentage: string` | Value format must be `NN%` | void | Set the minimum face size percentage required to capture. |
| setDetectionMaxSize | `percentage: string` | Value format must be `NN%` | void | Set the maximum face size percentage allowed to capture. |
| setDetectionTopSize | `percentage: string` | Value format must be `NN%` | void | Percentage applied to the top side of the detection box: a positive value enlarges it, a negative value reduces it. Use `setDetectionBox` to see the result. |
| setDetectionRightSize | `percentage: string` | Value format must be `NN%` | void | Percentage applied to the right side of the detection box: a positive value enlarges it, a negative value reduces it. Use `setDetectionBox` to see the result. |
| setDetectionBottomSize | `percentage: string` | Value format must be `NN%` | void | Percentage applied to the bottom side of the detection box: a positive value enlarges it, a negative value reduces it. Use `setDetectionBox` to see the result. |
| setDetectionLeftSize | `percentage: string` | Value format must be `NN%` | void | Percentage applied to the left side of the detection box: a positive value enlarges it, a negative value reduces it. Use `setDetectionBox` to see the result. |
| setROI | `enable: boolean` | `true` or `false` | void | Enable/disable the face region of interest capture. |
| setROITopOffset | `percentage: string` | Value format must be `NN%` | void | Distance, as a percentage, between the top of the face bounding box and the top of the camera preview. |
| setROIRightOffset | `percentage: string` | Value format must be `NN%` | void | Distance, as a percentage, between the right of the face bounding box and the right of the camera preview. |
| setROIBottomOffset | `percentage: string` | Value format must be `NN%` | void | Distance, as a percentage, between the bottom of the face bounding box and the bottom of the camera preview. |
| setROILeftOffset | `percentage: string` | Value format must be `NN%` | void | Distance, as a percentage, between the left of the face bounding box and the left of the camera preview. |
| setROIMinSize | `percentage: string` | Value format must be `NN%` | void | Set the minimum face size relative to the region of interest. |
| setROIAreaOffset | `enable: boolean` | `true` or `false` | void | Enable/disable display of the region of interest area offset. |
| setROIAreaOffsetColor | `color: string` | Hexadecimal color | void | Set the region of interest area offset color. |
| setFaceContours | `enable: boolean` | `true` or `false` | void | Enable/disable display of the contour points on a detected face. |
| setFaceContoursColor | `color: string` | Hexadecimal color | void | Set the face contours color. |
| setComputerVision (Android only) | `enable: boolean` | `true` or `false` | void | Enable/disable the computer vision model. |
| setComputerVisionLoadModels (Android only) | `modelPaths: Array<string>` | Valid file system paths to PyTorch computer vision models | void | Set the models to be used when an image is captured. To see more about it, Click Here. |
| computerVisionClearModels (Android only) | - | - | void | Clear the models previously added via `setComputerVisionLoadModels`. |
| setTorch | `enable: boolean` | `true` or `false` | void | Enable/disable the device torch. Available only with the `"back"` camera lens. |
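Most of these setters can also be called at runtime on the registered element. A minimal sketch, assuming the element was registered on `@loaded` as in the App.vue example above (the chosen values are illustrative):

```javascript
export default {
  methods: {
    async startFaceCapture() {
      // Bail out if the user refuses camera permission.
      if (!(await this.$yoo.camera.requestPermission())) {
        return
      }

      this.$yoo.camera.preview()

      // Save up to 5 images, one every 2 seconds, at 300x300 pixels.
      this.$yoo.camera.setImageCapture(true)
      this.$yoo.camera.setImageCaptureAmount(5)
      this.$yoo.camera.setImageCaptureInterval(2000)
      this.$yoo.camera.setImageCaptureWidth('300px')
      this.$yoo.camera.setImageCaptureHeight('300px')

      // Only accept faces between 30% and 90% of the preview.
      this.$yoo.camera.setDetectionBox(true)
      this.$yoo.camera.setDetectionMinSize('30%')
      this.$yoo.camera.setDetectionMaxSize('90%')

      this.$yoo.camera.startCapture('face')
    }
  }
}
```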

Events

| Event | Parameters | Description |
| - | - | - |
| imageCaptured | `{ type: string, count: number, total: number, image: object = { path: string, source: any, binary: any }, inferences: [{ ['model name']: model output }], darkness: number, lightness: number, sharpness: number }` | Requires a "face" or "frame" capture type to be started. Emitted when a face/frame image file is saved:<br/>• `type`: "face" or "frame"<br/>• `count`: current index<br/>• `total`: total to create<br/>• `image.path`: the face/frame image path<br/>• `image.source`: the blob file<br/>• `image.binary`: the blob file<br/>• `inferences`: an array with the model outputs<br/>• `darkness`: image darkness classification<br/>• `lightness`: image lightness classification<br/>• `sharpness`: image sharpness classification |
| faceDetected | `{ x: number, y: number, width: number, height: number, leftEyeOpenProbability: number, rightEyeOpenProbability: number, smilingProbability: number, headEulerAngleX: number, headEulerAngleY: number, headEulerAngleZ: number }` | Requires the "face" capture type to be started. Emits the face analysis; all parameters are null when no face is being detected. |
| endCapture | - | Requires a "face" or "frame" capture type to be started. Emitted when the number of image files created equals the configured amount (see the `setImageCaptureAmount` method). |
| qrCodeContent | `{ content: string }` | Requires the "qrcode" capture type to be started (see `startCapture`). Emitted when the camera reads a QR code. |
| status | `{ type: 'error'/'message', status: string }` | Emits an error or message status from the native layer. Used mostly for debugging. |
| permissionDenied | - | Emitted when `preview` is called but there is no camera permission. |

Face Analysis

The face analysis is the response sent by the `faceDetected` event. Here we specify all of its parameters.

| Attribute | Type | Description |
| - | - | - |
| x | `number` | The `x` position of the face on the screen. |
| y | `number` | The `y` position of the face on the screen. |
| width | `number` | The `width` of the face on the screen. |
| height | `number` | The `height` of the face on the screen. |
| leftEyeOpenProbability | `number` | The probability that the left eye is open. |
| rightEyeOpenProbability | `number` | The probability that the right eye is open. |
| smilingProbability | `number` | The probability that the face is smiling. |
| headEulerAngleX | `number` | The angle in degrees that indicates the vertical head direction. See Head Movements. |
| headEulerAngleY | `number` | The angle in degrees that indicates the horizontal head direction. See Head Movements. |
| headEulerAngleZ | `number` | The angle in degrees that indicates the head tilt direction. See Head Movements. |
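As an illustrative use of these probabilities, the sketch below flags a blink or a smile inside a `faceDetected` handler. The 0.3 and 0.8 thresholds are assumptions made for the example, not values defined by the plugin:

```javascript
doFaceDetected({ leftEyeOpenProbability, rightEyeOpenProbability, smilingProbability }) {
  // All parameters are null while no face is being detected.
  if (leftEyeOpenProbability === null) {
    return
  }

  // Illustrative thresholds, not plugin constants.
  const BLINK_THRESHOLD = 0.3
  const SMILE_THRESHOLD = 0.8

  if (leftEyeOpenProbability < BLINK_THRESHOLD && rightEyeOpenProbability < BLINK_THRESHOLD) {
    console.log('[YooCamera] both eyes closed (possible blink)')
  }

  if (smilingProbability > SMILE_THRESHOLD) {
    console.log('[YooCamera] smiling face detected')
  }
}
```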

Head Movements

Each head movement (vertical, horizontal and tilt) is classified into a state based on the angle, in degrees, that indicates the head direction:

| Head Direction | Attribute | v < -36° | -36° < v < -12° | -12° < v < 12° | 12° < v < 36° | 36° < v |
| - | - | - | - | - | - | - |
| Vertical | `headEulerAngleX` | Super Down | Down | Frontal | Up | Super Up |
| Horizontal | `headEulerAngleY` | Super Left | Left | Frontal | Right | Super Right |
| Tilt | `headEulerAngleZ` | Super Right | Right | Frontal | Left | Super Left |
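The table maps directly to a small classification helper. A sketch for the horizontal direction (`headEulerAngleY`); the same ranges apply to the other two axes with their own labels:

```javascript
// Classify the horizontal head direction from headEulerAngleY,
// following the angle ranges in the table above.
function horizontalDirection(headEulerAngleY) {
  if (headEulerAngleY < -36) return 'Super Left'
  if (headEulerAngleY < -12) return 'Left'
  if (headEulerAngleY < 12) return 'Frontal'
  if (headEulerAngleY < 36) return 'Right'
  return 'Super Right'
}

console.log(horizontalDirection(-40)) // 'Super Left'
console.log(horizontalDirection(5))   // 'Frontal'
```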

Image Quality

The image quality is the classification of three attributes: darkness, lightness and sharpness. The result is available in the `imageCaptured` event. The thresholds for each attribute are:

| Threshold | Classification |
| - | - |
| **Darkness** | |
| darkness > 0.7 | Too dark |
| darkness <= 0.7 | Acceptable |
| **Lightness** | |
| lightness > 0.65 | Too light |
| lightness <= 0.65 | Acceptable |
| **Sharpness** | |
| sharpness >= 0.1591 | Blurred |
| sharpness < 0.1591 | Acceptable |
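Applied inside an `imageCaptured` handler, these thresholds translate into a simple acceptance check. A sketch (what to do with rejected images is left to the application):

```javascript
doImageCaptured({ image: { path }, darkness, lightness, sharpness }) {
  // Thresholds taken from the Image Quality table above.
  const tooDark = darkness > 0.7
  const tooLight = lightness > 0.65
  const blurred = sharpness >= 0.1591

  if (tooDark || tooLight || blurred) {
    console.log('[YooCamera] image rejected:', path, { tooDark, tooLight, blurred })
    return
  }

  console.log('[YooCamera] image accepted:', path)
}
```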

Messages

Predefined message constants used by the `status` event.

| Message | Description |
| - | - |
| INVALID_MINIMUM_SIZE | The face/QR code width percentage, relative to the screen width, is smaller than the configured minimum. |
| INVALID_MAXIMUM_SIZE | The face/QR code width percentage, relative to the screen width, is larger than the configured maximum. |
| INVALID_OUT_OF_ROI | The face bounding box is outside the configured region of interest. |
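These constants arrive through the `status` event. A sketch of a handler that reacts to them, assuming the constant is delivered in the `status` field (the user-facing hints are illustrative):

```javascript
doStatus({ type, status }) {
  if (type !== 'error') {
    console.log('[YooCamera] status message:', status)
    return
  }

  // Map the predefined constants to illustrative user-facing hints.
  switch (status) {
    case 'INVALID_MINIMUM_SIZE':
      console.log('[YooCamera] move closer to the camera')
      break
    case 'INVALID_MAXIMUM_SIZE':
      console.log('[YooCamera] move further from the camera')
      break
    case 'INVALID_OUT_OF_ROI':
      console.log('[YooCamera] center your face in the marked area')
      break
    default:
      console.log('[YooCamera] status error:', status)
  }
}
```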

To contribute and make it better

Clone the repo, change what you want and send a PR. For commit messages we use Conventional Commits.

Contributions are always welcome!


Code with ❤ by the Yoonit Team

