Detailed Description of Embodiments
The disclosure is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely intended to explain the related invention, rather than to limit the invention. It should also be noted that, for ease of description, only the parts related to the invention are shown in the accompanying drawings.
It should be noted that, in the case of no conflict, the embodiments in the disclosure and the features in the embodiments may be combined with each other. The disclosure is described in detail below with reference to the accompanying drawings and in conjunction with the embodiments.
Fig. 1 shows an exemplary system architecture 100 to which an embodiment of the method for detecting a living body or the apparatus for detecting a living body of the disclosure may be applied.
As shown in Fig. 1, the system architecture 100 may include terminal devices 101, 102 and 103, a network 104 and a server 105. The network 104 serves as a medium providing a communication link between the terminal devices 101, 102 and 103 and the server 105. The network 104 may include various connection types, such as wired or wireless communication links, or optical fiber cables.
A user may use the terminal devices 101, 102 and 103 to interact with the server 105 via the network 104, for example, to receive or send messages. Various communication client applications, such as payment software, shopping applications, video processing applications, search applications, instant messaging tools, email clients and social platform software, may be installed on the terminal devices 101, 102 and 103.
The terminal devices 101, 102 and 103 may be hardware or software. When being hardware, the terminal devices 101, 102 and 103 may be various electronic devices having cameras, including but not limited to smart phones, tablet computers, e-book readers, MP3 (Moving Picture Experts Group Audio Layer III) players, MP4 (Moving Picture Experts Group Audio Layer IV) players, laptop portable computers, desktop computers and the like. When being software, the terminal devices 101, 102 and 103 may be installed in the electronic devices listed above, and each may be implemented as a plurality of pieces of software or software modules (for example, a plurality of pieces of software or software modules for providing distributed services), or as a single piece of software or software module, which is not specifically limited herein.
The server 105 may be a server providing various services, for example, a video processing server that processes a target face video captured by the terminal devices 101, 102 and 103. The video processing server may perform processing such as analysis on the received data such as the target face video, and obtain a processing result (for example, a detection result used to indicate whether the face corresponding to the target face video is a living face).
It should be noted that the method for detecting a living body provided by the embodiments of the disclosure may be performed by the terminal devices 101, 102 and 103, or may be performed by the server 105. Correspondingly, the apparatus for detecting a living body may be provided in the terminal devices 101, 102 and 103, or may be provided in the server 105.
It should be noted that the server may be hardware or software. When being hardware, the server may be implemented as a distributed server cluster composed of a plurality of servers, or as a single server. When being software, the server may be implemented as a plurality of pieces of software or software modules (for example, a plurality of pieces of software or software modules for providing distributed services), or as a single piece of software or software module, which is not specifically limited herein.
It should be understood that the numbers of terminal devices, networks and servers in Fig. 1 are merely illustrative. There may be any number of terminal devices, networks and servers as required by implementation. In the case where the data used in generating the detection result does not need to be obtained remotely, the above system architecture may include no network, and only include a terminal device or a server.
With continued reference to Fig. 2, a flow 200 of an embodiment of the method for detecting a living body according to the disclosure is shown. The method for detecting a living body includes the following steps.
Step 201: extracting adjacent video frames from a video frame sequence corresponding to a target face video as a first face image and a second face image.
In this embodiment, an execution body of the method for detecting a living body (for example, the server 105 shown in Fig. 1) may acquire the target face video remotely or locally through a wired or wireless connection, and extract adjacent video frames from the video frame sequence corresponding to the target face video as the first face image and the second face image. Here, the target face video may be a face video on which living body detection is to be performed. Specifically, the target face video may be a video obtained by shooting a face. The face being shot may be a real face (i.e., a living face) or an artificial face (i.e., a non-living face, such as a face sculpture or a face image).
In practice, the video frame sequence is arranged in the chronological order of playing time. Since the target face video is obtained by shooting a face, the video frames in the video frame sequence include face image regions corresponding to the shot face.
Specifically, the execution body may extract two adjacent video frames from the video frame sequence corresponding to the target face video, and determine the two extracted video frames as the first face image and the second face image respectively. Here, the first face image and the second face image may be determined in an arbitrary manner. For example, if a video frame A and a video frame B are extracted from the target face video, the execution body may determine the video frame A as the first face image and the video frame B as the second face image; alternatively, the video frame B may be determined as the first face image and the video frame A as the second face image.
In particular, taking two adjacent video frames as one group, the execution body may also extract at least two groups of video frames from the video frame sequence corresponding to the target face video, and, for each group of the extracted at least two groups of video frames, determine the two adjacent video frames in the group as the first face image and the second face image respectively. Similarly, the first face image and the second face image corresponding to each group of video frames may also be determined in an arbitrary manner.
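As a concrete illustration of this extraction step, the following is a minimal sketch assuming OpenCV is available and the target face video is accessible as a local file; the function name, the pairing convention and the grouping of consecutive frames are illustrative choices, not prescribed by the disclosure.

```python
import cv2

def extract_adjacent_frame_pairs(video_path, num_groups=1):
    """Extract up to num_groups groups of two adjacent video frames from the
    video frame sequence corresponding to the target face video."""
    capture = cv2.VideoCapture(video_path)
    groups = []
    ok, previous_frame = capture.read()
    while ok and len(groups) < num_groups:
        ok, current_frame = capture.read()
        if not ok:
            break
        # Either ordering is permitted above; here the earlier frame is taken
        # as the first face image and the later one as the second face image.
        groups.append((previous_frame, current_frame))
        previous_frame = current_frame
    capture.release()
    return groups
```

Calling the function with num_groups greater than one yields consecutive overlapping groups (frames 1-2, 2-3, and so on), which is one of the arbitrary grouping choices the disclosure permits.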
Step 202: determining a face key point from the first face image as a first face key point.
In this embodiment, based on the first face image obtained in step 201, the execution body may determine a face key point from the first face image as the first face key point.
In practice, a face key point may be a key point in a face (a virtual face or a real face), specifically, a point affecting the face contour or the shape of a facial feature. As an example, the face key point may be a point corresponding to the nose, a point corresponding to an eye, a point corresponding to a mouth corner, or the like.
In this embodiment, the first face key point may be characterized in various forms. For example, it may be characterized by a point marked in the first face image; alternatively, it may be characterized in the form of coordinates, where the coordinates may be used to indicate the position of the first face key point in the first face image.
Specifically, the execution body may determine the face key point from the first face image as the first face key point using various methods. As an example, the execution body may display the first face image, and then acquire a face key point selected by a user from the first face image as the first face key point.
In some optional implementations of this embodiment, the execution body may input the first face image into a pre-trained face key point identification model to obtain a face key point as the first face key point.
In this implementation, the face key point identification model may be used to characterize the corresponding relationship between a face image and the face key points corresponding to the face image. Specifically, as an example, the face key point identification model may be a mapping table pre-established by a technician based on statistics of a large number of face images and their corresponding face key points, storing a plurality of face images and their corresponding face key points; or it may be a model obtained by training an initial model (for example, a neural network) with a machine learning method based on preset training samples.
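As a hedged sketch of this optional implementation, the snippet below uses dlib's publicly available 68-landmark shape predictor as a stand-in for the pre-trained face key point identification model; the disclosure does not prescribe a particular model, and the model file path is an assumption about a separately obtained artifact.

```python
import dlib

face_detector = dlib.get_frontal_face_detector()
# The landmark model file is an assumed, separately downloaded artifact.
keypoint_model = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def identify_face_key_points(face_image):
    """Input a face image and obtain its face key points as (x, y) coordinates."""
    detections = face_detector(face_image)
    if not detections:
        return []
    landmarks = keypoint_model(face_image, detections[0])
    return [(landmarks.part(i).x, landmarks.part(i).y)
            for i in range(landmarks.num_parts)]
```

Because the training samples fix which key points such a model identifies, applying the same function to the second face image mirrors the optional implementation of step 203 below.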
Step 203: determining a face key point corresponding to the first face key point from the second face image as a second face key point.
In this embodiment, based on the second face image obtained in step 201, the execution body may determine a face key point corresponding to the first face key point from the second face image as the second face key point. Here, the face key point corresponding to the first face key point is a face key point whose corresponding face position is identical to the face position corresponding to the first face key point. For example, if the face position corresponding to the first face key point is a mouth corner, the face position corresponding to the face key point corresponding to the first face key point is also a mouth corner.
Specifically, the execution body may determine, using various methods, the second face key point corresponding to the first face key point from the second face image. As an example, the execution body may track the first face key point using an existing optical flow method, and thereby determine the second face key point corresponding to the first face key point from the second face image.
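A minimal sketch of this optical-flow tracking, using pyramidal Lucas-Kanade as one concrete "existing optical flow method", is given below; it assumes grayscale versions of the two face images, and the window-size and pyramid parameters are illustrative.

```python
import cv2
import numpy as np

def track_first_key_points(first_gray, second_gray, first_key_points):
    """Track the first face key points into the second face image to obtain
    the corresponding second face key points."""
    previous_points = np.array(first_key_points,
                               dtype=np.float32).reshape(-1, 1, 2)
    current_points, status, _err = cv2.calcOpticalFlowPyrLK(
        first_gray, second_gray, previous_points, None,
        winSize=(21, 21), maxLevel=3)
    # Keep only the key points the tracker located successfully.
    return [tuple(point.ravel())
            for point, ok in zip(current_points, status) if ok[0] == 1]
```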
In some optional implementations of this embodiment, the execution body may input the second face image into the above face key point identification model to obtain a face key point as the second face key point.
Here, it should be noted that the model into which the second face image is input is the face key point identification model used to obtain the first face key point, and the face key points that the face key point identification model can identify are determined by the training samples used in training the model and are thus predetermined (for example, the model may identify points corresponding to the eyes). Therefore, by inputting the second face image into the above face key point identification model used to obtain the first face key point, the second face key point corresponding to the first face key point can be obtained.
Step 204: generating, based on the distance between the first face key point and the second face key point, a detection result used to indicate whether the face corresponding to the target face video is a living face.
In this embodiment, based on the first face key point obtained in step 202 and the second face key point obtained in step 203, the execution body may determine the distance between the first face key point and the second face key point, and generate, based on the determined distance, a detection result used to indicate whether the face corresponding to the target face video is a living face. The detection result may include, but is not limited to, at least one of: a number, text, a symbol, an image, or audio.
In this embodiment, the distance between the first face key point and the second face key point refers to the distance between the first face key point and the second face key point in the same coordinate system. Specifically, since the first face image and the second face image are identical in shape and size, the execution body may establish a coordinate system based on the first face image or the second face image, then map the second face key point or the first face key point into the established coordinate system, and thereby determine the distance between the first face key point and the second face key point.
It should be noted that the execution body may establish the coordinate system based on a face image (the first face image or the second face image) using various methods. For example, a rectangular coordinate system may be established with a face key point (the first face key point or the second face key point) in the face image as the origin and any two mutually perpendicular coordinate axes as the x-axis and the y-axis.
Here, for the first face key point and the second face key point in the same coordinate system, the execution body may determine the distance between the two using various methods. For example, a line segment may be drawn between the first face key point and the second face key point, and the length of the obtained line segment may be determined as the distance between the first face key point and the corresponding second face key point; alternatively, the coordinates of the first face key point and the corresponding second face key point may be determined, and the distance between the first face key point and the corresponding second face key point may then be determined from the determined coordinates using the distance formula.
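Written out with such coordinates, and assuming the distance formula referred to above is the ordinary two-point (Euclidean) distance formula, the distance between the first face key point at $(x_1, y_1)$ and the corresponding second face key point at $(x_2, y_2)$ in the same coordinate system is:

$$d = \sqrt{(x_1 - x_2)^2 + (y_1 - y_2)^2}$$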
Specifically, based on the distance between the first face key point and the second face key point, the execution body may generate, using various methods, the detection result used to indicate whether the face corresponding to the target face video is a living face. For example, the execution body may determine whether the distance between the first face key point and the second face key point is 0. If not, a detection result (for example, "1") used to indicate that the face corresponding to the target face video is a living face may be generated; if so, a detection result (for example, "-1") used to indicate that the face corresponding to the target face video is a non-living face may be generated.
In some optional implementations of this embodiment, based on the distance between the first face key point and the second face key point, the execution body may generate the detection result used to indicate whether the face corresponding to the target face video is a living face through the following steps. First, the execution body may determine whether the distance between the first face key point and the second face key point is greater than or equal to a preset threshold. Then, in response to determining that the distance is greater than or equal to the preset threshold, the execution body may generate a detection result used to indicate that the face corresponding to the target face video is a living face. Here, the preset threshold may be a minimum distance value preset by a technician.
In some optional implementations of this embodiment, the execution body may further generate, in response to determining that the distance between the first face key point and the second face key point is less than the preset threshold, a detection result used to indicate that the face corresponding to the target face video is a non-living face.
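Putting the two optional implementations above together, a minimal sketch of the thresholded decision follows; the numeric threshold and the "1"/"-1" encoding of the detection result are illustrative values taken from the examples in this section.

```python
def generate_detection_result(first_key_point, second_key_point,
                              preset_threshold=2.0):
    """Return "1" (living face) when the key point distance reaches the
    preset threshold, and "-1" (non-living face) otherwise."""
    x1, y1 = first_key_point
    x2, y2 = second_key_point
    distance = ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
    return "1" if distance >= preset_threshold else "-1"
```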
It should be particularly noted that, when at least two groups of video frames are extracted in step 201, the execution body may, for each group of the at least two groups of video frames, determine the first face key point and the second face key point corresponding to the group through step 202 and step 203, and then generate, based on the distance between the first face key point and the second face key point corresponding to the group, a detection result corresponding to the group. Thus, for the at least two groups of video frames, the execution body may generate at least two detection results.
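When at least two groups are processed, the disclosure leaves open how the resulting at least two detection results are used together; the sketch below applies one illustrative aggregation rule (a living face as soon as any group shows sufficient key point motion), which is an assumption rather than part of the disclosure.

```python
def aggregate_group_results(group_distances, preset_threshold=2.0):
    """Generate one detection result per group of video frames, plus an
    illustrative overall result across all groups."""
    results = ["1" if distance >= preset_threshold else "-1"
               for distance in group_distances]
    overall = "1" if "1" in results else "-1"
    return results, overall
```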
In some optional implementations of this embodiment, after obtaining the detection result, the execution body may further, in response to determining that the detection result indicates that the face corresponding to the target face video is a living face, detect the video frame sequence corresponding to the target face video based on an optical flow method, and generate a final result used to indicate whether the face corresponding to the target face video is a living face. The final result may be used for presentation, and may include, but is not limited to, at least one of: a number, text, a symbol, an image, or audio.
In practice, according to the optical flow method, the "movement" of pixels can be determined using the temporal variation and the correlation of the pixel intensity data of the video frames in the video frame sequence, and statistical analysis of the motion information may then be performed using a Difference of Gaussian filter, a support vector machine or the like, so as to determine whether the face corresponding to the video frame sequence is a living face.
In this implementation, the optical-flow-based detection is performed only when the corresponding detection result indicates that the face video corresponds to a living face; when the detection result indicates that the face video corresponds to a non-living face, the optical-flow-based detection may be skipped. In this way, the face videos to be detected based on the optical flow method can be screened, which helps to improve the efficiency of living body detection.
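A hedged sketch of this optical-flow stage follows, using dense Farneback flow to measure pixel "movement" between consecutive frames; the Difference of Gaussian filter or support vector machine analysis mentioned above is abbreviated here to a simple mean-magnitude statistic, and the motion cutoff value is an assumption.

```python
import cv2
import numpy as np

def optical_flow_final_result(gray_frames, motion_cutoff=0.5):
    """Estimate pixel movement across the video frame sequence and emit a
    final result; gray_frames is the sequence as grayscale images."""
    magnitudes = []
    for previous_frame, current_frame in zip(gray_frames, gray_frames[1:]):
        flow = cv2.calcOpticalFlowFarneback(
            previous_frame, current_frame, None,
            0.5, 3, 15, 3, 5, 1.2, 0)
        magnitude, _angle = cv2.cartToPolar(flow[..., 0], flow[..., 1])
        magnitudes.append(float(np.mean(magnitude)))
    is_living = np.mean(magnitudes) >= motion_cutoff
    return "living face" if is_living else "non-living face"
```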
With continued reference to Fig. 3, Fig. 3 is a schematic diagram of an application scenario of the method for detecting a living body according to this embodiment. In the application scenario of Fig. 3, the server 301 may first extract two adjacent video frames from the video frame sequence corresponding to a target face video 302 as a first face image 3031 and a second face image 3032. Then, the server 301 may determine a face key point from the first face image 3031 as a first face key point 3041, and determine a face key point corresponding to the first face key point 3041 from the second face image 3032 as a second face key point 3042. Finally, the server 301 may determine the distance 305 between the first face key point 3041 and the second face key point 3042, and generate, based on the distance 305, a detection result 306 used to indicate whether the face corresponding to the target face video 302 is a living face.
At present, living body detection in the prior art usually requires analysis and operations on each pixel of the video frames in a video, and such methods have high detection costs and low efficiency.
The method provided by the above embodiment of the disclosure can determine whether the face corresponding to a face video is a living face based on the distances between corresponding face key points in adjacent face images corresponding to the face video. It can be understood that, when a distance arises between corresponding face key points, or the distance is greater than or equal to a preset threshold, it can be determined that the face corresponding to the face video has performed a movement, and it can thus be determined that the face corresponding to the face video is a living face. In this way, simpler living body detection can be achieved, which helps to improve the efficiency of living body detection. Moreover, performing living body detection based on the face key points of face images can reduce the complexity of the detection and helps to reduce CPU consumption during living body detection. Furthermore, the method provided by the disclosure may be used to verify an initial result previously generated by another living body detection method, which helps to improve the accuracy of living body detection. In addition, the disclosure may be used as a preprocessing step of prior art living body detection, so that face videos on which prior art living body detection is to be performed can be screened, which helps to improve the efficiency of living body detection.
With further reference to Fig. 4, a flow 400 of another embodiment of the method for detecting a living body is shown. The flow 400 of the method for detecting a living body includes the following steps.
Step 401: extracting adjacent video frames from a video frame sequence corresponding to a target face video as a first face image and a second face image.
In this embodiment, an execution body of the method for detecting a living body (for example, the server 105 shown in Fig. 1) may acquire the target face video through a wired or wireless connection, and extract adjacent video frames from the video frame sequence corresponding to the target face video as the first face image and the second face image. Here, the target face video may be a face video on which living body detection is to be performed. Specifically, the target face video may be a video obtained by shooting a face.
Step 402: determining a face key point from the first face image as a first face key point.
In this embodiment, based on the first face image obtained in step 401, the execution body may determine a face key point from the first face image as the first face key point.
Step 403: determining a face key point corresponding to the first face key point from the second face image as a second face key point.
In this embodiment, based on the second face image obtained in step 401, the execution body may determine a face key point corresponding to the first face key point from the second face image as the second face key point. Here, the face key point corresponding to the first face key point is a face key point whose corresponding face position is identical to the face position corresponding to the first face key point.
Step 404: generating, based on the distance between the first face key point and the second face key point, a detection result used to indicate whether the face corresponding to the target face video is a living face.
In this embodiment, based on the first face key point obtained in step 402 and the second face key point obtained in step 403, the execution body may determine the distance between the first face key point and the second face key point, and generate, based on the determined distance, a detection result used to indicate whether the face corresponding to the target face video is a living face. The detection result may include, but is not limited to, at least one of: a number, text, a symbol, an image, or audio.
In this embodiment, the distance between the first face key point and the second face key point refers to the distance between the first face key point and the second face key point when both are in the same image.
The above step 401, step 402, step 403 and step 404 are consistent with step 201, step 202, step 203 and step 204 in the foregoing embodiment, respectively, and the descriptions above of step 201, step 202, step 203 and step 204 are also applicable to step 401, step 402, step 403 and step 404, which are not repeated here.
Step 405: acquiring an initial result previously generated for the target face video.
In this embodiment, the execution body may acquire, remotely or locally through a wired or wireless connection, the initial result previously generated for the target face video. Here, the initial result is a result generated in advance using an existing living body detection method and used to indicate whether the face corresponding to the target face video is a living face, and may include, but is not limited to, at least one of: a number, text, a symbol, an image, or audio.
Step 406: generating, based on the initial result and the detection result, a final result used to indicate whether the face corresponding to the target face video is a living face.
In this embodiment, based on the initial result acquired in step 405 and the detection result obtained in step 404, the execution body may generate the final result used to indicate whether the face corresponding to the target face video is a living face. The final result may be used for presentation, and may include, but is not limited to, at least one of: a number, text, a symbol, an image, or audio.
Specifically, based on the initial result and the detection result, the execution body may generate the final result using various methods. For example, the execution body may, in response to determining that the initial result indicates that the face corresponding to the target face video is a living face while the detection result indicates that the face corresponding to the target face video is a non-living face, generate a final result used to indicate that the face corresponding to the target face video is a non-living face, and, in response to determining that both the initial result and the detection result indicate that the face corresponding to the target face video is a living face, generate a final result used to indicate that the face corresponding to the target face video is a living face. Alternatively, the execution body may, in response to determining that both the initial result and the detection result indicate that the face corresponding to the target face video is a non-living face, generate a final result used to indicate that the face corresponding to the target face video is a non-living face, and, in response to determining that the initial result and the detection result include a result indicating that the face corresponding to the target face video is a living face, generate a final result used to indicate that the face corresponding to the target face video is a living face.
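The two combination strategies just described reduce to a logical AND and a logical OR over the two results; a minimal sketch follows, with boolean inputs standing in for whichever result encoding is used.

```python
def final_result_strict(initial_is_living, detection_is_living):
    """First strategy: a living face is indicated only when both the initial
    result and the detection result indicate a living face."""
    return initial_is_living and detection_is_living

def final_result_lenient(initial_is_living, detection_is_living):
    """Alternative strategy: a living face is indicated as soon as either
    result indicates a living face."""
    return initial_is_living or detection_is_living
```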
In some optional implementations of this embodiment, after obtaining the final result, the execution body may further send the final result to an electronic device in communication connection with it and control the electronic device to present the final result.
Here, the electronic device may be a terminal or a server. Specifically, the execution body may send a control signal to the electronic device, thereby controlling the electronic device to present the final result. The form of presentation may be determined according to the form of the final result: for example, if the final result is in the form of audio, the presentation may be playing; if the final result is in the form of an image or text, the presentation may be displaying.
In this implementation, since the final result is generated based on both the initial result and the detection result, compared with prior art solutions that present the initial result, this implementation can control the electronic device to present a more accurate result, improving the accuracy of living body detection.
It can be seen from Fig. 4 that, compared with the embodiment corresponding to Fig. 2, the flow 400 of the method for detecting a living body in this embodiment highlights the steps of acquiring the initial result previously generated for the target face video and generating, based on the initial result and the detection result, the final result corresponding to the target face video. The solution described in this embodiment can therefore use the obtained detection result to verify the pre-generated initial result characterizing whether the face corresponding to the target face video is a living face, thereby improving the accuracy of the final result corresponding to the target face video and helping to present a more accurate living body detection result.
With further reference to Fig. 5, as an implementation of the methods shown in the above figures, the disclosure provides an embodiment of an apparatus for detecting a living body. The apparatus embodiment corresponds to the method embodiment shown in Fig. 2, and the apparatus may be specifically applied to various electronic devices.
As shown in Fig. 5, the apparatus 500 for detecting a living body of this embodiment includes: an extraction unit 501, a first determination unit 502, a second determination unit 503 and a first generation unit 504. The extraction unit 501 is configured to extract adjacent video frames from a video frame sequence corresponding to a target face video as a first face image and a second face image; the first determination unit 502 is configured to determine a face key point from the first face image as a first face key point; the second determination unit 503 is configured to determine a face key point corresponding to the first face key point from the second face image as a second face key point; and the first generation unit 504 is configured to generate, based on the distance between the first face key point and the second face key point, a detection result used to indicate whether the face corresponding to the target face video is a living face.
In this embodiment, the extraction unit 501 of the apparatus 500 for detecting a living body may acquire the target face video remotely or locally through a wired or wireless connection, and extract adjacent video frames from the video frame sequence corresponding to the target face video as the first face image and the second face image. Here, the target face video may be a face video on which living body detection is to be performed. Specifically, the target face video may be a video obtained by shooting a face.
In this embodiment, based on the first face image obtained by the extraction unit 501, the first determination unit 502 may determine a face key point from the first face image as the first face key point.
In this embodiment, based on the second face image obtained by the extraction unit 501, the second determination unit 503 may determine a face key point corresponding to the first face key point from the second face image as the second face key point. Here, the face key point corresponding to the first face key point is a face key point whose corresponding face position is identical to the face position corresponding to the first face key point. For example, if the face position corresponding to the first face key point is a mouth corner, the face position corresponding to the face key point corresponding to the first face key point is also a mouth corner.
In this embodiment, based on the first face key point obtained by the first determination unit 502 and the second face key point obtained by the second determination unit 503, the first generation unit 504 may determine the distance between the first face key point and the second face key point, and generate, based on the determined distance, a detection result used to indicate whether the face corresponding to the target face video is a living face. The detection result may include, but is not limited to, at least one of: a number, text, a symbol, an image, or audio.
In this embodiment, the distance between the first face key point and the second face key point refers to the distance between the first face key point and the second face key point when both are in the same image.
In some optional implementations of this embodiment, the first determination unit 502 may be further configured to: input the first face image into a pre-trained face key point identification model to obtain a face key point as the first face key point.
In some optional implementations of this embodiment, the second determination unit 503 may be further configured to: input the second face image into the face key point identification model to obtain a face key point as the second face key point.
In some optional implementations of this embodiment, the first generation unit 504 may include: a determination module (not shown in the figure), configured to determine whether the distance between the first face key point and the second face key point is greater than or equal to a preset threshold; and a first generation module (not shown in the figure), configured to generate, in response to determining that the distance is greater than or equal to the preset threshold, a detection result used to indicate that the face corresponding to the target face video is a living face.
In some optional implementations of this embodiment, the first generation unit 504 may further include: a second generation module (not shown in the figure), configured to generate, in response to determining that the distance between the first face key point and the second face key point is less than the preset threshold, a detection result used to indicate that the face corresponding to the target face video is a non-living face.
In some optional implementations of this embodiment, the apparatus 500 may further include: an acquisition unit (not shown in the figure), configured to acquire an initial result previously generated for the target face video, where the initial result is used to indicate whether the face corresponding to the target face video is a living face; and a second generation unit (not shown in the figure), configured to generate, based on the initial result and the detection result, a final result used to indicate whether the face corresponding to the target face video is a living face.
In some optional implementations of this embodiment, the apparatus 500 may further include: a sending unit (not shown in the figure), configured to send the final result to an electronic device in communication connection and control the electronic device to present the final result.
In some optional implementations of this embodiment, the apparatus 500 may further include: a third generation unit (not shown in the figure), configured to, in response to determining that the detection result indicates that the face corresponding to the target face video is a living face, detect the video frame sequence corresponding to the target face video based on an optical flow method, and generate a final result used to indicate whether the face corresponding to the target face video is a living face.
It can be understood that the units recorded in the apparatus 500 correspond to the steps in the method described with reference to Fig. 2. Accordingly, the operations, features and beneficial effects described above for the method are also applicable to the apparatus 500 and the units included therein, and are not repeated here.
The apparatus 500 provided by the above embodiment of the disclosure can determine whether the face corresponding to a face video is a living face based on the distances between corresponding face key points in adjacent face images corresponding to the face video. It can be understood that, when a distance arises between corresponding face key points, or the distance is greater than or equal to a preset threshold, it can be determined that the face corresponding to the face video has performed a movement, and it can thus be determined that the face corresponding to the face video is a living face. In this way, simpler living body detection can be achieved, which helps to improve the efficiency of living body detection. Moreover, performing living body detection based on the face key points of face images can reduce the complexity of the detection and helps to reduce CPU consumption during living body detection. Furthermore, the method provided by the disclosure may be used to verify an initial result previously generated by another living body detection method, which helps to improve the accuracy of living body detection. In addition, the disclosure may be used as a preprocessing step of prior art living body detection, so that face videos on which prior art living body detection is to be performed can be screened, which helps to improve the efficiency of living body detection.
Referring now to Fig. 6, a structural schematic diagram of an electronic device 600 (for example, the terminal device 101, 102 or 103 or the server 105 shown in Fig. 1) suitable for implementing the embodiments of the disclosure is shown. The terminal device in the embodiments of the disclosure may include, but is not limited to, mobile terminals such as a mobile phone, a laptop, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable media player) and a vehicle-mounted terminal (for example, a vehicle-mounted navigation terminal), and fixed terminals such as a digital TV and a desktop computer. The electronic device shown in Fig. 6 is merely an example, and should not impose any limitation on the functions and the scope of use of the embodiments of the disclosure.
As shown in Fig. 6, the electronic device 600 may include a processing apparatus (for example, a central processing unit, a graphics processing unit, etc.) 601, which may execute various appropriate actions and processes according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage apparatus 608 into a random access memory (RAM) 603. The RAM 603 also stores various programs and data required by the operations of the electronic device 600. The processing apparatus 601, the ROM 602 and the RAM 603 are connected to each other through a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
In general, the following apparatuses may be connected to the I/O interface 605: an input apparatus 606 including, for example, a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, an accelerometer and a gyroscope; an output apparatus 607 including, for example, a liquid crystal display (LCD), a speaker and a vibrator; a storage apparatus 608 including, for example, a magnetic tape and a hard disk; and a communication apparatus 609. The communication apparatus 609 may allow the electronic device 600 to communicate wirelessly or by wire with other devices to exchange data. Although Fig. 6 shows the electronic device 600 having various apparatuses, it should be understood that it is not required to implement or possess all the apparatuses shown; more or fewer apparatuses may alternatively be implemented or possessed.
In particular, according to the embodiments of the disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, an embodiment of the disclosure includes a computer program product, which includes a computer program carried on a computer-readable medium, and the computer program contains program code for executing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication apparatus 609, or installed from the storage apparatus 608, or installed from the ROM 602. When the computer program is executed by the processing apparatus 601, the above functions defined in the method of the embodiments of the disclosure are executed.
It should be noted that the computer-readable medium described in the disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example but not limited to, an electric, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any appropriate combination of the above. In the disclosure, the computer-readable storage medium may be any tangible medium containing or storing a program, and the program may be used by or in combination with an instruction execution system, apparatus or device. In the disclosure, the computer-readable signal medium may include a data signal propagated in a baseband or as a part of a carrier wave, in which computer-readable program code is carried. The propagated data signal may take various forms, including but not limited to an electromagnetic signal, an optical signal, or any appropriate combination of the above. The computer-readable signal medium may also be any computer-readable medium other than the computer-readable storage medium, and the computer-readable signal medium may send, propagate or transmit a program used by or in combination with an instruction execution system, apparatus or device. The program code contained on the computer-readable medium may be transmitted by any suitable medium, including but not limited to: an electric wire, an optical cable, RF (radio frequency) or the like, or any appropriate combination of the above.
The above computer-readable medium may be included in the above electronic device, or may exist alone without being assembled into the electronic device. The above computer-readable medium carries one or more programs, and when the one or more programs are executed by the electronic device, the electronic device is caused to: extract adjacent video frames from a video frame sequence corresponding to a target face video as a first face image and a second face image; determine a face key point from the first face image as a first face key point; determine a face key point corresponding to the first face key point from the second face image as a second face key point; and generate, based on the distance between the first face key point and the second face key point, a detection result used to indicate whether the face corresponding to the target face video is a living face.
The computer program code for executing the operations of the disclosure may be written in one or more programming languages or a combination thereof. The programming languages include object-oriented programming languages such as Java, Smalltalk and C++, and also include conventional procedural programming languages such as the "C" language or similar programming languages. The program code may be executed entirely on a user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In a case involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet by using an Internet service provider).
The flowcharts and block diagrams in the accompanying drawings illustrate the possible architectures, functions and operations of the systems, methods and computer program products according to various embodiments of the disclosure. In this regard, each block in the flowcharts or block diagrams may represent a module, a program segment, or a part of code, and the module, the program segment or the part of code contains one or more executable instructions for implementing the specified logical functions. It should also be noted that, in some alternative implementations, the functions marked in the blocks may occur in an order different from that marked in the drawings. For example, two blocks shown in succession may actually be executed substantially in parallel, or they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, may be implemented by a dedicated hardware-based system executing the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units involved in the embodiments of the disclosure may be implemented by software or by hardware. The names of the units do not, in certain cases, constitute a limitation on the units themselves. For example, the first generation unit may also be described as "a unit for generating a detection result".
The above description is only the preferred embodiments of the disclosure and an explanation of the applied technical principles. Those skilled in the art should understand that the scope of the invention involved in the disclosure is not limited to the technical solutions formed by the specific combination of the above technical features, and should also cover other technical solutions formed by any combination of the above technical features or their equivalent features without departing from the above inventive concept, for example, technical solutions formed by replacing the above features with technical features having similar functions disclosed in (but not limited to) the disclosure.