
Living body face detection method and device

Info

Publication number
CN115457664A
Authority
CN
China
Prior art keywords
face
action
image
face image
dimensional
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211019029.5A
Other languages
Chinese (zh)
Inventor
李亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Advanced New Technologies Co Ltd
Original Assignee
Advanced New Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Advanced New Technologies Co Ltd
Priority to CN202211019029.5A
Publication of CN115457664A
Legal status: Pending

Abstract

A living body face detection method and device are provided. A server collects video images in real time and detects a face image in the video images, generates a three-dimensional face model of the face image according to the motion posture of the face image in the video images, and judges whether the motion posture and the generated three-dimensional face model meet a preset result. If they do, the target corresponding to the face image is determined to be a living face; otherwise, the target is determined not to be a living face. By this method, the server can perform living body face detection on the target before executing the identity authentication process based on face matching, and execute the identity authentication process only when the target is determined to be a living face, which improves the reliability of the identity authentication result.

Description

Living body face detection method and device
Technical Field
The present application relates to the field of computer technologies, and in particular, to a living body face detection method and apparatus.
Background
At present, biometric technology is widely applied to the security field, and is one of the main means for authenticating the identity of a user. The human face is a biological feature commonly used in the biometric technology.
In the prior art, a user may register his or her face image with an authentication server in advance. The authentication server stores the face image and its correspondence with the user's identity; once registration is complete, the user is a legitimate user.
Correspondingly, when the authentication server authenticates a user's identity, it can photograph the user's face through a camera to collect a face image and then match the collected face image against the face images registered by legitimate users. If the matching succeeds, the user is determined to be a legitimate user and the user's identity can be established; if the matching fails, the user is determined to be unregistered and therefore not a legitimate user.
However, to impersonate a legitimate user, an attacker may place a photo, a recorded video, or a wax image of a legitimate user in front of the camera used by the authentication server to acquire face images during identity authentication. In this case, the face image acquired by the authentication server may successfully match a face image registered by the legitimate user, causing the authentication server to determine that the attacker is the legitimate user.
Disclosure of Invention
The embodiments of the application provide a living body face detection method and device, which solve the prior-art problem that, when an attacker uses a photo, a recorded video, or a wax image of a legitimate user in an identity authentication process based on face matching, the resulting identity authentication result is unreliable.
The embodiment of the application provides a method for detecting a human face of a living body, which comprises the following steps:
acquiring a video image in real time, and detecting a face image in the video image;
generating a three-dimensional face model of the face image according to the motion posture of the face image in the video image;
judging whether the motion posture and the three-dimensional face model accord with a preset result or not;
if so, determining that the target corresponding to the face image is a living face;
otherwise, determining that the target corresponding to the face image is not the living body face.
The embodiment of the application provides a living body face detection device, includes:
the detection module is used for acquiring a video image in real time and detecting a face image in the video image;
the generating module is used for generating a three-dimensional face model of the face image according to the motion posture of the face image in the video image;
and the judging module is used for judging whether the motion posture and the three-dimensional face model accord with a preset result, if so, determining that the target corresponding to the face image is a living face, otherwise, determining that the target corresponding to the face image is not the living face.
The embodiments of the application provide a living body face detection method and device: a server collects video images in real time, detects a face image in the video images, generates a three-dimensional face model of the face image according to the motion posture of the face image in the video images, and judges whether the motion posture and the generated three-dimensional face model meet a preset result; if so, the target corresponding to the face image is determined to be a living face, and otherwise the target is determined not to be a living face. By this method, the server can perform living body face detection on the target before executing the identity authentication process based on face matching, and execute the identity authentication process only when the target is determined to be a living face. Even if an attacker attempts to impersonate a legitimate user with a photo, a recorded video, or a wax image of that user, the photo or wax image cannot perform the action corresponding to the action prompt sent by the server, and a reasonable three-dimensional face model cannot be generated from the recorded video. The attacker's impersonation therefore cannot pass the living body face detection, the attacker cannot pass identity authentication, and the reliability of the identity authentication result is improved.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is a living human face detection process provided in an embodiment of the present application;
fig. 2 is a detailed process of living human face detection provided in an embodiment of the present application;
fig. 3 is a schematic structural diagram of a living human face detection device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be described in detail and completely with reference to the following specific embodiments of the present application and the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Fig. 1 is a process of detecting a living human face provided in an embodiment of the present application, which specifically includes the following steps:
s101: the server collects video images in real time and detects face images in the video images.
In the embodiment of the application, the server may be an authentication server for performing identity authentication on a user based on face matching. The video images can be acquired in real time through a camera of the server, and the video images can also be acquired in real time through a camera of a terminal (such as a mobile phone, a tablet computer, a digital camera and the like) and uploaded to the server.
Generally, in the process of acquiring a video image in real time, a user to be authenticated can place the face of the user in front of a camera, so that a server can detect a face image in the video image and execute subsequent processes.
In practical applications, the server may detect the face image in the video image using existing methods, including but not limited to: face detection based on a cascade classifier, face detection based on Histogram of Oriented Gradients (HOG) features and a Support Vector Machine (SVM), and the like. A minimal sketch of the cascade-classifier approach is given below.
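To make the above concrete, the following is a minimal sketch (not part of the patent) of the cascade-classifier approach using OpenCV; the cascade file, camera index, and detection parameters are illustrative assumptions.

```python
# Minimal sketch of cascade-classifier face detection on a real-time
# video stream with OpenCV. Cascade file, camera index, and detection
# parameters are illustrative assumptions, not taken from the patent.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)  # acquire video images in real time

face_frame = None
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) > 0:
        face_frame = frame  # a face image was detected in the video image
        break

cap.release()
```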
Further, when the server detects the face image, an action prompt can be sent out to prompt a target corresponding to the face image to execute an action corresponding to the action prompt.
In the embodiments of the application, when the server detects a face image, the target corresponding to the face image in front of the camera can be considered ready to undergo living body face detection. Since the target may be a living face or a static object such as a photo or wax image used by an attacker, in order to prevent malicious impersonation, the server may ask the target to perform a specific action and then judge, from the process of the target performing that action, whether the target is a static object.
Specifically, when the server detects a face image, it may send an action prompt according to a preset policy. The action prompt may be issued by voice broadcast, text prompt, or an on-screen pattern prompt, and the preset policy may define the relevant details of the action prompt, for example, the types of action prompts and their explanations, the format of the action prompt, how an action prompt is selected, how it is sent, and which action prompts are sent in different application scenarios.
S102: and generating a three-dimensional face model of the face image according to the motion posture of the face image in the video image.
In the embodiments of the application, besides possibly using a photo or wax image of the legitimate user, an attacker may also place a pre-recorded video of the legitimate user in front of the camera to impersonate that user. Since the legitimate user in the video may already have performed the action corresponding to the action prompt, in order to prevent an attacker from passing the action-prompt verification with such a video, the server may further verify whether the target is a two-dimensional object such as a video. One possible verification method relies on the fact that a video is a two-dimensional object while a living face is a three-dimensional object: after the server sends the action prompt, it can generate a three-dimensional face model of the face image according to the motion posture of the face image in the video images acquired in real time.
S103: and judging whether the motion posture and the three-dimensional face model accord with a preset result, if so, executing the step S104, otherwise, executing the step S105.
According to the above description, after the server sends the action prompt, the target can be determined to be a living face when both the motion posture of the face image in the video images and the generated three-dimensional face model conform to the preset result; when either the motion posture or the generated three-dimensional face model does not conform to the preset result, the target can be determined not to be a living face.
In practical applications, in order to enhance the fault tolerance of the living human face detection method provided by the embodiment of the present application, after sending the action prompt, the server may also allow the target to execute an action corresponding to the action prompt within a specified time, that is, the target may be given multiple opportunities to retry and correct its own action within the specified time.
S104: and determining that the target corresponding to the face image is a living body face.
S105: and determining that the target corresponding to the face image is not a living face.
By this method, the server can perform living body face detection on the target before executing the identity authentication process based on face matching, and execute the identity authentication process only when the target is determined to be a living face. Even if an attacker attempts to impersonate a legitimate user with a photo, a recorded video, or a wax image of that user, the photo or wax image cannot perform the action corresponding to the action prompt sent by the server, and a reasonable three-dimensional face model cannot be generated from the recorded video. The attacker's impersonation therefore cannot pass the living body face detection, the attacker cannot pass identity authentication, and the reliability of the identity authentication result is improved.
In the embodiments of the application, the server may define various actions and corresponding action prompts in advance. Generally, the actions related to the human face mainly include head actions and face actions: a head action may be at least one of shaking the head to the left, shaking the head to the right, raising the head, lowering the head, and the like, and a face action may be at least one of blinking, opening the mouth, frowning, and the like. Correspondingly, for step S101, the server sending an action prompt may specifically include: the server selects n actions from the head actions and face actions according to a preset policy and sends out the action prompts corresponding to the n actions, where n is a positive integer. For example, when n = 2, the server may select two actions; assuming the server selects shaking the head to the left among the head actions and opening the mouth among the face actions, the issued action prompt corresponding to the two actions may be "shake the head to the left and open the mouth".
It should be noted that the preset policy may include the manner in which the server selects actions from the various predefined actions, for example random selection or sequential selection. Of course, the above is only one specific execution process of the sub-step "the server sends out an action prompt"; the server may obviously adopt other execution processes, for example randomly selecting a single action among all predefined actions instead of several, and sending out the action prompt corresponding to that action, so as to speed up the subsequent processing. A sketch of a random-selection policy follows.
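As a sketch only (the patent does not prescribe an implementation), random selection of n actions and construction of the corresponding prompt could look as follows; the action names and prompt wording are assumptions.

```python
# Sketch of the "select n actions by a preset policy" sub-step with a
# random-selection policy. Action names and prompt wording are
# illustrative assumptions.
import random

HEAD_ACTIONS = ["shake head left", "shake head right", "raise head", "lower head"]
FACE_ACTIONS = ["blink", "open mouth", "frown"]

def make_action_prompt(n: int = 2):
    """Pick n actions at random and build the matching prompt text."""
    actions = random.sample(HEAD_ACTIONS + FACE_ACTIONS, n)
    return actions, "Please " + " and ".join(actions)

actions, prompt = make_action_prompt(2)
print(prompt)  # e.g. "Please shake head left and open mouth"
```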
In the embodiments of the application, for step S102, generating a three-dimensional face model of the face image according to the motion posture of the face image in the video image specifically includes: locating key pixels in the face image, where the key pixels include pixels at the eyes, nose, mouth, and eyebrows in the face image; tracking the image coordinates of the key pixels according to the motion posture of the face image in the video image; and generating a three-dimensional face model of the face image according to the change state of the image coordinates of the key pixels during tracking. Each of the above-mentioned parts may have one or more key pixels.
Specifically, existing methods can be adopted to locate the key pixels in the face image and track their image coordinates. For example, a large number of face picture samples can be used to pre-train a set of classifiers, such as a left eye classifier, a right eye classifier, a left eyebrow classifier, a right eyebrow classifier, a nose classifier, a mouth classifier, and a chin classifier, which are then used to locate and track the key pixels in the face image. In application scenarios with high requirements on the server's processing speed, a cascade-regressor-based method can be adopted instead. A sketch using a pretrained landmark predictor is shown below.
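For illustration, the sketch below locates such key pixels with dlib's pretrained 68-point landmark predictor; the model file path is an assumption, and the .dat file must be obtained separately.

```python
# Sketch of locating key pixels (eyes, nose, mouth, eyebrows) with the
# dlib 68-point landmark predictor. The model file path is an
# illustrative assumption; the pretrained file is downloaded separately.
import cv2
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def locate_key_pixels(frame):
    """Return the (x, y) image coordinates of the facial key pixels,
    or an empty list if no face is found in the frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector(gray)
    if not faces:
        return []
    shape = predictor(gray, faces[0])
    return [(shape.part(i).x, shape.part(i).y) for i in range(shape.num_parts)]
```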
Further, generating a three-dimensional face model of the face image according to the change state of the image coordinates of the key pixels during tracking specifically includes: determining, in real time, the optical flow values of the pixels in the face image according to the change state of the image coordinates of the key pixels during tracking; determining, in real time, the sum of the optical flow values of the key pixels; and, when the sum of the optical flow values does not increase within a specified time, generating the three-dimensional face model of the face image according to the optical flow values of the key pixels.
The optical flow is a vector with magnitude and direction that reflects the motion state of the corresponding pixel across consecutive images; the optical flow value is the magnitude of the optical flow. When the sum of the optical flow values does not increase within a specified time, the target is considered to have performed the action corresponding to the action prompt sent by the server; at that point the optical flow values of the pixels in the face image are relatively stable and have accumulated enough information to generate the three-dimensional face model of the face image.
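One possible realization of this step (a sketch under assumptions, not the patent's mandated method) tracks the key pixels with pyramidal Lucas-Kanade optical flow and watches the running sum of flow magnitudes for stability:

```python
# Sketch: track key pixels with pyramidal Lucas-Kanade optical flow and
# accumulate per-pixel flow magnitudes until the running sum stops
# increasing. The stability tolerance is an illustrative assumption.
import cv2
import numpy as np

def flow_step(prev_gray, gray, points):
    """One tracking step: returns updated points and per-point flow magnitudes."""
    p0 = np.float32(points).reshape(-1, 1, 2)
    p1, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, p0, None)
    flow = np.linalg.norm((p1 - p0).reshape(-1, 2), axis=1)
    return p1.reshape(-1, 2), flow

def flow_sum_stable(previous_sum, current_sum, tol=1e-3):
    """True once the sum of optical flow values has stopped increasing,
    i.e. enough motion information has been accumulated."""
    return current_sum - previous_sum <= tol
```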
Furthermore, generating the three-dimensional face model of the face image according to the optical flow value of each key pixel specifically includes: converting the optical flow value of each key pixel into a depth coordinate value, and generating the three-dimensional face model of the face image according to the depth coordinate value and image coordinates of each key pixel.
When the action prompt is executed, generally, the closer a part of the target is to the camera, the larger the optical flow value of the corresponding pixel in the face image, and the two are in a linear proportional relationship. Therefore, the optical flow value of each key pixel can be converted into a depth coordinate value according to this linear proportional relationship; then, after coordinate normalization is performed on each key pixel according to its image coordinates and depth coordinate value, the three-dimensional face model of the face image can be generated.
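The conversion itself can be sketched as below; the linear scale factor k and the normalization scheme are assumptions, not values fixed by the patent.

```python
# Sketch of converting accumulated optical flow magnitudes into depth
# coordinates under the linear-proportionality assumption above, then
# assembling a normalized (x, y, z) point set. The scale factor k is an
# illustrative assumption.
import numpy as np

def build_face_model(image_coords, flow_magnitudes, k=1.0):
    """image_coords: (N, 2) key pixel coordinates; flow_magnitudes: (N,)."""
    xy = np.asarray(image_coords, dtype=np.float64)
    # Larger optical flow => that part of the face is closer to the camera.
    z = k * np.asarray(flow_magnitudes, dtype=np.float64)
    model = np.column_stack([xy, z])
    # Normalize: center on the centroid and scale to unit extent so that
    # models from different captures are comparable.
    model -= model.mean(axis=0)
    extent = np.abs(model).max()
    return model / extent if extent > 0 else model  # (N, 3) face model
```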
The above is a feasible method for generating the three-dimensional face model of the face image when an ordinary (monocular) camera is used to collect the video images. In practical applications, a binocular camera can also be used to collect the video images; a three-dimensional image of the target can then be obtained directly and used as the generated three-dimensional face model, which can improve the processing speed of the server.
In the embodiments of the application, judging whether the motion posture meets the preset result specifically includes: determining the key pixels corresponding to the sent action prompt, and judging whether the displacement values of the determined key pixels within the specified time are within a preset value range; if so, the motion posture is determined to conform to the preset result, and otherwise it is determined not to conform. The following illustrates how the motion posture can be verified for some specific action prompts.
When the action corresponding to the sent action prompt is a blinking action, the server may determine the displacement values of the key pixels of the eye parts (e.g., the key pixels of the upper and lower eyelids) in the face image within the specified time after the action prompt is sent. When a displacement value is greater than a first set threshold, the eyes of the target corresponding to the face image may be considered open; when it is less than a second set threshold, the eyes may be considered closed. If an alternation of eye-opening and eye-closing motions is detected in the video images within the specified time, the blinking motion posture is determined to conform to the preset result.
When the action corresponding to the sent action prompt is a mouth-opening action, the server may determine the displacement values of the key pixels of the mouth part (e.g., the key pixels of the upper and lower lips) in the face image within the specified time after the action prompt is sent. When a displacement value is greater than a third set threshold, the target corresponding to the face image may be considered to have opened its mouth, and the mouth-opening motion posture is determined to conform to the preset result.
When the action corresponding to the sent action prompt is a frowning action, the server may determine the displacement values of the key pixels of the eyebrow parts in the face image within the specified time after the action prompt is sent. When a displacement value is greater than a fourth set threshold, the target corresponding to the face image may be considered to have frowned, and the frowning motion posture is determined to conform to the preset result. Alternatively, the distance between the key pixels of the left and right eyebrows can be checked: when this distance is less than a fifth set threshold, the target is considered to have frowned and the frowning motion posture is determined to conform to the preset result.
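As one concrete illustration of these displacement checks, a blink verifier might look like the following; the thresholds (the first and second set thresholds) and the sampling scheme are assumptions.

```python
# Sketch of the blink check: sample the eyelid opening over the specified
# time and look for an alternation of eye-opening and eye-closing. The
# thresholds are illustrative assumptions that would need calibration.
def eye_opening(upper_eyelid, lower_eyelid):
    """Vertical distance (pixels) between upper and lower eyelid key pixels."""
    return abs(lower_eyelid[1] - upper_eyelid[1])

def blink_conforms(openings, open_thresh=8.0, closed_thresh=3.0):
    """openings: eye_opening values sampled within the specified time."""
    states = []
    for o in openings:
        s = "open" if o > open_thresh else "closed" if o < closed_thresh else None
        if s and (not states or states[-1] != s):
            states.append(s)  # keep only state changes
    # Conforms if the eye was seen open, then closed, then open again.
    return states.count("open") >= 2 and "closed" in states
```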
In this embodiment of the present application, determining whether the three-dimensional face model meets a preset result specifically includes: determining the Euclidean distance between the three-dimensional face model and a preset three-dimensional model, judging whether the Euclidean distance is smaller than a preset distance threshold value, if so, determining that the three-dimensional face model accords with a preset result, otherwise, determining that the three-dimensional face model does not accord with the preset result.
Specifically, the three-dimensional face model and the preset three-dimensional model can be placed in the same three-dimensional coordinate system, and a number of key pixel pairs can be determined between them, where the two key pixels in each pair belong respectively to the three-dimensional face model and the preset three-dimensional model and represent corresponding parts. For example, a key pixel of the nose part on the three-dimensional face model and a key pixel of the nose part on the preset three-dimensional model form a key pixel pair. Then, for each determined key pixel pair, the Euclidean distance between its two key pixels is calculated, and the mean of the calculated Euclidean distances is taken as the Euclidean distance between the three-dimensional face model and the preset three-dimensional model.
The Euclidean distance between two key pixels can be calculated using the following formula:

d(R, S) = \sqrt{(R_x - S_x)^2 + (R_y - S_y)^2 + (R_z - S_z)^2}

where d(R, S) represents the Euclidean distance between key pixel R and key pixel S; the key pixels R and S are located in a three-dimensional coordinate system (x-y-z coordinate system), R_x, R_y, R_z are the coordinate values of key pixel R, and S_x, S_y, S_z are the coordinate values of key pixel S.
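A sketch of this comparison over key pixel pairs follows; the distance threshold is an illustrative assumption.

```python
# Sketch of judging the three-dimensional face model against a preset
# reference model via the mean Euclidean distance over corresponding key
# pixel pairs. The distance threshold is an illustrative assumption.
import numpy as np

def model_conforms(face_model, preset_model, dist_thresh=0.15):
    """face_model, preset_model: (N, 3) arrays of paired key pixels placed
    in the same three-dimensional coordinate system, row i paired with row i."""
    r = np.asarray(face_model, dtype=np.float64)
    s = np.asarray(preset_model, dtype=np.float64)
    d = np.linalg.norm(r - s, axis=1)  # d(R, S) for each key pixel pair
    return d.mean() < dist_thresh      # mean distance below the threshold
```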
In the embodiments of the application, for a target determined to be a living face, the server can perform the subsequent identity authentication process, and for a target determined not to be a living face, the server can directly treat it as an illegitimate user, which improves the processing efficiency of the server and the reliability of the subsequently obtained identity authentication result.
In practical applications, after detecting the face image, the server can prompt the target corresponding to the face image to stay in front of the camera throughout the living body face detection and identity authentication processes; otherwise, the living body face detection result or identity authentication result can be directly judged invalid. This prevents an attacker from passing the living body face detection with his or her own living face and then using a photo, recorded video, or wax image of a legitimate user to pass the subsequent identity authentication.
According to the above description, fig. 2 shows a detailed process of the living human face detection provided by the embodiment of the present application, which specifically includes the following steps:
s201: and acquiring a video image in real time, and detecting a face image in the video image.
S202: and when the face image is detected, sending an action prompt to prompt a target corresponding to the face image to execute an action corresponding to the action prompt.
S203: the key pixels in the face image are located.
Wherein the key pixels comprise pixels of eyes, nose, mouth and eyebrow parts in the face image.
S204: and tracking the image coordinates of the key pixels according to the motion attitude of the face image in the video image.
S205: and determining the optical flow value of each pixel in the face image in real time according to the change state of the image coordinates of the key pixels in the tracking process, and determining the sum of the optical flow values of the key pixels in real time.
S206: and converting the optical flow value of each key pixel into a depth coordinate value when the sum of the optical flow values is not increased within a specified time.
S207: and generating a three-dimensional face model of the face image according to the depth coordinate value and the image coordinate of each key pixel.
S208: and verifying the motion posture and the three-dimensional face model.
S209: and when the motion posture or the three-dimensional face model does not accord with the preset result, determining that the target is not the living body face.
Of course, the living human face detection method provided in the embodiment of the present application may also detect faces of other living beings, which is not described herein again.
Based on the same idea as the living body face detection method above, an embodiment of the present application further provides a corresponding living body face detection device, as shown in fig. 3.
Fig. 3 is a schematic structural diagram of a living body face detection device provided in an embodiment of the present application, which specifically includes:
the detection module 301 is configured to acquire a video image in real time and detect a face image in the video image;
a generating module 302, configured to generate a three-dimensional face model of a face image according to a motion pose of the face image in the video image;
a judging module 303, configured to judge whether the motion pose and the three-dimensional face model meet a preset result, if so, determine that the target corresponding to the face image is a living body face, otherwise, determine that the target corresponding to the face image is not a living body face.
The device further comprises:
a prompting module 304, configured to, before the generating module 302 generates the three-dimensional face model of the face image according to the motion pose of the face image in the video image, send an action prompt when the detection module 301 detects the face image, so as to prompt a target corresponding to the face image to execute an action corresponding to the action prompt.
The actions comprise head actions and face actions, wherein the head actions comprise at least one of a leftward head-shaking action, a rightward head-shaking action, a head-raising action and a head-lowering action, and the face actions comprise at least one of a blinking action, a mouth-opening action and a frowning action;
the prompting module 304 is specifically configured to select n actions from the head actions and face actions according to a preset policy, and send out the action prompt corresponding to the n selected actions, where n is a positive integer.
The generating module 302 is specifically configured to locate key pixels in the face image, where the key pixels include pixels at the positions of the eyes, nose, mouth, and eyebrows in the face image, track the image coordinates of the key pixels according to the motion pose of the face image in the video image, and generate a three-dimensional face model of the face image according to the change state of the image coordinates of the key pixels during tracking.
The generating module 302 is specifically configured to determine, in real time, the optical flow values of the pixels in the face image according to the change state of the image coordinates of the key pixels during tracking, determine, in real time, the sum of the optical flow values of the key pixels, and generate the three-dimensional face model of the face image according to the optical flow values of the key pixels when the sum of the optical flow values does not increase within a specified time.
The generating module 302 is specifically configured to convert the optical flow value of each key pixel into a depth coordinate value, and generate the three-dimensional face model of the face image according to the depth coordinate value and image coordinates of each key pixel.
The judging module 303 is specifically configured to determine the key pixels corresponding to the sent action prompt, judge whether the displacement values of the determined key pixels within a specified time are within a preset value range, determine that the motion pose meets the preset result if so, and otherwise determine that it does not.
The judging module 303 is specifically configured to determine the Euclidean distance between the three-dimensional face model and a preset three-dimensional model, judge whether the Euclidean distance is smaller than a preset distance threshold, determine that the three-dimensional face model meets the preset result if so, and otherwise determine that it does not.
The apparatus shown in fig. 3 may be located on a server.
The embodiments of the application provide a living body face detection method and device: a server collects video images in real time, detects a face image in the video images, generates a three-dimensional face model of the face image according to the motion posture of the face image in the video images, and judges whether the motion posture and the generated three-dimensional face model meet a preset result; if so, the target corresponding to the face image is determined to be a living face, and otherwise the target is determined not to be a living face. By this method, the server can perform living body face detection on the target before executing the identity authentication process based on face matching, and execute the identity authentication process only when the target is determined to be a living face. Even if an attacker attempts to impersonate a legitimate user with a photo, a recorded video, or a wax image of that user, the photo or wax image cannot perform the action corresponding to the action prompt sent by the server, and a reasonable three-dimensional face model cannot be generated from the recorded video. The attacker's impersonation therefore cannot pass the living body face detection, the attacker cannot pass identity authentication, and the reliability of the identity authentication result is improved.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention has been described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium, which can be used to store information that can be accessed by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transitory media) such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (18)

1. A living body face detection method comprises the following steps:
acquiring a video image in real time, and detecting a face image in the video image;
generating a three-dimensional face model of the face image according to the motion posture of the face image in the video image;
judging whether the motion posture and the three-dimensional face model accord with a preset result or not;
and if so, determining that the target corresponding to the face image is a living face.
2. The method of claim 1, before generating the three-dimensional face model of the face image according to the motion pose of the face image in the video image, the method further comprising:
and when the face image is detected, sending an action prompt to prompt a target corresponding to the face image to execute an action corresponding to the action prompt.
3. The method of claim 2, wherein the actions comprise a head action and a facial action, the head action comprising at least one of a leftward head-shaking action, a rightward head-shaking action, a head-raising action and a head-lowering action, and the facial action comprising at least one of a blinking action, a mouth-opening action and a frowning action;
sending an action prompt, specifically comprising:
and in the head action and the face action, selecting n actions according to a preset strategy, and sending out action prompts corresponding to the selected n actions, wherein n is a positive integer.
4. The method according to claim 1, wherein generating a three-dimensional face model of the face image according to the motion pose of the face image in the video image specifically comprises:
locating key pixels in the face image, wherein the key pixels comprise pixels of eyes, a nose, a mouth and eyebrows in the face image;
tracking the image coordinates of the key pixels according to the motion posture of the face image in the video image;
and generating a three-dimensional face model of the face image according to the change state of the image coordinates of the key pixels in the tracking process.
5. The method according to claim 4, wherein generating a three-dimensional face model of the face image according to the change state of the image coordinates of the key pixels in the tracking process specifically comprises:
determining the optical flow value of each pixel in the face image in real time according to the change state of the image coordinates of the key pixels in the tracking process; and are
Determining the sum of optical flow values of all key pixels in real time;
and in a specified time, when the sum of the optical flow values is not increased, generating a three-dimensional face model of the face image according to the optical flow values of the key pixels.
6. The method according to claim 5, wherein generating a three-dimensional face model of the face image according to the optical flow value of each key pixel specifically comprises:
converting the optical flow value of each key pixel into a depth coordinate value;
and generating a three-dimensional face model of the face image according to the depth coordinate value and the image coordinate of each key pixel.
7. The method according to claim 4, wherein the step of judging whether the three-dimensional face model meets a preset result specifically comprises the following steps:
determining the Euclidean distance between the three-dimensional face model and a preset three-dimensional model;
judging whether the Euclidean distance is smaller than a preset distance threshold value or not;
if so, determining that the three-dimensional face model meets a preset result;
otherwise, determining that the three-dimensional face model does not accord with a preset result.
8. The method according to claim 1, wherein generating the three-dimensional face model of the face image specifically comprises: tracking the image coordinates of each key pixel in the face image, and determining the optical flow value of each key pixel in the face image; and generating a three-dimensional face model of the face image according to the optical flow value of each key pixel.
9. The method according to claim 1, wherein the step of determining whether the motion gesture meets a preset result specifically comprises:
determining key pixels in the motion pose corresponding to the issued action prompt;
judging whether the determined displacement value of the key pixel in the specified time is within a preset value range or not;
if so, determining that the motion posture accords with a preset result;
otherwise, determining that the motion attitude does not accord with a preset result.
10. The method of claim 9, the action prompt corresponding action comprising: at least one of a head motion and a face motion; the head movement includes at least one of a head lowering movement and a head raising movement.
11. The method of claim 1, further comprising:
and if not, determining that the target corresponding to the face image is not the living face.
12. A living body face detection apparatus comprising:
the detection module is used for acquiring a video image in real time and detecting a face image in the video image;
the generating module is used for generating a three-dimensional face model of the face image according to the motion posture of the face image in the video image;
and the judging module is used for judging whether the motion posture and the three-dimensional face model accord with a preset result, and if so, determining that the target corresponding to the face image is a living body face.
13. The apparatus of claim 12, the apparatus further comprising:
and the prompting module is used for sending an action prompt to prompt a target corresponding to the face image to execute an action corresponding to the action prompt when the detection module detects the face image before the generation module generates the three-dimensional face model of the face image according to the motion posture of the face image in the video image.
14. The apparatus of claim 13, wherein the actions comprise a head action and a facial action, the head action comprising at least one of a leftward head-shaking action, a rightward head-shaking action, a head-raising action and a head-lowering action, and the facial action comprising at least one of a blinking action, a mouth-opening action and a frowning action;
the prompt module is specifically configured to select n actions according to a preset policy from the head action and the face action, and send an action prompt corresponding to the n selected actions, where n is a positive integer.
15. The apparatus according to claim 14, wherein the generating module is specifically configured to locate key pixels in the face image, where the key pixels include pixels of eyes, nose, mouth, and eyebrows in the face image, track image coordinates of the key pixels according to a motion pose of the face image in the video image, and generate a three-dimensional face model of the face image according to a change state of the image coordinates of the key pixels during the tracking.
16. The apparatus according to claim 15, wherein the generating module is specifically configured to determine, in real time, optical flow values of pixels in the face image according to a change state of image coordinates of the key pixels during the tracking process, and determine, in real time, a sum of optical flow values of the key pixels, and generate the three-dimensional face model of the face image according to the optical flow values of the key pixels when the sum of the optical flow values does not increase within a specified time.
17. The apparatus according to claim 16, wherein the generating module is specifically configured to convert the optical flow value of each key pixel into a depth coordinate value, and generate the three-dimensional face model of the face image according to the depth coordinate value and the image coordinate of each key pixel.
18. The apparatus according to claim 15, wherein the determining module is specifically configured to determine a Euclidean distance between the three-dimensional face model and a preset three-dimensional model, determine whether the Euclidean distance is smaller than a preset distance threshold, determine that the three-dimensional face model meets a preset result if the Euclidean distance is smaller than the preset distance threshold, and otherwise determine that the three-dimensional face model does not meet the preset result.
CN202211019029.5A | 2015-01-19 | 2015-01-19 | Living body face detection method and device | Pending | CN115457664A (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202211019029.5A (CN115457664A (en)) | 2015-01-19 | 2015-01-19 | Living body face detection method and device

Applications Claiming Priority (2)

Application Number | Priority Date | Filing Date | Title
CN201510025899.7A (CN105868677B (en)) | 2015-01-19 | 2015-01-19 | Living body face detection method and device
CN202211019029.5A (CN115457664A (en)) | 2015-01-19 | 2015-01-19 | Living body face detection method and device

Related Parent Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201510025899.7A (Division; CN105868677B (en)) | 2015-01-19 | 2015-01-19 | Living body face detection method and device

Publications (1)

Publication Number | Publication Date
CN115457664A | 2022-12-09

Family

ID=56623141

Family Applications (2)

Application NumberTitlePriority DateFiling Date
CN202211019029.5APendingCN115457664A (en)2015-01-192015-01-19Living body face detection method and device
CN201510025899.7AActiveCN105868677B (en)2015-01-192015-01-19Living body face detection method and device

Family Applications After (1)

Application NumberTitlePriority DateFiling Date
CN201510025899.7AActiveCN105868677B (en)2015-01-192015-01-19Living body face detection method and device

Country Status (1)

Country | Link
CN (2) | CN115457664A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN116189317A (en)* | 2023-01-06 | 2023-05-30 | 支付宝(杭州)信息技术有限公司 | Liveness detection method and system

Families Citing this family (21)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
GB2560340A (en)* | 2017-03-07 | 2018-09-12 | Eyn Ltd | Verification method and system
CN107358154A (en)* | 2017-06-02 | 2017-11-17 | 广州视源电子科技股份有限公司 | Head motion detection method and device and living body identification method and system
CN107679457A (en)* | 2017-09-06 | 2018-02-09 | 阿里巴巴集团控股有限公司 | User identity method of calibration and device
CN108875497B (en)* | 2017-10-27 | 2021-04-27 | 北京旷视科技有限公司 | Living body detection method, living body detection device and computer storage medium
CN108171109A (en)* | 2017-11-28 | 2018-06-15 | 苏州市东皓计算机系统工程有限公司 | A kind of face identification system
CN108140123A (en)* | 2017-12-29 | 2018-06-08 | 深圳前海达闼云端智能科技有限公司 | Face living body detection method, electronic device and computer program product
CN110032915A (en)* | 2018-01-12 | 2019-07-19 | 杭州海康威视数字技术股份有限公司 | A kind of human face in-vivo detection method, device and electronic equipment
CN108319901B (en)* | 2018-01-17 | 2019-08-27 | 百度在线网络技术(北京)有限公司 | Biopsy method, device, computer equipment and the readable medium of face
CN108171211A (en)* | 2018-01-19 | 2018-06-15 | 百度在线网络技术(北京)有限公司 | Biopsy method and device
FR3077658B1 (en)* | 2018-02-06 | 2020-07-17 | Idemia Identity And Security | METHOD FOR AUTHENTICATING A FACE
CN108830058A (en)* | 2018-05-23 | 2018-11-16 | 平安科技(深圳)有限公司 | Safety certifying method, certificate server and computer readable storage medium
CN108805047B (en)* | 2018-05-25 | 2021-06-25 | 北京旷视科技有限公司 | Living body detection method and device, electronic equipment and computer readable medium
CN109583170B (en)* | 2018-11-30 | 2020-11-13 | 苏州东巍网络科技有限公司 | Slimming cloud data encryption storage system and method for intelligent terminal
CN109886697B (en)* | 2018-12-26 | 2023-09-08 | 巽腾(广东)科技有限公司 | Operation determination method and device based on expression group and electronic equipment
CN109508702A (en)* | 2018-12-29 | 2019-03-22 | 安徽云森物联网科技有限公司 | A kind of three-dimensional face biopsy method based on single image acquisition equipment
CN110163104B (en)* | 2019-04-18 | 2023-02-17 | 创新先进技术有限公司 | Face detection method and device and electronic equipment
CN111860056B (en)* | 2019-04-29 | 2023-10-20 | 北京眼神智能科技有限公司 | Blink-based living body detection method, blink-based living body detection device, readable storage medium and blink-based living body detection equipment
CN112861568A (en)* | 2019-11-12 | 2021-05-28 | Oppo广东移动通信有限公司 | Authentication method and device, electronic equipment and computer readable storage medium
CN114092984A (en)* | 2020-07-30 | 2022-02-25 | 阿里巴巴集团控股有限公司 | Face recognition method and device, electronic equipment and storage medium
CN114998549A (en)* | 2022-06-01 | 2022-09-02 | 南京万生华态科技有限公司 | Method for making three-dimensional digital model of traditional Chinese medicinal materials and computer readable medium
CN115238256A (en)* | 2022-07-20 | 2022-10-25 | 国网福建省电力有限公司 | Network security verification method and system based on distributed system

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN100514353C (en)* | 2007-11-26 | 2009-07-15 | 清华大学 | Living body detecting method and system based on human face physiologic moving
CN101908140A (en)* | 2010-07-29 | 2010-12-08 | 中山大学 | A Liveness Detection Method Applied in Face Recognition
CN102375970B (en)* | 2010-08-13 | 2016-03-30 | 北京中星微电子有限公司 | A kind of identity identifying method based on face and authenticate device
CN102622588B (en)* | 2012-03-08 | 2013-10-09 | 无锡中科奥森科技有限公司 | Double verification face anti-counterfeiting method and device
CN103440479B (en)* | 2013-08-29 | 2016-12-28 | 湖北微模式科技发展有限公司 | A kind of method and system for detecting living body human face
CN103593598B (en)* | 2013-11-25 | 2016-09-21 | 上海骏聿数码科技有限公司 | User's on-line authentication method and system based on In vivo detection and recognition of face


Also Published As

Publication number | Publication date
CN105868677A (en) | 2016-08-17
CN105868677B (en) | 2022-08-30

Similar Documents

Publication | Title
CN105868677B (en) | Living body face detection method and device
CN111788572B (en) | Method and system for facial recognition
US10824849B2 (en) | Method, apparatus, and system for resource transfer
JP6878572B2 (en) | Authentication based on face recognition
CN108804884B (en) | Identity authentication method, identity authentication device and computer storage medium
US9576121B2 (en) | Electronic device and authentication system therein and method
JP2022071195A (en) | Computing equipment and methods
WO2019075840A1 (en) | Identity verification method and apparatus, storage medium and computer device
JP2018524654A (en) | Activity detection method and device, and identity authentication method and device
KR20140026512A (en) | Automatically optimizing capture of images of one or more subjects
JP7264308B2 (en) | Systems and methods for adaptively constructing a three-dimensional face model based on two or more inputs of two-dimensional face images
CN111310512B (en) | User identity authentication method and device
US20230419737A1 (en) | Methods and systems for detecting fraud during biometric identity verification
JP6311237B2 (en) | Collation device and collation method, collation system, and computer program
KR102215535B1 (en) | Partial face image based identity authentication method using neural network and system for the method
KR101656212B1 (en) | System for access control using hand gesture cognition, method thereof and computer recordable medium storing the method
US20220245963A1 (en) | Method, apparatus and computer program for authenticating a user
CN109089102A (en) | A kind of robotic article method for identifying and classifying and system based on binocular vision
US20250037509A1 (en) | System and method for determining liveness using face rotation
KR102380426B1 (en) | Method and apparatus for verifying face
KR102539533B1 (en) | Method and apparatus for preventing other people from photographing identification
CN109063442B (en) | Service implementation method and device and camera implementation method and device
CN110334495A (en) | Mobile terminal unlocking method and device
CN116681443A (en) | Payment method and device based on biological recognition
HK1228066A1 (en) | Live human face detection method and device

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
