Technical Field
The present invention relates to the technical fields of pattern recognition, computer vision, and face detection and alignment, and in particular to an interactive face liveness detection method and device based on deep learning.
Background Art
Face recognition technology has developed rapidly in recent years, and owing to its ease of use it is increasingly adopted for identity verification. Because face recognition systems are highly vulnerable to attacks using photos and video clips, it is necessary, in addition to comparing the captured face image against the face images in the enrollment database, to determine whether the captured face image comes from a real person. This is liveness detection: determining that the target is a living individual. At present there are three common attack types against face recognition systems: printed face photos, face images shown on a display screen, and face masks or 3D models. Because forgeries vary widely and differ in the materials, equipment, and techniques used, face liveness detection remains a significant challenge.
To counter these common attacks, a variety of liveness detection methods have been proposed. They can be roughly divided into four categories. The first detects intrinsic characteristics of live faces, including blink detection and spectral analysis. Blink detection relies on the user's unconscious behavior, but it cannot resist video attacks, and achieving good accuracy and robustness is difficult. Spectral analysis, based on the assumption that photographs contain weaker high-frequency components than live face images, is another approach, but it requires high-resolution images. The second category relies on additional light sources or sensing devices: under infrared light, a thermal image sensor detects spoofing attacks by the difference in reflection between a live face and a fake image. However, this class of methods requires extra equipment and increases cost. The third category extracts feature information from video and audio, exploiting the fact that mouth movement and voice are synchronized when a person speaks. The last category requires user participation: the user is asked to perform specified actions, and liveness is verified by judging whether the observed actions match the instructions. Traditional interactive face liveness detection performs face keypoint localization, pose estimation, mouth open/closed state judgment, and other tasks separately, which increases model complexity and computation time; moreover, because the correlations among tasks are ignored, the general performance of the model degrades.
Summary of the Invention
To overcome the deficiencies of the prior art, the object of the present invention is to provide an interactive face liveness detection method based on a multi-task autoencoder, which fuses face keypoint localization, pose estimation, and facial state determination into a single objective function, so that one model can solve all of these problems simultaneously and thereby realize face liveness detection.
According to one aspect of the present invention, there is provided a face liveness detection method based on a multi-task autoencoder, characterized in that it comprises:
Step S1, performing face detection and tracking through a camera to obtain a face image;
Step S2, pressing a predetermined key to prompt the user to perform a specified action;
Step S3, according to the obtained face image, performing face keypoint detection and facial organ state determination with the multi-task autoencoder;
Step S4, tracking the face position with the multi-task autoencoder, judging from a period of video whether the user performed the specified action, and capturing a picture of the user at the same time;
Step S5, repeating steps S2-S4 and, after a predetermined time, judging whether liveness detection succeeded according to whether the user completed the specified actions.
According to another aspect of the present invention, there is provided a face liveness detection device based on a multi-task autoencoder, characterized in that it comprises:
a face image acquisition module for performing face detection and tracking through a camera to obtain a face image;
a prompt module for prompting the user to perform a specified action;
a keypoint detection module for performing face keypoint detection and facial organ state determination with the multi-task autoencoder according to the obtained face image;
a judgment module for tracking the face position with the multi-task autoencoder, judging from a period of video whether the user performed the specified action, and capturing a picture of the user at the same time;
a result judgment module for repeatedly executing the prompt module, the keypoint detection module, and the judgment module and, after a predetermined time, judging whether liveness detection succeeded according to whether the user completed the specified actions.
According to the method of the present invention, multiple tasks, namely face keypoint detection, pose estimation, mouth open/closed state, and eye open/closed state, are fused into one objective function, and all related tasks are solved through this single objective. This improves the general and generalization performance of the model and applies naturally to interactive liveness detection, reducing model complexity and computational overhead while increasing the accuracy of the model's judgments, which makes liveness detection more efficient and robust.
Description of the Drawings
Fig. 1 is a flow chart of the face liveness detection method based on a multi-task autoencoder according to the present invention.
Detailed Description
To make the object, technical solution, and advantages of the present invention clearer, the present invention is described in further detail below in combination with specific examples and with reference to the accompanying drawing. The described embodiments are intended only to facilitate understanding of the present invention and do not limit it in any way.
The purpose of the present invention is to provide an interactive face liveness detection method based on a multi-task autoencoder, which fuses face keypoint localization, pose estimation, and facial state determination into one objective function, so that a single model can solve these problems simultaneously and thereby realize face liveness detection.
As shown in Fig. 1, the present invention proposes a face liveness detection method based on a multi-task autoencoder, comprising the following steps:
Step S1, performing face detection and tracking through a camera to obtain a face image.
Step S1 comprises:
Step S11, when no face image is available, starting the face detector to detect faces in the video.
Step S2, pressing a specific key to prompt the user to perform the corresponding action. The specified actions take the following forms: shaking the head to the left, shaking the head to the right, nodding, opening the mouth, and blinking; several designated keys on the keyboard (e.g., the number keys 1, 2, 3, 4, 5) can be mapped to the corresponding actions.
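A minimal Python sketch of such a key-to-action mapping follows; the particular key assignments and action names are illustrative, not prescribed by the invention.

```python
# Hypothetical mapping from number keys to the prompted actions of step S2.
KEY_TO_ACTION = {
    '1': 'shake_left',    # shake head to the left
    '2': 'shake_right',   # shake head to the right
    '3': 'nod',
    '4': 'open_mouth',
    '5': 'blink',
}

def prompt_action(key):
    """Translate a pressed key into the action the user is asked to perform."""
    action = KEY_TO_ACTION[key]
    print(f'Please perform the action: {action}')
    return action
```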
Step S3, according to the face image obtained in step S1, performing face keypoint detection and facial organ state determination with the multi-task autoencoder, and then judging whether the subject performed the action specified in step S2. The multi-task autoencoder is trained as follows:
Step S31, collecting face image data and manually annotating the keypoint positions Sg, head pose information Pg, mouth open/closed state Mg, and eye open/closed state Eg;
Step S32, scaling the face image to a specified resolution, e.g., 50x50;
Step S33, feeding the scaled face image into the first-stage multi-task autoencoder to detect the initial face keypoint coordinates S0 and simultaneously obtain the head pose information P0, mouth open/closed state M0, and eye open/closed state E0.
Before introducing the objective function of the multi-task autoencoder, we first define the notation. Suppose $N$ face images $\{I_1, \dots, I_i, \dots, I_N\}$ are collected. The manually annotated keypoint positions are written $S_g = \{S_g^i\}_{i=1}^N$, the head pose information $P_g = \{P_g^i\}_{i=1}^N$, the mouth open/closed states $M_g = \{M_g^i\}_{i=1}^N$, and the eye open/closed states $E_g = \{E_g^i\}_{i=1}^N$. The keypoint positions predicted by the autoencoder are written $S_p = \{S_p^i\}_{i=1}^N$, with head pose $P_p = \{P_p^i\}_{i=1}^N$, mouth states $M_p = \{M_p^i\}_{i=1}^N$, and eye states $E_p = \{E_p^i\}_{i=1}^N$, where $i$ indexes the $i$-th face image. Suppose the autoencoder has $T$ layers in total, where the output of layer $t$ ($t = 1, \dots, T-1$) serves as the input of layer $t+1$. The $i$-th input of layer $t$ ($t = 1, \dots, T$) is written $x_{t-1}^i$; for example, the $i$-th input image of the first layer is $x_0^i = I_i$.
The multi-task autoencoder consists of $T$ stacked multi-task autoencoder layers that apply nonlinear mappings iteratively. The first $T-1$ layers take the form
$$x_t = \sigma(W_t x_{t-1} + b_t), \quad t = 1, \dots, T-1$$
where $\sigma(\cdot)$ is the activation function of the autoencoder, e.g., the sigmoid or tanh function.
$W_t$ is the mapping matrix of the $t$-th autoencoder layer, which linearly maps the output $x_{t-1}$ of layer $t-1$ (i.e., the input of layer $t$), and $b_t$ is the bias of the $t$-th layer.
The $T$-th multi-task autoencoder layer takes the form
$$x_T = W_T x_{T-1} + b_T$$
where $x_T$ is the final output of the $T$-layer autoencoder.
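As a minimal NumPy sketch of the mappings above, assuming per-layer weight matrices and biases are already trained, the forward pass applies the nonlinearity to the first $T-1$ layers and a linear map at the final layer:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def autoencoder_forward(x0, weights, biases):
    """Forward pass of a T-layer stacked autoencoder.

    weights[t], biases[t] hold (W_{t+1}, b_{t+1}); the first T-1 layers
    compute x_t = sigmoid(W_t x_{t-1} + b_t), and the final layer is
    linear: x_T = W_T x_{T-1} + b_T.
    """
    x = x0
    for W, b in zip(weights[:-1], biases[:-1]):
        x = sigmoid(W @ x + b)               # nonlinear hidden layers
    return weights[-1] @ x + biases[-1]      # linear output layer
```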
The objective function of the first-stage multi-task autoencoder is:

$$\min_{W_r,\, W_l} \; \mathcal{L}_S\big(S_g, W_r f(I)\big) + \mathcal{L}_P\big(P_g, W_r f(I)\big) + \mathcal{L}_M\big(M_g, W_l f(I)\big) + \mathcal{L}_E\big(E_g, W_l f(I)\big)$$

where $\mathcal{L}_S$ is the loss function for keypoint detection, $\mathcal{L}_P$ the loss function for head pose estimation, $\mathcal{L}_M$ the mouth open/closed loss, and $\mathcal{L}_E$ the eye open/closed loss; $S_g$, $P_g$, $M_g$, $E_g$ are respectively the manually annotated face keypoint coordinates, head pose information, mouth open/closed state, and eye open/closed state from step S31; $I$ is the input image, and $f(\cdot)$ is the nonlinear mapping implemented by the first-stage autoencoder. $W_r$ is the regression-based mapping matrix and $W_l$ the classification-based mapping matrix. $\mathcal{L}_S$ and $\mathcal{L}_P$ are expressed as squared-error loss functions in regression form:
$$\mathcal{L}_S = \frac{1}{N}\sum_{i=1}^{N}\big\|S_g^i - W_r f(I_i)\big\|_2^2, \qquad \mathcal{L}_P = \frac{1}{N}\sum_{i=1}^{N}\big\|P_g^i - W_r f(I_i)\big\|_2^2$$

where $N$ is the total number of face images.
$\mathcal{L}_M$ and $\mathcal{L}_E$ are expressed as cross-entropy loss functions in classification form:

$$\mathcal{L}_M = -\frac{1}{N}\sum_{i=1}^{N}\sum_{k=1}^{K}\mathbb{1}\{M_g^i = k\}\,\log p(M_p^i = k)$$

where $N$ is the total number of face images and $K$ is the number of classes; for example, the mouth and eye states each have two classes ($K = 2$): open and closed.
The probability that the mouth state is judged to be class $k$ ($k = 1, \dots, K$) is given by the softmax of the classification output:

$$p(M_p^i = k) = \frac{\exp(z_k^i)}{\sum_{j=1}^{K}\exp(z_j^i)}, \qquad \text{where } z^i = W_l f(I_i).$$

The probability that the eye state is judged to be class $k$ ($k = 1, \dots, K$) is defined analogously.
The objective function of the multi-task autoencoder is solved with a gradient descent method based on heterogeneous tasks.
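The following PyTorch sketch illustrates one plausible way to assemble the four-task objective on top of the autoencoder output $f(I)$; the head layout, tensor shapes, and equal task weighting are assumptions for illustration, not details fixed by the text.

```python
import torch
import torch.nn.functional as F

def first_stage_loss(feat, W_r, W_l, S_g, P_g, M_g, E_g, n_landmarks):
    """Combined first-stage objective (a sketch): squared-error losses for
    keypoints and pose, cross-entropy losses for mouth and eye states.

    feat: f(I), autoencoder outputs, shape (N, D)
    W_r:  regression mapping matrix, shape (D, 2*n_landmarks + n_pose)
    W_l:  classification mapping matrix, shape (D, 4) -> 2 mouth + 2 eye logits
    S_g, P_g: float targets; M_g, E_g: integer class labels (0=closed, 1=open)
    """
    reg = feat @ W_r                            # regression head: [S_p | P_p]
    cls = feat @ W_l                            # classification head
    S_p = reg[:, :2 * n_landmarks]
    P_p = reg[:, 2 * n_landmarks:]
    loss_S = F.mse_loss(S_p, S_g)               # L_S: squared error on keypoints
    loss_P = F.mse_loss(P_p, P_g)               # L_P: squared error on pose
    loss_M = F.cross_entropy(cls[:, :2], M_g)   # L_M: softmax cross-entropy
    loss_E = F.cross_entropy(cls[:, 2:], E_g)   # L_E: softmax cross-entropy
    return loss_S + loss_P + loss_M + loss_E    # equal task weighting assumed
```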
Step S34, in the second stage, the image is again scaled to a predetermined resolution, e.g., 80x80, and at the same time the keypoint coordinates S01 of the scaled face image are computed from the first-stage face keypoint coordinates S0.
Step S35, according to the scaled face image and the keypoint coordinates S01 output by the first-stage autoencoder, extracting features around each keypoint, concatenating these features, and feeding them into the second-stage multi-task autoencoder to obtain the final face keypoint coordinates S1, head pose information P1, mouth open/closed state M1, and eye open/closed state E1. The features extracted around each keypoint are SIFT features; they are concatenated and fed to the second-stage autoencoder (see the sketch after this step), whose objective function is:
$$\min_{W_r,\, W_l} \; \mathcal{L}_S\big(\Delta S, W_r f(\phi(S0))\big) + \mathcal{L}_P\big(\Delta P, W_r f(\phi(S0))\big) + \mathcal{L}_M\big(\Delta M, W_l f(\phi(S0))\big) + \mathcal{L}_E\big(\Delta E, W_l f(\phi(S0))\big)$$

where S0, P0, M0, and E0 are respectively the face keypoint coordinates, head pose information, mouth open/closed state, and eye open/closed state predicted by the first-stage autoencoder, and $\phi(S0)$ is the feature formed by extracting a SIFT descriptor around each face keypoint and concatenating these descriptors;
$\Delta S = S_g - S0$, $\Delta P = P_g - P0$, $\Delta M = M_g - M0$, $\Delta E = E_g - E0$ denote the differences between the ground-truth face keypoint coordinates, head pose information, mouth open/closed state, and eye open/closed state and the first-stage predictions; $f(\cdot)$ is the nonlinear mapping of the second-stage autoencoder, $W_r$ the regression-based mapping matrix, and $W_l$ the classification-based mapping matrix.
Similarly to step S33, solving the above objective function yields the parameters of the second-stage autoencoder, and its output gives the final keypoint coordinates S1, head pose information P1, mouth open/closed state M1, and eye open/closed state E1.
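A minimal OpenCV sketch of the second-stage feature extraction follows, assuming a grayscale image and an (L, 2) array of first-stage landmark coordinates; the descriptor support size is an assumption.

```python
import cv2
import numpy as np

def concat_sift_features(gray, landmarks, patch_size=16.0):
    """Extract one SIFT descriptor at each first-stage landmark and
    concatenate them into the feature vector phi(S0)."""
    sift = cv2.SIFT_create()
    kps = [cv2.KeyPoint(float(x), float(y), patch_size) for x, y in landmarks]
    _, desc = sift.compute(gray, kps)   # desc: (L, 128) SIFT descriptors
    return desc.reshape(-1)             # phi(S0): length 128 * L
```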
Step S4, tracking the face position with the multi-task autoencoder and outputting the keypoint coordinates, head pose information, mouth open/closed state, and eye open/closed state; judging from a period of video whether the user performed the specified action, and capturing a picture of the user at the same time. In the present invention, the actions of shaking the head to the left, shaking the head to the right, nodding, opening the mouth, and blinking are each a sequence. A head shake is the transition from an initial frontal pose to a side pose, where the maximum side-pose angle is 35 degrees. A nod is the transition from a frontal pose to a downward pose, where the maximum downward angle is 24 degrees.
The mouth-opening action is the transition from an initial closed-mouth state to an open-mouth state, and blinking is the transition from an initial open-eye state to a closed-eye state.
The captured picture is a frontal picture, which requires the pose angle to lie within the range of -20 to 20 degrees.
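The angle thresholds above can be turned into a simple per-clip check. The sketch below assumes the model outputs per-frame (yaw, pitch) angles in degrees plus per-frame mouth and eye states, interprets the stated maxima as the angles a motion must reach, and chooses sign conventions (left = negative yaw, down = positive pitch) purely for illustration.

```python
def action_performed(action, poses, mouth_states, eye_states):
    """Judge whether the prompted action occurred in a video clip.

    poses: list of (yaw, pitch) in degrees; mouth/eye states: 0=closed, 1=open.
    """
    yaws = [yaw for yaw, _ in poses]
    pitches = [pitch for _, pitch in poses]
    frontal_start = abs(yaws[0]) <= 20 and abs(pitches[0]) <= 20
    if action == 'shake_left':                     # frontal -> left side pose
        return frontal_start and min(yaws) <= -35
    if action == 'shake_right':                    # frontal -> right side pose
        return frontal_start and max(yaws) >= 35
    if action == 'nod':                            # frontal -> downward pose
        return frontal_start and max(pitches) >= 24
    if action == 'open_mouth':                     # closed mouth -> open mouth
        return mouth_states[0] == 0 and 1 in mouth_states
    if action == 'blink':                          # open eyes -> closed eyes
        return eye_states[0] == 1 and 0 in eye_states
    raise ValueError(f'unknown action: {action}')
```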
Step S5, repeating steps S2-S4 and judging, within a specified time, whether liveness detection succeeded according to the user's completion of the actions. The specified time is 10-15 s.
Step S6, if liveness detection succeeds, retrieving the photo captured in step S4; if liveness detection fails, repeating steps S1-S5. A high-level sketch of this overall flow follows.
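In the Python sketch below, the callables `detect_face`, `prompt_action`, and `verify_action` are hypothetical stand-ins for the camera pipeline of step S1, the prompt of step S2, and the multi-task autoencoder judgment of steps S3-S4; the round count is illustrative and the timeout follows the 10-15 s window stated above.

```python
import time

def liveness_check(detect_face, prompt_action, verify_action,
                   n_rounds=3, timeout=15.0):
    """Sketch of steps S1-S6: detect the face, then repeatedly prompt an
    action and verify it within the specified time (assumed parameters)."""
    start = time.time()
    passed = 0
    for _ in range(n_rounds):                 # step S5: repeat S2-S4
        face = detect_face()                  # step S1: detect and track the face
        action = prompt_action()              # step S2: prompt a specified action
        if verify_action(face, action):       # steps S3-S4: judge via the model
            passed += 1
        if time.time() - start > timeout:     # specified time exceeded
            break
    return passed == n_rounds                 # step S6: success only if all done
```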
In summary, the present invention proposes an interactive liveness detection method based on a multi-task autoencoder. Its advantage is that a single unified model describes keypoint detection, pose estimation, facial organ state, and related problems, so the various actions in interactive liveness detection can be judged more accurately and quickly. Compared with other traditional interactive liveness detection approaches, the present invention is more robust, accurate, and fast.
The specific embodiments described above further explain the object, technical solution, and beneficial effects of the present invention in detail. It should be understood that the above are merely specific embodiments of the present invention and are not intended to limit it; any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.