TECHNICAL FIELD
The present invention relates to the technical field of face recognition, and in particular to a face liveness detection and recognition method.
BACKGROUND
Face recognition for identity authentication is widely used in security, access control, attendance and other fields because it is convenient, fast and contact-free. However, although a traditional face recognition system can distinguish different faces, it has difficulty judging whether a given face is a live person or a photograph, which poses a serious security risk to the authentication system. How to detect live faces has therefore become a hot research topic.
SUMMARY OF THE INVENTION
The purpose of the present invention is to provide a face liveness detection and recognition method that defeats photo spoofing in face recognition and can effectively distinguish a live face from a face presented in an image or video.
To achieve this purpose, the present invention adopts the following technical solution:
A face liveness detection and recognition method comprises the following steps:
Step A: collecting video face images;
Step B: segmenting the face region from the video frame images to obtain optical-flow magnitude sequences and the corresponding direction sequences;
Step C: extracting three types of feature information from the optical-flow magnitude sequences and the corresponding direction sequences, namely sequence frequency, optical-flow magnitude histograms and sequence correlation;
Step D: training an SVM classifier on the features extracted in step C, and judging whether the input is a live face according to the classifier score;
Step E: segmenting the face region of the color image according to the live face detected in the infrared image, performing dimensionality reduction and feature extraction on the color face image with the PCA algorithm, representing each collected face image as a one-dimensional vector, and computing the covariance matrix C of these vectors;
Step F: obtaining the eigenface space W from the covariance matrix C and using the eigenface space W as the input of the neural network;
Step G: optimizing the training of the BP neural network with the GA algorithm and, after training, feeding the face image to be recognized into the network for face recognition.
Preferably, extracting the three types of feature information in step C comprises the following steps:
Step C1: extracting the sequence frequency by applying a one-dimensional Fourier transform to the optical-flow magnitude sequences and the corresponding direction sequences, normalizing the magnitudes of the resulting Fourier coefficients and concatenating them into one feature vector;
Step C2: extracting the optical-flow magnitude histogram by computing histogram features of the optical-flow magnitude sequences;
Step C3: extracting the sequence correlation by analyzing the correlation of the optical-flow magnitude sequences based on the histograms obtained in step C2.
Preferably, in step E, the i-th collected face image is represented by a one-dimensional vector, denoted Xi; the covariance matrix C of the training vectors can then be expressed as:
where X̄ is the mean vector of the training samples and N is the total number of training samples.
Preferably, step F1, obtaining the eigenface space W from the covariance matrix C, comprises the following steps:
converting the covariance matrix C into a diagonal matrix, after which the eigenvectors of C satisfy:
where P is an eigenvector and λi is the corresponding eigenvalue;
Step F2: according to formula 1-3, selecting the projection space spanned by the eigenvectors corresponding to the first K eigenvalues, denoting it as the eigenface space W, and using it as the input of the neural network;
formula 1-3 is as follows:
where λi is an eigenvalue and k is the number of retained features.
Preferably, optimizing the training of the BP neural network with the GA algorithm in step G comprises the following steps:
Step G1: initializing the population, i.e. forming the population from the parameters of the BP neural network and initializing it;
Step G2: constructing a fitness function that determines the search direction of the GA algorithm and decides whether a new solution is accepted;
Step G3: selection, using the roulette-wheel method to select individuals with higher fitness values from the population of step G1 and mating them to produce a new individual i;
Step G4: crossover, crossing part of the genes of the paired individuals according to the crossover probability to form new individuals;
Step G5: mutation;
Step G6: computing the fitness value of the current solution and checking whether the termination condition of the GA algorithm is met; if it is, outputting the optimal weights and thresholds, otherwise returning to step G3;
Step G7: loading the optimal weights and thresholds obtained in step G6 into the BP neural network, training the network, and feeding the face image to be recognized into the BP neural network for face recognition.
Preferably, in step G2, the constructed fitness function is as follows:
where M is the number of output-layer neurons and T is the number of samples; the remaining symbols denote the sample labels and the corresponding network outputs.
Preferably, in step G3, the probability Pi that individual i is selected to produce offspring is:
where N is the population size and Fi is the fitness value of individual i.
Preferably, in step G4, the crossover operation produces new individuals according to formula 1-6, which is as follows:
where the crossed terms are the genes at position j of individuals p and q, and β is a random number in (0, 1).
Preferably, in step G5, the mutation operation is performed according to formula 1-7, which is as follows:
where gmax and gmin are the upper and lower bounds of gene gi, t is the current iteration number, and T is the maximum number of generations.
Preferably, in step B, six optical-flow magnitude sequences and the six corresponding direction sequences are obtained.
Beneficial effects:
1. Live faces can be detected accurately without any additional equipment, and spoofing attempts using images, videos or three-dimensional models can be effectively identified;
2. Optimizing the BP neural network with the GA algorithm yields the optimal network parameters, overcomes certain shortcomings of the BP neural network, speeds up convergence and shortens recognition time.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a flowchart of the face liveness detection and recognition method of the present invention.
DETAILED DESCRIPTION
The technical solution of the present invention is further described below with reference to the accompanying drawing and specific embodiments.
This embodiment consists of two parts, liveness detection and face recognition. For liveness detection, the face region is first segmented from the infrared image, a Fourier transform is applied, and three types of feature information (sequence frequency, optical-flow magnitude histograms and sequence correlation) are extracted and used to train an SVM classifier; whether the face is live is then judged against a chosen threshold.
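The embodiment does not include code for this classification step. As a minimal sketch only, assuming the three feature types have already been concatenated into one vector per video clip, the SVM training and thresholded scoring could look roughly as follows (scikit-learn, the RBF kernel, and the file names are illustrative assumptions, not part of the disclosure):

```python
import numpy as np
from sklearn.svm import SVC

# X: one row per video clip, each row the concatenation of the three
#    feature types (sequence frequency, flow-magnitude histograms,
#    sequence correlation); y: 1 = live face, 0 = photo/video/3D spoof.
X = np.load("liveness_features.npy")   # hypothetical feature file
y = np.load("liveness_labels.npy")     # hypothetical label file

clf = SVC(kernel="rbf")
clf.fit(X, y)

# The signed decision score plays the role of the "score" in step D:
# samples above a threshold chosen on a validation set are accepted as live.
scores = clf.decision_function(X)
is_live = scores > 0.0
```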
For face recognition, the color image is first segmented more precisely according to the live face detected in the infrared image, the PCA algorithm is used to reduce the dimensionality of the color face image and extract feature information, thereby reducing the amount of face image data, the BP neural network is optimized with the GA algorithm, and a PCA-GA-BP network is built to train on the extracted face features and perform recognition.
The face liveness detection and recognition method of this embodiment, as shown in FIG. 1, comprises the following steps:
Step A: collecting video face images;
Step B: segmenting the face region from the video frame images and obtaining six optical-flow magnitude sequences and the six corresponding direction sequences from the face region, twelve sequences in total (see the sketch following this step list);
Step C: extracting three types of feature information from the six optical-flow magnitude sequences and the six corresponding direction sequences, namely sequence frequency, optical-flow magnitude histograms and sequence correlation;
Step D: training an SVM classifier on the features extracted in step C, and judging whether the input is a live face according to the classifier score;
Step E: segmenting the face region of the color image according to the live face detected in the infrared image, performing dimensionality reduction and feature extraction on the color face image with the PCA algorithm, representing each collected face image as a one-dimensional vector, and computing the covariance matrix C of these vectors;
Step F: obtaining the eigenface space W from the covariance matrix C and using the eigenface space W as the input of the neural network;
Step G: optimizing the training of the BP neural network with the GA algorithm and, after training, feeding the face image to be recognized into the network for face recognition.
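The disclosure does not state how the optical-flow sequences of step B are computed. One plausible reading, shown here only as an illustrative sketch, is to compute dense optical flow between consecutive grayscale face crops with OpenCV's Farneback method and keep the mean magnitude and direction of six face sub-blocks per frame pair; the 2x3 block layout and the averaging are assumptions, not taken from the original text.

```python
import cv2
import numpy as np

def flow_magnitude_direction(prev_gray, next_gray):
    """Dense optical flow between two grayscale face crops,
    returned as per-pixel magnitude and direction (radians)."""
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, next_gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    return mag, ang

def block_means(mag, ang, rows=2, cols=3):
    """Mean magnitude / direction of six face sub-blocks for one frame pair.
    Collected over all frame pairs this yields 6 magnitude sequences and
    6 direction sequences, i.e. the 12 sequences of step B."""
    h, w = mag.shape
    mags, angs = [], []
    for r in range(rows):
        for c in range(cols):
            block = (slice(r * h // rows, (r + 1) * h // rows),
                     slice(c * w // cols, (c + 1) * w // cols))
            mags.append(float(mag[block].mean()))
            angs.append(float(ang[block].mean()))
    return mags, angs
```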
Preferably, extracting the three types of feature information in step C comprises the following steps:
Step C1: extracting the sequence frequency by applying a one-dimensional Fourier transform to the optical-flow magnitude sequences and the corresponding direction sequences, normalizing the magnitudes of the resulting Fourier coefficients and concatenating them into one feature vector;
Step C2: extracting the optical-flow magnitude histogram by computing the histogram features of the six optical-flow magnitude sequences. The optical-flow magnitude is effective for distinguishing real-face video sequences from all types of fake faces: the optical-flow magnitude of a fake-face video sequence captured from a fixed face photograph is very small, while the optical-flow vector magnitudes of other types of fake-face video sequences are comparatively large, which is very useful for liveness detection;
Step C3: extracting the sequence correlation by analyzing the correlation of the optical-flow magnitude sequences based on the histograms obtained in step C2. Given six magnitude sequences and six direction sequences, 60 histograms can be computed. The correlation between sequences is very pronounced in video sequences with global motion: in fake-face videos the magnitude and angle of the optical-flow vectors are strongly correlated over time, whereas in real-face video sequences the correlation is much weaker and the correlation between foreground and background is very low.
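A minimal numpy sketch of the three feature types of steps C1-C3, assuming the magnitude and direction sequences from step B are available as arrays. The bin count and the use of pairwise Pearson correlation of the raw sequences (rather than of the step C2 histograms, as the text describes) are simplifying assumptions for illustration only.

```python
import numpy as np

def sequence_frequency(mag_seqs, dir_seqs):
    """Step C1: 1-D FFT of each sequence; normalised Fourier magnitudes
    are concatenated into one feature vector."""
    parts = []
    for seq in list(mag_seqs) + list(dir_seqs):
        spec = np.abs(np.fft.fft(seq))
        parts.append(spec / (np.linalg.norm(spec) + 1e-12))
    return np.concatenate(parts)

def magnitude_histograms(mag_seqs, bins=10):
    """Step C2: histogram features of the optical-flow magnitude sequences."""
    return np.concatenate(
        [np.histogram(seq, bins=bins, density=True)[0] for seq in mag_seqs])

def sequence_correlation(mag_seqs, dir_seqs):
    """Step C3 (approximation): pairwise correlation of the 12 sequences,
    flattened to the upper triangle of the correlation matrix."""
    stacked = np.vstack(list(mag_seqs) + list(dir_seqs))
    corr = np.corrcoef(stacked)
    return corr[np.triu_indices_from(corr, k=1)]
```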
Preferably, in step E, the i-th collected face image is represented by a one-dimensional vector, denoted Xi; the covariance matrix C of the training vectors can then be expressed as:
where X̄ is the mean vector of the training samples and N is the total number of training samples.
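The covariance formula itself is not reproduced in the text. The standard PCA sample covariance consistent with the definitions above (Xi the i-th training vector, X̄ the sample mean, N the number of samples) would read as follows; this is a reconstruction, not a verbatim copy of the original formula.

```latex
C = \frac{1}{N}\sum_{i=1}^{N}\left(X_i - \bar{X}\right)\left(X_i - \bar{X}\right)^{\mathrm{T}}
```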
Preferably, step F1: because the covariance matrix C is high-dimensional, involves a large amount of data and is redundant to compute with directly, it is converted into a diagonal matrix, which is easy to handle; obtaining the eigenface space W from the covariance matrix C comprises the following steps:
converting the covariance matrix C into a diagonal matrix, after which the eigenvectors of C satisfy:
where P is an eigenvector and λi is the corresponding eigenvalue;
Step F2: according to formula 1-3, selecting the projection space spanned by the eigenvectors corresponding to the first K eigenvalues, denoting it as the eigenface space W, and using it as the input of the neural network;
formula 1-3 is as follows:
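Neither the eigenvector relation nor formula 1-3 is reproduced in the text. In standard eigenface notation they are usually written as below: the eigenvectors of C satisfy the eigenvalue equation, and the first K eigenvectors are retained once their eigenvalues account for a chosen fraction α of the total variance. The threshold α and the symbols Pi are assumed notation; the original does not state a specific value.

```latex
C P_i = \lambda_i P_i, \qquad
\frac{\sum_{i=1}^{k}\lambda_i}{\sum_{i=1}^{n}\lambda_i} \ge \alpha, \qquad
W = \left[P_1, P_2, \ldots, P_K\right]
```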
Preferably, optimizing the training of the BP neural network with the GA algorithm in step G comprises the following steps:
Step G1: initializing the population, i.e. forming the population from the parameters of the BP neural network and initializing it;
Step G2: constructing a fitness function that determines the search direction of the GA algorithm and decides whether a new solution is accepted;
Step G3: selection, using the roulette-wheel method to select individuals with higher fitness values from the population of step G1 and mating them to produce a new individual i;
Step G4: crossover, crossing part of the genes of the paired individuals according to the crossover probability to form new individuals;
Step G5: mutation;
Step G6: computing the fitness value of the current solution and checking whether the termination condition of the GA algorithm is met; if it is, outputting the optimal weights and thresholds, otherwise returning to step G3;
Step G7: loading the optimal weights and thresholds obtained in step G6 into the BP neural network, training the network, and feeding the face image to be recognized into the BP neural network for face recognition.
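To make the coupling between the GA and the BP network concrete, the following sketch decodes a GA chromosome into the weights and thresholds (biases) of a single-hidden-layer network and evaluates a sum-of-squared-error fitness used to rank individuals. The network shape, sigmoid activation and reciprocal-error fitness are illustrative assumptions consistent with, but not quoted from, the disclosure.

```python
import numpy as np

def decode(chrom, n_in, n_hidden, n_out):
    """Split a flat chromosome into BP weight matrices and bias vectors."""
    i = 0
    w1 = chrom[i:i + n_in * n_hidden].reshape(n_in, n_hidden); i += n_in * n_hidden
    b1 = chrom[i:i + n_hidden]; i += n_hidden
    w2 = chrom[i:i + n_hidden * n_out].reshape(n_hidden, n_out); i += n_hidden * n_out
    b2 = chrom[i:i + n_out]
    return w1, b1, w2, b2

def forward(x, w1, b1, w2, b2):
    """Forward pass of a one-hidden-layer BP network with sigmoid units."""
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    return sigmoid(sigmoid(x @ w1 + b1) @ w2 + b2)

def fitness(chrom, X, Y, n_in, n_hidden, n_out):
    """Reciprocal of the squared error summed over all T samples and
    M output neurons (one common choice of GA fitness)."""
    w1, b1, w2, b2 = decode(chrom, n_in, n_hidden, n_out)
    err = np.sum((Y - forward(X, w1, b1, w2, b2)) ** 2)
    return 1.0 / (err + 1e-12)
```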
Preferably, in step G2, the constructed fitness function is as follows:
where M is the number of output-layer neurons and T is the number of samples; the remaining symbols denote the sample labels and the corresponding network outputs.
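The fitness formula is missing from the text. A common construction consistent with the description (labels y, network outputs ŷ, M output neurons, T samples) is the reciprocal of the summed squared error, shown here as an assumed reconstruction with assumed symbol names:

```latex
E = \sum_{t=1}^{T}\sum_{m=1}^{M}\left(y_{tm} - \hat{y}_{tm}\right)^{2},
\qquad F = \frac{1}{E}
```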
Preferably, in step G3, the probability Pi that individual i is selected to produce offspring is:
where N is the population size and Fi is the fitness value of individual i.
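The selection probability formula is likewise not reproduced; with fitness values Fi over a population of N individuals, roulette-wheel selection is conventionally written as:

```latex
P_i = \frac{F_i}{\sum_{j=1}^{N} F_j}
```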
Preferably, in step G4, the crossover operation produces new individuals according to formula 1-6, which is as follows:
where the crossed terms are the genes at position j of individuals p and q, and β is a random number in (0, 1).
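Formula 1-6 is not reproduced in the text. An arithmetic crossover of the j-th genes of individuals p and q with a random β in (0, 1), which matches the surrounding description, would read as follows; this is a reconstruction, not the original formula.

```latex
g_{pj}' = \beta\, g_{pj} + (1-\beta)\, g_{qj}, \qquad
g_{qj}' = \beta\, g_{qj} + (1-\beta)\, g_{pj}
```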
Preferably, in step G5, the mutation operation is performed according to formula 1-7, which is as follows:
where gmax and gmin are the upper and lower bounds of gene gi, t is the current iteration number, and T is the maximum number of generations.
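Formula 1-7 is also missing. A typical non-uniform mutation that uses the gene bounds gmin and gmax, the current iteration t and the maximum number of generations T, and therefore fits the variable list given above, is shown below as one plausible reconstruction (r and r' are uniform random numbers in (0, 1); the exact decay exponent is an assumption):

```latex
g_i' =
\begin{cases}
g_i + \left(g_{\max} - g_i\right) r \left(1 - t/T\right)^{2}, & r' \ge 0.5\\[4pt]
g_i - \left(g_i - g_{\min}\right) r \left(1 - t/T\right)^{2}, & r' < 0.5
\end{cases}
```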
Preferably, in step B, six optical-flow magnitude sequences and the six corresponding direction sequences are obtained.
The technical principle of the present invention has been described above with reference to specific embodiments. These descriptions are only intended to explain the principle of the present invention and shall not be construed as limiting its scope of protection in any way. Based on the explanations given here, those skilled in the art will be able to conceive of other specific embodiments of the present invention without any inventive effort, and all such embodiments fall within the scope of protection of the present invention.