CN106709442B - Face recognition method - Google Patents

Face recognition method

Info

Publication number
CN106709442B
CN106709442B · Application CN201611177440.XA · Published as CN106709442A
Authority
CN
China
Prior art keywords
face
face image
features
feature vector
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201611177440.XA
Other languages
Chinese (zh)
Other versions
CN106709442A (en)
Inventor
李昂
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen LD Robot Co Ltd
Original Assignee
Inmotion Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Inmotion Technologies Co Ltd
Priority to CN201611177440.XA
Publication of CN106709442A
Application granted
Publication of CN106709442B
Expired - Fee Related
Anticipated expiration

Abstract

Translated from Chinese

The invention discloses a face recognition method, comprising: extracting features at key points of a captured face image of the user to be recognized, and forming a feature vector from the per-key-point features; operating on the feature vector with a training matrix to generate a model, the training matrix consisting of the covariance matrices obtained by feeding the feature vectors extracted from sample face images into a joint Bayesian model for training; and comparing the model against the face images in a sample database to identify the user. The method forms a feature vector from features extracted at the key points of the face image and uses a joint Bayesian model for model training and recognition. Compared with existing deep-learning-based face recognition algorithms, it requires fewer training samples and less computation, which improves face recognition efficiency.

Description

A face recognition method

TECHNICAL FIELD

The invention relates to the technical field of computer vision processing, and in particular to a face recognition method.

BACKGROUND

Face recognition is a biometric technology that authenticates identity from facial feature information. Images or video streams containing faces are captured, faces are detected and tracked in them, and the detected faces are then matched and recognized. Face recognition is now widely applied and plays a very important role in financial payment, access control and attendance, identity verification, and many other fields, bringing great convenience to daily life.

There are many face recognition methods; essentially all of them combine facial feature extraction with a classification algorithm. Among them, deep-learning-based face recognition achieves the best recognition results and has attracted growing attention in recent years. However, its deep learning models are complex and computationally demanding, so it is not suitable for every setting.

SUMMARY OF THE INVENTION

In view of this, the present invention provides a face recognition method which, compared with existing methods, reduces the amount of computation and improves recognition efficiency.

To achieve the above object, the present invention provides the following technical solution:

A face recognition method, comprising:

S10: extracting features at key points of the captured face image of the user to be recognized, and forming a feature vector from the per-key-point features;

S11: operating on the feature vector with a training matrix to generate a model, the training matrix consisting of the covariance matrices obtained by feeding the feature vectors extracted from sample face images into a joint Bayesian model for training;

S12: comparing the model against the face images in the sample database to identify the user.

Optionally, step S10 specifically comprises:

scaling the face image to multiple scales, extracting and concatenating, for each key point, the features from the image at every scale, and then concatenating the features of all key points to form the feature vector.

Optionally, step S10 further comprises: compressing the dimensionality of the feature vector.

Optionally, extracting features at key points of the face image specifically comprises:

selecting multiple key points in the face image and extracting the local binary pattern feature at each key point.

Optionally, the local binary pattern feature extracted at a key point is described as:

LBP_{P,R} = \sum_{p=0}^{P-1} s(g_p - g_c) \, 2^p

where g_c denotes the brightness of the center point, g_p the brightness of a neighborhood point, P the number of neighborhood points, and R the neighborhood radius, with the function defined as:

s(x) = \begin{cases} 1, & x \ge 0 \\ 0, & x < 0 \end{cases}

Optionally, in the dimensionality compression of the feature vector, the processing chip is directed, when performing matrix multiplications, to preferentially access contiguous memory regions and to compute in parallel.

Optionally, capturing the face image of the user to be recognized comprises:

computing the projection matrix of the captured face image from the average face model, computing the face angle from the projection matrix, and selecting, from the captured face images, those whose face angle lies within a preset range as the input face images.

Optionally, comparing the model against the face images in the sample database to identify the user is realized through an evaluation function, specifically:

Construct:

[equation image not reproduced in the source], where th denotes the threshold;

[equation image not reproduced in the source], where X_i denotes the i-th person and N(X_i) the i-th sample face image;

the evaluation function is expressed as:

[equation image not reproduced in the source]

As can be seen from the above technical solution, the face recognition method provided by the present invention first extracts features at the key points of the captured face image of the user to be recognized and forms a feature vector from the per-key-point features; it then operates on the feature vector with a training matrix to generate a model, the training matrix consisting of the covariance matrices obtained by feeding the feature vectors extracted from sample face images into a joint Bayesian model for training; finally, the generated model is compared against the face images in the sample database to identify the user. Compared with existing deep-learning-based face recognition algorithms, the method requires fewer training samples and less computation, which improves face recognition efficiency.

DESCRIPTION OF DRAWINGS

To explain the embodiments of the present invention or the technical solutions of the prior art more clearly, the drawings needed in their description are briefly introduced below. Evidently, the drawings described below show only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.

FIG. 1 is a flowchart of a face recognition method according to an embodiment of the present invention.

DETAILED DESCRIPTION

To help those skilled in the art better understand the technical solutions of the present invention, the solutions in the embodiments are described below clearly and completely with reference to the accompanying drawings. Evidently, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by persons of ordinary skill in the art from these embodiments without creative effort fall within the protection scope of the present invention.

An embodiment of the present invention provides a face recognition method; FIG. 1 shows its flowchart. The method comprises the steps:

S10: extract features at key points of the captured face image of the user to be recognized, and form a feature vector from the per-key-point features.

Key points are positions in the face image that carry distinctive features, such as the regions of the nose and eyes. After the feature at each key point is extracted, the per-key-point features are concatenated into a feature vector.

S11: operate on the feature vector with a training matrix to generate a model, the training matrix consisting of the covariance matrices obtained by feeding the feature vectors extracted from sample face images into a joint Bayesian model for training.

In this embodiment, model training uses a joint Bayesian model: the feature vectors extracted from the sample face images are fed into the joint Bayesian model, and the resulting covariance matrices constitute the training matrix.

S12: compare the model against the face images in the sample database to identify the user.

The method of this embodiment first extracts features at the key points of the captured face image of the user to be recognized and forms a feature vector from them; it then operates on the feature vector with the training matrix to generate a model, and identifies the user by comparing the generated model against the face images in the sample database. Compared with existing deep-learning-based face recognition algorithms, the method requires fewer training samples and less computation, improves face recognition efficiency, and can be applied in a wide range of settings.

The face recognition method of this embodiment is described in detail below with reference to a specific implementation. It comprises the following steps:

S10: extract features at key points of the captured face image of the user to be recognized, and form a feature vector from the per-key-point features.

The size and angle of a captured face image may first be normalized, and the image converted to grayscale.

This method extracts image features using local binary patterns (LBP). Concretely, multiple key points are selected in the face image, and the local binary pattern feature is extracted at each key point.

The local binary pattern feature takes the center pixel of a window as a threshold and compares it with the gray values of the 8 neighboring pixels: a neighbor with a larger value is encoded as 1, otherwise as 0.

Specifically, define the function:

s(x) = \begin{cases} 1, & x \ge 0 \\ 0, & x < 0 \end{cases}

Then the local binary pattern feature extracted at a key point is described as:

LBP_{P,R} = \sum_{p=0}^{P-1} s(g_p - g_c) \, 2^p

where g_c denotes the brightness of the center point, g_p the brightness of a neighborhood point, P the number of neighborhood points, and R the neighborhood radius.
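As an illustration of the formula above, here is a minimal NumPy sketch of the classic P=8, R=1 case (the function name and the clockwise neighbor ordering are illustrative choices, not specified by the patent):

```python
import numpy as np

def lbp_code(patch):
    """LBP_{8,1} of a 3x3 patch: threshold the 8 neighbors against the
    center brightness g_c and pack s(g_p - g_c) into the bits 2^p."""
    gc = patch[1, 1]
    # Neighbors enumerated clockwise starting from the top-left corner.
    neighbors = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                 patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    code = 0
    for p, gp in enumerate(neighbors):
        if gp >= gc:          # s(g_p - g_c) = 1 when g_p >= g_c
            code |= 1 << p    # weight 2^p
    return code

patch = np.array([[10, 200,  30],
                  [40,  90,  60],
                  [250, 80, 120]], dtype=np.uint8)
print(lbp_code(patch))  # → 82 (bits p=1, p=4, p=6 are set)
```

In practice the code is evaluated on a small patch around each selected key point; a full LBP implementation with R ≠ 1 would interpolate the neighbor samples on a circle of radius R.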

After feature extraction at a key point, a string of binary digits is obtained; in the classic case P=8, R=1.0, this is an 8-bit binary string with 256 possible states. In practice these 256 states do not occur with equal probability, so to compress the number of states, states can be distinguished by the number of 0/1 transitions in the string. In more than 90% of cases, the string contains at most two such transitions.

Define:

U(LBP_{P,R}) = |s(g_{P-1} - g_c) - s(g_0 - g_c)| + \sum_{p=1}^{P-1} |s(g_p - g_c) - s(g_{p-1} - g_c)|

When U ≤ 2, there are 58 distinct LBP features; all features with U > 2 are grouped into a single class. The number of states produced is thus compressed to 58 + 1.
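The compression to 58 + 1 states can be checked directly: counting the 0/1 transitions of all 256 circular 8-bit codes yields exactly 58 patterns with U ≤ 2 (a small sketch, not code from the patent):

```python
def transitions(code, bits=8):
    """U value: number of 0/1 changes in the circular bit string."""
    prev = (code >> (bits - 1)) & 1   # wrap around: compare bit 0 with bit P-1
    u = 0
    for p in range(bits):
        cur = (code >> p) & 1
        u += cur != prev
        prev = cur
    return u

uniform = [c for c in range(256) if transitions(c) <= 2]
print(len(uniform))  # → 58; every code with U > 2 shares one extra bin
```

A lookup table mapping each of the 256 raw codes to one of the 59 bins is the usual way to apply this compression when building feature histograms.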

Multiple key points are fitted and selected in the captured face image, choosing locations with distinctive features as key points, such as the eyes, nose, mouth, eyebrows, and face contour, and using those among them that perform best for feature extraction. For example, 68 facial key points can be fitted in the face image and the best-performing ones selected for feature extraction.

Preferably, in one specific implementation of this embodiment, to increase the generality of the model, this step further comprises scaling the face image to multiple scales; for each key point, features are extracted from the image at every scale and concatenated, and the features of all key points are then concatenated to form the feature vector.

During feature extraction, the face image is scaled into a pyramid of sizes. For each key point, features are extracted at every scale and concatenated, and the features of all key points are then concatenated into the feature vector. Pyramid-style multi-scale scaling extracts fine features at large scales and broad features at small scales at the same time, improving the precision of feature extraction from the face image. The number of scales can be tuned and tested accordingly, so that the best result is obtained at a reasonable computational cost.
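A toy sketch of the multi-scale concatenation described above, using naive decimation in place of proper image resizing (the pyramid depth, key points, and per-point feature are illustrative assumptions):

```python
import numpy as np

def lbp_code(patch):
    """8-neighbor LBP of a 3x3 patch (see the formula above)."""
    gc = patch[1, 1]
    nbrs = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
            patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    return sum(int(gp >= gc) << p for p, gp in enumerate(nbrs))

def pyramid(image, levels=3):
    """Halve the image at each level (a stand-in for real resizing)."""
    out = [image]
    for _ in range(levels - 1):
        out.append(out[-1][::2, ::2])
    return out

def multiscale_vector(image, keypoints, levels=3):
    """Extract one feature per (pyramid level, key point) and concatenate
    them all into a single feature vector."""
    feats = []
    for level, img in enumerate(pyramid(image, levels)):
        scale = 2 ** level
        for y, x in keypoints:
            ys, xs = y // scale, x // scale
            # Clamp so the 3x3 window stays inside the image.
            ys = min(max(ys, 1), img.shape[0] - 2)
            xs = min(max(xs, 1), img.shape[1] - 2)
            feats.append(lbp_code(img[ys - 1:ys + 2, xs - 1:xs + 2]))
    return np.array(feats)

rng = np.random.default_rng(0)
face = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
vec = multiscale_vector(face, keypoints=[(20, 20), (20, 44), (40, 32)])
print(vec.shape)  # → (9,): 3 key points x 3 scales
```

A real implementation would use proper downsampling with smoothing and a richer per-point descriptor (e.g. an LBP histogram over a neighborhood), but the concatenation structure is the same.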

Preferably, this step further comprises: compressing the dimensionality of the feature vector.

Concatenating the extracted features of all key points of the face image forms a high-dimensional feature vector of 10k to 100k dimensions. In the method of this embodiment, principal component analysis (PCA) is used to compress the feature vector to 200 to 2000 dimensions. Reducing the dimensionality of the feature vector lowers the cost of subsequent computation.
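A minimal PCA compression sketch with NumPy (the sizes here are toy stand-ins for the 10k-100k to 200-2000 reduction described in the text):

```python
import numpy as np

def pca_fit(X, k):
    """Fit PCA on the rows of X: return the mean and top-k principal axes."""
    mean = X.mean(axis=0)
    # SVD of the centered data; the rows of Vt are the principal directions.
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]

def pca_transform(X, mean, components):
    """Project centered vectors onto the retained principal axes."""
    return (X - mean) @ components.T

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 500))   # 200 sample feature vectors, 500 dims
mean, comps = pca_fit(X, k=32)
Z = pca_transform(X, mean, comps)
print(Z.shape)  # → (200, 32)
```

For vectors of the size quoted in the text, an incremental or randomized PCA would typically replace the full SVD.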

Preferably, in the method of this embodiment, the dimensionality compression of the feature vector is optimized for the characteristics of the processing chip (i.e., the CPU): when performing matrix multiplications, the chip is directed to preferentially access contiguous memory regions and to compute in parallel, which can greatly increase the computation speed, by a factor of 10 or more.

In addition, when capturing the face image of the user to be recognized, the user's face can be captured automatically by a camera; the user simply turns their head in front of the camera for a few seconds. Since face length and interpupillary distance do not differ greatly across people, the projection matrix of a captured face image can be computed from the average face model, the face angle computed from the projection matrix, and the face images whose angle lies within a preset range selected from the captured images as input. The projection matrix is computed from the average face model and the detected face model, and the face angle is estimated from it, taking a face looking straight at the camera as 0 degrees; if the face angle is too large, accurate key points cannot be extracted, which hurts recognition accuracy. Therefore, images with a suitable angle are selected from the captured face images as the input face images. Sample images can also be added manually, and the sample images are updated continually as the user uses the system, to achieve the best results.
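The angle filter can be sketched as follows. Under a scaled-orthographic assumption, a 2x4 affine camera mapping mean 3D landmarks to observed 2D points is fitted by least squares, and the yaw is read off its rotation part. The 3D landmark coordinates and the 15-degree range are illustrative values, not taken from the patent:

```python
import numpy as np

# Illustrative mean-face 3D landmarks (left eye, right eye, nose tip,
# left and right mouth corners); not the patent's average face model.
MEAN_FACE_3D = np.array([
    [-30.0,  30.0, -10.0],
    [ 30.0,  30.0, -10.0],
    [  0.0,   0.0,  20.0],
    [-25.0, -30.0,  -5.0],
    [ 25.0, -30.0,  -5.0],
])

def estimate_yaw(pts2d):
    """Fit pts2d ≈ [X | 1] @ P by least squares (P is a 4x2 affine camera),
    then read the yaw angle off the first scaled-rotation row."""
    X = np.hstack([MEAN_FACE_3D, np.ones((len(MEAN_FACE_3D), 1))])
    P, *_ = np.linalg.lstsq(X, pts2d, rcond=None)
    r1 = P[:3, 0]
    r1 = r1 / np.linalg.norm(r1)
    # Rotation about the vertical axis puts sin(yaw) into r1's z component.
    return np.degrees(np.arcsin(np.clip(r1[2], -1.0, 1.0)))

def select_frontal(landmark_sets, max_angle=15.0):
    """Keep only captures whose estimated face angle is in the preset range."""
    return [i for i, pts in enumerate(landmark_sets)
            if abs(estimate_yaw(pts)) <= max_angle]

frontal = MEAN_FACE_3D[:, :2].copy()   # a perfectly frontal projection
print(select_frontal([frontal]))       # → [0]
```

A production system would estimate a full perspective pose (e.g. PnP) and also check pitch and roll, but the selection logic is the same: discard frames whose estimated angle leaves the preset range.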

S11: operate on the feature vector with a training matrix to generate a model, the training matrix consisting of the covariance matrices obtained by feeding the feature vectors extracted from sample face images into a joint Bayesian model for training.

In the method of this embodiment, model training uses a joint Bayesian model. Its basic idea is to decompose a face into two components: component a represents the variation of the same facial part across different people, and component b represents the variation of the same part of the same person across different environments. Variables a and b follow the Gaussian distributions N(0, S_a) and N(0, S_b), respectively.

From the covariance matrices S_a and S_b, the log-likelihood ratio R(x_1, x_2) of two faces can be obtained. Let H_s denote that the two faces belong to the same person and H_d that they belong to different people; the similarity of the two faces is judged by the log-likelihood ratio, described by the formula

R(x_1, x_2) = x_1^T A x_1 + x_2^T A x_2 - 2 x_1^T G x_2

where A = (S_a + S_b)^{-1} - (F + G), F = S_b^{-1}, G = -(2S_a + S_b)^{-1} S_a S_b^{-1}, and the iteration threshold can be set to 10^{-6}.
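To make the formula concrete, a small sketch that builds A and G from given covariance matrices and scores two pairs (the toy diagonal covariances stand in for matrices learned by the iterative training; the names are illustrative):

```python
import numpy as np

def joint_bayes_matrices(Sa, Sb):
    """A and G from the covariances, following the formulas above."""
    F = np.linalg.inv(Sb)
    G = -np.linalg.inv(2 * Sa + Sb) @ Sa @ np.linalg.inv(Sb)
    A = np.linalg.inv(Sa + Sb) - (F + G)
    return A, G

def log_likelihood_ratio(x1, x2, A, G):
    """R(x1, x2): larger values favor the same-person hypothesis Hs."""
    return x1 @ A @ x1 + x2 @ A @ x2 - 2 * (x1 @ G @ x2)

d = 8
Sa = 2.0 * np.eye(d)   # between-person variation (component a)
Sb = 0.5 * np.eye(d)   # within-person variation (component b)
A, G = joint_bayes_matrices(Sa, Sb)

person = np.ones(d)                       # a feature vector
other = np.array([1.0, -1.0] * (d // 2))  # orthogonal to `person`
same_score = log_likelihood_ratio(person, person, A, G)
diff_score = log_likelihood_ratio(person, other, A, G)
print(same_score > diff_score)  # → True
```

Thresholding R at a value th (as in the evaluation step below) then turns the score into a same/different decision.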

During training, thousands of same-person and different-person data pairs are generated at random from the sample face images according to their labels (i.e., IDs; face images of different people carry different IDs), and the covariance matrices are computed with an iterative algorithm. The iterative algorithm can be the expectation-maximization algorithm, or another type of iterative algorithm.

S12: compare the model against the face images in the sample database to identify the user.

In this embodiment, comparing the model against the face images in the sample database to identify the user is realized through an evaluation function, specifically:

Construct:

[equation image not reproduced in the source], where th denotes the threshold;

[equation image not reproduced in the source], where X_i denotes the i-th person and N(X_i) the i-th sample face image;

the evaluation function is expressed as:

[equation image not reproduced in the source]

If V(X_i) = 1, the input is identified as the i-th person.

The face recognition method of this embodiment needs few samples to train the model, iterates quickly, makes adding samples later convenient without retraining, and has low development cost; moreover, it is light in computation, fast, and accurate, which makes it especially suitable for embedded environments.

The face recognition method provided by the present invention has been described in detail above. Specific examples have been used herein to explain its principles and implementations; the description of the above embodiments is only meant to help in understanding the method and its core idea. It should be noted that those of ordinary skill in the art can make several improvements and modifications to the present invention without departing from its principles, and such improvements and modifications also fall within the protection scope of the claims of the present invention.

Claims (8)

1. A face recognition method, characterized by comprising the steps:

S10: extracting features at key points of the captured face image of the user to be recognized, and forming a feature vector from the per-key-point features;

S11: operating on the feature vector with a training matrix to generate a model, the training matrix consisting of the covariance matrices obtained by feeding the feature vectors extracted from sample face images into a joint Bayesian model for training;

S12: comparing the model against the face images in the sample database to identify the user;

further comprising: training the model with a joint Bayesian model, the basic idea being to decompose a face into two components, where component a represents the variation of the same facial part across different people and component b represents the variation of the same part of the same person across different environments, with variables a and b following the Gaussian distributions N(0, S_a) and N(0, S_b), respectively; obtaining the log-likelihood ratio R(x_1, x_2) of two faces by computing the covariance matrices S_a and S_b; letting H_s denote that the two faces belong to the same person and H_d that they belong to different people, and judging the similarity of the two faces by the log-likelihood ratio, described as:

R(x_1, x_2) = x_1^T A x_1 + x_2^T A x_2 - 2 x_1^T G x_2

where A = (S_a + S_b)^{-1} - (F + G), F = S_b^{-1}, G = -(2S_a + S_b)^{-1} S_a S_b^{-1};

during training, randomly generating thousands of same-person and different-person data pairs from the sample face images according to their labels, and computing the covariance matrices with an iterative algorithm.

2. The face recognition method according to claim 1, wherein step S10 specifically comprises:

scaling the face image to multiple scales, extracting and concatenating, for each key point, the features from the image at every scale, and then concatenating the features of all key points to form the feature vector.

3. The face recognition method according to claim 2, wherein step S10 further comprises: compressing the dimensionality of the feature vector.

4. The face recognition method according to any one of claims 1-3, wherein extracting features at key points of the face image specifically comprises:

selecting multiple key points in the face image and extracting the local binary pattern feature at each key point.

5. The face recognition method according to claim 4, wherein the local binary pattern feature extracted at a key point is described as:

LBP_{P,R} = \sum_{p=0}^{P-1} s(g_p - g_c) \, 2^p

where g_c denotes the brightness of the center point, g_p the brightness of a neighborhood point, P the number of neighborhood points, and R the neighborhood radius, with the function defined as:

s(x) = \begin{cases} 1, & x \ge 0 \\ 0, & x < 0 \end{cases}

6. The face recognition method according to claim 3, wherein in the dimensionality compression of the feature vector, the processing chip is directed, when performing matrix multiplications, to preferentially access contiguous memory regions and to compute in parallel.

7. The face recognition method according to claim 1, wherein capturing the face image of the user to be recognized comprises:

computing the projection matrix of the captured face image from the average face model, computing the face angle from the projection matrix, and selecting, from the captured face images, those whose face angle lies within a preset range as the input face images.

8. The face recognition method according to claim 1, wherein comparing the model against the face images in the sample database to identify the user is realized through an evaluation function, specifically:

constructing: [equation image not reproduced in the source], where th denotes the threshold; [equation image not reproduced in the source], where X_i denotes the i-th person and N(X_i) the i-th sample face image; the evaluation function being expressed as: [equation image not reproduced in the source].
CN201611177440.XA · 2016-12-19 · 2016-12-19 · Face recognition method · Expired - Fee Related · CN106709442B (en)

Priority Applications (1)

Application Number · Priority Date · Filing Date · Title
CN201611177440.XA · 2016-12-19 · 2016-12-19 · Face recognition method (CN106709442B (en))

Applications Claiming Priority (1)

Application Number · Priority Date · Filing Date · Title
CN201611177440.XA · 2016-12-19 · 2016-12-19 · Face recognition method (CN106709442B (en))

Publications (2)

Publication Number · Publication Date
CN106709442A (en) · 2017-05-24
CN106709442B (en) · 2020-07-24

Family

ID=58939234

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201611177440.XA (Expired - Fee Related) | Face recognition method | 2016-12-19 | 2016-12-19

Country Status (1)

Country | Link
CN | CN106709442B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN109377429A (en)* | 2018-11-13 | 2019-02-22 | 广东同心教育科技有限公司 | A face recognition quality education wisdom evaluation system
CN110458134B (en)* | 2019-08-17 | 2020-06-16 | 南京昀趣互动游戏有限公司 | Face recognition method and device
CN114299584B (en)* | 2021-12-30 | 2024-08-23 | 郑州工程技术学院 | Method, device, equipment and storage medium for face recognition under illumination based on iterative training model

Citations (5)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN102004924A (en)* | 2010-11-03 | 2011-04-06 | 无锡中星微电子有限公司 | Human head detection system and method
CN104573652A (en)* | 2015-01-04 | 2015-04-29 | 华为技术有限公司 | Method, device and terminal for determining identity identification of human face in human face image
CN105138968A (en)* | 2015-08-05 | 2015-12-09 | 北京天诚盛业科技有限公司 | Face authentication method and device
CN105468760A (en)* | 2015-12-01 | 2016-04-06 | 北京奇虎科技有限公司 | Method and apparatus for labeling face images
CN106228142A (en)* | 2016-07-29 | 2016-12-14 | 西安电子科技大学 | Face verification method based on convolutional neural networks and Bayesian decision


Also Published As

Publication number | Publication date
CN106709442A (en) | 2017-05-24

Similar Documents

Publication | Title
William et al. | Face recognition using facenet (survey, performance test, and comparison)
CN106355138A (en) | Face recognition method based on deep learning and key features extraction
Angadi et al. | Face recognition through symbolic modeling of face graphs and texture
CN107292299B (en) | Side face recognition methods based on kernel specification correlation analysis
Jalilian et al. | Enhanced segmentation-CNN based finger-vein recognition by joint training with automatically generated and manual labels
Ammar et al. | Evaluation of histograms local features and dimensionality reduction for 3D face verification
CN106709442B (en) | Face recognition method
Lin et al. | Low-complexity face recognition using contour-based binary descriptor
Shankar et al. | Frames extracted from video streaming to recognition of face: LBPH, FF and CNN
Ramalingam et al. | Robust face recognition using enhanced local binary pattern
Aggarwal et al. | Face recognition system using image enhancement with PCA and LDA
Adeyanju et al. | Development of an american sign language recognition system using canny edge and histogram of oriented gradient
Gupta et al. | Real-Time Gender Recognition for Juvenile and Adult Faces
Teo et al. | 2.5D Face Recognition System using EfficientNet with Various Optimizers
Su et al. | Order-preserving wasserstein discriminant analysis
Jebarani et al. | Robust face recognition and classification system based on SIFT and DCP techniques in image processing
Wang et al. | Feature extraction method of face image texture spectrum based on a deep learning algorithm
Zhang et al. | A multi-view camera-based anti-fraud system and its applications
CN115810213A | Face verification and identification method for preventing face shielding based on improved Facenet model
Otiniano-Rodríguez et al. | Finger spelling recognition using kernel descriptors and depth images
CN114998966A | Facial expression recognition method based on feature fusion
Intan et al. | Facial recognition using multi edge detection and distance measure
Loo et al. | The influence of ethnicity in facial gender estimation
Harshitha et al. | Hand sign classification techniques using supervised and unsupervised learning algorithms
Hattab et al. | A robust face recognition method based on altp and sift

Legal Events

PB01: Publication

SE01: Entry into force of request for substantive examination

GR01: Patent grant

TR01: Transfer of patent right
Effective date of registration: 2021-04-01
Address after: 518000 room 1601, building 2, Wanke Yuncheng phase 6, Tongfa South Road, Xili community, Xili street, Nanshan District, Shenzhen City, Guangdong Province
Patentee after: SHENZHEN LD ROBOT Co.,Ltd.
Address before: 518055 18th floor, building B1, Nanshan wisdom Park, 1001 Xueyuan Avenue, Nanshan District, Shenzhen City, Guangdong Province
Patentee before: INMOTION TECHNOLOGIES Co.,Ltd.

CP03: Change of name, title or address
Address after: 518000 room 1601, building 2, Vanke Cloud City phase 6, Tongfa South Road, Xili community, Xili street, Nanshan District, Shenzhen City, Guangdong Province (16th floor, block a, building 6, Shenzhen International Innovation Valley)
Patentee after: Shenzhen Ledong robot Co.,Ltd.
Address before: 518000 room 1601, building 2, Wanke Yuncheng phase 6, Tongfa South Road, Xili community, Xili street, Nanshan District, Shenzhen City, Guangdong Province
Patentee before: SHENZHEN LD ROBOT Co.,Ltd.

CF01: Termination of patent right due to non-payment of annual fee
Granted publication date: 2020-07-24

