CN106874877A - An unconstrained face verification method combining local and global features - Google Patents

Info

Publication number
CN106874877A
CN106874877A · CN201710090721.XA
Authority
CN
China
Prior art keywords
face
feature
features
picture
extract
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201710090721.XA
Other languages
Chinese (zh)
Inventor
胡彬
文万志
曲平
李牧
程显毅
杨赛
李跃华
陈晓勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nantong University
Nantong Research Institute for Advanced Communication Technologies Co Ltd
Original Assignee
Nantong University
Nantong Research Institute for Advanced Communication Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nantong University and Nantong Research Institute for Advanced Communication Technologies Co Ltd
Priority to CN201710090721.XA
Publication of CN106874877A
Legal status: Withdrawn


Abstract

The present invention provides an unconstrained face verification method combining local and global features. First, a face sample database is assembled in which each person has multiple face photographs taken under different poses, in different environments, and at different times, and 68 landmark points are extracted from each face. From these landmark points, five kinds of features, local and global, are extracted and mapped into a kernel space. On the training set, a model is trained for each of the five features with the cascaded Bayesian method, yielding five groups of models. In the verification stage, the facial features of the two input images are extracted, the similarity of each of the five feature pairs is computed with the trained models, and the mean of the five similarities serves as the final similarity, from which it is decided whether the two faces belong to the same person. By jointly considering the local and global features of the face, the invention solves the problem of face verification in outdoor unconstrained environments.

Description

An Unconstrained Face Verification Method Combining Local and Global Features

Technical Field

The present invention relates to the field of computer vision, and in particular to an unconstrained face verification method combining local and global features.

Background Art

Face recognition has two branches: face verification and face identification. Face verification decides whether two face images show the same person, while face identification finds, in a face database, the identity corresponding to a given face. Because face images captured outdoors in unconstrained settings vary in illumination, pose, age, clothing, and so on, face verification in such environments is very difficult.

In recent years, many methods have been proposed to improve face verification in unconstrained environments. They fall roughly into two categories: feature-based methods and metric-learning-based methods. Feature-based methods aim to extract robust, discriminative features, so that the features of different people differ as much as possible; classic face descriptors include SIFT, LBP, PEM, and Fisher faces. Metric-learning-based methods learn a distance metric from labelled samples such that, under the learned metric, faces of the same person are closer together and faces of different people are farther apart; classic metric-learning algorithms include LDML, CSML, PCCA, and PMML.

Several face verification methods have already been published. For example, the Chinese invention patents "Face verification method based on intra-class and inter-class distance" (201310589074.9) and "A cross-age face verification method based on feature learning" (201510270145.8) both extract global features from the whole face and are therefore easily disturbed by local attributes such as hats, glasses, or expressions of the person being verified. The Chinese invention patent "A face verification method and device based on multi-pose recognition" (201410795404.4) requires at least two face photographs; when the sample database contains only one photograph of a person, the method fails. The Chinese invention patent "An unrestricted-environment face verification method based on a block deep neural network" (201310664180.9) divides the face into multiple regions and extracts features per block, but the regions are partitioned arbitrarily, without regard to the layout of the facial features.

Summary of the Invention

The technical problem to be solved by the present invention is to provide an unconstrained face verification method combining local and global features. The method jointly considers the local features of the face (the facial-contour, eye, mouth, and nose features) together with its global features; a cascaded Bayesian model scores the similarity of each pair of corresponding features of the two faces, and the mean of the similarities is taken as the final score. This effectively mitigates the poor recognition caused by external factors such as partial clothing or accessories.

To solve the above technical problem, an embodiment of the present invention provides an unconstrained face verification method combining local and global features, comprising the following steps:

Step 1. Assemble the face training sample set. The set contains face images of 10,000 people, each with at least 15 face photographs taken under different poses, in different environments, and at different times;

Step 2. Detect the face region in each sample image and extract its 68 landmark points, then perform face alignment and normalization;

Step 3. From the 68 landmark points obtained in step 2, extract the facial-contour feature, the eye feature, the mouth feature, the nose feature, and the global feature of the face;

Step 4. Project each of the five features obtained in step 3 into an easily separable nonlinear space using the RBF kernel function;

Step 5. Train on each of the five features from step 4 with the cascaded Bayesian algorithm, obtaining five groups of covariance matrices A and G;

Step 6. In the face verification stage, for the two input images, obtain the five features of each face photograph by the method of step 3 and project them into the nonlinear space, giving the five features of the two people, denoted x_i^(1) and x_i^(2) for i = 1, …, 5;

Step 7. Using the matrices A and G obtained in step 5, compute the similarity of each of the five feature pairs from step 6; for the i-th pair (x1, x2) the similarity is r_i(x1, x2) = x1^T A_i x1 + x2^T A_i x2 − 2 x1^T G_i x2;

Step 8. Compute the mean of the five similarities r_1, …, r_5 from step 7 to obtain the final similarity value, and compare it with a threshold to decide whether the two people are the same person.

The specific steps for extracting the 68 landmark points of the face in step 2 are:

Step 2-1. For each input image, detect the face with an AdaBoost-based face detection algorithm. If no face is detected, return and continue with the next image; if a face is detected, go to step 2-2;

Step 2-2. Feed the face-region image obtained in step 2-1 into the facial-landmark detection module to obtain the 68 landmark points of the face;

Step 2-3. Align the face according to the landmark points obtained in step 2-2;

Step 2-4. Normalize the face to remove the influence of illumination.

The specific steps for extracting the face features in step 3 are as follows:

Step 3-1. From the 68 landmark points obtained in step 2, choose one point as the base point P and set up polar coordinates with P as the origin; at a spacing of 10°, the polar plane is divided into 36 sectors. For each of the remaining 67 landmark points, compute the angle between the positive horizontal direction and the line joining the point to the base point, yielding an angle histogram for base point P. Following this procedure, select six points on the eyes, nose, and mouth as base points to obtain six histograms, which together form the facial-contour feature;

Step 3-2. Using the 68 landmark points obtained in step 2, locate the eye region, compute its dense LBP features, and apply a WPCA transform to remove redundancy and extract the principal components; the result is the eye feature;

Step 3-3. Using the 68 landmark points obtained in step 2, locate the mouth region, compute its dense LBP features, and apply a WPCA transform to remove redundancy and extract the principal components; the result is the mouth feature;

Step 3-4. Using the 68 landmark points obtained in step 2, locate the nose region, compute its dense LBP features, and apply a WPCA transform to remove redundancy and extract the principal components; the result is the nose feature;

Step 3-5. Using the 68 landmark points obtained in step 2, locate the overall face region, compute its dense LBP features, and apply a WPCA transform to remove redundancy and extract the principal components; the result is the global feature of the face.

The specific procedure of the cascaded Bayesian training in step 5 is:

Step 5-1. From the facial-contour features obtained in step 3, compute the covariance matrix S_μ of the features across different people and the covariance matrix S_ε of the features of the same person;

Step 5-2. From the S_μ and S_ε computed in step 5-1, compute A and G, where A = (S_μ + S_ε)^(−1) − (F + G), and F + G and G are, respectively, the diagonal and off-diagonal blocks of the inverse of the joint covariance matrix [[S_μ + S_ε, S_μ], [S_μ, S_μ + S_ε]];

Step 5-3. Repeat steps 5-1 and 5-2 for the eye, mouth, and nose features and the global face feature obtained in step 3, finally obtaining the five groups of metric matrices A_i and G_i, i = 1, …, 5.

Step 6 comprises the following specific steps:

Step 6-1. Read verification images 1 and 2 and run face detection on each. If two faces are not detected, report that no face was detected and end the matching; otherwise go to step 6-2;

Step 6-2. Detect the 68 landmark points of face 1 and face 2. If they are not detected, report a matching failure and end the matching; otherwise go to step 6-3;

Step 6-3. Extract the five features of face 1 and face 2 by the method described in step 3.

The beneficial effects of the above technical solution of the present invention are as follows: the invention jointly considers the local features of the face (the facial-contour, eye, mouth, and nose features) together with its global features; a cascaded Bayesian model scores the similarity of each pair of corresponding features of the two faces, and the mean of the similarities is taken as the final score, effectively mitigating the poor recognition caused by external factors such as partial clothing or accessories.

Brief Description of the Drawings

Figure 1 is a schematic diagram of the distribution of the 68 facial landmark points in the present invention;

Figure 2 is a block diagram of the working principle of the present invention;

Figure 3 is a flowchart of steps 6 to 8 of the present invention;

Figure 4 is a schematic diagram of the face dataset in an embodiment of the present invention;

Figure 5 is a schematic diagram of the distribution of the 68 landmark points of a face.

Detailed Description

To make the technical problem to be solved, the technical solution, and the advantages of the present invention clearer, a detailed description is given below with reference to the drawings and specific embodiments.

As shown in Figures 1 and 3, an unconstrained face verification method combining local and global features comprises the following steps:

Step 1. Assemble the face training sample set. The set contains face photographs of 10,000 people, each with at least 15 face photographs taken under different poses, in different environments, and at different times;

Step 2. Detect the faces in the sample photographs and extract the 68 landmark points of each face, as shown in Figure 1, then perform face alignment and normalization;

The specific steps for extracting the 68 landmark points of the face are:

Step 2-1. For each input image, detect the face with an AdaBoost-based face detection algorithm. If no face is detected, return and continue with the next image; if a face is detected, go to step 2-2;

Step 2-2. Feed the face-region image obtained in step 2-1 into the facial-landmark detection module to obtain the 68 landmark points of the face;

Step 2-3. Align the face according to the landmark points obtained in step 2-2;

Step 2-4. Normalize the face to remove the influence of illumination.

Step 3. From the 68 landmark points obtained in step 2, extract the facial-contour feature, the eye feature, the mouth feature, the nose feature, and the global feature of the face;

The specific steps for extracting the face features are as follows:

Step 3-1. From the 68 landmark points obtained in step 2, choose one point as the base point P and set up polar coordinates with P as the origin; at a spacing of 10°, the polar plane is divided into 36 sectors. For each of the remaining 67 landmark points, compute the angle between the positive horizontal direction and the line joining the point to the base point, yielding an angle histogram for base point P. Following this procedure, select six points on the eyes, nose, and mouth as base points to obtain six histograms, which together form the facial-contour feature;
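A minimal NumPy sketch of this angle-histogram step, assuming the 68 landmarks are given as rows of (x, y) coordinates; the six base-point indices in the test below are illustrative placeholders, since the patent does not fix which landmarks serve as base points:

```python
import numpy as np

def angle_histogram(landmarks, base_idx, n_bins=36):
    """36-bin (10 degrees each) histogram of the angles between the base
    point and each of the remaining 67 landmarks, measured against the
    positive horizontal direction."""
    base = landmarks[base_idx]
    others = np.delete(landmarks, base_idx, axis=0)
    d = others - base
    # atan2 gives angles in (-180, 180]; shift them into [0, 360).
    angles = np.degrees(np.arctan2(d[:, 1], d[:, 0])) % 360.0
    hist, _ = np.histogram(angles, bins=n_bins, range=(0.0, 360.0))
    return hist

def contour_feature(landmarks, base_indices):
    """Concatenate the histograms of the six eye/nose/mouth base points."""
    return np.concatenate([angle_histogram(landmarks, i) for i in base_indices])
```

With six base points, the contour feature has 6 × 36 = 216 bins, and each individual histogram sums to 67, one count per remaining landmark.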

Step 3-2. Using the 68 landmark points obtained in step 2, locate the eye region, compute its dense LBP features, and apply a WPCA transform to remove redundancy and extract the principal components; the result is the eye feature;

Step 3-3. Using the 68 landmark points obtained in step 2, locate the mouth region, compute its dense LBP features, and apply a WPCA transform to remove redundancy and extract the principal components; the result is the mouth feature;

Step 3-4. Using the 68 landmark points obtained in step 2, locate the nose region, compute its dense LBP features, and apply a WPCA transform to remove redundancy and extract the principal components; the result is the nose feature;

Step 3-5. Using the 68 landmark points obtained in step 2, locate the overall face region, compute its dense LBP features, and apply a WPCA transform to remove redundancy and extract the principal components; the result is the global feature of the face.
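Steps 3-2 to 3-5 share one recipe: dense LBP histograms over the located region, followed by WPCA. A compact sketch under the assumption that the region arrives as a 2-D grayscale array; the cell size and the number of retained components are illustrative choices, not values fixed by the patent:

```python
import numpy as np

def lbp_codes(gray):
    """8-neighbour LBP code for every interior pixel of a grayscale image."""
    g = gray.astype(np.int32)
    c = g[1:-1, 1:-1]
    nbrs = [g[:-2, :-2], g[:-2, 1:-1], g[:-2, 2:], g[1:-1, 2:],
            g[2:, 2:], g[2:, 1:-1], g[2:, :-2], g[1:-1, :-2]]
    codes = np.zeros(c.shape, dtype=np.int32)
    for bit, n in enumerate(nbrs):
        # Set the bit when the neighbour is at least as bright as the centre.
        codes |= (n >= c).astype(np.int32) << bit
    return codes

def dense_lbp(gray, cell=8):
    """Concatenate a 256-bin LBP histogram from each non-overlapping cell."""
    codes = lbp_codes(gray)
    h, w = codes.shape
    hists = [np.bincount(codes[i:i + cell, j:j + cell].ravel(), minlength=256)
             for i in range(0, h - cell + 1, cell)
             for j in range(0, w - cell + 1, cell)]
    return np.concatenate(hists).astype(float)

def wpca(X, k):
    """Whitened PCA: keep the top-k principal directions and divide each by
    its singular value, equalising the energy of the retained components."""
    mu = X.mean(axis=0)
    _, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k] / s[:k, None]
```

Here wpca returns the sample mean and a k-by-d whitening matrix W; a region descriptor x is then reduced to W @ (x - mu).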

Step 4. Project each of the five features obtained in step 3 into an easily separable nonlinear space using the RBF kernel function;
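The patent does not spell out how the RBF projection is realised; one common reading is an empirical kernel map, in which each feature vector is represented by its RBF similarities to a fixed set of anchor (training) samples. A sketch under that assumption, with gamma as a free parameter:

```python
import numpy as np

def rbf_map(x, anchors, gamma=1.0):
    """Empirical RBF kernel map: k(x) = exp(-gamma * ||x - z||^2) for each
    anchor z, turning x into a vector of kernel similarities."""
    d2 = np.sum((anchors - x) ** 2, axis=1)
    return np.exp(-gamma * d2)
```

The mapped vector has one entry per anchor, bounded by 1, with equality exactly where x coincides with an anchor.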

Step 5. Train on each of the five features from step 4 with the cascaded Bayesian algorithm, obtaining five groups of covariance matrices A and G;

The specific procedure of the cascaded Bayesian training is:

Step 5-1. From the facial-contour features obtained in step 3-1, compute the covariance matrix S_μ of the features across different people and the covariance matrix S_ε of the features of the same person;

Step 5-2. From the S_μ and S_ε computed in step 5-1, compute A and G, where A = (S_μ + S_ε)^(−1) − (F + G), and F + G and G are, respectively, the diagonal and off-diagonal blocks of the inverse of the joint covariance matrix [[S_μ + S_ε, S_μ], [S_μ, S_μ + S_ε]];

Step 5-3. Repeat steps 5-1 and 5-2 for the eye, mouth, and nose features and the global face feature obtained in steps 3-2 to 3-5, finally obtaining the five groups of metric matrices A_i and G_i, i = 1, …, 5.
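The training in steps 5-1 to 5-3 matches the joint Bayesian formulation, in which A and G follow from the two covariance estimates; the closed forms below are taken from that formulation and are an assumption about what the patent intends. A sketch for one of the five feature types:

```python
import numpy as np

def bayesian_train(X, labels):
    """Estimate the between-person covariance S_mu (from per-person means)
    and the within-person covariance S_eps (from residuals), then derive
    the metric matrices A and G."""
    X = np.asarray(X, dtype=float)
    X = X - X.mean(axis=0)
    labels = np.asarray(labels)
    means, resid = [], []
    for lbl in np.unique(labels):
        Xi = X[labels == lbl]
        m = Xi.mean(axis=0)
        means.append(m)
        resid.append(Xi - m)
    M, R = np.array(means), np.vstack(resid)
    S_mu = M.T @ M / len(M)       # between-person covariance
    S_eps = R.T @ R / len(R)      # within-person covariance
    d = S_mu.shape[0]
    P = S_mu + S_eps
    # Invert the joint covariance [[P, S_mu], [S_mu, P]]; its diagonal
    # block is F + G and its off-diagonal block is G.
    inv = np.linalg.inv(np.block([[P, S_mu], [S_mu, P]]))
    A = np.linalg.inv(P) - inv[:d, :d]
    G = inv[:d, d:]
    return A, G
```

Both returned matrices are symmetric and d-by-d, one pair per feature type.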

Step 6. In the face verification stage, for the two input images, detect the faces and extract the face description features, then project them into the nonlinear space to obtain the five features of the two people, denoted x_i^(1) and x_i^(2) for i = 1, …, 5. The specific steps are as follows:

Step 6-1. Read verification images 1 and 2 and run face detection on each. If two faces are not detected, report that no face was detected and end the matching; otherwise go to step 6-2;

Step 6-2. Detect the 68 landmark points of face 1 and face 2. If they are not detected, report a matching failure and end the matching; otherwise go to step 6-3;

Step 6-3. Extract the five features of face 1 and face 2 by the method described in step 3.

Step 7. Using the matrices A and G obtained in step 5, compute the similarity of each of the five feature pairs from step 6; for the i-th pair (x1, x2) the similarity is r_i(x1, x2) = x1^T A_i x1 + x2^T A_i x2 − 2 x1^T G_i x2;

Step 8. Compute the mean of the five similarities r_1, …, r_5 from step 7 to obtain the final similarity value, and compare it with a threshold to decide whether the two people are the same person.

The present invention jointly considers the local features of the face (the facial-contour, eye, mouth, and nose features) together with its global features; a cascaded Bayesian model scores the similarity of each pair of corresponding features of the two faces, and the mean of the similarities is taken as the final score, effectively mitigating the poor recognition caused by external factors such as partial clothing or accessories.

Embodiment:

The model training stage comprises steps 1 to 5. The face dataset is shown in Figure 4: 10,000 people, each with at least 15 face images from different periods.

In step 2, the facial landmark points are detected; taking Figure 5 as an example, the 68 landmark points of the face are detected.

In step 3, the five groups of local and global features are extracted;

In step 4, the RBF kernel function maps the five groups of features into the kernel space;

In step 5, the cascaded Bayesian algorithm is trained to obtain the metric matrices A_i and G_i, i = 1, …, 5; this concludes the model training stage.

Using the trained model (the metric matrices A_i and G_i), steps 6 to 8 can decide whether two face images show the same person. The technique can be applied in many systems, for example:

(1) Sign-in and roll-call systems: the system needs to store only one face image per person. An input face image is compared and verified against the stored faces one by one; once the person is recognized, the roll-call sign-in is complete.

(2) Criminal face retrieval: the public security system stores one ID-card portrait per person. A criminal's face image is input and compared against the stored face images one by one, and the closest candidates are returned, ranked by similarity.

The above is a preferred embodiment of the present invention. It should be noted that those of ordinary skill in the art may make several improvements and refinements without departing from the principles of the invention, and such improvements and refinements shall also be regarded as falling within the protection scope of the invention.

Claims (5)

CN201710090721.XA · 2017-02-20 · 2017-02-20 · An unconstrained face verification method combining local and global features · Withdrawn · CN106874877A (en)

Priority Applications (1)

Application Number · Priority Date · Filing Date · Title
CN201710090721.XA · CN106874877A (en) · 2017-02-20 · 2017-02-20 · An unconstrained face verification method combining local and global features

Applications Claiming Priority (1)

Application Number · Priority Date · Filing Date · Title
CN201710090721.XA · CN106874877A (en) · 2017-02-20 · 2017-02-20 · An unconstrained face verification method combining local and global features

Publications (1)

Publication Number · Publication Date
CN106874877A (en) · 2017-06-20

Family

ID=59167314

Family Applications (1)

Application Number · Title · Priority Date · Filing Date
CN201710090721.XA (Withdrawn) · CN106874877A (en) · 2017-02-20 · 2017-02-20 · An unconstrained face verification method combining local and global features

Country Status (1)

Country · Link
CN (1) · CN106874877A (en)



Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication numberPriority datePublication dateAssigneeTitle
CN103049736A (en)*2011-10-172013-04-17天津市亚安科技股份有限公司Face identification method based on maximum stable extremum area
CN103440510A (en)*2013-09-022013-12-11大连理工大学 A method for locating feature points in facial images
CN105138968A (en)*2015-08-052015-12-09北京天诚盛业科技有限公司Face authentication method and device
CN105719248A (en)*2016-01-142016-06-29深圳市商汤科技有限公司Real-time human face deforming method and system
CN106228142A (en)*2016-07-292016-12-14西安电子科技大学Face verification method based on convolutional neural networks and Bayesian decision

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number · Priority date · Publication date · Assignee · Title
CN107644208A (en)* · 2017-09-21 · 2018-01-30 · 百度在线网络技术(北京)有限公司 · Face detection method and device
CN107944401A (en)* · 2017-11-29 · 2018-04-20 · 合肥寰景信息技术有限公司 · Embedded device with multi-face dynamic tracking and analysis
CN108229444A (en)* · 2018-02-09 · 2018-06-29 · 天津师范大学 · Pedestrian re-identification method based on fusion of global and local deep features
CN108229444B (en)* · 2018-02-09 · 2021-10-12 · 天津师范大学 · Pedestrian re-identification method based on fusion of global and local deep features
CN108563997A (en)* · 2018-03-16 · 2018-09-21 · 新智认知数据服务有限公司 · Method and device for establishing a face detection model and for face recognition
CN108563997B (en)* · 2018-03-16 · 2021-10-12 · 新智认知数据服务有限公司 · Method and device for establishing a face detection model and for face recognition
CN108764334A (en)* · 2018-05-28 · 2018-11-06 · 北京达佳互联信息技术有限公司 · Face-image attractiveness scoring method, device, computer equipment and storage medium
CN108829900A (en)* · 2018-07-31 · 2018-11-16 · 成都视观天下科技有限公司 · Face image retrieval method, device and terminal based on deep learning
CN108829900B (en)* · 2018-07-31 · 2020-11-10 · 成都视观天下科技有限公司 · Face image retrieval method, device and terminal based on deep learning
CN111960203A (en)* · 2020-08-13 · 2020-11-20 · 安徽迅立达电梯有限公司 · Intelligent induction system for opening and closing elevator doors
WO2023029702A1 (en)* · 2021-09-06 · 2023-03-09 · 京东科技信息技术有限公司 · Method and apparatus for verifying an image

Similar Documents

Publication · Publication Date · Title
CN106874877A (en) · An unconstrained face verification method combining local and global features
CN107609497B (en) · Real-time video face recognition method and system based on visual tracking technology
CN107194341B (en) · Maxout multi-convolutional-neural-network fusion face recognition method and system
Günther et al. · Unconstrained face detection and open-set face recognition challenge
US8655029B2 (en) · Hash-based face recognition system
CN109800643B (en) · Multi-angle identity recognition method for live human faces
CN112766159A (en) · Cross-database micro-expression recognition method based on multi-feature fusion
CN103577815B (en) · A face alignment method and system
CN105550657B (en) · Improved SIFT face feature extraction method based on key points
US11594074B2 (en) · Continuously evolving and interactive Disguised Face Identification (DFI) with facial key points using ScatterNet Hybrid Deep Learning (SHDL) network
CN102682309B (en) · A face registration method and device based on template learning
CN103218609B (en) · A pose-varied face recognition method and device based on hidden least-squares regression
CN109101865A (en) · A pedestrian re-identification method based on deep learning
WO2019033574A1 (en) · Electronic device, dynamic video face recognition method and system, and storage medium
CN112613480B (en) · A face recognition method, system, electronic device and storage medium
CN106355138A (en) · Face recognition method based on deep learning and key feature extraction
CN106295522A (en) · A two-stage anti-fraud detection method based on multi-orientation faces and environmental information
TW201137768A (en) · Face recognition apparatus and methods
CN109858362A (en) · A mobile-terminal face detection method based on an inverted residual structure and an angle-associated loss function
CN107292299B (en) · Profile face recognition method based on kernel canonical correlation analysis
CN110796101A (en) · Face recognition method and system for an embedded platform
Sudhakar et al. · Facial identification of twins based on a fusion score method
Ge et al. · Deep and discriminative feature learning for fingerprint classification
CN110443577A (en) · A campus attendance system based on face recognition
Nigam et al. · Review of facial recognition techniques

Legal Events

Date · Code · Title · Description
PB01 · Publication
SE01 · Entry into force of request for substantive examination
WW01 · Invention patent application withdrawn after publication (application publication date: 2017-06-20)

