CN105354554A - Color and singular value feature-based face in-vivo detection method - Google Patents

Color and singular value feature-based face in-vivo detection method

Info

Publication number
CN105354554A
CN105354554A (application CN201510770424.0A)
Authority
CN
China
Prior art keywords
image
color
singular value
face
training set
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201510770424.0A
Other languages
Chinese (zh)
Inventor
宋彬
赵梦洁
田方
王宇
秦浩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University
Priority to CN201510770424.0A (patent CN105354554A)
Publication of CN105354554A
Legal status: Pending


Abstract

The invention discloses a color and singular value feature-based face in-vivo (liveness) detection method, mainly aimed at solving the problems that existing face authenticity identification techniques are computationally complicated and have low identification rates. The method is realized through the following steps: 1) marking positive and negative samples of a face database and dividing the samples into a training set and a testing set; 2) segmenting the face images in the training set into blocks and extracting the color features and singular value features of the small blocks in batches; 3) normalizing the extracted feature vectors and feeding them into a support vector machine classifier for training to obtain a training model; and 4) extracting features from the testing-set data and predicting with the training model to obtain a classification result. The method improves classification efficiency, obtains higher classification accuracy, and can be used for face authenticity detection in social networks or in real life.

Description

Translated from Chinese
Face Liveness Detection Method Based on Color and Singular Value Features

Technical Field

The invention belongs to the technical field of image processing, and in particular relates to a method for detecting human face images, which can be used in fields such as identity authentication and public security.

Background Art

With the continuous advancement of biometric identification technology, face images are being applied ever more widely. In recent years, applications such as face-recognition unlocking, face-based attendance machines, and face-recognition access control have appeared. Applications with high security requirements, such as access control and secure unlocking, place higher demands on face anti-spoofing technology. As an effective identity authentication technology, face recognition requires more than face detection and ordinary recognition. Some lawbreakers use face masks, photos, or videos to imitate the biometric characteristics of real people and deceive face recognition systems. The following spoofing methods are common in everyday scenarios: (1) spoofing with printed photos of real people; (2) spoofing with face masks; (3) spoofing with videos played on a phone or tablet. Among these, photo spoofing is the most widespread disguise because of its low cost and simple operation. All of this places higher demands on traditional face recognition technology, giving rise to the concept of face liveness detection.

Face recognition systems operate on the default assumption that the subject is a real person, but with the continuous development of social networks, the system may misidentify subjects when biometric spoofing is present. Research on face liveness detection is therefore of great significance. Face liveness detection, also called face liveness forensics, exploits the feature differences between real face images and spoofed photos or videos: through feature extraction, feature processing, and classification, it identifies whether the judged object is a live real person. Existing liveness detection algorithms generally fall into the following types:

1. Interaction method: distinguishes a real person from a photo by detecting facial or head movements, e.g. capturing the subject blinking or shaking the head. This approach requires the subject's cooperation, and it is not very effective against video spoofing;

2. Optical flow method: detects spoofing by comparing the similarity of feature values between the face image and the background image. The algorithm is intuitive, but optical flow suits dynamic analysis of image sequences; static analysis requires assistance from other algorithms;

3. Texture statistics method: detects spoofing from the differences in texture detail between photos or videos and real people. It works well against both photo and video spoofing, but recognition degrades in complex scenes;

4. Three-dimensional depth detection: detects spoofing by monitoring changes in the three-dimensional depth curve of the face image. It works well for ordinary face recognition problems, but its algorithmic complexity and computational cost are relatively high.

Summary of the Invention

The purpose of the present invention is to address the above deficiencies of the prior art by proposing a face liveness detection method based on color and singular value features, so as to reduce computational complexity and hardware requirements while improving detection of live faces.

The technical idea of the present invention is: weighing the advantages and disadvantages of the above algorithms, a fusion approach is adopted that extracts multiple features of the face image and uses the color histogram information of the image blocks and the singular value features of the grayscale matrix as the basis for classification.

According to this idea, the technical solution of the present invention is: based on the feature differences between real face images and recaptured images, face image features are extracted block by block, and a classification decision is finally made and the result demonstrated. The implementation steps are as follows:

(1) Mark the live real-person data and the recaptured spoof data in the face database as positive and negative samples respectively, and divide the whole dataset into a training set and a testing set at a ratio of 3:1.

(2) Perform batch feature extraction on the training-set images:

2a) perform color space conversion on each sample image, converting the red-green-blue RGB image into a grayscale image and a hue-saturation-value HSV image;

2b) divide each color-converted image into a 3×3 grid of small image blocks;

2c) on each small image block, extract the mean and variance of the hue h, saturation s, and value v color components, as well as the 10 largest singular values;

2d) combine the features of every block of each image into a feature vector, obtaining a 144-dimensional feature vector; then normalize each feature vector and convert it into a standard format that the classifier can recognize.

(3) Feed the normalized training-set feature vectors into a support vector machine (SVM) classifier and optimize classifier performance through parameter tuning, i.e. obtain the best penalty coefficient c and kernel coefficient g by cross-validation, then train a data model on the feature vectors of the training-set images.

(4) Extract features from the testing-set data, feed the normalized testing-set feature vectors into the SVM classifier, and use the data model to predict the positive/negative classification of the testing-set images.

(5) Compare the sample label vector of the testing set from step (1) with the positive/negative predictions of step (4) to obtain the accuracy of the classification result.

(6) Judge whether the classification accuracy meets the 75% accuracy requirement of ordinary liveness detection: if so, end the classification; if not, return to step (2), extract more color and singular value features from each block, adjust the penalty coefficient c and kernel coefficient g of step (3), retrain the data model, and predict again.

Compared with the prior art, the present invention has the following advantages:

1. Simple and efficient

The invention selects simple hue, saturation, and value color features and singular value features, avoiding the complex feature extraction of previous methods; computational complexity is greatly reduced, and simulation experiments show that high classification accuracy is obtained;

2. Low equipment cost

Existing liveness detection algorithms require auxiliary cameras and optical-flow detection equipment for feature extraction, making the equipment expensive. The present invention only processes feature data throughout the extraction process and needs no additional auxiliary equipment, which greatly reduces equipment cost.

Brief Description of the Drawings

Fig. 1 is the overall flowchart of the implementation of the present invention;

Fig. 2 is the feature extraction sub-flowchart of the present invention;

Fig. 3 is the support vector machine classification sub-flowchart of the present invention;

Fig. 4 shows the ROC curves of the different kernel functions used in the simulations of the present invention;

Fig. 5 is a simulation rendering of the classification decision made by the present invention.

Detailed Description

The examples and effects of the present invention are described in further detail below in conjunction with the accompanying drawings.

Referring to Fig. 1, the implementation steps of the present invention are as follows.

Step 1: Data sample labeling.

The image database used in the present invention is the NUAA photo-spoofing database from Nanjing University of Aeronautics and Astronautics. The database is divided into two parts, live real-person images and recaptured spoof images. The present invention marks the live real-person data as positive samples and the recaptured photo data as negative samples; the whole dataset contains 5105 positive and 7509 negative sample images.

3362 positive and 5761 negative sample images are randomly selected as training data, about 70% of the total; the remaining 3491 images serve as testing data, comprising 1743 positive and 1748 negative sample images.

Step 2: Dataset feature extraction.

Existing feature extraction methods applicable to the training-set data mainly include texture feature extraction, color feature extraction, and singular value feature extraction; the present invention combines color features and singular value features.

Referring to Fig. 2, the feature extraction of this step proceeds as follows:

2a) input the training-set images and perform color space conversion on each sample image, converting the red-green-blue RGB image into a grayscale image and a hue-saturation-value HSV image;

2b) divide each color-converted image into a 3×3 grid of small image blocks;

2c) on each small image block, extract the mean and variance of the hue h, saturation s, and value v color components, as well as the 10 largest singular values;

2d) combine the features of every block of each image into a feature vector, obtaining a 144-dimensional feature vector, and normalize each feature vector.
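The four sub-steps above can be sketched as follows (a minimal illustration assuming NumPy; the function names and the synthetic random "image" are hypothetical, and a real pipeline would obtain the HSV and grayscale images from an actual photo, e.g. with OpenCV):

```python
import numpy as np

def block_features(hsv_block, gray_block, k=10):
    """Features of one image block (step 2c): mean and variance of the
    h, s, v color channels plus the k largest singular values of the
    block's grayscale matrix."""
    feats = []
    for ch in range(3):                                  # h, s, v channels
        chan = hsv_block[:, :, ch].astype(float)
        feats += [chan.mean(), chan.var()]
    sv = np.linalg.svd(gray_block.astype(float), compute_uv=False)
    feats += list(sv[:k])                                # svd() sorts descending
    return feats

def image_feature_vector(hsv_img, gray_img, grid=3):
    """Steps 2b)-2d): split the image into a grid x grid mosaic, concatenate
    per-block features (3 x 3 blocks x (6 color stats + 10 singular values)
    = 144 dimensions), then min-max normalize the whole vector."""
    h, w = gray_img.shape
    bh, bw = h // grid, w // grid
    vec = []
    for r in range(grid):
        for c in range(grid):
            sl = (slice(r * bh, (r + 1) * bh), slice(c * bw, (c + 1) * bw))
            vec += block_features(hsv_img[sl], gray_img[sl])
    v = np.asarray(vec)
    return (v - v.min()) / (v.max() - v.min())

# Synthetic stand-in for one color-converted sample image.
rng = np.random.default_rng(1)
hsv = rng.uniform(0.0, 1.0, size=(63, 63, 3))
gray = rng.uniform(0.0, 255.0, size=(63, 63))
fv = image_feature_vector(hsv, gray)
assert fv.shape == (144,)
```

The 144-dimensional count follows directly from the grid arithmetic: 9 blocks times (2 statistics for each of 3 channels plus 10 singular values) per block.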

Step 3: Train the data model with a support vector machine.

3a) Find the slope and intercept of the model to be trained:

The classification principle of the support vector machine is equivalent to a quadratic programming problem; a nonlinear classification task such as image classification amounts to solving the following quadratic program for the two required parameters ω and b:

min (1/2)‖ω‖² + c Σ_{i=1}^{n} ε_i

s.t. y_i(ωᵀx_i − b) ≥ 1 − ε_i

where c is the penalty coefficient to be optimized, ε_i is the i-th slack error with i ∈ [1:n], n is the total number of data points to be classified, ω is the slope of the model to be trained, and b is the intercept of the model;

3b) Find the kernel function of the support vector machine:

A nonlinear classification problem is linearly inseparable in a low-dimensional space; a kernel function can map the low-dimensional space into a high-dimensional one, where a linear separation can be carried out. The support vector machine classifier of the present invention uses the radial basis function (RBF) as the kernel K(x, y), with the formula:

K(x, y) = exp(−‖x − y‖₂² / (2g²))

where g is the kernel coefficient to be optimized, x and y are the coordinates of the data points to be classified, and ‖x − y‖ denotes the norm of x − y;
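A direct sketch of this kernel formula (assuming NumPy; note that common SVM libraries parameterize the RBF kernel as exp(−γ‖x − y‖²), so a tuned g here corresponds to γ = 1/(2g²) there — an equivalence worth checking before reusing tuned values):

```python
import numpy as np

def rbf_kernel(x, y, g):
    """Radial basis function kernel K(x, y) = exp(-||x - y||^2 / (2 g^2)),
    the form given in step 3b)."""
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return float(np.exp(-np.dot(d, d) / (2.0 * g * g)))

# Identical points give K = 1; distant points decay toward 0.
assert rbf_kernel([1.0, 2.0], [1.0, 2.0], g=0.5) == 1.0
assert 0.0 < rbf_kernel([0.0, 0.0], [3.0, 4.0], g=1.0) < 1.0
```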

3c) Train the data model with the RBF-based support vector machine:

Referring to Fig. 3, this step is implemented as follows:

3c1) input the normalized training-set feature vectors and convert the feature vector set into the standard data format required by the support vector machine classifier, i.e. the feature vector values follow the class label;

3c2) obtain the optimal penalty coefficient c and kernel coefficient g of the SVM classifier by cross-validation. In this example, cross-validation yields the best penalty coefficient c = 8.0 and kernel coefficient g = 0.0078125; the SVM classifier parameters are set to these two optimal values;

3c3) feed the training-set feature vectors, in the classifier's standard format, into the support vector machine classifier for training; the data model is obtained after training.
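The 5-fold cross-validation behind 3c2) can be sketched as index bookkeeping (assuming NumPy; `five_fold_splits` is a hypothetical helper, and the per-fold SVM training is omitted):

```python
import numpy as np

def five_fold_splits(n, seed=0):
    """Index splits for 5-fold cross-validation: the n samples are shuffled
    once, each fifth serves as the validation set in turn, and the remaining
    four fifths form the training set."""
    idx = np.random.default_rng(seed).permutation(n)
    folds = np.array_split(idx, 5)
    for k in range(5):
        val = folds[k]
        train = np.concatenate([folds[j] for j in range(5) if j != k])
        yield train, val

# Every sample index appears in exactly one validation fold.
n = 23
seen = np.concatenate([val for _, val in five_fold_splits(n)])
assert sorted(seen.tolist()) == list(range(n))
```

For each candidate (c, g) pair one would then train on `train`, score on `val`, and keep the pair with the best mean validation accuracy, e.g. with an off-the-shelf SVM implementation such as LIBSVM or scikit-learn's `SVC`.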

Step 4: Predict the classification results with the data model.

The data model obtained in step 3 is used to classify the testing-set feature vectors:

4a) feed the testing-set feature vectors into the support vector machine;

4b) classify the testing-set feature vectors with the data model obtained in step 3c3), obtaining the predicted class of each sample together with the probability of belonging to that class;

4c) compare the predicted classification results with the manually assigned sample labels of step 1 and compute the classification accuracy.

The effect of the present invention can be further illustrated by the following simulations.

1.仿真条件1. Simulation conditions

在仿真过程中,用到了负正类率FPR,真正类率TPR参数,其计算公式如下:In the simulation process, the parameters of negative positive class rate FPR and true class rate TPR are used, and the calculation formula is as follows:

FPR=FP/(FP+TN)FPR=FP/(FP+TN)

TPR=TP/(TP+FN)TPR=TP/(TP+FN)

其中真正类TP为将测试集数据中正样本预测为正的个数,假负类FN为将若为将测试集数据中正样本预测为负的个数,假正类FP为将若为将测试集数据中负样本预测为正的个数,真负类TN为将若为将测试集数据中负样本预测为负的个数;Among them, the true class TP is the number of positive samples predicted to be positive in the test set data, the false negative class FN is the number of predicted positive samples in the test set data to be negative, and the false positive class FP is the number of predicted positive samples in the test set data. The number of negative samples in the data is predicted to be positive, and the true negative class TN is the number of negative samples in the test set data that are predicted to be negative;
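These two definitions can be computed directly from label lists (a minimal sketch; the +1/−1 label convention and the toy vectors are illustrative, not the patent's data):

```python
def rates(y_true, y_pred):
    """FPR and TPR from ground-truth and predicted labels (+1 = real face,
    -1 = spoof), matching FPR = FP/(FP+TN) and TPR = TP/(TP+FN)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == -1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == -1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == -1 and p == -1)
    return fp / (fp + tn), tp / (tp + fn)

# 4 positives (3 caught, 1 missed), 4 negatives (1 false alarm).
y_true = [1, 1, 1, 1, -1, -1, -1, -1]
y_pred = [1, 1, 1, -1, -1, -1, -1, 1]
fpr, tpr = rates(y_true, y_pred)
assert (fpr, tpr) == (0.25, 0.75)
```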

2.仿真内容2. Simulation content

仿真1:利用径向基函数,多项式函数、sigmoid函数这三种常用的核函数,设计支撑向量机,分别对测试数据进行分类,结果如表1Simulation 1: Using radial basis function, polynomial function, and sigmoid function, three commonly used kernel functions, design a support vector machine and classify the test data respectively. The results are shown in Table 1

表1三种核函数分类结果对比Table 1 Comparison of classification results of three kernel functions

从表1中可以看出,径向基函数得到的分类效果最好,所需要的支撑向量数目也最少。It can be seen from Table 1 that the classification effect obtained by the radial basis function is the best, and the number of support vectors required is also the least.

Simulation 2: With the radial basis function as the kernel of the support vector machine, the receiver operating characteristic (ROC) curve of the false positive rate FPR against the true positive rate TPR as the decision threshold varies is plotted, as shown in Fig. 4, to evaluate the classification performance of the SVM classifier.

The performance of the SVM classifier is examined from two aspects according to Fig. 4:

1. The false positive rate FPR is sufficiently small and the true positive rate TPR is sufficiently large.

Fig. 4 shows the ROC curve of the radial basis function, with the FPR on the abscissa and the TPR on the ordinate. A good classification algorithm needs a sufficiently small FPR and a sufficiently large TPR; on the curve of Fig. 4, the closer the ROC curve lies to the upper-left corner, the better the classifier performs. As Fig. 4 shows, the ROC curve lies very close to the upper-left corner, so the classification performance is good;

2. The area under the ROC curve is sufficiently large.

The area under the ROC curve is denoted AUC, and its magnitude is used to judge the quality of a classifier. Fig. 4 shows that the AUC of the radial basis function reaches 0.9851; compared with the other classifiers, its AUC is larger and its classification performance better.
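Given sampled (FPR, TPR) points of a ROC curve, the AUC can be approximated with the trapezoid rule; a small sketch (the ROC points below are made up for illustration, not the patent's measured curve):

```python
def auc(points):
    """Area under a ROC curve given as (fpr, tpr) points, estimated by the
    trapezoid rule after sorting the points by FPR."""
    pts = sorted(points)
    return sum((x2 - x1) * (y1 + y2) / 2.0
               for (x1, y1), (x2, y2) in zip(pts, pts[1:]))

# A random classifier traces the diagonal and scores 0.5; a curve bending
# toward the upper-left corner scores close to 1.
roc = [(0.0, 0.0), (0.1, 0.8), (0.3, 0.95), (1.0, 1.0)]
assert auc([(0.0, 0.0), (1.0, 1.0)]) == 0.5
assert 0.5 < auc(roc) <= 1.0
```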

Simulation 3: The present invention uses the feature extraction algorithm and demonstrates the final classification results visually in a simulation experiment.

The demonstration proceeds as follows:

First, the face image to be detected is converted into the YCbCr color space of luma, blue-difference, and red-difference values, and the luma y, blue-difference cb, and red-difference cr of the image under test are extracted. According to the elliptical skin-color clustering model proposed by Dr. Anil K. Jain, a pixel satisfying the model's condition is judged as skin, otherwise as non-skin. The formula of the elliptical skin-color clustering model is:

(x − ec_x)² / a² + (y − ec_y)² / b² ≤ 1

x = cos θ · (cb − c_x) + sin θ · (cr − c_y)
y = −sin θ · (cb − c_x) + cos θ · (cr − c_y)

where x and y are the rotated chroma coordinates of a pixel of the face image. The parameters ec_y, ec_x, a, b, θ, c_y, c_x are constants obtained experimentally by Dr. Anil K. Jain: ec_x, ec_y are the center of the elliptical cluster, a and b are the major and minor axes of the ellipse model, θ is the rotation angle of the ellipse, and c_y, c_x are the red and blue component correction parameters;

Second, face probability estimation is performed on the region formed by the pixel values satisfying the skin-color model, yielding a face probability map; the map is then binarized, with face skin-color regions shown in white and non-skin regions in black;

Finally, the white rectangular region is judged to be the face region; a feature vector is extracted from this region with the feature extraction algorithm of the present invention and fed into the support vector machine classifier for prediction, judging whether the subject is a live real person. A live real person is marked with a solid black box; otherwise the face is counterfeit and marked with a dashed black box, as shown in Fig. 5.
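The per-pixel skin test can be sketched as follows. Caveat: the patent does not list its constant values, so the numbers below are the ones published by Hsu, Abdel-Mottaleb, and Jain (2002), used purely for illustration; that paper also applies a luma-dependent nonlinear transform to Cb and Cr, which this simplified sketch omits.

```python
import math

# Elliptical skin-color cluster constants from Hsu, Abdel-Mottaleb & Jain
# (2002) -- illustrative only; not taken from the patent itself.
CX, CY = 109.38, 152.02     # chroma correction parameters c_x, c_y
THETA = 2.53                # ellipse rotation angle, radians
ECX, ECY = 1.60, 2.41       # ellipse cluster center ec_x, ec_y
A, B = 25.39, 14.03         # major and minor axes

def is_skin(cb, cr):
    """Rotate the (cb, cr) chroma pair by THETA and test it against the
    ellipse (x - ec_x)^2 / a^2 + (y - ec_y)^2 / b^2 <= 1."""
    x = math.cos(THETA) * (cb - CX) + math.sin(THETA) * (cr - CY)
    y = -math.sin(THETA) * (cb - CX) + math.cos(THETA) * (cr - CY)
    return (x - ECX) ** 2 / A ** 2 + (y - ECY) ** 2 / B ** 2 <= 1.0

# A chroma pair near the cluster center falls inside the ellipse; an
# extreme blue chroma pair does not.
assert is_skin(110, 150)
assert not is_skin(255, 0)
```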

As can be seen from Fig. 5, the present invention can effectively distinguish real-person photos from recaptured photos.

Claims (5)

Translated from Chinese
1. A face liveness detection method based on color and singular value features, comprising:

(1) marking the live real-person data and the recaptured spoof data in the face database as positive and negative samples respectively, and dividing the whole dataset into a training set and a testing set at a ratio of 3:1;

(2) performing batch feature extraction on the training-set images:

2a) performing color space conversion on each sample image, converting the red-green-blue RGB image into a grayscale image and a hue-saturation-value HSV image;

2b) dividing each color-converted image into a 3×3 grid of small image blocks;

2c) extracting, on each small image block, the mean and variance of the hue h, saturation s, and value v color components, as well as the 10 largest singular values;

2d) combining the features of every block of each image into a feature vector to obtain a 144-dimensional feature vector, then normalizing each feature vector and converting it into the standard format required by the classifier, so that the classifier can recognize it;

(3) feeding the normalized training-set feature vectors into a support vector machine (SVM) classifier and optimizing classifier performance through parameter tuning, i.e. obtaining the best penalty coefficient c and kernel coefficient g by 5-fold cross-validation, then training a data model on the feature vectors of the training-set images;

(4) extracting features from the testing set, feeding the normalized testing-set feature vectors into the SVM classifier, and predicting the positive/negative classification of the testing-set images with the data model;

(5) comparing the sample label vector of the testing set from step (1) with the positive/negative predictions of step (4) to obtain the accuracy of the classification result;

(6) judging whether the classification accuracy meets the 75% accuracy requirement of ordinary liveness detection: if so, ending the classification; if not, returning to step (2), extracting more color and singular value features from each block, adjusting the penalty coefficient c and kernel coefficient g of step (3), and retraining the data model for prediction.

2. The face liveness detection method based on color and singular value feature extraction according to claim 1, wherein in step 2c) the mean u and variance s of the hue h, saturation s, and value v color components are extracted on each small image block according to the formulas:

u = (1/N) Σ_{i=1}^{N} h_i

s = (1/N) Σ_{i=1}^{N} (h_i − u)²

where N is the number of pixels in the image block, i is the pixel index with i ∈ [1:N], and h_i is the h component value of the i-th pixel.

3. The face liveness detection method based on color and singular value feature extraction according to claim 1, wherein extracting the 10 largest singular values in step 2c) is done by calling the linalg.svd() function to decompose the grayscale matrix of the image block into three matrices, the orthogonal matrices U and V and the diagonal matrix S; the values on the diagonal of S are all the singular values of the block. All singular values on the diagonal of S are extracted and sorted in descending order, and the first 10 after sorting are the largest singular values.

4. The face liveness detection method based on color and singular value feature extraction according to claim 1, wherein each feature vector is normalized in step 2d) as follows:

2d1) computing the maximum M = max(x_1, x_2, …, x_144) and minimum m = min(x_1, x_2, …, x_144) of each 144-dimensional feature vector X = (x_1, x_2, …, x_144);

2d2) computing the normalized value x_i' of each element of X according to the formula:

x_i' = (x_i − m) / (M − m), i = 1, 2, …, 144.

5. The face liveness detection method based on color and singular value feature extraction according to claim 1, wherein obtaining the best penalty coefficient c and kernel coefficient g by 5-fold cross-validation in step (3) proceeds as follows:

3a) the dataset is randomly divided into five subsets; each of the five subsets serves in turn as the validation set while the remaining four form the training set;

3b) each training set is trained to obtain a data model, which is then used to predict the corresponding validation set, yielding a classification accuracy;

3c) step 3b) is repeated for the remaining training sets to obtain 5 classification accuracies; the model parameter values c and g with the highest of the five accuracies are the best penalty coefficient c and kernel coefficient g.
CN201510770424.0A | 2015-11-12 | 2015-11-12 | Color and singular value feature-based face in-vivo detection method | Pending | CN105354554A (en)

Priority Applications (1)

Application Number | Publication | Priority Date | Filing Date | Title
CN201510770424.0A | CN105354554A (en) | 2015-11-12 | 2015-11-12 | Color and singular value feature-based face in-vivo detection method


Publications (1)

Publication Number | Publication Date
CN105354554A (en) | 2016-02-24

Family

Family ID: 55330522

Family Applications (1)

Application Number | Status | Publication | Priority Date | Filing Date | Title
CN201510770424.0A | Pending | CN105354554A (en) | 2015-11-12 | 2015-11-12 | Color and singular value feature-based face in-vivo detection method

Country Status (1)

Country | Link
CN (1) | CN105354554A (en)

Cited By (28)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN106228142A (en)* | 2016-07-29 | 2016-12-14 | 西安电子科技大学 | Face verification method based on convolutional neural networks and Bayesian decision
CN106845497A (en)* | 2017-01-12 | 2017-06-13 | 天津大学 | Maize in Earlier Stage image damage caused by a drought recognition methods based on multi-feature fusion
CN106971161A (en)* | 2017-03-27 | 2017-07-21 | 深圳大图科创技术开发有限公司 | Face In vivo detection system based on color and singular value features
CN107122709A (en)* | 2017-03-17 | 2017-09-01 | 上海云从企业发展有限公司 | Biopsy method and device
CN107167561A (en)* | 2017-06-20 | 2017-09-15 | 福建师范大学福清分校 | Intelligent formaldehyde examination system and method based on ZigBee technology
CN107463941A (en)* | 2017-06-30 | 2017-12-12 | 百度在线网络技术(北京)有限公司 | A kind of vehicle owner identification method and device
CN107609494A (en)* | 2017-08-31 | 2018-01-19 | 北京飞搜科技有限公司 | A kind of human face in-vivo detection method and system based on silent formula
CN107609364A (en)* | 2017-10-30 | 2018-01-19 | 泰康保险集团股份有限公司 | User identification confirmation method and apparatus
CN107679457A (en)* | 2017-09-06 | 2018-02-09 | 阿里巴巴集团控股有限公司 | User identity method of calibration and device
CN107798279A (en)* | 2016-09-07 | 2018-03-13 | 北京眼神科技有限公司 | Face living body detection method and device
CN107992842A (en)* | 2017-12-13 | 2018-05-04 | 深圳云天励飞技术有限公司 | Biopsy method, computer installation and computer-readable recording medium
CN108446705A (en)* | 2017-02-16 | 2018-08-24 | 华为技术有限公司 | The method and apparatus of image procossing
CN108764126A (en)* | 2018-05-25 | 2018-11-06 | 郑州目盼智能科技有限公司 | A kind of embedded living body faces tracking system
CN108875331A (en)* | 2017-08-01 | 2018-11-23 | 北京旷视科技有限公司 | Face unlocking method, device and system and storage medium
CN108875467A (en)* | 2017-06-05 | 2018-11-23 | 北京旷视科技有限公司 | The method, apparatus and computer storage medium of In vivo detection
CN110348322A (en)* | 2019-06-19 | 2019-10-18 | 西华师范大学 | Human face in-vivo detection method and equipment based on multi-feature fusion
CN110532993A (en)* | 2019-09-04 | 2019-12-03 | 深圳市捷顺科技实业股份有限公司 | A kind of face method for anti-counterfeit, device, electronic equipment and medium
CN110751069A (en)* | 2019-10-10 | 2020-02-04 | 武汉普利商用机器有限公司 | Face living body detection method and device
CN110969202A (en)* | 2019-11-28 | 2020-04-07 | 上海观安信息技术股份有限公司 | Portrait collection environment verification method and system based on color component and perceptual hash algorithm
CN111046899A (en)* | 2019-10-09 | 2020-04-21 | 京东数字科技控股有限公司 | Method, device and equipment for identifying authenticity of identity card and storage medium
CN111178112A (en)* | 2018-11-09 | 2020-05-19 | 株式会社理光 | Real face recognition device
CN111241873A (en)* | 2018-11-28 | 2020-06-05 | 马上消费金融股份有限公司 | Image reproduction detection method, training method of model thereof, payment method and payment device
CN111582045A (en)* | 2020-04-15 | 2020-08-25 | 深圳市爱深盈通信息技术有限公司 | Living body detection method and device and electronic equipment
CN111767923A (en)* | 2020-07-28 | 2020-10-13 | 腾讯科技(深圳)有限公司 | Image data detection method and device and computer readable storage medium
CN112651268A (en)* | 2019-10-11 | 2021-04-13 | 北京眼神智能科技有限公司 | Method and device for eliminating black and white photos in biopsy, and electronic equipment
CN112766162A (en)* | 2021-01-20 | 2021-05-07 | 北京市商汤科技开发有限公司 | Living body detection method, living body detection device, electronic apparatus, and computer-readable storage medium
CN114820211A (en)* | 2022-04-26 | 2022-07-29 | 中国平安人寿保险股份有限公司 | Claims data quality inspection method, device, computer equipment and storage medium
CN115249371A (en)* | 2021-04-28 | 2022-10-28 | 中国移动通信集团四川有限公司 | Training method and device of face recognition model and electronic equipment

Citations (2)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN102592145A (en)* | 2011-12-28 | 2012-07-18 | 浙江大学 | Human face detection method based on principal component analysis and support vector machine
CN103116763A (en)* | 2013-01-30 | 2013-05-22 | 宁波大学 | Vivo-face detection method based on HSV (hue, saturation, value) color space statistical characteristics


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
China Masters' Theses Full-text Database (《中国优秀硕士学位论文全文数据库》) *

Cited By (45)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN106228142A (en)* | 2016-07-29 | 2016-12-14 | 西安电子科技大学 | Face verification method based on convolutional neural networks and Bayesian decision
CN106228142B (en)* | 2016-07-29 | 2019-02-15 | 西安电子科技大学 | Face Verification Method Based on Convolutional Neural Network and Bayesian Decision Making
CN107798279B (en)* | 2016-09-07 | 2022-01-25 | 北京眼神科技有限公司 | Face living body detection method and device
CN107798279A (en)* | 2016-09-07 | 2018-03-13 | 北京眼神科技有限公司 | Face living body detection method and device
CN106845497A (en)* | 2017-01-12 | 2017-06-13 | 天津大学 | Maize in Earlier Stage image damage caused by a drought recognition methods based on multi-feature fusion
CN108446705B (en)* | 2017-02-16 | 2021-03-23 | 华为技术有限公司 | Method and device for image processing
CN108446705A (en)* | 2017-02-16 | 2018-08-24 | 华为技术有限公司 | The method and apparatus of image procossing
CN107122709A (en)* | 2017-03-17 | 2017-09-01 | 上海云从企业发展有限公司 | Biopsy method and device
CN106971161A (en)* | 2017-03-27 | 2017-07-21 | 深圳大图科创技术开发有限公司 | Face In vivo detection system based on color and singular value features
CN108875467A (en)* | 2017-06-05 | 2018-11-23 | 北京旷视科技有限公司 | The method, apparatus and computer storage medium of In vivo detection
CN108875467B (en)* | 2017-06-05 | 2020-12-25 | 北京旷视科技有限公司 | Living body detection method, living body detection device and computer storage medium
CN107167561A (en)* | 2017-06-20 | 2017-09-15 | 福建师范大学福清分校 | Intelligent formaldehyde examination system and method based on ZigBee technology
CN107463941A (en)* | 2017-06-30 | 2017-12-12 | 百度在线网络技术(北京)有限公司 | A kind of vehicle owner identification method and device
CN108875331A (en)* | 2017-08-01 | 2018-11-23 | 北京旷视科技有限公司 | Face unlocking method, device and system and storage medium
CN108875331B (en)* | 2017-08-01 | 2022-08-19 | 北京旷视科技有限公司 | Face unlocking method, device and system and storage medium
CN107609494A (en)* | 2017-08-31 | 2018-01-19 | 北京飞搜科技有限公司 | A kind of human face in-vivo detection method and system based on silent formula
CN107679457A (en)* | 2017-09-06 | 2018-02-09 | 阿里巴巴集团控股有限公司 | User identity method of calibration and device
CN107609364A (en)* | 2017-10-30 | 2018-01-19 | 泰康保险集团股份有限公司 | User identification confirmation method and apparatus
CN107992842A (en)* | 2017-12-13 | 2018-05-04 | 深圳云天励飞技术有限公司 | Biopsy method, computer installation and computer-readable recording medium
WO2019114580A1 (en)* | 2017-12-13 | 2019-06-20 | 深圳励飞科技有限公司 | Living body detection method, computer apparatus and computer-readable storage medium
CN107992842B (en)* | 2017-12-13 | 2020-08-11 | 深圳励飞科技有限公司 | Living body detection method, computer device, and computer-readable storage medium
CN108764126A (en)* | 2018-05-25 | 2018-11-06 | 郑州目盼智能科技有限公司 | A kind of embedded living body faces tracking system
CN108764126B (en)* | 2018-05-25 | 2021-09-07 | 郑州目盼智能科技有限公司 | Embedded living body face tracking system
CN111178112A (en)* | 2018-11-09 | 2020-05-19 | 株式会社理光 | Real face recognition device
CN111178112B (en)* | 2018-11-09 | 2023-06-16 | 株式会社理光 | Face recognition device
CN111241873A (en)* | 2018-11-28 | 2020-06-05 | 马上消费金融股份有限公司 | Image reproduction detection method, training method of model thereof, payment method and payment device
CN110348322A (en)* | 2019-06-19 | 2019-10-18 | 西华师范大学 | Human face in-vivo detection method and equipment based on multi-feature fusion
CN110532993A (en)* | 2019-09-04 | 2019-12-03 | 深圳市捷顺科技实业股份有限公司 | A kind of face method for anti-counterfeit, device, electronic equipment and medium
CN110532993B (en)* | 2019-09-04 | 2022-03-08 | 深圳市捷顺科技实业股份有限公司 | Face anti-counterfeiting method and device, electronic equipment and medium
CN111046899A (en)* | 2019-10-09 | 2020-04-21 | 京东数字科技控股有限公司 | Method, device and equipment for identifying authenticity of identity card and storage medium
CN111046899B (en)* | 2019-10-09 | 2023-12-08 | 京东科技控股股份有限公司 | Identification card authenticity identification method, device, equipment and storage medium
CN110751069A (en)* | 2019-10-10 | 2020-02-04 | 武汉普利商用机器有限公司 | Face living body detection method and device
CN112651268B (en)* | 2019-10-11 | 2024-05-28 | 北京眼神智能科技有限公司 | Method, device and electronic device for excluding black and white photos in liveness detection
CN112651268A (en)* | 2019-10-11 | 2021-04-13 | 北京眼神智能科技有限公司 | Method and device for eliminating black and white photos in biopsy, and electronic equipment
CN110969202B (en)* | 2019-11-28 | 2023-12-19 | 上海观安信息技术股份有限公司 | Portrait acquisition environment verification method and system based on color component and perceptual hash algorithm
CN110969202A (en)* | 2019-11-28 | 2020-04-07 | 上海观安信息技术股份有限公司 | Portrait collection environment verification method and system based on color component and perceptual hash algorithm
CN111582045B (en)* | 2020-04-15 | 2024-05-10 | 芯算一体(深圳)科技有限公司 | Living body detection method and device and electronic equipment
CN111582045A (en)* | 2020-04-15 | 2020-08-25 | 深圳市爱深盈通信息技术有限公司 | Living body detection method and device and electronic equipment
CN111767923A (en)* | 2020-07-28 | 2020-10-13 | 腾讯科技(深圳)有限公司 | Image data detection method and device and computer readable storage medium
CN111767923B (en)* | 2020-07-28 | 2024-02-20 | 腾讯科技(深圳)有限公司 | Image data detection method, device and computer readable storage medium
CN112766162A (en)* | 2021-01-20 | 2021-05-07 | 北京市商汤科技开发有限公司 | Living body detection method, living body detection device, electronic apparatus, and computer-readable storage medium
CN112766162B (en)* | 2021-01-20 | 2023-12-22 | 北京市商汤科技开发有限公司 | Living body detection method, living body detection device, electronic equipment and computer readable storage medium
CN115249371A (en)* | 2021-04-28 | 2022-10-28 | 中国移动通信集团四川有限公司 | Training method and device of face recognition model and electronic equipment
CN114820211A (en)* | 2022-04-26 | 2022-07-29 | 中国平安人寿保险股份有限公司 | Claims data quality inspection method, device, computer equipment and storage medium
CN114820211B (en)* | 2022-04-26 | 2024-06-14 | 中国平安人寿保险股份有限公司 | Method, device, computer equipment and storage medium for checking and verifying quality of claim data

Similar Documents

Publication | Title
CN105354554A (en) | Color and singular value feature-based face in-vivo detection method
Atoum et al. | Face anti-spoofing using patch and depth-based CNNs
CN110084135B (en) | Face recognition method, device, computer equipment and storage medium
Peng et al. | Face presentation attack detection using guided scale texture
Carvalho et al. | Illuminant-based transformed spaces for image forensics
CN112381775A (en) | Image tampering detection method, terminal device and storage medium
CN103886301A (en) | Human face living detection method
CN111275685A (en) | Method, device, equipment and medium for identifying copied image of identity document
CN110427972B (en) | Certificate video feature extraction method and device, computer equipment and storage medium
CN113743365B (en) | Fraud detection method and device in face recognition process
CN101968813A (en) | Method for detecting counterfeit webpage
CN111191521B (en) | Face living body detection method and device, computer equipment and storage medium
Zhou et al. | Digital image modification detection using color information and its histograms
CN105404859A (en) | Vehicle type recognition method based on pooling vehicle image original features
CN107103266A (en) | The training of two-dimension human face fraud detection grader and face fraud detection method
Luo et al. | Adaptive skin detection using face location and facial structure estimation
Gou et al. | mom: Mean of moments feature for person re-identification
CN111832405A (en) | A face recognition method based on HOG and deep residual network
Patel et al. | Compass local binary patterns for gender recognition of facial photographs and sketches
Jha et al. | A novel texture based approach for facial liveness detection and authentication using deep learning classifier
Aiping et al. | Face detection technology based on skin color segmentation and template matching
Sudhakar et al. | Facial identification of twins based on fusion score method
CN108073940A (en) | A kind of method of 3D object instance object detections in unstructured moving grids
Liu et al. | Presentation attack detection for face in mobile phones
Yang et al. | K-Means Based Fingerprint Segmentation with Sensor Interoperability

Legal Events

Code | Title
C06 | Publication
PB01 | Publication
C10 | Entry into substantive examination
SE01 | Entry into force of request for substantive examination
WD01 | Invention patent application deemed withdrawn after publication
WD01 | Invention patent application deemed withdrawn after publication

Application publication date: 2016-02-24

