CN102592145A - Human face detection method based on principal component analysis and support vector machine - Google Patents

Human face detection method based on principal component analysis and support vector machine

Info

Publication number
CN102592145A
CN102592145A, CN2011104461130A, CN201110446113A
Authority
CN
China
Prior art keywords
face
principal component
people
collection
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2011104461130A
Other languages
Chinese (zh)
Inventor
党路
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU
Priority to CN2011104461130A
Publication of CN102592145A
Legal status: Pending (current)


Abstract

The invention relates to a universal and fast human face detection method that can be used in a video monitoring system. The method is characterized in that principal component analysis is performed on an input image region and the resulting intermediate representation is classified by a support vector machine, thereby achieving fast and efficient human face detection.

Description

Translated from Chinese
Face Detection Method Based on Principal Component Analysis and Support Vector Machine

Technical field:

The present invention relates to a universal and fast face detection method that is applicable to video monitoring systems. Its main feature is that principal component analysis is performed on the input image region while a support vector machine classifies the intermediate results of that analysis, so that face detection can be carried out quickly and efficiently.

Background art:

Face detection technology has a wide range of applications and development prospects in access control systems, surveillance systems, digital cameras and social networks. First, an access control system for a security-sensitive area can use face detection to capture faces and then recognize them, so that the identity of a person entering can be verified. Second, in crowded public places such as banks, stadiums, airports and shopping malls, crowds can be monitored: face detection makes it possible not only to track changes in the flow of people but also to recognize the detected faces and to automatically follow suspicious targets, which is of great help in preventing terrorist attacks, theft and other incidents. Third, when a digital camera is used to photograph people, face detection can assist the camera in focusing, improving the user's shooting experience. Finally, in today's popular social networks, faces in photos shared by users can be detected and analyzed to generate friend recommendations, making it easier for users to expand their circle of friends in the virtual social system.

At present, the most mature face detection algorithm is the AdaBoost algorithm based on Haar-like features proposed by Viola et al. Its greatest advantage is that the training error converges exponentially toward zero and the detection error is bounded. However, because many weak classifiers must be trained for different training sets, the training process is extremely time-consuming.

Summary of the invention:

The present invention addresses the above problems in the prior art and provides a face detection method based on principal component analysis and a support vector machine.

To solve the above technical problems, the present invention adopts the following technical scheme: a face detection method based on principal component analysis and a support vector machine, characterized in that it is carried out in the following steps:

1) Construct a face classifier: analyze the vector space formed by a large number of different preprocessed face samples and find the set of orthogonal basis vectors that best represents the distribution of the data in that space; these vectors form the principal component vector set and correspond to the first few directions of largest variance. Face samples and non-face samples are projected onto this orthogonal basis, and a support vector machine is then used to find a hyperplane that separates the two classes, thereby constructing a face classifier;

2) Detection stage: for a face region to be tested, compute its projection onto the principal component vector set and use the constructed classifier to decide whether it is a face.

The face detection method of the present invention, based on principal component analysis and a support vector machine, detects faces accurately and efficiently.

Brief description of the drawings:

Figure 1 is a flowchart of the training phase of the present invention.

Figure 2 is a schematic diagram of the directions of the principal component vector set of the present invention.

Figure 3 is a schematic diagram of the optimal hyperplane of the present invention.

Figure 4 is a flowchart of the face detection (testing) phase of the present invention.

Detailed description of the embodiments:

With reference to the accompanying drawings, the face detection method of the present invention based on principal component analysis and a support vector machine comprises two stages:

1. Face detection based on principal component analysis: training phase

Before detecting faces, the face model must first be trained; the training set contains an equal number of face samples and non-face samples. To keep detection efficient while maintaining a low false-detection rate, the input face images are preprocessed so that only their most salient and easily distinguished features are retained.

The steps of the training phase are described in detail below with reference to the flowchart in Figure 1:

●Step 1: Preprocess the training set

All face and non-face images used for training are resized to a uniform size, converted to grayscale, and adaptively thresholded using the median gray value of each converted image as the threshold. For each image I with median gray value thresh, the converted image I′ is:

$$I'_i = \begin{cases} 1 & I_i \ge \mathrm{thresh} \\ 0 & \text{otherwise} \end{cases}$$

where $I'_i$ is the gray value of the i-th pixel of $I'$.
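A minimal Python sketch of this preprocessing step, assuming OpenCV and NumPy are available; the 24×24 target size and the function name `preprocess` are illustrative choices, since the patent does not fix a particular resolution:

```python
import cv2
import numpy as np

def preprocess(image_bgr, size=(24, 24)):
    """Resize, convert to grayscale, and binarize with the per-image median as threshold."""
    gray = cv2.cvtColor(cv2.resize(image_bgr, size), cv2.COLOR_BGR2GRAY)
    thresh = np.median(gray)
    # I'_i = 1 if I_i >= thresh, else 0
    return (gray >= thresh).astype(np.float32)
```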

●Step 2: Compute the principal component vector set

All m face images in the preprocessed training set are flattened into column vectors (of dimension n, where n is the number of pixels per image) and assembled into the face image matrix $X_{n \times m} = [I^{(1)}, I^{(2)}, \ldots, I^{(m)}]$. A singular value decomposition (SVD) is then applied to X:

$X = USV$

where the columns of $P_{k \times n} = [u_1, u_2, \ldots, u_k]$, taken as the first k columns of U, form the principal component vector set; they are orthogonal, and the variance of X along these directions is maximal, as shown in Figure 2. Each column of the product SV gives the projection coordinates of the corresponding column of X on the principal component vector set. A face image can therefore be expressed in the low-dimensional vector space spanned by $P_{k \times n} = [u_1, u_2, \ldots, u_k]$.
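A sketch of this step with NumPy, assuming each preprocessed image has already been flattened into a column of X; the function name and the convention of storing P as an n×k array (so that $P^T X$ yields k-dimensional coordinates) are illustrative, not taken from the patent:

```python
import numpy as np

def principal_component_set(X, k):
    """X: n x m matrix whose columns are flattened, binarized face images.
    Returns the first k left singular vectors of X as an n x k matrix P;
    they are orthogonal and capture the directions of largest variance."""
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    return U[:, :k]
```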

●Step 3: Compute the projection coordinates of the face set and the non-face set on the principal component vector set

The non-face set is assembled into a matrix Y in the same way as in Step 2. The projection coordinates of the face set and the non-face set on the principal component vector set are then computed as:

Face: $\mathrm{Proj}_{\mathrm{face}} = P^{T} X$

Non-face: $\mathrm{Proj}_{\mathrm{nonface}} = P^{T} Y$
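The corresponding projection step, continuing the NumPy sketch above (X and Y are the column-stacked face and non-face matrices and P is the array returned by `principal_component_set`; the names are again illustrative):

```python
import numpy as np

def project(P, samples):
    """Project column-stacked samples onto the principal component vector set.
    P: n x k, samples: n x m  ->  returns k x m projection coordinates."""
    return P.T @ samples

# proj_face    = project(P, X)   # Proj_face    = P^T X
# proj_nonface = project(P, Y)   # Proj_nonface = P^T Y
```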

●Step 4: Train the support vector machine model on the projections

In the vector space defined by the principal component vector set P, points belonging to faces are labeled 1 and non-face points are labeled -1. The goal is to find a hyperplane $f(x) = w^{T}x + b = 0$ such that points with $f(x) < 0$ are non-faces and the remaining points are faces. This is done by solving the following optimization problem:

$$\max \tilde{\gamma}, \quad \text{s.t. } y^{(i)}\left(w^{T}x^{(i)} + b\right) = \hat{\gamma}^{(i)} \ge \hat{\gamma}, \quad i = 1 \ldots 2n$$

This yields an optimal hyperplane, shown in Figure 3, which separates the two classes of data and lies farthest from both, where $\tilde{\gamma} = \hat{\gamma}/\lVert w \rVert$ is the geometric margin.
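A sketch of this training step using scikit-learn's linear SVM as a stand-in solver for the max-margin problem above; the patent states the optimization problem but does not name a solver, and the helper name and the value of C are assumptions:

```python
import numpy as np
from sklearn.svm import SVC

def train_face_svm(proj_face, proj_nonface):
    """proj_face, proj_nonface: k x m arrays of projection coordinates.
    Labels follow the patent: +1 for face points, -1 for non-face points."""
    samples = np.hstack([proj_face, proj_nonface]).T      # one row per sample
    labels = np.concatenate([np.ones(proj_face.shape[1]),
                             -np.ones(proj_nonface.shape[1])])
    clf = SVC(kernel="linear", C=1.0)   # linear kernel: hyperplane f(x) = w^T x + b
    clf.fit(samples, labels)
    return clf
```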

2. Face detection based on principal component analysis: detection stage

In this stage the test image is classified with the support vector machine model: if the predicted label is 1, the detected region is a face; otherwise it is not. The steps of the detection stage are described below with reference to the flowchart in Figure 4:

●Step 1: Preprocess the test image

As in the training phase, the image is first resized to the size used for training, converted to a grayscale image, and adaptively thresholded into a binary image.

●Step 2: Compute the projection coordinates of the test image on the principal component vector set

The result of Step 1 is flattened into a column vector x, and its projection coordinates are computed as:

$\mathrm{Proj}_{\mathrm{test}} = P^{T} x$

where P is the principal component vector set.

●Step 3: Compute the classification result

The projection coordinates from Step 2 are fed into the support vector machine model obtained in the training phase; if the predicted label is 1, the region is a face, otherwise it is not.
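A sketch of the complete detection stage, assuming the `preprocess` function, the matrix P, and the classifier `clf` from the earlier sketches are in scope:

```python
def detect_face(image_bgr, P, clf):
    """Return True if the trained SVM labels the region as a face (+1)."""
    binary = preprocess(image_bgr)        # Step 1: same preprocessing as in training
    x = binary.reshape(-1, 1)             # flatten to an n x 1 column vector
    proj_test = P.T @ x                   # Step 2: Proj_test = P^T x
    label = clf.predict(proj_test.T)[0]   # Step 3: classify the k-dimensional coordinates
    return label == 1
```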

It should be understood that the above embodiment merely illustrates the present invention and does not limit it; any invention that does not depart from the essential spirit of the present invention falls within its scope of protection.

Claims (3)

1. A face detection method based on principal component analysis and a support vector machine, characterized in that it is carried out in the following steps:
1) Construct a face classifier: analyze the vector space formed by a large number of different preprocessed face samples; find the set of orthogonal basis vectors that best represents the distribution of the data in that vector space, which forms the principal component vector set and corresponds to the first few directions of largest variance; project the face samples and non-face samples onto this orthogonal basis and use a support vector machine to find a hyperplane that separates the two classes, thereby constructing a face classifier;
2) Detection stage: for a face region to be tested, compute its projection onto the principal component vector set and use the constructed classifier to decide whether it is a face.
2. The face detection method based on principal component analysis and a support vector machine according to claim 1, characterized in that step 1) is carried out in the following sub-steps:
Step 1: Preprocess the training set
Resize all face and non-face images used for training to a uniform size, convert them to grayscale, and threshold each image using the median gray value of the converted image as the threshold; for each image I with median gray value thresh, the converted image I′ is:
$$I'_i = \begin{cases} 1 & I_i \ge \mathrm{thresh} \\ 0 & \text{otherwise} \end{cases}$$
where $I'_i$ is the gray value of the i-th pixel of $I'$;
Step 2: Compute the principal component vector set
Flatten all m face images of the preprocessed training set into n-dimensional column vectors and assemble them into the face image matrix $X_{n \times m} = [I^{(1)}, I^{(2)}, \ldots, I^{(m)}]$; apply a singular value decomposition to X:
$X = USV$
where n is the number of pixels per image and the columns of $P_{k \times n} = [u_1, u_2, \ldots, u_k]$, taken as the first k columns of U, form the principal component vector set; they are orthogonal, and the variance of X along these directions is maximal; each column of the product SV gives the projection coordinates of the corresponding column of X on the principal component vector set, so that a face image is finally expressed in the low-dimensional vector space spanned by $P_{k \times n} = [u_1, u_2, \ldots, u_k]$;
Step 3: Compute the projection coordinates of the face set and the non-face set on the principal component vector set
Assemble the non-face set into a matrix Y in the same way as in Step 2; the projection coordinates of the face set and the non-face set on the principal component vector set are computed as:
Face: $\mathrm{Proj}_{\mathrm{face}} = P^{T} X$
Non-face: $\mathrm{Proj}_{\mathrm{nonface}} = P^{T} Y$
Step 4: Train the support vector machine model on the projections
In the vector space defined by the principal component vector set P, label points belonging to faces as 1 and non-face points as -1; the goal is to find a hyperplane $f(x) = w^{T}x + b = 0$ such that points with $f(x) < 0$ are non-faces and the remaining points are faces; by solving the following optimization problem:
$$\max \tilde{\gamma}, \quad \text{s.t. } y^{(i)}\left(w^{T}x^{(i)} + b\right) = \hat{\gamma}^{(i)} \ge \hat{\gamma}, \quad i = 1 \ldots 2n$$
an optimal hyperplane is found that separates the two classes of data and lies farthest from both, where $\tilde{\gamma} = \hat{\gamma}/\lVert w \rVert$ is the geometric margin.
3. The face detection method based on principal component analysis and a support vector machine according to claim 1 or claim 2, characterized in that step 2) is carried out in the following sub-steps:
Step 1: Preprocess the test image
As in the preprocessing of the training phase, first resize the image to the size used for training, then convert it to a grayscale image and adaptively threshold it into a binary image;
Step 2: Compute the projection coordinates of the test image on the principal component vector set
Flatten the result of Step 1 into a column vector x; its projection coordinates are computed as:
$\mathrm{Proj}_{\mathrm{test}} = P^{T} x$
where P is the principal component vector set;
Step 3: Compute the classification result
Feed the projection coordinates from Step 2 into the support vector machine model obtained in the training phase; if the predicted label is 1, the region is a face, otherwise it is not.
CN2011104461130A (priority 2011-12-28, filed 2011-12-28): Human face detection method based on principal component analysis and support vector machine, Pending, published as CN102592145A (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN2011104461130A | 2011-12-28 | 2011-12-28 | Human face detection method based on principal component analysis and support vector machine

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN2011104461130A | 2011-12-28 | 2011-12-28 | Human face detection method based on principal component analysis and support vector machine

Publications (1)

Publication Number | Publication Date
CN102592145A (en) | 2012-07-18

Family

ID=46480755

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN2011104461130A, Pending, published as CN102592145A (en) | 2011-12-28 | 2011-12-28

Country Status (1)

Country | Link
CN (1) | CN102592145A (en)


Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
KR20100096739A (en)* | 2009-02-25 | 2010-09-02 | 오리엔탈종합전자(주) | Class discriminating feature vector-based support vector machine and face membership authentication based on it

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
He Guohui, Gan Junying: "Face Recognition Based on Kernel Principal Component Analysis and Support Vector Machine", Computer Engineering and Design (计算机工程与设计)*
Zhang Yankun et al.: "Face Recognition Method Based on Principal Component Analysis and Support Vector Machine", Journal of Shanghai Jiao Tong University (上海交通大学学报)*
Li Lanlan et al.: "Research on Face Recognition Based on Principal Component Analysis and Support Vector Machine Hyperparameter Tuning", Computer Knowledge and Technology (电脑知识与技术)*
Wang Hui: "Applied Research on Face Recognition Based on Kernel Principal Component Analysis Feature Extraction and Support Vector Machine", China Masters' Theses Full-text Database (中国优秀硕士学位论文全文数据库)*
Yuan Li et al.: "Ear Recognition Based on Kernel Principal Component Analysis and Support Vector Machine", Journal of University of Science and Technology Beijing (北京科技大学学报)*

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN105354554A (en)* | 2015-11-12 | 2016-02-24 | Xidian University (西安电子科技大学) | Color and singular value feature-based face in-vivo detection method

Similar Documents

Publication | Publication Date | Title
CN103839065B (en) | Extraction method for dynamic crowd gathering characteristics
Cao et al. | Linear SVM classification using boosting HOG features for vehicle detection in low-altitude airborne videos
CN103824070B (en) | A kind of rapid pedestrian detection method based on computer vision
CN109583342A (en) | Human face in-vivo detection method based on transfer learning
CN101477626B (en) | Method for detecting human head and shoulder in video of complicated scene
CN103605983B (en) | Remnant detection and tracking method
CN111881750A (en) | Crowd abnormity detection method based on generation of confrontation network
CN106339657B (en) | Crop straw burning monitoring method based on monitor video, device
CN107657225B (en) | Pedestrian detection method based on aggregated channel characteristics
CN102682303A (en) | Crowd exceptional event detection method based on LBP (Local Binary Pattern) weighted social force model
CN103871077B (en) | A kind of extraction method of key frame in road vehicles monitoring video
CN104933414A (en) | Living body face detection method based on WLD-TOP (Weber Local Descriptor-Three Orthogonal Planes)
CN103473571A (en) | Human detection method
CN102129568B (en) | Method for detecting image-based spam email by utilizing improved gauss hybrid model classifier
CN105426828A (en) | Face detection method, face detection device and face detection system
CN115797970B (en) | Dense pedestrian target detection method and system based on YOLOv5 model
CN114581663A (en) | Gate multi-target ticket evasion detection method and device, computer equipment and storage medium
CN119131364A (en) | A method for detecting small targets in drones based on unsupervised adversarial learning
CN108229411 (en) | Human body hand-held knife behavioral value system and method based on RGB color image
CN103971100A (en) | Video-based camouflage and peeping behavior detection method for automated teller machine
CN102156879B (en) | Human target matching method based on weighted terrestrial motion distance
Dong et al. | Nighttime pedestrian detection with near infrared using cascaded classifiers
Li et al. | Image quality classification algorithm based on InceptionV3 and SVM
CN101877135A (en) | A Moving Object Detection Method Based on Background Reconstruction
CN102592145A (en) | Human face detection method based on principal component analysis and support vector machine

Legal Events

Date | Code | Title | Description
C06 | Publication
PB01 | Publication
C10 | Entry into substantive examination
SE01 | Entry into force of request for substantive examination
C02 | Deemed withdrawal of patent application after publication (patent law 2001)
WD01 | Invention patent application deemed withdrawn after publication

Application publication date: 2012-07-18

