Technical Field

The present invention relates to the technical fields of computer vision and machine learning, and in particular to a method for generating an age feature model of Asian faces and to an age estimation method.

Background

The human face is an object that is very difficult for computers to recognize and analyze, and it has attracted extensive attention from researchers since the 1990s. Effective face-based age estimation has great application prospects in fields such as intelligent surveillance, video indexing, and demographic statistics. The mean absolute error (MAE) of age estimation is its key performance indicator.

Compared with problems such as face gender recognition or facial expression recognition, face age estimation is itself a complex and difficult problem: even the human eye finds it hard to estimate a person's age accurately from facial information alone, whereas gender or expression can be judged easily. Progress in this research area has therefore been relatively slow.

Existing face age estimation methods are designed mainly for European and American faces and perform poorly on Asian faces. Researchers in this field have carried out method design, feature extraction, and testing on foreign public datasets that consist of European and American face data; methods built on these datasets therefore work very well for estimating the age of European and American faces, but their accuracy drops markedly when applied to Asian faces. Moreover, European and American face contours are generally larger than Asian face contours, while Asian faces show more subtle local variation, i.e., richer local-region information, which existing methods cannot capture and describe well.

Summary of the Invention

The object of the present invention is to overcome the above drawbacks of current Asian face age estimation methods. Through an in-depth analysis of the characteristics of Asian faces, a method for generating an age feature model of Asian faces is proposed. By extracting a texture-difference feature map from the face, the method captures feature information of the important facial organ regions and the facial wrinkle regions; the features are further strengthened by image differencing, scale reduction, and block-wise maximum operations, finally yielding features that are sensitive to age estimation, from which an age feature model of Asian faces is generated. Based on this model, the present invention also provides an age estimation method for Asian faces that achieves accurate age estimation.
To achieve the above object, the present invention provides a method for generating an age feature model of Asian faces, the method comprising:

Step S1) extracting the texture-difference feature map of each face picture in the training set, thereby obtaining the original feature vector F(z_i) of each face picture;

Step S2) determining the reduced dimension of the original feature vector F(z_i), and reducing the dimension of the original feature vector F(z_i) of the i-th training picture by principal component analysis to obtain the reduced feature vector F_D(z_i);

Step S3) training on the reduced feature vectors F_D(z_i) with a support vector regression algorithm to generate the age feature model.
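The three training steps can be summarized in the following minimal Python sketch. The helper names extract_fmdt_vector and select_pca_dim are illustrative placeholders for steps S1) and S2) (both are detailed further below), and the SVR parameters are those of the embodiment described later; this is an outline under those assumptions, not a normative implementation.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVR

def train_age_model(images, ages):
    # Step S1: one texture-difference (FMDT) feature vector F(z_i) per training face
    F = np.vstack([extract_fmdt_vector(img) for img in images])
    # Step S2: choose the reduced dimension d' that minimises the training MAE, then PCA-reduce
    d_prime = select_pca_dim(F, ages)
    pca = PCA(n_components=d_prime).fit(F)
    FD = pca.transform(F)
    # Step S3: epsilon-SVR with an RBF kernel as the age predictor f(x)
    model = SVR(kernel="rbf", C=128, gamma=0.1).fit(FD, ages)
    return pca, model
```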
In the above technical solution, step S1) specifically comprises:

Step S1-1) preprocessing the face picture to obtain an original face image of 60×60 pixels;

Step S1-2) encoding the original face image obtained in step S1-1) with the local binary pattern operator to obtain a local-binary-pattern-coded image;

Step S1-3) applying the Gabor wavelet transform to the original face image and to the local-binary-pattern-coded image respectively, obtaining two clusters of Gabor response images;

Step S1-4) performing an image difference operation on the images of the two Gabor response clusters to obtain a difference image cluster {DOT(z_i)};

Step S1-5) performing a scale reduction operation on the images of the difference image cluster to obtain a new difference image cluster {DOT′(z_i)} containing half as many images;

Step S1-6) performing a max-pooling operation on the images of the new difference image cluster {DOT′(z_i)} to obtain the texture-difference feature map FM(z_i);

Step S1-7) converting the texture-difference feature map FM(z_i) obtained in step S1-6) from a two-dimensional map into a vector and normalizing that vector, obtaining the original feature vector F(z_i) of each training picture.
In the above technical solution, step S1-1) specifically comprises:

Step S1-1-1) converting the face picture to grayscale:

traversing the face image and processing every pixel to obtain its RGB value, extracting the red, green, and blue components by computation; taking into account the different sensitivities of the human eye to red, green, and blue, the preferred grayscale conversion formula is:

Grey = (9798R + 19235G + 3735B) / 32768

where Grey is the converted gray value, and R, G, and B are respectively the red, green, and blue components of each pixel in the image;

Step S1-1-2) resizing the grayscale image to 60×60 by bilinear interpolation;

Step S1-1-3) enhancing the resized image:

modifying the image histogram using its statistics, adjusting the pixel values of each gray level so that pixels of all gray levels occur with equal probability, thereby achieving image enhancement;

Step S1-1-4) extracting the pixel matrix from the enhanced image to obtain the original face image.
In the above technical solution, step S1-3) specifically comprises:

Step S1-3-1) applying the Gabor wavelet transform to the original face image obtained in step S1-1):

processing the original face image with Gabor filters of p different orientations and q different scales, obtaining a cluster of p×q Gabor response images {Gabor-I(z_i)}, where z_i (i = 1…L) denotes the i-th picture in the training set;

Step S1-3-2) applying the Gabor wavelet transform to the LBP-coded image obtained in step S1-2): processing the LBP-coded image with Gabor filters of the same p orientations and q scales, obtaining a cluster of p×q Gabor response images {Gabor-L(z_i)}.
In the above technical solution, step S1-4) is specifically implemented as follows:

performing an image difference operation on the images of the two Gabor response clusters to obtain the difference image cluster:

{DOT(z_i)} = |{Gabor-I(z_i)} − {Gabor-L(z_i)}|

where {DOT(z_i)} denotes the difference image cluster of the i-th training picture z_i; any image Gabor-I(z_i)_k in {Gabor-I(z_i)} is produced from the original face image by a Gabor filter of scale m×m (m an integer) and orientation D, and {Gabor-L(z_i)} contains a corresponding image Gabor-L(z_i)_k produced from the LBP-coded image by the Gabor filter of the same scale m×m and orientation D.

The corresponding difference image DOT(z_i)_k is then obtained as:

DOT(z_i)_k = |Gabor-I(z_i)_k Δ Gabor-L(z_i)_k|

where "Δ" denotes pixel-by-pixel subtraction of the two images and "| |" denotes taking the absolute value.

Thus, by taking the difference of every pair of corresponding images in the two clusters, the difference image cluster {DOT(z_i)} is obtained; {DOT(z_i)} contains p×q images.
In the above technical solution, step S1-5) specifically comprises:

Step S1-5-1) dividing the p×q images of the difference image cluster {DOT(z_i)} into bands, each band consisting of the 2q images produced by the filters of two adjacent scales, giving p/2 bands in total;
Step S1-5-2) in each band, dividing the pictures into q groups of two pictures each, the two pictures corresponding to the same orientation but different scales, denoted DOT(z_i)_k1 and DOT(z_i)_k2, and applying the following operation to them:

DOT′(z_i)_k = Max(DOT(z_i)_k1, DOT(z_i)_k2)

where DOT′(z_i)_k denotes the k-th image of the new difference image cluster, and the Max() function takes, for each position, the larger of the two corresponding pixel values of the two images, forming a new image.
In the above technical solution, step S1-6) specifically comprises:

Step S1-6-1) arranging the images of the new difference image cluster {DOT′(z_i)} into a pixel matrix M;

the original face image is x×y pixels in size and {DOT′(z_i)} contains (p/2 × q) pictures, so when arranged in a certain order all images of {DOT′(z_i)} form a pixel matrix M of size (xp/2 × yq);

Step S1-6-2) constructing the matrix S corresponding to the texture-difference feature map FM(z_i) of the training picture;

dividing the matrix M into non-overlapping 2×2 blocks and taking the maximum of the 4 elements of each block to form a new matrix S; this matrix S is the texture-difference feature map FM(z_i).
In the above technical solution, step S2) specifically comprises:

Step S2-1) dividing the dimension d of the original feature vector F(z_i) into n equal parts to determine the candidate values of the new dimension d′;
the dimension of the original feature vector F(z_i) is d; dividing the range of d into n equal parts gives the candidate set {⌊d/n⌋, ⌊2d/n⌋, …, ⌊(n−1)d/n⌋, d}, where ⌊ ⌋ denotes rounding to an integer; letting the feature obtained after dimension reduction be F_D(z_i), its dimension d′ takes each of these n values in turn;
Step S2-2) letting d′ take each value of the set in turn, and computing the set {MAE_m} of mean absolute errors of age estimation over all pictures of the training set;
for the L pictures of the training set, when d′ takes the k-th value of the set, the mean absolute error of age estimation over all training pictures is computed as

MAE_k = (1/L) Σ_{j=1}^{L} |l̂_j − l_j|

where j indexes the j-th picture of the training set, k indicates that d′ takes the k-th value of the set, l_j denotes the true age of the j-th training picture, and l̂_j denotes the estimated age of the j-th training picture; this finally yields the set of MAE values {MAE_m}, m ∈ 1, 2, …, n;
Step S2-3) taking the minimum value MAE_min of the set {MAE_m} and using the d′ corresponding to MAE_min as the final reduced dimension;

Step S2-4) based on the d′ obtained in step S2-3), reducing the dimension of the original feature vector F(z_i) of each training picture by principal component analysis to obtain the reduced feature vector F_D(z_i).

In the above technical solution, step S3) specifically comprises:

Step S3-1) constructing the optimization problem based on the support vector regression algorithm;
let the training samples be {x^(i), y^(i)} (i = 1, 2, …, m), where x^(i) denotes the reduced feature vector F_D(z_i) of a training picture and y^(i) denotes its true age, for a total of L samples; let the sample dimension be D. The goal of support vector regression is then to find a prediction function f(x) such that the difference between f(x^(i)) and y^(i) is no greater than ε, where ε is a very small number that controls the maximum error between the true label value and the predicted value. The optimization problem is built on the prediction function

f(x) = w·x + b
Step S3-2) transforming the optimization problem with the Lagrange multiplier method into its dual problem and solving the dual, obtaining the expression of the prediction function f(x);

Step S3-3) the age feature model is the prediction function f(x).

Based on the above age feature model of Asian faces, the present invention also provides an age estimation method for Asian faces, the method comprising:

Step T1) extracting the texture-difference feature map of the Asian face picture whose age is to be estimated, thereby obtaining the original feature vector F(z) of the face picture;

Step T2) determining the reduced dimension of the original feature vector F(z) and reducing the dimension of F(z) by principal component analysis, obtaining the reduced feature vector F_D(z);

Step T3) inputting the feature vector F_D(z) obtained in step T2) into the prediction function f(x) of the age feature model generated in step S3), and computing the predicted age of the Asian face whose age is to be estimated.

The advantages of the method of the present invention are:

1. The method of the present invention can be applied in many settings such as security surveillance, video indexing, and demographic statistics;

2. The age feature model proposed by the present invention extracts features from the important facial organ regions and the facial wrinkle regions, pays more attention to the information of local facial regions and the correlations between them, and ensures that the features contain sufficient information that is sensitive to age estimation;

3. The age feature model proposed by the present invention is an effective feature extraction model suitable for face age estimation; combining the age feature model with the SVR regression algorithm for Asian face age estimation yields a lower error and meets the needs of practical application scenarios.
Brief Description of the Drawings

FIG. 1 is a flow chart of the age feature model generation method of the present invention.

Detailed Description

The technical concepts involved in the present invention are briefly introduced below.
The local binary pattern (Local Binary Pattern, LBP) method is commonly used in face-related research. LBP image coding is defined over a 3×3 window: taking the center pixel of the window as a threshold, the gray values of the 8 neighboring pixels are compared with it; if a neighboring pixel value is greater than the center pixel value, that position is marked 1, otherwise 0. Comparing the 8 points of the 3×3 neighborhood thus produces an 8-bit binary number (usually converted to a decimal number, the LBP code, of which there are 256 kinds), i.e., the LBP value of the center pixel of the window, and this value reflects the texture information of the region.
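A minimal Python sketch of the 3×3 LBP coding just described is given below (8 neighbours thresholded against the center pixel and packed into one byte); the bit ordering of the neighbours is an arbitrary choice, since the description above does not fix one.

```python
import numpy as np

def lbp_encode(gray):
    """gray: 2-D uint8 array; returns an LBP-coded image of the same size (border pixels left 0)."""
    h, w = gray.shape
    out = np.zeros((h, w), dtype=np.uint8)
    # offsets of the 8 neighbours, listed clockwise starting from the top-left corner
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            center = gray[y, x]
            code = 0
            for bit, (dy, dx) in enumerate(offs):
                if gray[y + dy, x + dx] > center:   # neighbour greater than center -> bit set to 1
                    code |= 1 << bit
            out[y, x] = code
    return out
```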
Gabor wavelets are widely used in image recognition and image processing, and in pattern recognition the Gabor wavelet transform is also a very effective feature descriptor. In the spatial domain, a two-dimensional Gabor filter is the product of a sinusoidal plane wave and a Gaussian kernel function; it achieves optimal localization in the spatial domain and the frequency domain simultaneously, closely resembling the characteristics of human biological vision, so it can well describe the local structural information corresponding to spatial frequency (scale), spatial position, and orientation selectivity. In essence, the Gabor wavelet transform extracts local information of the Fourier transform of a signal by using a Gaussian function as the window function; since the Fourier transform of a Gaussian function is again a Gaussian function, the inverse Fourier transform is also local. By choosing the frequency parameters and the parameters of the Gaussian function, the Gabor transform can select feature information from many locations. After the input face image undergoes the Gabor wavelet transform, a cluster of Gabor wavelet response images is obtained.

The principle of the principal component analysis (Principal Component Analysis, PCA) dimensionality reduction method is to remove the correlations between the components of the original data, remove redundant information, and retain the most important components. PCA computes the eigenvectors corresponding to the largest eigenvalues of the covariance matrix of the original data, obtains the corresponding subspace, and projects onto that subspace so that the sample space can be described with a smaller number of features. When solving classification problems (usually binary classification), a support vector machine seeks, based on the structural risk minimization criterion, an optimal separating hyperplane that splits the samples into two classes with the largest possible margin between them. Classification problems usually represent sample categories with discrete integer values; unlike classification, in regression problems the label of each sample is a continuous real number. Support vector regression (Support Vector Regression, SVR) therefore aims to find a hyperplane that can accurately predict the distribution of the samples and approximate the sample data.
The present invention is further described below with reference to the accompanying drawings and specific embodiments.

As shown in FIG. 1, a method for generating an age feature model of Asian faces comprises:

Step S1) extracting the texture-difference feature map (Feature Map of Difference of Texture, FMDT) of each face picture in the training set, thereby obtaining the original feature vector of each face picture; this specifically comprises:

Step S1-1) preprocessing the face picture to obtain an original face image of 60×60 pixels; this specifically comprises:
Step S1-1-1) converting the face picture to grayscale:

traversing the face image and processing every pixel to obtain its RGB value, extracting the red, green, and blue components by computation; taking into account the different sensitivities of the human eye to red, green, and blue, the preferred grayscale conversion formula is:

Grey = (9798R + 19235G + 3735B) / 32768

where Grey is the converted gray value, and R, G, and B are respectively the red, green, and blue components of each pixel in the image;

Step S1-1-2) resizing the grayscale image to 60×60 by bilinear interpolation;

Step S1-1-3) enhancing the resized image:

modifying the image histogram using its statistics, adjusting the pixel values of each gray level so that pixels of all gray levels occur with equal probability, thereby achieving image enhancement;

Step S1-1-4) extracting the pixel matrix from the enhanced image to obtain the original face image;
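A minimal sketch of this preprocessing stage is shown below, assuming an OpenCV-style BGR input image and using histogram equalisation as the equal-probability histogram adjustment of step S1-1-3; it illustrates the steps above rather than prescribing an implementation.

```python
import cv2
import numpy as np

def preprocess_face(bgr):
    # Step S1-1-1: integer grayscale conversion with the weights given above
    b = bgr[:, :, 0].astype(np.int64)
    g = bgr[:, :, 1].astype(np.int64)
    r = bgr[:, :, 2].astype(np.int64)
    grey = ((9798 * r + 19235 * g + 3735 * b) // 32768).astype(np.uint8)
    # Step S1-1-2: bilinear resize to 60x60
    grey = cv2.resize(grey, (60, 60), interpolation=cv2.INTER_LINEAR)
    # Step S1-1-3: histogram equalisation as the enhancement step
    grey = cv2.equalizeHist(grey)
    # Step S1-1-4: the resulting pixel matrix is the original face image
    return grey
```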
Step S1-2) encoding the original face image obtained in step S1-1) with the LBP operator to obtain the LBP-coded image;

Step S1-3) applying the Gabor wavelet transform to the original face image and to the LBP-coded image respectively; this specifically comprises:

Step S1-3-1) applying the Gabor wavelet transform to the original face image obtained in step S1-1):

processing the original face image with Gabor filters of p different orientations and q different scales, obtaining a cluster of p×q Gabor response images {Gabor-I(z_i)}, where z_i denotes the i-th picture in the training set;

Step S1-3-2) applying the Gabor wavelet transform to the LBP-coded image obtained in step S1-2):

processing the LBP-coded image with Gabor filters of the same p orientations and q scales, obtaining a cluster of p×q Gabor response images {Gabor-L(z_i)}.
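The following sketch builds one p×q Gabor filter bank and applies it to an image, so that calling it once on the grey face and once on its LBP image yields the two response clusters {Gabor-I(z_i)} and {Gabor-L(z_i)}. The kernel sizes, sigma, and wavelength passed to cv2.getGaborKernel are illustrative choices; the patent fixes neither these parameters nor the values of p and q.

```python
import cv2
import numpy as np

def gabor_cluster(img, p=8, q=4):
    """Return p*q Gabor responses of img, ordered scale-major (q blocks of p orientations)."""
    img = img.astype(np.float32)
    responses = []
    for s in range(q):                                   # q scales
        ksize = 7 + 4 * s                                # odd kernel sizes: 7, 11, 15, ...
        for k in range(p):                               # p orientations
            theta = np.pi * k / p
            kern = cv2.getGaborKernel((ksize, ksize), ksize / 3.0, theta, ksize / 2.0, 0.5, 0)
            responses.append(cv2.filter2D(img, cv2.CV_32F, kern))
    return responses

# gabor_I = gabor_cluster(grey_face)              # responses of the original face image
# gabor_L = gabor_cluster(lbp_encode(grey_face))  # responses of the LBP-coded image
```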
Step S1-4) performing an image difference operation on the images of the two Gabor response clusters to obtain the difference image cluster:

{DOT(z_i)} = |{Gabor-I(z_i)} − {Gabor-L(z_i)}|    (1)

where {DOT(z_i)} denotes the difference image cluster of the i-th training picture z_i; any image Gabor-I(z_i)_k in {Gabor-I(z_i)} is produced from the original face image by a Gabor filter of scale m×m (m an integer) and orientation D, and {Gabor-L(z_i)} contains a corresponding image Gabor-L(z_i)_k produced from the LBP-coded image by the Gabor filter of the same scale m×m and orientation D.

The corresponding difference image DOT(z_i)_k is then obtained as:

DOT(z_i)_k = |Gabor-I(z_i)_k Δ Gabor-L(z_i)_k|    (2)

where "Δ" denotes pixel-by-pixel subtraction of the two images and "| |" denotes taking the absolute value.

Thus, by taking the difference of every pair of corresponding images in the two clusters, the difference image cluster {DOT(z_i)} is obtained; {DOT(z_i)} still contains p×q images.
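Formulas (1)-(2) amount to an absolute per-pixel difference between corresponding responses; a short sketch, assuming the two clusters come from the same filter bank in the same order, is:

```python
import numpy as np

def difference_cluster(gabor_I, gabor_L):
    # gabor_I[k] and gabor_L[k] stem from the same (scale, orientation) Gabor filter
    return [np.abs(gi - gl) for gi, gl in zip(gabor_I, gabor_L)]
```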
Step S1-5) performing a scale reduction operation on the images of the difference image cluster to obtain a new difference image cluster {DOT′(z_i)} containing half as many images; this specifically comprises:

Step S1-5-1) dividing the p×q images of the difference image cluster {DOT(z_i)} into bands, each band consisting of the 2q images produced by the filters of two adjacent scales, so that p/2 bands are obtained in total;
Step S1-5-2) in each band, dividing the pictures into q groups of two pictures each, the two pictures corresponding to the same orientation but different scales, denoted DOT(z_i)_k1 and DOT(z_i)_k2, and applying the following operation to them:

DOT′(z_i)_k = Max(DOT(z_i)_k1, DOT(z_i)_k2)

where DOT′(z_i)_k denotes an image of the new difference image cluster, and the Max() function takes, for each position, the larger of the two corresponding pixel values of the two images, forming a new image;
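A sketch of this scale reduction is shown below. It simply pairs the two difference images that share an orientation but sit on adjacent scales and keeps the per-pixel maximum, which halves the number of images as described; the scale-major ordering (matching the Gabor sketch above, with q assumed even) is an assumption of the sketch, not a requirement of the patent.

```python
import numpy as np

def reduce_scales(dot, p=8, q=4):
    """dot: p*q difference images ordered scale-major; returns p*q/2 images (q assumed even)."""
    reduced = []
    for s in range(0, q - 1, 2):                 # adjacent scale pairs (s, s+1)
        for k in range(p):                       # same orientation k at both scales
            a = dot[s * p + k]
            b = dot[(s + 1) * p + k]
            reduced.append(np.maximum(a, b))     # per-pixel Max(), as in the formula above
    return reduced
```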
Step S1-6) performing a max-pooling (MAX-pooling) operation on the images of the new difference image cluster {DOT′(z_i)} to obtain the texture-difference feature map FM(z_i); this specifically comprises:

Step S1-6-1) arranging the images of the new difference image cluster {DOT′(z_i)} into a pixel matrix M;

the original face image is x×y pixels in size and {DOT′(z_i)} contains (p/2 × q) pictures, so when arranged in a certain order all images of {DOT′(z_i)} form a pixel matrix M of size (xp/2 × yq);

Step S1-6-2) constructing the matrix S corresponding to the texture-difference feature map FM(z_i) of the training picture;

dividing the matrix M into non-overlapping 2×2 blocks and taking the maximum of the 4 elements of each block to form a new matrix S; this matrix S is the texture-difference feature map FM(z_i);

Step S1-7) converting the texture-difference feature map FM(z_i) obtained in step S1-6) from a two-dimensional map into a vector and normalizing that vector, obtaining the original feature vector F(z_i) of each training picture.
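Steps S1-6) and S1-7) can be sketched as follows: tile the reduced difference images into one large matrix M, max-pool it over non-overlapping 2×2 blocks to obtain the FMDT map S, then flatten and normalize. The tiling order and the use of L2 normalization are assumptions of the sketch; the patent only requires "a certain order" and a normalized vector.

```python
import numpy as np

def fmdt_vector(reduced, rows, cols):
    """reduced: list of rows*cols equally sized 2-D arrays; returns the normalized F(z_i)."""
    grid = [reduced[r * cols:(r + 1) * cols] for r in range(rows)]
    M = np.block(grid)                                    # pixel matrix M of size (x*rows) x (y*cols)
    h, w = M.shape
    M = M[:h - h % 2, :w - w % 2]                         # crop to even size if necessary
    S = M.reshape(M.shape[0] // 2, 2, M.shape[1] // 2, 2).max(axis=(1, 3))  # 2x2 max pooling -> FM(z_i)
    v = S.ravel()                                         # two-dimensional map -> vector
    return v / (np.linalg.norm(v) + 1e-12)                # normalized original feature vector F(z_i)
```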
Step S2) determining the reduced dimension of the original feature vector F(z_i) and reducing the dimension of the original feature vector F(z_i) of each training picture by principal component analysis (Principal Component Analysis, PCA); this specifically comprises:

Step S2-1) dividing the dimension d of the original feature vector F(z_i) into n equal parts to determine the candidate values of the new dimension d′;
the dimension of the original feature vector F(z_i) is d; dividing the range of d into n equal parts gives the candidate set {⌊d/n⌋, ⌊2d/n⌋, …, ⌊(n−1)d/n⌋, d}, where ⌊ ⌋ denotes rounding to an integer; letting the feature obtained after dimension reduction be F_D(z_i), its dimension d′ takes each of these n values in turn;
Step S2-2) letting d′ take each value of the set in turn, and computing the set {MAE_m} of mean absolute errors of age estimation over all pictures of the training set;
for the L pictures of the training set, when d′ takes the k-th value of the set, the mean absolute error of age estimation over all training pictures is computed as

MAE_k = (1/L) Σ_{j=1}^{L} |l̂_j − l_j|

where j indexes the j-th picture of the training set, k indicates that d′ takes the k-th value of the set, l_j denotes the true age of the j-th training picture, and l̂_j denotes the estimated age of the j-th training picture; this finally yields the set of MAE values {MAE_m}, m ∈ 1, 2, …, n.
Step S2-3) taking the minimum value MAE_min of the set {MAE_m} and using the d′ corresponding to MAE_min as the final reduced dimension;

Step S2-4) based on the d′ obtained in step S2-3), reducing the dimension of the original feature vector F(z_i) of each training picture by principal component analysis (Principal Component Analysis, PCA) to obtain the reduced feature vector F_D(z_i);
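Step S2) can be sketched as the following search over the n candidate dimensions. Measuring the MAE with a quick SVR fit on the training set itself is one reading of the procedure above (cross-validation would be a natural variant), and the candidate dimensions are additionally capped by what PCA can return for L samples; both are assumptions of this sketch.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVR

def select_and_reduce(F, ages, n=10):
    """F: (L, d) matrix of original feature vectors; returns the fitted PCA and the reduced features."""
    ages = np.asarray(ages, dtype=float)
    d = F.shape[1]
    # candidate dimensions d' = floor(k*d/n), k = 1..n, capped by min(L, d)
    candidates = sorted({min(max(1, (k * d) // n), min(F.shape)) for k in range(1, n + 1)})
    best_dim, best_mae = candidates[0], np.inf
    for d_prime in candidates:
        FD = PCA(n_components=d_prime).fit_transform(F)
        pred = SVR(kernel="rbf", C=128, gamma=0.1).fit(FD, ages).predict(FD)
        mae = np.mean(np.abs(pred - ages))                # MAE over the L training pictures
        if mae < best_mae:
            best_dim, best_mae = d_prime, mae
    pca = PCA(n_components=best_dim).fit(F)               # final reduction with the best d'
    return pca, pca.transform(F)
```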
Step S3) training on the reduced features F_D(z_i) with the support vector regression (Support Vector Regression, SVR) algorithm to generate the age feature model; this specifically comprises the following steps:

Step S3-1) constructing the optimization problem based on the support vector regression algorithm;

let the training samples be {x^(i), y^(i)} (i = 1, 2, …, m), where x^(i) denotes the reduced feature vector F_D(z_i) of a training picture and y^(i) denotes the true age corresponding to that training picture, for a total of L samples. Let the sample dimension be D. The goal of SVR is then to find a prediction function f(x) such that the difference between f(x^(i)) and y^(i) is no greater than ε, where ε is a very small number that controls the maximum error between the true label value and the predicted value. f(x) is then defined as:
f(x) = w·x + b    (5)

where "·" denotes the vector inner product; the w to be solved should minimize ||w||²; this hyperplane model is usually called ε-SVR. The optimization problem of ε-SVR can then be expressed as:

min_{w,b} (1/2)||w||²    (6)

s.t. |w·x^(i) + b − y^(i)| ≤ ε,  i ∈ (1, 2, …, m)
SVR introduces a penalty coefficient C and slack variables ξ_i, ξ_i* for adjustment:

min_{w,b,ξ,ξ*} (1/2)||w||² + C Σ_{i=1}^{m} (ξ_i + ξ_i*)    (7)

s.t. w·x^(i) + b − y^(i) ≤ ε + ξ_i

y^(i) − w·x^(i) − b ≤ ε + ξ_i*

ξ_i, ξ_i* ≥ 0,  i = 1, 2, …, m
where the ε-insensitive loss

l_ε(f(x^(i)), y^(i)) = max(0, |f(x^(i)) − y^(i)| − ε)    (8)

represents the loss function.
In this embodiment, the penalty coefficient among the SVR parameters is set to C = 128, the learning parameter to g = 0.1, and an RBF (radial Gaussian basis) kernel function is used for training;
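With scikit-learn's ε-SVR, this embodiment's training step reduces to the sketch below; the ε value is left at an assumed small default, since the text only requires it to be very small.

```python
from sklearn.svm import SVR

def train_svr(FD, ages, epsilon=0.1):
    # RBF kernel with the embodiment's parameters C = 128 and g (gamma) = 0.1
    model = SVR(kernel="rbf", C=128, gamma=0.1, epsilon=epsilon)
    return model.fit(FD, ages)
```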
Step S3-2) transforming the optimization problem with the Lagrange multiplier method into its dual problem and solving the dual, obtaining the expression of the prediction function f(x);

The following Lagrangian function is introduced:

L(w, b, ξ, ξ*, α, α*, η, η*) = (1/2)||w||² + C Σ_{i=1}^{m} (ξ_i + ξ_i*) − Σ_{i=1}^{m} α_i (ε + ξ_i − y^(i) + w·x^(i) + b) − Σ_{i=1}^{m} α_i* (ε + ξ_i* + y^(i) − w·x^(i) − b) − Σ_{i=1}^{m} (η_i ξ_i + η_i* ξ_i*)    (9)

where α_i^(*) denotes α_i and α_i*, η_i^(*) denotes η_i and η_i*, and α_i, η_i, α_i*, η_i* are all Lagrange multipliers. Solving formula (9) belongs to the category of convex quadratic programming problems; by first minimizing the function L(w, α, η, b) with respect to w, b, and ξ, the problem becomes finding the "saddle point" at which the partial derivatives of L(w, α, η, b) with respect to w, b, and ξ are all zero:
Setting these partial derivatives to zero gives

w = Σ_{i=1}^{m} (α_i − α_i*) x^(i),  Σ_{i=1}^{m} (α_i − α_i*) = 0,  C − α_i^(*) − η_i^(*) = 0

and substituting these back into formula (9) eliminates w, b, ξ, and η.

In this way, finding the optimal solution of formula (9) is transformed into solving the following dual problem:

max_{α,α*}  −(1/2) Σ_{i=1}^{m} Σ_{j=1}^{m} (α_i − α_i*)(α_j − α_j*)(x^(i)·x^(j)) − ε Σ_{i=1}^{m} (α_i + α_i*) + Σ_{i=1}^{m} y^(i)(α_i − α_i*)

s.t. Σ_{i=1}^{m} (α_i − α_i*) = 0,  0 ≤ α_i, α_i* ≤ C

In this way, the problem is transformed into an optimization problem involving only the α parameters; once the values of α are obtained, the corresponding w can be computed, and the final f(x) is:

f(x) = Σ_{i=1}^{m} (α_i − α_i*)(x^(i)·x) + b
According to the KKT conditions, the solution of the dual problem of SVR is equivalent to the solution of the original problem only when the following conditions are satisfied:

α_i (ε + ξ_i − y^(i) + w·x^(i) + b) = 0

α_i* (ε + ξ_i* + y^(i) − w·x^(i) − b) = 0

(C − α_i) ξ_i = 0,  (C − α_i*) ξ_i* = 0    (14)
It can be seen that ξ_i differs from 0 only when α_i = C, in which case the sample point falls outside the ε-tube (i.e., it is an outlier); therefore:

ε − y^(i) + w·x^(i) + b ≥ 0 and ξ_i = 0, if α_i < C

ε − y^(i) + w·x^(i) + b ≤ 0, if α_i > 0    (15)
The value of b then satisfies:

max{ max_{α_i* < C} (−ε + y^(i) − w·x^(i)),  max_{α_i* > 0} (ε + y^(i) − w·x^(i)) } ≤ b ≤ min{ min_{α_i < C} (ε + y^(i) − w·x^(i)),  min_{α_i > 0} (−ε + y^(i) − w·x^(i)) }    (16)
It can also be seen from the KKT conditions that only for the sample points with |f(x^(i)) − y^(i)| = ε + ξ_i^(*) is the corresponding α_i^(*) different from 0; these sample points are the support vectors.

When a kernel function K(x^(i), x) is used, the prediction function f(x) becomes:

f(x) = Σ_{i=1}^{m} (α_i − α_i*) K(x^(i), x) + b

After the SVR model has been trained, only the points corresponding to the support vectors determine the predicted value of the regression.
Step S3-3) the age feature model is the prediction function f(x).

Based on the age feature model of Asian faces generated by the above method, the present invention also provides an age estimation method for Asian faces, the method comprising:

Step T1) extracting the texture-difference feature map of the Asian face picture whose age is to be estimated, thereby obtaining the original feature vector F(z) of the face picture;

Step T2) determining the reduced dimension of the original feature vector F(z) and reducing the dimension of F(z) by principal component analysis, obtaining the reduced feature vector F_D(z);

Step T3) inputting the feature vector F_D(z) obtained in step T2) into the prediction function f(x) of the age feature model generated in step S3), and computing the predicted age of the Asian face whose age is to be estimated.
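The estimation stage T1)-T3) reuses the PCA projection and the SVR model obtained during training, as in the following sketch; extract_fmdt_vector is the same illustrative helper name used in the training sketch above.

```python
def estimate_age(image, pca, model):
    F = extract_fmdt_vector(image).reshape(1, -1)   # T1: FMDT feature vector of the query face
    FD = pca.transform(F)                           # T2: project with the PCA basis learned in training
    return float(model.predict(FD)[0])              # T3: predicted age from f(x)
```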
Finally, it should be noted that the above embodiments are intended only to illustrate, and not to limit, the technical solution of the present invention. Although the present invention has been described in detail with reference to the embodiments, those of ordinary skill in the art should understand that modifications or equivalent replacements of the technical solution of the present invention that do not depart from the spirit and scope of the technical solution of the present invention shall all be covered by the scope of the claims of the present invention.