CN112001865A - Face recognition method, device and equipment - Google Patents

Face recognition method, device and equipment

Info

Publication number
CN112001865A
Authority
CN
China
Prior art keywords
sparse representation
image
preset
face
face recognition
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010910226.0A
Other languages
Chinese (zh)
Inventor
房小兆
韩娜
刘志虎
周郭许
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong University of Technology
Original Assignee
Guangdong University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong University of Technology
Priority to CN202010910226.0A
Publication of CN112001865A
Legal status: Pending


Abstract

Translated from Chinese

The present application discloses a face recognition method, apparatus and device. The method includes: acquiring a test face image, and repairing the test face image through a preset generative adversarial network to obtain a repaired image; performing sparse face representation on the repaired image through a dictionary matrix constructed from a preset training sample set, to obtain the reconstructed image corresponding to each class of training samples; and calculating the residual value between each class's reconstructed image and the repaired image, and selecting the class of the training samples corresponding to the smallest residual value as the face recognition result of the test face image. This solves the technical problem that existing face recognition methods are prone to recognition errors, and thus low recognition accuracy, when the face image suffers from quality issues such as occlusion, poor illumination or blur.

Description

Translated from Chinese
A face recognition method, apparatus and device

Technical Field

The present application relates to the technical field of face recognition, and in particular to a face recognition method, apparatus and device.

Background

Identity recognition and verification methods are widely used in public security, e-commerce and other fields. Existing methods mainly rely on biometric identification technology, which identifies or verifies a person from human physiological characteristics such as fingerprints, palm prints and irises using intelligent methods; face recognition is the most commonly used of these. Face recognition uses the face as the biometric feature and, unlike other biometric methods, it is contactless, convenient and fast, and achieves high recognition performance.

The face recognition methods in the prior art are prone to recognition errors when the face image suffers from quality issues such as occlusion, poor illumination or blur, resulting in low recognition accuracy.

Summary of the Invention

The present application provides a face recognition method, apparatus and device, which are used to solve the technical problem that existing face recognition methods are prone to recognition errors, and thus low accuracy, when the face image suffers from quality issues such as occlusion, poor illumination or blur.

In view of this, a first aspect of the present application provides a face recognition method, including:

acquiring a test face image, and repairing the test face image through a preset generative adversarial network to obtain a repaired image;

performing sparse face representation on the repaired image through a dictionary matrix constructed from a preset training sample set, to obtain the reconstructed image corresponding to each class of training samples;

calculating the residual value between the reconstructed image corresponding to each class of training samples and the repaired image, and selecting the class of the training samples corresponding to the smallest residual value as the face recognition result of the test face image.

Optionally, performing sparse face representation on the repaired image through the dictionary matrix constructed from the preset training sample set to obtain the reconstructed image corresponding to each class of training samples includes:

converting each training sample in the preset training sample set, which contains k classes, into an m-dimensional column vector, and combining the m-dimensional column vectors of all training samples of all classes to obtain the dictionary matrix A corresponding to the preset training sample set, where the i-th element of the dictionary matrix A is obtained by combining the m-dimensional column vectors of all training samples of the i-th class;

constructing a sparse representation model from the dictionary matrix A and the repaired image;

solving the sparse representation model to obtain the sparse representation coefficients corresponding to each class of training samples;

performing sparse face representation on the repaired image through the sparse representation coefficients of each class of training samples and the dictionary matrix, to obtain the reconstructed image corresponding to each class of training samples.

Optionally, the sparse representation model is:

min_x ||x||_0  subject to  ||y − Ax||_2 ≤ ε;

where x is the sparse representation coefficient, y is the repaired image, and ε is the error tolerance.

Optionally, solving the sparse representation model to obtain the sparse representation coefficients corresponding to each class of training samples includes:

S1. initializing the target parameters related to the sparse representation model, where the initialized target parameters include the initial iteration number t = 1, the initial residual r_0 = y, the initial sparse representation coefficient x = 0 and the index set Λ_0 = ∅;

S2. substituting the current residual and sparse representation coefficient into the objective function λ_t = argmax_{j=1,…,n} |⟨r_{t−1}, a_j⟩|, where a_j is the j-th column of A, to obtain the index λ_t;

S3. updating the index set Λ_t = Λ_{t−1} ∪ {λ_t} based on the index λ_t, and computing over the updated index set the sparse representation coefficient x_t that minimizes ||y − A_{Λ_t} x||_2;

S4. updating the residual r_t = y − A x_t; when the residual r_t satisfies a preset convergence condition, outputting the sparse representation coefficient x_t; when the residual r_t does not satisfy the preset convergence condition, incrementing the iteration number by one and returning to step S2.

Optionally, the preset generative adversarial network is obtained by training on the preset training sample set, and the training optimization function of the preset generative adversarial network is:

min_G max_D V(D, G) = E_{x∼P_data}[log D(x)] + E_{z∼P_z}[log(1 − D(G(z)))];

where G is the generator of the preset generative adversarial network, D is the discriminator, D(x) is the discriminator's output for a real face image x, G(z) is the face image produced by the generator, P_data is the data distribution of real face images, and P_z is the noise distribution.

A second aspect of the present application provides a face recognition apparatus, including:

an image repairing unit, configured to acquire a test face image and repair the test face image through a preset generative adversarial network to obtain a repaired image;

a sparse representation unit, configured to perform sparse face representation on the repaired image through a dictionary matrix constructed from a preset training sample set, to obtain the reconstructed image corresponding to each class of training samples;

a calculating unit, configured to calculate the residual value between the reconstructed image corresponding to each class of training samples and the repaired image, and to select the class of the training samples corresponding to the smallest residual value as the face recognition result of the test face image.

Optionally, the sparse representation unit includes:

a combining subunit, configured to convert each training sample in the preset training sample set, which contains k classes, into an m-dimensional column vector, and to combine the m-dimensional column vectors of all training samples of all classes to obtain the dictionary matrix A corresponding to the preset training sample set, where the i-th element of the dictionary matrix A is obtained by combining the m-dimensional column vectors of all training samples of the i-th class;

a constructing subunit, configured to construct a sparse representation model from the dictionary matrix A and the repaired image;

a solving subunit, configured to solve the sparse representation model to obtain the sparse representation coefficients corresponding to each class of training samples;

a sparse representation subunit, configured to perform sparse face representation on the repaired image through the sparse representation coefficients of each class of training samples and the dictionary matrix, to obtain the reconstructed image corresponding to each class of training samples.

Optionally, the sparse representation model is:

min_x ||x||_0  subject to  ||y − Ax||_2 ≤ ε;

where x is the sparse representation coefficient, y is the repaired image, and ε is the error tolerance.

Optionally, the solving subunit is specifically configured to:

S1. initialize the target parameters related to the sparse representation model, where the initialized target parameters include the initial iteration number t = 1, the initial residual r_0 = y, the initial sparse representation coefficient x = 0 and the index set Λ_0 = ∅;

S2. substitute the current residual and sparse representation coefficient into the objective function λ_t = argmax_{j=1,…,n} |⟨r_{t−1}, a_j⟩|, where a_j is the j-th column of A, to obtain the index λ_t;

S3. update the index set Λ_t = Λ_{t−1} ∪ {λ_t} based on the index λ_t, and compute over the updated index set the sparse representation coefficient x_t that minimizes ||y − A_{Λ_t} x||_2;

S4. update the residual r_t = y − A x_t; when the residual r_t satisfies a preset convergence condition, output the sparse representation coefficient x_t; when the residual r_t does not satisfy the preset convergence condition, increment the iteration number by one and return to step S2.

A third aspect of the present application provides a face recognition device, the device including a processor and a memory:

the memory is configured to store program code and transmit the program code to the processor;

the processor is configured to execute, according to instructions in the program code, the face recognition method of any implementation of the first aspect.

As can be seen from the above technical solutions, the present application has the following advantages:

The present application provides a face recognition method, including: acquiring a test face image, and repairing the test face image through a preset generative adversarial network to obtain a repaired image; performing sparse face representation on the repaired image through a dictionary matrix constructed from a preset training sample set, to obtain the reconstructed image corresponding to each class of training samples; calculating the residual value between each class's reconstructed image and the repaired image, and selecting the class of the training samples corresponding to the smallest residual value as the face recognition result of the test face image.

In the face recognition method of the present application, the acquired test face image is processed by a preset generative adversarial network to repair problems such as occlusion, poor illumination and blur, which improves the image quality and helps to improve face recognition accuracy; a dictionary matrix is constructed from the training samples for sparse face representation to obtain reconstructed images, and the residual values between each class's reconstructed image and the repaired image are then calculated to obtain the face recognition result. This solves the technical problem that existing face recognition methods are prone to recognition errors, and thus low accuracy, when the face image suffers from quality issues such as occlusion, poor illumination or blur.

Brief Description of the Drawings

In order to illustrate the embodiments of the present application or the technical solutions of the prior art more clearly, the drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application, and those of ordinary skill in the art can obtain other drawings from them without creative effort.

FIG. 1 is a schematic flowchart of a face recognition method provided by an embodiment of the present application;

FIG. 2 is a schematic structural diagram of a face recognition apparatus provided by an embodiment of the present application.

Detailed Description

The present application provides a face recognition method, apparatus and device, which are used to solve the technical problem that existing face recognition methods are prone to recognition errors, and thus low accuracy, when the face image suffers from quality issues such as occlusion, poor illumination or blur.

In order to enable those skilled in the art to better understand the solutions of the present application, the technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. Based on the embodiments of the present application, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the protection scope of the present application.

For ease of understanding, referring to FIG. 1, an embodiment of the face recognition method provided by the present application includes:

Step 101: acquiring a test face image, and repairing the test face image through a preset generative adversarial network to obtain a repaired image.

To address the technical problem that image quality issues such as occlusion, poor illumination and blur in face images make recognition errors likely and recognition accuracy low, the embodiment of the present application repairs the face image through a preset generative adversarial network, improving the face image quality.

The preset generative adversarial network consists of a generator and a discriminator. The generator produces an image similar to the original image. The generator in the present application adopts an autoencoder structure with two main parts, an encoder and a decoder: the encoder maps the original image to a hidden-layer representation, and from this representation the decoder generates a face image close to the real one. The encoder consists of four convolutional layers, each followed by a batch normalization layer and a ReLU activation; the decoder consists of four transposed convolutional layers, the first three each followed by a batch normalization layer and a ReLU activation, and the last followed by a Tanh activation. The encoder and decoder are connected by a residual module composed of a convolutional layer, a batch normalization layer and a ReLU activation connected in sequence.

The discriminator judges whether an input image is a real image or a generated one. It consists of five convolutional layers; each of the first four is followed by a batch normalization layer and a LeakyReLU layer, and the last is followed by a Sigmoid activation that outputs the probability that the input image is real.

Further, the preset generative adversarial network is obtained by training on the preset training sample set, and its training optimization function is:

min_G max_D V(D, G) = E_{x∼P_data}[log D(x)] + E_{z∼P_z}[log(1 − D(G(z)))];

where G is the generator of the preset generative adversarial network, D is the discriminator, D(x) is the discriminator's output for a real face image x, G(z) is the face image produced by the generator, P_data is the data distribution of real face images, and P_z is the noise distribution, representing image problems such as occlusion, poor illumination and blur.
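As a toy numeric illustration of this objective, the two expectations can be estimated by averaging over sampled batches (a minimal numpy sketch; the `gan_value` helper and the stand-in discriminator are illustrative assumptions, not part of the patent):

```python
import numpy as np

def gan_value(discriminator, real_samples, fake_samples):
    """Monte-Carlo estimate of the training objective above,
    V(D, G) = E_{x~Pdata}[log D(x)] + E_{z~Pz}[log(1 - D(G(z)))],
    where `fake_samples` plays the role of G(z).  `discriminator`
    maps a batch of samples to probabilities in (0, 1)."""
    real_term = np.mean(np.log(discriminator(real_samples)))
    fake_term = np.mean(np.log(1.0 - discriminator(fake_samples)))
    return real_term + fake_term
```

At the equilibrium the patent describes (generated images indistinguishable from real ones), D outputs 0.5 everywhere and the value reaches 2·log(0.5).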

The generative adversarial network is trained on the preset training sample set. Through competition, the training process strengthens the generator and the discriminator simultaneously, until the images produced by the generator are indistinguishable from real ones, i.e. the discriminator cannot tell generated images from real images.

During training, face images with problems such as occlusion and poor illumination need to be simulated. The simulation multiplies a complete face image by a damaged binary mask to obtain the pre-repair face image; the mask has the same size as the complete face image and contains only 0s and 1s, where 0 marks a missing pixel and 1 marks a known region. During training, the pre-repair face image is fed to the generator, which processes it through the encoder and decoder to obtain a repaired face image; the discriminator then compares the repaired face image with the original complete face image, and the resulting judgment in turn guides the generator to produce more realistic repaired images, so that through adversarial training the generator finally produces more complete and realistic repaired images.
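The masking step just described can be sketched as follows (a minimal numpy illustration; the function name and image sizes are assumptions for illustration only):

```python
import numpy as np

def simulate_occlusion(face, mask):
    """Simulate a damaged face image as described above: multiply the
    complete face image element-wise by a binary mask of the same size,
    where 0 marks a missing pixel and 1 marks a known region."""
    face = np.asarray(face, dtype=float)
    mask = np.asarray(mask)
    if face.shape != mask.shape:
        raise ValueError("mask must have the same size as the face image")
    if not set(np.unique(mask)).issubset({0, 1}):
        raise ValueError("mask must be binary (0/1)")
    return face * mask
```

The masked result is what the patent calls the pre-repair face image fed to the generator.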

Step 102: performing sparse face representation on the repaired image through a dictionary matrix constructed from a preset training sample set, to obtain the reconstructed image corresponding to each class of training samples.

Further, the specific process of performing sparse face representation on the repaired image through the dictionary matrix constructed from the preset training sample set is as follows:

1. Convert each training sample in the preset training sample set, which contains k classes, into an m-dimensional column vector, and combine the m-dimensional column vectors of all training samples of all classes to obtain the dictionary matrix A corresponding to the preset training sample set, where the i-th element of the dictionary matrix A is obtained by combining the m-dimensional column vectors of all training samples of the i-th class.

The preset training sample set contains k classes. The i-th class has n_i training face images, each of size w × h. Each training face image is converted into an m-dimensional column vector v ∈ R^m with m = w × h, so the i-th class of training samples can be written as A_i = [v_{i,1}, v_{i,2}, …, v_{i,n_i}] ∈ R^{m×n_i}, and the dictionary matrix A corresponding to the preset training sample set can be written as A = [A_1, A_2, …, A_i, …, A_k].

2. Construct the sparse representation model from the dictionary matrix A and the repaired image.

The sparse representation model is:

min_x ||x||_0  subject to  ||y − Ax||_2 ≤ ε;

where x is the sparse representation coefficient, y is the repaired image, and ε is the error tolerance.

3. Solve the sparse representation model to obtain the sparse representation coefficients corresponding to each class of training samples.

The solving process is as follows:

S1. Initialize the target parameters related to the sparse representation model, where the initialized target parameters include the initial iteration number t = 1, the initial residual r_0 = y, the initial sparse representation coefficient x = 0 and the index set Λ_0 = ∅.

S2. Substitute the current residual and sparse representation coefficient into the objective function λ_t = argmax_{j=1,…,n} |⟨r_{t−1}, a_j⟩|, where a_j is the j-th column of A, to obtain the index λ_t.

S3. Update the index set Λ_t = Λ_{t−1} ∪ {λ_t} based on the index λ_t, and compute the corresponding sparse representation coefficient x_t over the updated index set.

Based on the updated index set, compute the sparse representation coefficient x that minimizes ||y − A_{Λ_t} x||_2, and denote it by x_t.

S4. Update the residual r_t = y − A x_t. When the residual r_t satisfies the preset convergence condition, output the sparse representation coefficient x_t; when it does not, increment the iteration number by one and return to step S2.

The preset convergence condition is that convergence is declared when ||r_t|| < τ, where τ is a small constant that can be set flexibly as required. When the residual r_t satisfies the preset convergence condition, the sparse representation coefficient x_t, i.e. the coefficient x computed at the current iteration t, is output; when it does not, the iteration number is incremented, i.e. t = t + 1, and the process returns to step S2.
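Steps S1–S4 amount to an orthogonal-matching-pursuit-style greedy solver and can be sketched as follows (a minimal numpy illustration; the helper name, default tolerance and iteration cap are assumptions not stated in the patent):

```python
import numpy as np

def solve_sparse_coefficients(A, y, tau=1e-8, max_iter=None):
    """Greedy solver following steps S1-S4 above.

    S1: t = 1, r0 = y, x = 0, empty index set Lambda.
    S2: pick the column index lambda_t most correlated with the residual.
    S3: Lambda_t = Lambda_{t-1} U {lambda_t}; solve min ||y - A_Lambda x||_2.
    S4: r_t = y - A x_t; stop once ||r_t|| < tau, else iterate again."""
    m, n = A.shape
    if max_iter is None:
        max_iter = min(m, n)
    support = []                        # index set Lambda
    x = np.zeros(n)
    r = y.astype(float).copy()          # r0 = y
    for _ in range(max_iter):
        lam = int(np.argmax(np.abs(A.T @ r)))        # S2: index lambda_t
        if lam not in support:
            support.append(lam)                      # S3: update Lambda_t
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x = np.zeros(n)
        x[support] = coef               # x_t minimizes ||y - A_Lambda x||_2
        r = y - A @ x                   # S4: update residual r_t
        if np.linalg.norm(r) < tau:     # convergence condition ||r_t|| < tau
            break
    return x
```

As noted above, the tolerance τ trades sparsity against reconstruction error and can be set as required.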

4. Perform sparse face representation on the repaired image through the sparse representation coefficients of each class of training samples and the dictionary matrix, to obtain the reconstructed image corresponding to each class of training samples.

Sparse face representation is performed on the repaired image with the solved sparse representation coefficient x and the dictionary matrix A. The reconstructed image corresponding to the i-th class of training samples can be written as y_i = A x_i, where x_i keeps only the sparse representation coefficients corresponding to the i-th class.

Step 103: calculating the residual value between the reconstructed image corresponding to each class of training samples and the repaired image, and selecting the class of the training samples corresponding to the smallest residual value as the face recognition result of the test face image.

The residual value between each class's reconstructed image and the repaired image is calculated as:

r_i(y) = ||y − y_i||_2;

where y is the repaired image and y_i is the reconstructed image corresponding to the i-th class of training samples. The class of the training samples corresponding to the smallest residual value r_i is selected as the recognition result of the test face image.
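Step 103 can be sketched as (a minimal numpy illustration; masking the coefficients by per-column class `labels` is an implementation assumption consistent with the description above):

```python
import numpy as np

def classify_by_residual(A, labels, x, y):
    """For each class i, keep only that class's coefficients (x_i),
    reconstruct y_i = A x_i, compute the residual r_i(y) = ||y - y_i||_2,
    and return the class with the smallest residual."""
    best_class, best_residual = None, np.inf
    for c in np.unique(labels):
        xi = np.where(labels == c, x, 0.0)   # coefficients of class c only
        yi = A @ xi                          # reconstructed image y_i
        ri = np.linalg.norm(y - yi)          # residual r_i(y)
        if ri < best_residual:
            best_class, best_residual = int(c), ri
    return best_class
```

The repaired image is thus assigned to whichever class's training samples best reconstruct it.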

In the face recognition method of the embodiment of the present application, the acquired test face image is processed by a preset generative adversarial network to repair problems such as occlusion, poor illumination and blur, which improves the image quality and helps to improve face recognition accuracy; a dictionary matrix is constructed from the training samples for sparse face representation to obtain reconstructed images, and the residual values between each class's reconstructed image and the repaired image are then calculated to obtain the face recognition result. This solves the technical problem that existing face recognition methods are prone to recognition errors, and thus low accuracy, when the face image suffers from quality issues such as occlusion, poor illumination or blur.

The above is an embodiment of the face recognition method provided by the present application; the following is an embodiment of the face recognition apparatus provided by the present application.

For ease of understanding, referring to FIG. 2, an embodiment of the face recognition apparatus provided by the present application includes:

图像修复单元201,用于获取测试人脸图像,并通过预置对抗生成网络对测试人脸图像进行修复,得到修复图像;An image repairing unit 201, configured to obtain a test face image, and repair the test face image through a preset confrontation generation network to obtain a repaired image;

A sparse representation unit 202, configured to perform sparse face representation on the repaired image through a dictionary matrix constructed from a preset training sample set, obtaining a reconstructed image for each class of training samples;

A computing unit 203, configured to compute the residual value between the reconstructed image of each class of training samples and the repaired image, and to select the class of training samples with the smallest residual value as the face recognition result of the test face image.

As a further improvement, the sparse representation unit 202 includes:

A combining subunit 2021, configured to convert each training sample in a preset training sample set containing k classes into an m-dimensional column vector, and to combine the m-dimensional column vectors of all training samples of all classes into the dictionary matrix A corresponding to the preset training sample set, where the i-th element of the dictionary matrix A is obtained by combining the m-dimensional column vectors of all training samples of the i-th class in the preset training sample set;

A construction subunit 2022, configured to construct a sparse representation model from the dictionary matrix A and the repaired image;

A solving subunit 2023, configured to solve the sparse representation model to obtain the sparse representation coefficients corresponding to each class of training samples;

A sparse representation subunit 2024, configured to perform sparse face representation on the repaired image through the sparse representation coefficients of each class of training samples and the dictionary matrix, obtaining the reconstructed image corresponding to each class of training samples.
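The dictionary construction performed by subunit 2021 amounts to flattening each training image into an m-dimensional vector and stacking the vectors column-wise, class by class. A minimal sketch (the L2 column normalization is a common convention in sparse-representation classifiers, not something the text specifies):

```python
import numpy as np

def build_dictionary(samples_by_class):
    """samples_by_class: k entries, each an (n_i, m) array holding the
    m-dimensional flattened training images of one class.
    Returns A of shape (m, n_1 + ... + n_k): columns of the same class are
    adjacent, so the i-th block of columns is the i-th class sub-dictionary."""
    blocks = [np.asarray(c, dtype=float).T for c in samples_by_class]  # (m, n_i) each
    A = np.hstack(blocks)
    A = A / np.linalg.norm(A, axis=0, keepdims=True)  # unit-norm columns (assumed)
    return A

# Two classes, two 3-pixel "images" each -> A has shape (3, 4).
A = build_dictionary([np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]),
                      np.array([[0.0, 0.0, 1.0], [1.0, 1.0, 0.0]])])
```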

As a further improvement, the sparse representation model is:

min ||x||1  subject to  ||Ax - y||2 ≤ ε

where x is the sparse representation coefficient, y is the repaired image, and ε is the error.
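Given a solved coefficient vector x, the reconstruction performed by subunit 2024 keeps only the coefficients of one class at a time (the δi(x) masking of classic sparse-representation classification, assumed here) and multiplies by the dictionary:

```python
import numpy as np

def class_reconstructions(A, x, class_of_column):
    """For each class c, zero every entry of x that does not belong to a
    column of class c, then reconstruct y_c = A @ x_c."""
    class_of_column = np.asarray(class_of_column)
    recons = {}
    for c in np.unique(class_of_column):
        x_c = np.where(class_of_column == c, x, 0.0)
        recons[int(c)] = A @ x_c
    return recons

# Identity dictionary: column 0 belongs to class 0, column 1 to class 1.
recs = class_reconstructions(np.eye(2), np.array([2.0, 3.0]), [0, 1])
```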

As a further improvement, the solving subunit 2023 is specifically configured to:

S1. Initialize the target parameters related to the sparse representation model: the initial iteration number t = 1, the initial residual r0 = y, the initial sparse representation coefficient x = 0, and the index set Λ0 = φ;

S2. Substitute the current residual and sparse representation coefficient into the target function

λt = argmaxj |⟨rt-1, aj⟩|

(where aj is the j-th column of the dictionary matrix A) to calculate the footnote (column index) λt;

S3. Update the index set Λt = Λt-1 ∪ λt based on the footnote λt, and, based on the updated index set, compute the corresponding sparse representation coefficient

xt = argminx ||y - AΛt x||2;

S4. Update the residual rt = y - Axt. When the residual rt satisfies the preset convergence condition, output the sparse representation coefficient xt; when it does not, add one to the iteration count and return to step S2.
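Steps S1-S4 describe a greedy pursuit of the orthogonal-matching-pursuit family (one of the cited references applies ROMP). A minimal sketch, under the assumption that the preset convergence condition is a threshold on ||rt||2:

```python
import numpy as np

def solve_sparse(A, y, eps=1e-8, max_iter=None):
    """S1: t = 1, r0 = y, x = 0, empty index set.
    S2: pick the column most correlated with the residual (the footnote).
    S3: least-squares fit of y on the columns selected so far.
    S4: update r_t = y - A x_t and stop once ||r_t||_2 <= eps."""
    m, n = A.shape
    max_iter = n if max_iter is None else max_iter
    r, support, x = y.astype(float).copy(), [], np.zeros(n)
    for _ in range(max_iter):
        lam = int(np.argmax(np.abs(A.T @ r)))   # S2: footnote lambda_t
        if lam not in support:
            support.append(lam)                 # S3: Lambda_t = Lambda_{t-1} U {lam}
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x = np.zeros(n)
        x[support] = coef
        r = y - A @ x                           # S4: residual update
        if np.linalg.norm(r) <= eps:
            break
    return x

# Identity dictionary: the sparse code of y is y itself, recovered greedily.
x_hat = solve_sparse(np.eye(3), np.array([2.0, 0.0, 5.0]))
```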

An embodiment of the present application further provides a face recognition device; the device includes a processor and a memory:

The memory is configured to store program code and transmit the program code to the processor;

The processor is configured to execute the face recognition method of the foregoing method embodiments according to the instructions in the program code.

Those skilled in the art can clearly understand that, for convenience and brevity of description, the specific working processes of the devices and units described above may refer to the corresponding processes in the foregoing method embodiments and are not repeated here.

In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative: the division into units is only a division by logical function, and other divisions are possible in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. Furthermore, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, devices, or units, and may be electrical, mechanical, or in other forms.

The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.

In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, may exist physically as separate units, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.

If the integrated unit is implemented as a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, or in essence the part contributing to the prior art, or the whole or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or some of the steps of the methods described in the embodiments of the present application. The aforementioned storage media include media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

The above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions recorded in the foregoing embodiments or make equivalent replacements of some of the technical features therein, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (10)

1. A face recognition method, comprising:
acquiring a test face image, and repairing the test face image through a preset countermeasure generation network to obtain a repaired image;
performing face sparse representation on the restored image through a dictionary matrix constructed by a preset training sample set to obtain reconstructed images corresponding to various training samples;
and calculating residual values of the reconstructed images and the repaired images corresponding to the training samples, and selecting the class of the training sample corresponding to the minimum residual value as a face recognition result of the test face image.
2. The face recognition method according to claim 1, wherein the obtaining of the reconstructed images corresponding to various types of training samples by performing face sparse representation on the restored image through a dictionary matrix constructed from a preset training sample set comprises:
converting each training sample in a preset training sample set containing k classes into m-dimensional column vectors, and combining the m-dimensional column vectors corresponding to all the training samples in all the classes to obtain a dictionary matrix A corresponding to the preset training sample set, wherein the ith element in the dictionary matrix A is obtained by combining the m-dimensional column vectors corresponding to all the training samples in the ith class in the preset training sample set;
constructing a sparse representation model through the dictionary matrix A and the repaired image;
solving the sparse representation model to obtain sparse representation coefficients corresponding to various training samples;
and carrying out face sparse representation on the restored image through sparse representation coefficients corresponding to the various training samples and the dictionary matrix to obtain reconstructed images corresponding to the various training samples.
3. The face recognition method of claim 2, wherein the sparse representation model is:
min ||x||1  subject to  ||Ax - y||2 ≤ ε
wherein x is a sparse representation coefficient, y is the repaired image, and ε is the error.
4. The face recognition method according to claim 3, wherein the solving the sparse representation model to obtain sparse representation coefficients corresponding to the training samples of each type comprises:
S1, initializing target parameters related to the sparse representation model, wherein the initialized target parameters comprise an initial iteration number t = 1, an initial residual r0 = y, an initial sparse representation coefficient x = 0, and an index set Λ0 = φ;
S2, substituting the initial residual and the initial sparse representation coefficient of the initialized target parameters into the target function
λt = argmaxj |⟨rt-1, aj⟩|
to calculate the footnote λt;
S3, updating the index set Λt = Λt-1 ∪ λt based on the footnote λt, and computing, based on the updated index set, the corresponding sparse representation coefficient
xt = argminx ||y - AΛt x||2;
S4, updating the residual rt = y - Axt; when the residual rt satisfies a preset convergence condition, outputting the sparse representation coefficient xt, and when the residual rt does not satisfy the preset convergence condition, adding one to the number of iterations and returning to step S2.
5. The face recognition method of claim 1, wherein the preset countermeasure generation network is obtained by training on the preset training sample set, and the training optimization function of the preset countermeasure generation network is:
minG maxD V(D, G) = Ex~Pdata[log D(x)] + Ez~Pz[log(1 - D(G(z)))]
wherein G is the generation network in the preset countermeasure generation network, D is the discrimination network in the preset countermeasure generation network, x is a real face image, G(z) is a face image generated by the generation network, Pdata is the data distribution of real face images, and Pz is the noise distribution.
6. A face recognition apparatus, comprising:
the image restoration unit is used for acquiring a test face image and restoring the test face image through a preset countermeasure generation network to obtain a restored image;
the sparse representation unit is used for carrying out face sparse representation on the restored image through a dictionary matrix constructed by a preset training sample set to obtain reconstructed images corresponding to various training samples;
and the computing unit is used for computing the residual error values of the reconstructed images and the repaired images corresponding to the various training samples, and selecting the class of the training sample corresponding to the minimum residual error value as the face recognition result of the tested face image.
7. The face recognition apparatus according to claim 6, wherein the sparse representation unit includes:
the combining subunit is configured to convert each training sample in a preset training sample set including k classes into an m-dimensional column vector, combine m-dimensional column vectors corresponding to all the training samples in all the classes, and obtain a dictionary matrix a corresponding to the preset training sample set, where an ith element in the dictionary matrix a is obtained by combining the m-dimensional column vectors corresponding to all the training samples in an ith class in the preset training sample set;
the construction subunit is used for constructing a sparse representation model through the dictionary matrix A and the repaired image;
the solving subunit is used for solving the sparse representation model to obtain sparse representation coefficients corresponding to various training samples;
and the sparse representation subunit is used for performing face sparse representation on the repaired image through the sparse representation coefficients corresponding to the various training samples and the dictionary matrix to obtain reconstructed images corresponding to the various training samples.
8. The face recognition apparatus of claim 7, wherein the sparse representation model is:
min ||x||1  subject to  ||Ax - y||2 ≤ ε
wherein x is a sparse representation coefficient, y is the repaired image, and ε is the error.
9. The face recognition device of claim 7, wherein the solving subunit is specifically configured to:
S1, initializing target parameters related to the sparse representation model, wherein the initialized target parameters comprise an initial iteration number t = 1, an initial residual r0 = y, an initial sparse representation coefficient x = 0, and an index set Λ0 = φ;
S2, substituting the initial residual and the initial sparse representation coefficient of the initialized target parameters into the target function
λt = argmaxj |⟨rt-1, aj⟩|
to calculate the footnote λt;
S3, updating the index set Λt = Λt-1 ∪ λt based on the footnote λt, and computing, based on the updated index set, the corresponding sparse representation coefficient
xt = argminx ||y - AΛt x||2;
S4, updating the residual rt = y - Axt; when the residual rt satisfies a preset convergence condition, outputting the sparse representation coefficient xt, and when the residual rt does not satisfy the preset convergence condition, adding one to the number of iterations and returning to step S2.
10. A face recognition device, the device comprising a processor and a memory:
the memory is used for storing program codes and transmitting the program codes to the processor;
the processor is configured to execute the face recognition method according to any one of claims 1 to 5 according to instructions in the program code.
CN202010910226.0A | 2020-09-02 | 2020-09-02 | Face recognition method, device and equipment | Pending | CN112001865A (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202010910226.0A (CN112001865A) | 2020-09-02 | 2020-09-02 | Face recognition method, device and equipment

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202010910226.0A (CN112001865A) | 2020-09-02 | 2020-09-02 | Face recognition method, device and equipment

Publications (1)

Publication Number | Publication Date
CN112001865A (en) | 2020-11-27

Family

ID=73465898

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202010910226.0A (Pending, CN112001865A) | Face recognition method, device and equipment | 2020-09-02 | 2020-09-02

Country Status (1)

Country | Link
CN (1) | CN112001865A (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN104063714A (en)* | 2014-07-20 | 2014-09-24 | 詹曙 | Fast face recognition algorithm for video monitoring based on CUDA parallel computing and sparse representation
WO2016050729A1 (en)* | 2014-09-30 | 2016-04-07 | Thomson Licensing | Face inpainting using piece-wise affine warping and sparse coding
CN109377448A (en)* | 2018-05-20 | 2019-02-22 | Beijing University of Technology | A face image inpainting method based on generative adversarial network
CN109492610A (en)* | 2018-11-27 | 2019-03-19 | Guangdong University of Technology | A pedestrian re-identification method, device and readable storage medium


Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
WAI KEUNG WONG, NA HAN, et al.: "Clustering Structure-Induced Robust Multi-View Graph Recovery", IEEE Transactions on Circuits and Systems for Video Technology, vol. 30, no. 10, 2 October 2019, page 3584, XP011812448, DOI: 10.1109/TCSVT.2019.2945202 *
He Miao, et al.: "Face recognition method based on implicit low-rank representation combined with sparse representation", Journal of Yunnan Normal University (Natural Science Edition), vol. 37, no. 01, 15 January 2017, pages 43-50 *
Zhang Wenqing, et al.: "Face recognition based on the ROMP algorithm", Journal of Shantou University (Natural Science Edition), vol. 30, no. 01, 15 February 2015, pages 48-51 *
Fang Xiaozhao: "Research on model learning based on sparse and low-rank constraints", China Doctoral Dissertations Full-text Database, Information Science and Technology series (monthly), no. 02, 15 February 2017, pages 138-129 *
Yang Ronggen, et al.: "Face recognition method based on sparse representation", Computer Science, vol. 37, no. 09, 15 September 2010, pages 267-269 *
Han Na: "Research on several problems and methods in cross-domain recognition", China Doctoral Dissertations Full-text Database, Information Science and Technology series (monthly), no. 3, 15 March 2020, pages 138-26 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN113554569A (en)* | 2021-08-04 | 2021-10-26 | Harbin Institute of Technology | Face image restoration system based on double memory dictionaries
CN113554569B (en)* | 2021-08-04 | 2022-03-08 | Harbin Institute of Technology | Face image restoration system based on double memory dictionaries
CN115906032A (en)* | 2023-02-20 | 2023-04-04 | Zhejiang Lab | Recognition model correction method and device and storage medium
CN116665024A (en)* | 2023-06-07 | 2023-08-29 | 浙江久婵物联科技有限公司 | Image processing system and method in face recognition


Legal Events

Code | Title | Description
PB01 | Publication |
SE01 | Entry into force of request for substantive examination |
RJ01 | Rejection of invention patent application after publication | Application publication date: 2020-11-27

