CN111401272B - Face feature extraction method, apparatus, and device - Google Patents

Face feature extraction method, apparatus, and device

Info

Publication number
CN111401272B
Authority
CN
China
Prior art keywords
user
face
feature extraction
extraction model
vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010197694.8A
Other languages
Chinese (zh)
Other versions
CN111401272A (en)
Inventor
徐崴
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ant Blockchain Technology Shanghai Co Ltd
Original Assignee
Alipay Hangzhou Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alipay Hangzhou Information Technology Co Ltd
Priority to CN202010197694.8A (CN111401272B)
Priority to CN202111156860.0A (CN113657352B)
Publication of CN111401272A
Priority to PCT/CN2020/140574 (WO2021184898A1)
Application granted
Publication of CN111401272B
Legal status: Active
Anticipated expiration


Abstract

Translated from Chinese

The embodiments of this specification disclose a face feature extraction method, apparatus, and device for privacy protection. The solution includes: inputting the face image of a user to be identified into an encoder to obtain an encoding vector of the face image output by the encoder; and, after the decoder in the face feature extraction model receives the encoding vector, outputting reconstructed face image data to the feature extraction model in the face feature extraction model, so that the feature extraction model, after characterizing the reconstructed face image data, outputs the face feature vector of the user to be identified.

[Figure of application 202010197694]

Description

Translated from Chinese

Face feature extraction method, apparatus, and device

Technical Field

One or more embodiments of this specification relate to the field of computer technology, and in particular to a face feature extraction method, apparatus, and device.

Background Art

With the development of computer technology and optical imaging technology, user identification based on face recognition technology is becoming increasingly common. At present, the face image of a user to be identified, collected by a client device, usually needs to be sent to a server device so that the server device can extract a face feature vector from the face image and generate a user identification result based on that feature vector. Since the face image of the user to be identified is sensitive user information, this approach of sending the face image to another device for feature extraction carries the risk of leaking sensitive user information.

Based on this, how to extract user face features while ensuring the privacy of user face information has become an urgent technical problem to be solved.

Summary of the Invention

In view of this, one or more embodiments of this specification provide a face feature extraction method, apparatus, and device for extracting user face features while ensuring the privacy of user face information.

To solve the above technical problem, the embodiments of this specification are implemented as follows:

An embodiment of this specification provides a face feature extraction method. The method uses a user feature extraction model for privacy protection; the user feature extraction model includes an encoder and a face feature extraction model, the face feature extraction model being obtained by locking together a decoder and a feature extraction model based on a convolutional neural network, where the encoder and the decoder form an autoencoder.

The encoder is connected to the decoder in the face feature extraction model, and the decoder is connected to the feature extraction model. The method includes:

inputting the face image of a user to be identified into the encoder to obtain an encoding vector of the face image output by the encoder, the encoding vector being vector data obtained by characterizing the face image; and

after the decoder in the face feature extraction model receives the encoding vector, outputting reconstructed face image data to the feature extraction model, so that the feature extraction model, after characterizing the reconstructed face image data, outputs the face feature vector of the user to be identified.

An embodiment of this specification provides a training method for a user feature extraction model for privacy protection. The method includes:

obtaining a first training sample set, where the training samples in the first training sample set are face images;

training an initial autoencoder with the first training sample set to obtain a trained autoencoder;

obtaining a second training sample set, where the training samples in the second training sample set are encoding vectors, an encoding vector being vector data obtained by characterizing a face image with the encoder in the trained autoencoder;

inputting the training samples in the second training sample set into the decoder of an initial face feature extraction model, so that the reconstructed face image data output by the decoder is used to train the initial feature extraction model, based on a convolutional neural network, in the initial face feature extraction model, to obtain a trained face feature extraction model, where the initial face feature extraction model is obtained by locking together the decoder and the initial feature extraction model, the decoder being the decoder in the trained autoencoder; and

generating a user feature extraction model for privacy protection according to the encoder and the trained face feature extraction model.

An embodiment of this specification provides a face feature extraction apparatus. The apparatus uses a user feature extraction model for privacy protection; the user feature extraction model includes an encoder and a face feature extraction model, the face feature extraction model being obtained by locking together a decoder and a feature extraction model based on a convolutional neural network, where the encoder and the decoder form an autoencoder; the encoder is connected to the decoder in the face feature extraction model, and the decoder is connected to the feature extraction model. The apparatus includes:

an input module, configured to input the face image of a user to be identified into the encoder and obtain an encoding vector of the face image output by the encoder, the encoding vector being vector data obtained by characterizing the face image; and

a face feature vector generation module, configured to cause the decoder in the face feature extraction model, after receiving the encoding vector, to output reconstructed face image data to the feature extraction model, so that the feature extraction model, after characterizing the reconstructed face image data, outputs the face feature vector of the user to be identified.

An embodiment of this specification provides a training apparatus for a user feature extraction model for privacy protection. The apparatus includes:

a first acquisition module, configured to obtain a first training sample set, where the training samples in the first training sample set are face images;

a first training module, configured to train an initial autoencoder with the first training sample set to obtain a trained autoencoder;

a second acquisition module, configured to obtain a second training sample set, where the training samples in the second training sample set are encoding vectors, an encoding vector being vector data obtained by characterizing a face image with the encoder in the trained autoencoder;

a second training module, configured to input the training samples in the second training sample set into the decoder of an initial face feature extraction model, so that the reconstructed face image data output by the decoder is used to train the initial feature extraction model, based on a convolutional neural network, in the initial face feature extraction model, to obtain a trained face feature extraction model, where the initial face feature extraction model is obtained by locking together the decoder and the initial feature extraction model, the decoder being the decoder in the trained autoencoder; and

a user feature extraction model generation module, configured to generate a user feature extraction model for privacy protection according to the encoder and the trained face feature extraction model.

An embodiment of this specification provides a client device, including:

at least one processor; and a memory communicatively connected to the at least one processor, where the memory stores an image encoder and instructions executable by the at least one processor, the image encoder being the encoder in an autoencoder, and the instructions being executed by the at least one processor to enable the at least one processor to:

input the face image of a user to be identified into the image encoder and obtain an encoding vector of the face image output by the image encoder, the encoding vector being vector data obtained by characterizing the face image; and

send the encoding vector to a server device, so that the server device uses a face feature extraction model to generate the face feature vector of the user to be identified according to the encoding vector, the face feature extraction model being obtained by locking together the decoder in the autoencoder and a feature extraction model based on a convolutional neural network.

An embodiment of this specification provides a server device, including:

at least one processor; and a memory communicatively connected to the at least one processor, where the memory stores a face feature extraction model obtained by locking together the decoder in an autoencoder and a feature extraction model based on a convolutional neural network, and the memory further stores instructions executable by the at least one processor, the instructions being executed by the at least one processor to enable the at least one processor to:

obtain an encoding vector of the face image of a user to be identified, the encoding vector being vector data obtained by characterizing the face image with the encoder in the autoencoder; and

input the encoding vector into the decoder in the face feature extraction model, whereupon the decoder outputs reconstructed face image data to the feature extraction model, so that the feature extraction model, after characterizing the reconstructed face image data, outputs the face feature vector of the user to be identified.

An embodiment of this specification provides a training device for a face feature extraction model for privacy protection, including:

at least one processor; and a memory communicatively connected to the at least one processor, where the memory stores instructions executable by the at least one processor, the instructions being executed by the at least one processor to enable the at least one processor to:

obtain a first training sample set, where the training samples in the first training sample set are face images;

train an initial autoencoder with the first training sample set to obtain a trained autoencoder;

obtain a second training sample set, where the training samples in the second training sample set are encoding vectors, an encoding vector being vector data obtained by characterizing a face image with the encoder in the trained autoencoder;

input the training samples in the second training sample set into the decoder of an initial face feature extraction model, so that the reconstructed face image data output by the decoder is used to train the initial feature extraction model, based on a convolutional neural network, in the initial face feature extraction model, to obtain a trained face feature extraction model, where the initial face feature extraction model is obtained by locking together the decoder and the initial feature extraction model, the decoder being the decoder in the trained autoencoder; and

generate a user feature extraction model for privacy protection according to the encoder and the trained face feature extraction model.

An embodiment of this specification can achieve the following beneficial effects:

Transmitting, storing, or using the encoding vector of a face image generated by the encoder in the autoencoder does not affect the privacy or security of user face information. A service provider can therefore obtain and process the encoding vector of the face image of the user to be identified to generate that user's face feature vector, without obtaining the user's original face image, and can thus extract the user's face feature vector while ensuring the privacy and security of the user's face information.

Moreover, because the face feature extraction model used to extract the face feature vector is obtained by locking together the decoder in the autoencoder and the feature extraction model based on a convolutional neural network, the reconstructed face image data generated by the decoder is not leaked during extraction of the user's face feature vector, ensuring the privacy and security of the user's face information.

Brief Description of the Drawings

The accompanying drawings described here are provided for a further understanding of one or more embodiments of this specification and constitute a part of them; the illustrative embodiments of this specification and their descriptions are used to explain one or more embodiments of this specification and do not constitute an improper limitation on them. In the drawings:

FIG. 1 is a schematic flowchart of a face feature extraction method according to an embodiment of this specification;

FIG. 2 is a schematic structural diagram of a face feature extraction model for privacy protection according to an embodiment of this specification;

FIG. 3 is a schematic flowchart of a training method for a face feature extraction model for privacy protection according to an embodiment of this specification;

FIG. 4 is a schematic structural diagram of a face feature extraction apparatus, corresponding to FIG. 1, according to an embodiment of this specification;

FIG. 5 is a schematic structural diagram of a training apparatus, corresponding to FIG. 3, for a face feature extraction model for privacy protection according to an embodiment of this specification.

Detailed Description

To make the objectives, technical solutions, and advantages of one or more embodiments of this specification clearer, the technical solutions of one or more embodiments of this specification are described clearly and completely below with reference to specific embodiments of this specification and the corresponding drawings. Obviously, the described embodiments are only some of the embodiments of this specification, not all of them. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in this specification without creative effort fall within the protection scope of one or more embodiments of this specification.

The technical solutions provided by the embodiments of this specification are described in detail below with reference to the accompanying drawings.

In the prior art, when performing user identification based on face recognition technology, the face image of the user to be identified usually needs to be sent to a service provider so that the service provider can extract a face feature vector from the face image and perform user identification based on that vector. Because this approach requires the service provider to obtain, store, or process the user's face image, it easily affects the privacy and security of the user's face information.

Moreover, at present, when a face feature vector is extracted from a user's face image, the image is usually preprocessed before extraction. For example, in a face recognition method based on principal component analysis (PCA), principal component information is first extracted from the user's face image and part of the detail information is discarded, and the face feature vector is generated from the principal component information. Face feature vectors generated in this way suffer from loss of face feature information, so the accuracy of the extracted face feature vectors is also poor.

To overcome these defects in the prior art, this solution provides the following embodiments:

FIG. 1 is a schematic flowchart of a face feature extraction method according to an embodiment of this specification. The method uses a face feature extraction model for privacy protection to extract face feature vectors.

FIG. 2 is a schematic structural diagram of a face feature extraction model for privacy protection according to an embodiment of this specification. As shown in FIG. 2, the user feature extraction model 201 for privacy protection includes an encoder 202 and a face feature extraction model 203. The face feature extraction model is obtained by locking together a decoder 204 and a feature extraction model 205 based on a convolutional neural network, where the encoder 202 and the decoder 204 form an autoencoder. The encoder 202 is connected to the decoder 204 in the face feature extraction model, and the decoder 204 is connected to the feature extraction model 205.

From a program perspective, the process shown in FIG. 1 may be executed by a user face feature extraction system or by a program running on such a system. The user face feature extraction system may include a client device and a server device, where the client device may be equipped with the encoder of the user feature extraction model for privacy protection, and the server device may be equipped with the face feature extraction model of the user feature extraction model for privacy protection.

As shown in FIG. 1, the process may include the following steps:

Step 102: Input the face image of the user to be identified into the encoder to obtain an encoding vector of the face image output by the encoder, the encoding vector being vector data obtained by characterizing the face image.

In the embodiments of this specification, when using various applications, a user usually needs to register an account with each application. When the user logs in to or unlocks the registered account, or uses it to make a payment, the operating user of the registered account (i.e., the user to be identified) usually needs to be identified, and the user to be identified is allowed to perform subsequent operations only after being determined to be the authenticated user of the registered account (i.e., the designated user). Similarly, in a scenario where a user needs to pass through an access control system, the user usually needs to be identified and is allowed to pass only after being determined to be a whitelisted user (i.e., the designated user) of the access control system.

When performing user identification based on face recognition technology, the client device usually needs to collect the face image of the user to be identified and extract the encoding vector of that face image with the encoder it carries. The client device can then send the encoding vector to the server device, so that the server device generates the face feature vector of the user to be identified from the encoding vector and performs user identification based on the generated face feature vector.

The encoder in step 102 may be the encoder in an autoencoder (AE). An autoencoder is a network model structure in deep learning whose distinguishing feature is that the input image itself can serve as supervision information: the network is trained with the goal of reconstructing the input image, thereby encoding the input image. Because an autoencoder needs no information other than the input image as supervision during training, its training cost is low, making it economical and practical.

An autoencoder usually consists of two parts: an encoder and a decoder. The encoder can be used to encode a face image to obtain an encoding vector of the face image, and the decoder can reconstruct the face image from that encoding vector to obtain a reconstructed face image.
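
As a concrete illustration of this encoder/decoder split, the following is a minimal PyTorch sketch of a convolutional autoencoder. The layer sizes, the 64x64 input resolution, and the 128-dimensional encoding are illustrative assumptions, not the architecture claimed by the patent.

```python
import torch
import torch.nn as nn

class FaceAutoencoder(nn.Module):
    """Minimal convolutional autoencoder sketch (illustrative sizes only)."""

    def __init__(self, code_dim: int = 128):
        super().__init__()
        # Encoder: 3x64x64 face image -> encoding vector.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, code_dim),                      # bottleneck layer
        )
        # Decoder: encoding vector -> reconstructed face image.
        self.decoder = nn.Sequential(
            nn.Linear(code_dim, 64 * 16 * 16),
            nn.Unflatten(1, (64, 16, 16)),
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, kernel_size=4, stride=2, padding=1),
            nn.Sigmoid(),  # pixel values in [0, 1]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Reconstruction: decode the encoding of the input image.
        return self.decoder(self.encoder(x))
```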

Since the encoding vector of a face image generated by the encoder in the autoencoder is vector data obtained by characterizing the face image, and that encoding vector cannot reveal the appearance of the user to be identified, the service provider's transmission, storage, and processing of the encoding vector does not affect the security or privacy of the face information of the user to be identified.

In the embodiments of this specification, because an autoencoder is an artificial neural network that learns its input data through unsupervised learning and can represent the input data efficiently and accurately, the encoding vector of a face image generated by its encoder contains relatively comprehensive face feature information with little noise. Extracting the face feature vector from this encoding vector therefore improves the accuracy of the resulting face feature vector, which in turn helps improve the accuracy of the user identification result generated from it.

In the embodiments of this specification, the face image of the user to be identified may be a multi-channel face image. In practice, when the face image collected by the user device is a single-channel face image, the single-channel image data of the user to be identified may first be determined, and a multi-channel image may be generated from that single-channel image data so that the encoder in the autoencoder can process the multi-channel face image, where the image data of each channel of the multi-channel face image is identical to the single-channel image data.
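
A minimal NumPy sketch of this single-channel to multi-channel conversion follows; the three-channel default is an assumption for illustration.

```python
import numpy as np

def to_multichannel(gray: np.ndarray, channels: int = 3) -> np.ndarray:
    """Replicate a single-channel (H, W) face image into an (H, W, C)
    multi-channel image whose every channel equals the single-channel data."""
    assert gray.ndim == 2, "expected a single-channel image of shape (H, W)"
    return np.repeat(gray[:, :, np.newaxis], channels, axis=2)
```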

Step 104: After the decoder in the face feature extraction model receives the encoding vector, it outputs reconstructed face image data to the feature extraction model, so that the feature extraction model, after characterizing the reconstructed face image data, outputs the face feature vector of the user to be identified.

In the embodiments of this specification, the training objective of the autoencoder is to minimize the difference between the reconstructed face image and the original face image, not to classify user faces. Consequently, if the encoding vector extracted by the encoder were used directly as the face feature vector of the user to be identified for user identification, the accuracy of the identification result would be poor.

In the embodiments of this specification, a face feature extraction model obtained by locking together the decoder in the autoencoder and a feature extraction model based on a convolutional neural network may be deployed on the server device. The decoder can generate reconstructed face image data from the encoding vector of the face image of the user to be identified, and the convolutional-neural-network-based feature extraction model can classify that reconstructed face image data; the output vector of the feature extraction model can therefore be used as the face feature vector of the user to be identified, improving the accuracy of the user identification result generated from it.

In the embodiments of this specification, because the convolutional-neural-network-based feature extraction model in the face feature extraction model extracts the face feature vector from the reconstructed face image, it can be implemented with existing face recognition models based on convolutional neural networks, such as DeepFace, FaceNet, MTCNN, or RetinaFace. The face feature extraction model thus has good compatibility.

Moreover, the reconstructed face image data that the decoder in the face feature extraction model obtains by decoding the encoding vector of the face image of the user to be identified has a high similarity to that face image, so the face feature vector extracted by the convolutional-neural-network-based feature extraction model is more accurate.

In the embodiments of this specification, encryption software may be used to lock the decoder in the autoencoder and the convolutional-neural-network-based feature extraction model, or the two may be stored in a secure hardware module of the device, so that users cannot read the reconstructed face image data output by the decoder, thereby ensuring the privacy of user face information. There are many ways to implement this locking, and they are not specifically limited here, as long as the reconstructed face image data output by the decoder is used securely.
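
The resulting division of labor can be sketched as follows. This is an illustrative Python (PyTorch) sketch under the assumptions of the earlier autoencoder example; `decoder` and `extractor` are hypothetical handles for the locked decoder and the CNN feature extraction model, and a real deployment would enforce the locking with encryption software or secure hardware rather than function scope.

```python
import torch
import torch.nn as nn

def extract_face_features(encoding_vector: torch.Tensor,
                          decoder: nn.Module,
                          extractor: nn.Module) -> torch.Tensor:
    """Server-side step: decode the received encoding vector into
    reconstructed face image data and immediately characterize it.
    Only the face feature vector leaves this function; the
    reconstructed image is never exposed to the caller."""
    with torch.no_grad():
        reconstructed = decoder(encoding_vector)
        return extractor(reconstructed)
```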

In practice, after the service provider or another user obtains read permission for the reconstructed face image data of the user to be identified, the reconstructed face image data output by the decoder in the face feature extraction model can also be retrieved on the basis of that permission, which helps improve data utilization.

It should be understood that the order of some steps in the methods described in one or more embodiments of this specification may be interchanged according to actual needs, and some steps may be omitted or deleted.

With the method in FIG. 1, the service provider can extract the face feature vector from the encoding vector of the face image of the user to be identified and therefore does not need to obtain the face image itself; this avoids transmission, storage, and use of the face image by the service provider and ensures the privacy and security of the face information of the user to be identified.

Moreover, because the reconstructed face image data generated by the decoder in the face feature extraction model has a high similarity to the face image of the user to be identified, the face feature vector that the convolutional-neural-network-based feature extraction model extracts from the reconstructed image is more accurate.

Based on the method in FIG. 1, the embodiments of this specification also provide some specific implementations of the method, which are described below.

In the embodiments of this specification, the encoder may include the input layer, first hidden layer, and bottleneck layer of the autoencoder, and the decoder may include the second hidden layer and output layer of the autoencoder.

The input layer of the encoder is connected to the first hidden layer, the first hidden layer is connected to the bottleneck layer, the bottleneck layer of the encoder is connected to the second hidden layer of the decoder, the second hidden layer is connected to the output layer, and the output layer is connected to the feature extraction model.

The input layer may be used to receive the face image of the user to be identified.

The first hidden layer may be used to encode the face image to obtain a first feature vector.

The bottleneck layer may be used to reduce the dimensionality of the first feature vector to obtain the encoding vector of the face image, the number of dimensions of the encoding vector being smaller than that of the first feature vector.

The second hidden layer may be used to decode the encoding vector to obtain a second feature vector.

The output layer may be used to generate reconstructed face image data from the second feature vector.

In the embodiments of this specification, because the encoder in the autoencoder must encode the image and the decoder must generate the reconstructed face image, the first hidden layer and the second hidden layer may each include multiple convolutional layers, and may also include pooling layers and fully connected layers, to ensure the quality of encoding and decoding. The bottleneck layer is used to reduce the feature dimensionality: the feature vectors output by the hidden layers connected to the bottleneck layer all have a higher dimensionality than the feature vector output by the bottleneck layer.

In the embodiments of this specification, the convolutional-neural-network-based feature extraction model may include an input layer, a convolutional layer, a fully connected layer, and an output layer, where the input layer is connected to the output of the decoder, the input layer is also connected to the convolutional layer, the convolutional layer is connected to the fully connected layer, and the fully connected layer is connected to the output layer.

The input layer may be used to receive the reconstructed face image data output by the decoder.

The convolutional layer may be used to perform local feature extraction on the reconstructed face image data to obtain a local face feature vector of the user to be identified.

The fully connected layer may be used to generate the face feature vector of the user to be identified from the local face feature vector.

The output layer may be used to generate a face classification result from the face feature vector of the user to be identified output by the fully connected layer.

In the embodiments of this specification, the face feature vector of the user to be identified may be the output vector of the fully connected layer adjacent to the output layer; alternatively, when the convolutional-neural-network-based feature extraction model has multiple fully connected layers, it may be the output vector of a fully connected layer separated from the output layer by N network layers. This is not specifically limited.
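
As an illustration of taking the output of the fully connected layer adjacent to the output layer as the face feature vector, the following PyTorch sketch returns both the feature vector and the classification logits; the layer sizes and the 512-dimensional feature are assumptions, not values fixed by the patent.

```python
import torch
import torch.nn as nn

class CNNFeatureExtractor(nn.Module):
    """Sketch of a CNN whose penultimate fully connected layer provides
    the face feature vector, while the final (output) layer produces the
    classification logits used during training."""

    def __init__(self, num_identities: int, feature_dim: int = 512):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=3, padding=1),  # local feature extraction
            nn.ReLU(),
            nn.AdaptiveAvgPool2d((8, 8)),
        )
        self.fc = nn.Linear(64 * 8 * 8, feature_dim)               # feature layer
        self.classifier = nn.Linear(feature_dim, num_identities)   # output layer

    def forward(self, x: torch.Tensor):
        h = self.conv(x).flatten(1)
        features = self.fc(h)               # face feature vector
        logits = self.classifier(features)  # face classification result
        return features, logits
```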

In the embodiments of this specification, the face feature vector of the user to be identified generated in step 104 can be used in user identification scenarios. The face feature extraction model may therefore further include a user matching model, whose input may be connected to the output of the convolutional-neural-network-based feature extraction model in the face feature extraction model.

After step 104, the method may further include: causing the user matching model to receive the face feature vector of the user to be identified and the face feature vector of a designated user, and to generate, according to the vector distance between the two face feature vectors, output information indicating whether the user to be identified is the designated user, where the face feature vector of the designated user is obtained by processing the face image of the designated user with the encoder and the face feature extraction model.

In the embodiments of this specification, the vector distance between the face feature vector of the user to be identified and that of the designated user can represent the similarity between the two. Specifically, when the vector distance is less than or equal to a threshold, the user to be identified and the designated user can be determined to be the same user; when the vector distance is greater than the threshold, they can be determined to be different users. The threshold can be set according to actual needs and is not specifically limited.
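
A minimal sketch of this threshold comparison follows; the patent fixes neither the distance metric nor the threshold, so the Euclidean distance over L2-normalized vectors and the threshold value below are illustrative choices.

```python
import torch
import torch.nn.functional as F

def same_user(feat_a: torch.Tensor, feat_b: torch.Tensor,
              threshold: float = 1.0) -> bool:
    """Decide whether two face feature vectors belong to the same user by
    comparing their vector distance against a threshold. Normalizing first
    makes the threshold independent of the feature scale."""
    a = F.normalize(feat_a, dim=-1)
    b = F.normalize(feat_b, dim=-1)
    return torch.norm(a - b).item() <= threshold
```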

In the embodiments of this specification, the method in FIG. 1 can be used to generate both the face feature vector of the user to be identified and that of the designated user. Because user face feature vectors generated by the method in FIG. 1 are more accurate, this helps improve the accuracy of the user identification result.

FIG. 3 is a schematic flowchart of a training method for a face recognition model according to an embodiment of this specification. From a program perspective, the process may be executed by a server or by a program running on a server. As shown in FIG. 3, the process may include the following steps:

Step 302: Obtain a first training sample set, where the training samples in the first training sample set are face images.

In the embodiments of this specification, the training samples in the first training sample set are face images for which usage rights have been obtained, for example, face images from public face databases or face pictures authorized by users, so that training the face recognition model does not affect the privacy of user face information.

In the embodiments of this specification, the training samples in the first training sample set may be multi-channel face images. When a face image from a public face database or a user-authorized face picture is a single-channel face image, its single-channel image data may first be determined and a multi-channel image generated from it to serve as a training sample in the first training sample set, where the image data of each channel of the multi-channel face image is identical to the single-channel image data, thus ensuring the consistency of the training samples in the first training sample set.

Step 304: Train an initial autoencoder with the first training sample set to obtain a trained autoencoder.

In the embodiments of this specification, step 304 may specifically include: for each training sample in the first training sample set, inputting the training sample into the initial autoencoder to obtain reconstructed face image data, and optimizing the model parameters of the initial autoencoder with the goal of minimizing the image reconstruction loss, to obtain the trained autoencoder, where the image reconstruction loss is the difference between the reconstructed face image data and the training sample.
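
A minimal training loop for step 304 might look as follows, reusing the earlier FaceAutoencoder sketch; the patent only defines the reconstruction loss as a difference between reconstruction and sample, so the mean squared error and all hyperparameters here are illustrative assumptions.

```python
import torch
import torch.nn as nn

def train_autoencoder(model: nn.Module, loader,
                      epochs: int = 10, lr: float = 1e-3) -> nn.Module:
    """Train the initial autoencoder by minimizing the image
    reconstruction loss between the reconstructed face image data
    and the training sample itself."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.MSELoss()  # one plausible reconstruction loss
    for _ in range(epochs):
        for faces in loader:               # each batch: face images only
            reconstruction = model(faces)
            loss = criterion(reconstruction, faces)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model
```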

In the embodiments of this specification, the input layer, first hidden layer, and bottleneck layer of the autoencoder form the encoder, and the second hidden layer and output layer form the decoder. The encoder can be used to encode a face image to obtain its encoding vector, and the decoder can decode the encoding vector generated by the encoder to obtain a reconstructed face image. The functions of the layers of the autoencoder may be the same as those described in the method embodiment of FIG. 1 and are not repeated here.

Step 306: Obtain a second training sample set, where the training samples in the second training sample set are encoding vectors, an encoding vector being vector data obtained by characterizing a face image with the encoder in the trained autoencoder.

In the embodiments of this specification, the training samples in the second training sample set may be vector data obtained by using the encoder in the trained autoencoder to characterize the face images of users who require privacy protection. Which users require privacy protection can be determined according to actual needs, for example, the operating users and authenticated users of accounts registered with an application, or the users to be identified and whitelisted users of an access control system based on face recognition technology.

In the embodiments of this specification, the encoder in the trained autoencoder may be used in advance to generate and store the training samples in the second training sample set; when step 306 is executed, the pre-generated training samples only need to be retrieved from the database. Because the stored training samples are encoding vectors of user face images, and these encoding vectors cannot reveal the users' appearance, the service provider's transmission, storage, and processing of them does not affect the privacy of user face information.

Step 308: Input the training samples in the second training sample set into the decoder of the initial face feature extraction model, so that the reconstructed face image data output by the decoder is used to train the initial feature extraction model, based on a convolutional neural network, in the initial face feature extraction model, to obtain a trained face feature extraction model; the initial face feature extraction model is obtained by locking together the decoder and the initial feature extraction model, the decoder being the decoder in the trained autoencoder.

In the embodiments of this specification, when training the initial face feature extraction model, the model parameters of the decoder in the initial face feature extraction model do not need to be optimized; only the model parameters of the initial convolutional-neural-network-based feature extraction model need to be optimized.

Training the initial convolutional-neural-network-based feature extraction model with the reconstructed face image data output by the decoder may specifically include:

classifying the reconstructed face image data with the initial feature extraction model to obtain a predicted class label for the reconstructed face image data; obtaining a preset class label for the reconstructed face image data; and optimizing the model parameters of the initial feature extraction model with the goal of minimizing the classification loss, the classification loss being the difference between the predicted class label and the preset class label.
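
A sketch of this step, assuming the earlier FaceAutoencoder and CNNFeatureExtractor examples (the extractor returns a (features, logits) pair), is shown below. The decoder's parameters are held fixed, and cross-entropy stands in for the classification loss; both choices, like the hyperparameters, are illustrative.

```python
import torch
import torch.nn as nn

def train_extractor(decoder: nn.Module, extractor: nn.Module,
                    loader, epochs: int = 10, lr: float = 1e-3) -> nn.Module:
    """Train only the CNN feature extraction model: the decoder comes from
    the trained autoencoder and its parameters are not optimized. Each
    batch holds encoding vectors and preset identity labels."""
    decoder.requires_grad_(False)
    optimizer = torch.optim.Adam(extractor.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for codes, labels in loader:
            with torch.no_grad():
                reconstructed = decoder(codes)   # reconstructed face image data
            _, logits = extractor(reconstructed)
            loss = criterion(logits, labels)     # classification loss
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return extractor
```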

Step 310: Generate a user feature extraction model for privacy protection according to the encoder and the trained face feature extraction model.

In the embodiments of this specification, the input of the encoder receives the face image of the user to be identified, the output of the encoder is connected to the input of the decoder in the trained face feature extraction model, the output of the decoder is connected to the input of the convolutional-neural-network-based feature extraction model in the trained face feature extraction model, and the output of that feature extraction model is the face feature vector of the user to be identified.

In the embodiments of this specification, the autoencoder and the initial face feature extraction model are trained, and a face feature extraction model for privacy protection is built from the trained autoencoder and the trained initial face feature extraction model. Because the autoencoder needs no information other than the input image as supervision during training, the training cost of this face feature extraction model for privacy protection is reduced, making it economical and practical.

Based on the method in FIG. 3, the embodiments of this specification also provide some specific implementations of the method, which are described below.

In the embodiments of this specification, the user feature extraction model generated by the method in FIG. 3 can be applied to user identification scenarios. After the user feature extraction model extracts a user's face feature vector, the face feature vectors usually still need to be compared to generate the final user identification result.

Therefore, before the face feature extraction model for privacy protection is generated in step 310, the method may further include: establishing a user matching model, the user matching model being used to generate, according to the vector distance between a first face feature vector of the user to be identified and a second face feature vector of a designated user, an output result indicating whether the user to be identified is the designated user, where the first face feature vector is obtained by processing the face image of the user to be identified with the encoder and the trained face feature extraction model, and the second face feature vector is obtained by processing the face image of the designated user with the encoder and the trained face feature extraction model.

Step 310 may specifically include: generating a user feature extraction model for privacy protection composed of the encoder, the trained face feature extraction model, and the user matching model.

基于同样的思路,本说明书实施例还提供了上述方法对应的装置。图4为本说明书实施例提供的对应于图1的一种人脸特征提取装置的结构示意图。所述装置使用了用于隐私保护的用户特征提取模型,所述用户特征提取模型可以包括:编码器及人脸特征提取模型,所述人脸特征提取模型是通过对解码器及基于卷积神经网络的特征提取模型进行锁定而得到的模型,其中,所述编码器和所述解码器组成自编码器;所述编码器与所述人脸特征提取模型中的解码器连接,所述解码器与所述特征提取模型连接;所述装置可以包括:Based on the same idea, the embodiments of the present specification also provide a device corresponding to the above method. FIG. 4 is a schematic structural diagram of a face feature extraction apparatus corresponding to FIG. 1 according to an embodiment of the present specification. The device uses a user feature extraction model for privacy protection, and the user feature extraction model may include: an encoder and a face feature extraction model, and the face feature extraction model is based on a decoder and a convolution neural network. A model obtained by locking the feature extraction model of the network, wherein the encoder and the decoder form an autoencoder; the encoder is connected to the decoder in the face feature extraction model, and the decoder connected with the feature extraction model; the device may include:

输入模块402,可以用于将待识别用户的人脸图像输入所述编码器,得到所述编码器输出的所述人脸图像的编码向量,所述编码向量为对所述人脸图像进行特征化处理后得到的向量数据;Theinput module 402 can be used to input the face image of the user to be identified into the encoder to obtain an encoding vector of the face image output by the encoder, where the encoding vector is used to characterize the face image. The vector data obtained after processing;

a face feature vector generation module 404, configured to cause the decoder in the face feature extraction model, after receiving the encoding vector, to output reconstructed face image data to the feature extraction model, so that the feature extraction model, after characterizing the reconstructed face image data, outputs the face feature vector of the user to be identified.

Optionally, the encoder may include an input layer, a first hidden layer, and a bottleneck layer of the autoencoder, and the decoder may include a second hidden layer and an output layer of the autoencoder. The input layer of the encoder is connected to the first hidden layer, the first hidden layer is connected to the bottleneck layer, the bottleneck layer of the encoder is connected to the second hidden layer of the decoder, the second hidden layer is connected to the output layer, and the output layer is connected to the feature extraction model.

The input layer of the autoencoder may be configured to receive the face image of the user to be identified; the first hidden layer may be configured to encode the face image to obtain a first feature vector; the bottleneck layer may be configured to reduce the dimensionality of the first feature vector to obtain the encoding vector of the face image, the encoding vector having fewer dimensions than the first feature vector; the second hidden layer may be configured to decode the encoding vector to obtain a second feature vector; and the output layer may be configured to generate the reconstructed face image data from the second feature vector.
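For illustration only, a minimal sketch of this encoder/decoder split is given below, assuming Python with PyTorch; the 64x64 input size, the layer widths, and the activation functions are assumptions of the example rather than requirements of the embodiments.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    # Input layer -> first hidden layer -> bottleneck layer.
    def __init__(self, img_dim=64 * 64, hidden_dim=1024, code_dim=128):
        super().__init__()
        self.hidden = nn.Linear(img_dim, hidden_dim)       # first hidden layer
        self.bottleneck = nn.Linear(hidden_dim, code_dim)  # dimensionality reduction

    def forward(self, x):                # x: flattened face image
        h = torch.relu(self.hidden(x))   # first feature vector
        return self.bottleneck(h)        # encoding vector, code_dim < hidden_dim

class Decoder(nn.Module):
    # Second hidden layer -> output layer.
    def __init__(self, img_dim=64 * 64, hidden_dim=1024, code_dim=128):
        super().__init__()
        self.hidden = nn.Linear(code_dim, hidden_dim)  # second hidden layer
        self.output = nn.Linear(hidden_dim, img_dim)   # output layer

    def forward(self, code):
        h = torch.relu(self.hidden(code))     # second feature vector
        return torch.sigmoid(self.output(h))  # reconstructed face image data
```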

Optionally, the feature extraction model based on a convolutional neural network may include an input layer, a convolutional layer, and a fully connected layer, where the input layer is connected to the output of the decoder, the input layer is also connected to the convolutional layer, and the convolutional layer is connected to the fully connected layer.

The input layer of the feature extraction model based on the convolutional neural network may be configured to receive the reconstructed face image data output by the decoder; the convolutional layer may be configured to perform local feature extraction on the reconstructed face image data to obtain local face feature vectors of the user to be identified; and the fully connected layer may be configured to generate the face feature vector of the user to be identified from the local face feature vectors.
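For illustration only, a minimal sketch of such a convolutional extractor follows, again assuming PyTorch; the channel counts, the 64x64 three-channel input, and the 256-dimensional feature size are assumptions of the example.

```python
import torch.nn as nn

class FeatureExtractor(nn.Module):
    # Input layer -> convolutional layers -> fully connected layer; the extra
    # classification head reflects the optional output layer used in training.
    def __init__(self, num_classes=1000, feat_dim=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                                # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                                # 32x32 -> 16x16
        )
        self.fc = nn.Linear(32 * 16 * 16, feat_dim)         # face feature vector
        self.classifier = nn.Linear(feat_dim, num_classes)  # classification head

    def forward(self, x):                      # x: reconstructed face image
        local_feats = self.conv(x).flatten(1)  # local face feature vectors
        feat = self.fc(local_feats)            # face feature vector
        return feat, self.classifier(feat)     # vector plus class logits
```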

Optionally, the user feature extraction model may further include a user matching model connected to the feature extraction model, and the apparatus may further include:

a user matching module, configured to cause the user matching model, after receiving the face feature vector of the user to be identified and the face feature vector of a designated user, to generate, according to the vector distance between the face feature vector of the user to be identified and the face feature vector of the designated user, output information indicating whether the user to be identified is the designated user, where the face feature vector of the designated user is obtained by processing the face image of the designated user with the encoder and the face feature extraction model.

Based on the same idea, the embodiments of this specification further provide an apparatus corresponding to the above method. FIG. 5 shows a training apparatus, corresponding to FIG. 3, for a face feature extraction model for privacy protection according to an embodiment of this specification. As shown in FIG. 5, the apparatus may include:

a first acquisition module 502, configured to acquire a first training sample set, where the training samples in the first training sample set are face images;

a first training module 504, configured to train an initial autoencoder with the first training sample set to obtain a trained autoencoder;

a second acquisition module 506, configured to acquire a second training sample set, where the training samples in the second training sample set are encoding vectors, the encoding vectors being vector data obtained by characterizing face images with the encoder of the trained autoencoder;

a second training module 508, configured to input the training samples in the second training sample set into the decoder of an initial face feature extraction model, so that the reconstructed face image data output by the decoder is used to train an initial feature extraction model based on a convolutional neural network in the initial face feature extraction model, obtaining a trained face feature extraction model, where the initial face feature extraction model is obtained by locking together the decoder and the initial feature extraction model, and the decoder is the decoder of the trained autoencoder; and

a user feature extraction model generation module 510, configured to generate a user feature extraction model for privacy protection from the encoder and the trained face feature extraction model.

Optionally, the first training module 504 may specifically be configured to:

for each training sample in the first training sample set, input the training sample into the initial autoencoder to obtain reconstructed face image data; and optimize the model parameters of the initial autoencoder with the goal of minimizing an image reconstruction loss to obtain the trained autoencoder, the image reconstruction loss being the difference between the reconstructed face image data and the training sample.
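For illustration only, this training loop can be sketched as follows, reusing the Encoder and Decoder sketches above; the MSE loss, the Adam optimizer, and the learning rate are assumptions of the example.

```python
import torch
import torch.nn as nn

def train_autoencoder(encoder, decoder, batches, epochs=10, lr=1e-3):
    # Optimize both halves jointly to minimize the image reconstruction loss,
    # i.e. the difference between the reconstruction and the training sample.
    params = list(encoder.parameters()) + list(decoder.parameters())
    opt = torch.optim.Adam(params, lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for x in batches:                # x: batch of flattened face images
            recon = decoder(encoder(x))  # reconstructed face image data
            loss = loss_fn(recon, x)     # image reconstruction loss
            opt.zero_grad()
            loss.backward()
            opt.step()
```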

Optionally, training the initial feature extraction model based on the convolutional neural network in the initial face feature extraction model with the reconstructed face image data output by the decoder may specifically include: classifying the reconstructed face image data with the initial feature extraction model to obtain predicted class-label values for the reconstructed face image data; acquiring preset class-label values for the reconstructed face image data; and optimizing the model parameters of the initial feature extraction model with the goal of minimizing a classification loss, the classification loss being the difference between the predicted class-label values and the preset class-label values.
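For illustration only, a sketch of this second training stage follows, reusing the earlier Decoder and FeatureExtractor sketches; freezing the decoder (consistent with it being taken, already trained, from the autoencoder), the cross-entropy loss, and the single-to-multi-channel replication are assumptions of the example.

```python
import torch
import torch.nn as nn

def train_extractor(decoder, extractor, samples, epochs=10, lr=1e-3):
    # Only the CNN extractor is optimized; the trained decoder stays fixed.
    for p in decoder.parameters():
        p.requires_grad = False
    opt = torch.optim.Adam(extractor.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for code, label in samples:    # encoding vectors + preset class labels
            with torch.no_grad():
                recon = decoder(code)  # reconstructed face image data
            # Replicate the single channel so every channel carries the same
            # data, matching the multi-channel convention in the claims.
            img = recon.view(-1, 1, 64, 64).repeat(1, 3, 1, 1)
            _, logits = extractor(img)
            loss = loss_fn(logits, label)  # classification loss vs. preset label
            opt.zero_grad()
            loss.backward()
            opt.step()
```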

Optionally, the apparatus in FIG. 5 may further include a user matching model establishment module, configured to establish a user matching model, where the user matching model is configured to generate, according to the vector distance between a first face feature vector of a user to be identified and a second face feature vector of a designated user, an output result indicating whether the user to be identified is the designated user; the first face feature vector is obtained by processing the face image of the user to be identified with the encoder and the trained face feature extraction model, and the second face feature vector is obtained by processing the face image of the designated user with the encoder and the trained face feature extraction model.

The user feature extraction model generation module 510 may specifically be configured to generate a user feature extraction model for privacy protection composed of the encoder, the trained face feature extraction model, and the user matching model.

Based on the same idea, the embodiments of this specification further provide a client device corresponding to the above method. The client device may include:

at least one processor; and a memory communicatively connected to the at least one processor, where the memory stores an image encoder and instructions executable by the at least one processor, the image encoder being the encoder of an autoencoder, and the instructions are executed by the at least one processor to enable the at least one processor to:

input the face image of a user to be identified into the image encoder to obtain an encoding vector of the face image output by the image encoder, the encoding vector being vector data obtained by characterizing the face image; and

send the encoding vector to a server device, so that the server device generates the face feature vector of the user to be identified from the encoding vector with a face feature extraction model, the face feature extraction model being a model obtained by locking together the decoder of the autoencoder and a feature extraction model based on a convolutional neural network.
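For illustration only, the client-side flow can be sketched as follows with the Encoder sketch above; send_fn stands in for whatever transport the deployment uses (for example an HTTPS POST) and is purely an assumption of the example.

```python
import torch

def encode_and_send(encoder, face_image, send_fn):
    # Only the encoding vector leaves the device; the raw face image does not.
    with torch.no_grad():
        code = encoder(face_image.flatten())  # encoding vector
    send_fn(code.tolist())                    # serialize and transmit
```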

In the embodiments of this specification, the client device can use the encoder of the autoencoder it carries to generate the encoding vector of the face image of the user to be identified, so the client device sends the encoding vector of the face image to the server device for user identification instead of the face image itself. Transmission of the face image of the user to be identified is thereby avoided, ensuring the privacy and security of the user's face information.

Based on the same idea, the embodiments of this specification further provide a server device corresponding to the above method. The server device may include:

at least one processor; and a memory communicatively connected to the at least one processor, where

the memory stores a face feature extraction model obtained by locking together the decoder of an autoencoder and a feature extraction model based on a convolutional neural network, and the memory further stores instructions executable by the at least one processor, the instructions being executed by the at least one processor to enable the at least one processor to:

acquire an encoding vector of the face image of a user to be identified, the encoding vector being vector data obtained by characterizing the face image with the encoder of the autoencoder; and

after the encoding vector is input into the decoder in the face feature extraction model, cause the decoder to output reconstructed face image data to the feature extraction model, so that the feature extraction model, after characterizing the reconstructed face image data, outputs the face feature vector of the user to be identified.
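For illustration only, the server-side counterpart can be sketched as follows, reusing the Decoder and FeatureExtractor sketches; the 64x64 size and the channel replication are assumptions of the example.

```python
import torch

def extract_on_server(decoder, extractor, code_list):
    # Rebuild image data from the received encoding vector and extract the
    # face feature vector; the raw face image is never received or stored.
    code = torch.tensor(code_list)
    with torch.no_grad():
        recon = decoder(code.unsqueeze(0))                 # reconstructed data
        img = recon.view(1, 1, 64, 64).repeat(1, 3, 1, 1)  # assumed layout
        feat, _ = extractor(img)
    return feat.squeeze(0)                                 # face feature vector
```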

In the embodiments of this specification, the server device can generate the face feature vector of the user to be identified from the encoding vector of the user's face image with the face feature extraction model it carries, so the server device can perform user identification without ever acquiring the face image of the user to be identified. This not only avoids transmitting the face image of the user to be identified but also spares the server device from storing and processing that face image, improving the privacy and security of the user's face information.

Based on the same idea, the embodiments of this specification further provide a training device, corresponding to the method in FIG. 3, for a face feature extraction model for privacy protection. The device may include:

at least one processor; and a memory communicatively connected to the at least one processor, where the memory stores instructions executable by the at least one processor, the instructions being executed by the at least one processor to enable the at least one processor to:

acquire a first training sample set, where the training samples in the first training sample set are face images;

train an initial autoencoder with the first training sample set to obtain a trained autoencoder;

acquire a second training sample set, where the training samples in the second training sample set are encoding vectors, the encoding vectors being vector data obtained by characterizing face images with the encoder of the trained autoencoder;

input the training samples in the second training sample set into the decoder of an initial face feature extraction model, so that the reconstructed face image data output by the decoder is used to train an initial feature extraction model based on a convolutional neural network in the initial face feature extraction model, obtaining a trained face feature extraction model, where the initial face feature extraction model is obtained by locking together the decoder and the initial feature extraction model, and the decoder is the decoder of the trained autoencoder; and

generate a user feature extraction model for privacy protection from the encoder and the trained face feature extraction model.

The foregoing describes specific embodiments of this specification. Other embodiments are within the scope of the appended claims. In some cases, the actions or steps recited in the claims can be performed in an order different from that in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying drawings do not necessarily require the particular order shown, or a sequential order, to achieve desirable results. In some implementations, multitasking and parallel processing are also possible or may be advantageous.

In the 1990s, an improvement to a technology could clearly be classified as either a hardware improvement (for example, an improvement to circuit structures such as diodes, transistors, or switches) or a software improvement (an improvement to a method flow). With the development of technology, however, improvements to many of today's method flows can be regarded as direct improvements to hardware circuit structures. Designers almost always obtain the corresponding hardware circuit structure by programming an improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement to a method flow cannot be implemented by a hardware entity module. For example, a programmable logic device (PLD), such as a field-programmable gate array (FPGA), is an integrated circuit whose logic functions are determined by the user's programming of the device. A designer programs a digital system "onto" a single PLD, without asking a chip manufacturer to design and fabricate a dedicated integrated circuit chip. Moreover, instead of integrated circuit chips being made by hand, this programming is nowadays mostly carried out with "logic compiler" software, which is similar to the software compilers used in program development; the source code to be compiled must be written in a specific programming language called a hardware description language (HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language); the most commonly used at present are VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog. Those skilled in the art will also appreciate that a hardware circuit implementing a logical method flow can easily be obtained merely by lightly programming the method flow in one of the above hardware description languages and programming it into an integrated circuit.

The controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (such as software or firmware) executable by the (micro)processor, logic gates, switches, an application-specific integrated circuit (ASIC), a programmable logic controller, or an embedded microcontroller. Examples of controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320; a memory controller may also be implemented as part of the control logic of a memory. Those skilled in the art also know that, besides implementing a controller purely as computer-readable program code, the method steps can be logically programmed so that the controller implements the same functions in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller can therefore be regarded as a hardware component, and the means included in it for implementing various functions can also be regarded as structures within the hardware component. Or, the means for implementing various functions can even be regarded as both software modules implementing the method and structures within the hardware component.

The systems, apparatuses, modules, or units described in the above embodiments may be implemented by a computer chip or an entity, or by a product having a certain function. A typical implementation device is a computer. Specifically, the computer may be, for example, a personal computer, a laptop computer, a cellular phone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.

For convenience of description, the above apparatuses are described with their functions divided into various units. Of course, when one or more embodiments of this specification are implemented, the functions of the units may be implemented in one or more pieces of software and/or hardware.

Those skilled in the art should understand that one or more embodiments of this specification may be provided as a method, a system, or a computer program product. Accordingly, one or more embodiments of this specification may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, one or more embodiments of this specification may take the form of a computer program product implemented on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, and optical storage) containing computer-usable program code.

One or more embodiments of this specification are described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to one or more embodiments of this specification. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or another programmable data processing device to operate in a particular manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

These computer program instructions may also be loaded onto a computer or another programmable data processing device, so that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.

The memory may include non-persistent storage, random-access memory (RAM), and/or non-volatile memory in computer-readable media, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.

Computer-readable media include persistent and non-persistent, removable and non-removable media, and information storage may be implemented by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random-access memory (SRAM), dynamic random-access memory (DRAM), other types of random-access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, and any other non-transmission media that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.

It should also be noted that the terms "comprise" and "include," and any other variants thereof, are intended to cover non-exclusive inclusion, so that a process, method, article, or device that includes a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or device that includes the element.

One or more embodiments of this specification may be described in the general context of computer-executable instructions executed by a computer, such as program modules. Generally, program modules include routines, programs, objects, components, data structures, and the like that perform particular tasks or implement particular abstract data types. One or more embodiments of this specification may also be practiced in distributed computing environments, where tasks are performed by remote processing devices connected through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media, including storage devices.

The embodiments in this specification are described in a progressive manner; for identical or similar parts among the embodiments, reference may be made to one another, and each embodiment focuses on its differences from the others. In particular, since the system embodiments are substantially similar to the method embodiments, their description is relatively brief, and reference may be made to the relevant parts of the method embodiments.

The above descriptions are merely embodiments of this specification and are not intended to limit one or more embodiments of this specification. Various modifications and variations of one or more embodiments of this specification will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of one or more embodiments of this specification shall fall within the scope of the claims of one or more embodiments of this specification.

Claims (22)

1.一种人脸特征提取方法,所述方法使用了用于隐私保护的用户特征提取模型,所述用户特征提取模型包括:编码器及人脸特征提取模型,所述人脸特征提取模型是通过对解码器及基于卷积神经网络的特征提取模型进行加密锁定而得到的模型,其中,所述编码器和所述解码器组成自编码器;1. a face feature extraction method, the method has used a user feature extraction model for privacy protection, the user feature extraction model comprises: an encoder and a face feature extraction model, and the face feature extraction model is A model obtained by encrypting and locking a decoder and a feature extraction model based on a convolutional neural network, wherein the encoder and the decoder form an autoencoder;所述编码器与所述人脸特征提取模型中的解码器连接,所述解码器与所述特征提取模型连接;所述方法包括:The encoder is connected with the decoder in the face feature extraction model, and the decoder is connected with the feature extraction model; the method includes:将待识别用户的人脸图像输入所述编码器,得到所述编码器输出的所述人脸图像的编码向量,所述编码向量为对所述人脸图像进行特征化处理后得到的向量数据;其中,待识别用户的人脸图像为多通道人脸图像,所述待识别用户的多通道人脸图像的各个通道的图像数据均与单通道图像数据相同;Input the face image of the user to be identified into the encoder, and obtain the encoding vector of the face image output by the encoder, where the encoding vector is the vector data obtained after characterizing the face image. ; wherein, the face image of the user to be identified is a multi-channel face image, and the image data of each channel of the multi-channel face image of the user to be identified is the same as the single-channel image data;所述人脸特征提取模型中的解码器接收所述编码向量后,向所述特征提取模型输出重建人脸图像数据;以便于所述特征提取模型对所述重建人脸图像数据进行特征化处理后,输出所述待识别用户的人脸特征向量。After the decoder in the face feature extraction model receives the encoding vector, it outputs reconstructed face image data to the feature extraction model; so that the feature extraction model can characterize the reconstructed face image data Then, output the face feature vector of the user to be identified.2.如权利要求1所述的方法,所述编码器包括:输入层、第一隐藏层及瓶颈层,所述解码器包括:第二隐藏层及输出层;2. The method of claim 1, the encoder comprising: an input layer, a first hidden layer and a bottleneck layer, the decoder comprising: a second hidden layer and an output layer;其中,所述编码器的输入层与所述第一隐藏层连接,所述第一隐藏层与所述瓶颈层连接,所述编码器的瓶颈层与所述解码器的第二隐藏层连接,所述第二隐藏层与所述输出层连接,所述输出层与所述特征提取模型连接;Wherein, the input layer of the encoder is connected to the first hidden layer, the first hidden layer is connected to the bottleneck layer, the bottleneck layer of the encoder is connected to the second hidden layer of the decoder, the second hidden layer is connected to the output layer, and the output layer is connected to the feature extraction model;所述输入层,用于接收所述待识别用户的人脸图像;the input layer, for receiving the face image of the user to be identified;所述第一隐藏层,用于对所述人脸图像进行编码处理,得到第一特征向量;The first hidden layer is used to encode the face image to obtain a first feature vector;所述瓶颈层,用于对所述第一特征向量进行降维处理,得到所述人脸图像的编码向量,所述编码向量的维度数量小于所述第一特征向量的维度数量;The bottleneck layer is used to perform dimensionality reduction processing on the first feature vector to obtain a coding vector of the face image, where the number of dimensions of the coding vector is less than the number of dimensions of the first feature vector;所述第二隐藏层,用于对所述编码向量进行解码处理,得到第二特征向量;The second hidden layer is used for decoding the encoding vector to obtain a second feature vector;所述输出层,用于根据所述第二特征向量生成重建人脸图像数据。The output layer is used for generating reconstructed face image data according to the second feature vector.3.如权利要求1所述的方法,所述基于卷积神经网络的特征提取模型包括:输入层、卷积层及全连接层;3. 
The method of claim 1, wherein the feature extraction model based on a convolutional neural network comprises: an input layer, a convolutional layer and a fully connected layer;其中,所述输入层与所述解码器的输出连接,所述输入层还与所述卷积层连接,所述卷积层与所述全连接层连接;Wherein, the input layer is connected with the output of the decoder, the input layer is also connected with the convolutional layer, and the convolutional layer is connected with the fully connected layer;所述输入层,用于接收所述解码器输出的重建人脸图像数据;the input layer, for receiving the reconstructed face image data output by the decoder;所述卷积层,用于对所述重建人脸图像数据进行局部特征提取,得到所述待识别用户的人脸局部特征向量;The convolutional layer is used to extract local features from the reconstructed face image data to obtain a local feature vector of the face of the user to be identified;所述全连接层,用于根据所述人脸局部特征向量,生成所述待识别用户的人脸特征向量。The fully connected layer is configured to generate the face feature vector of the user to be identified according to the face local feature vector.4.如权利要求3所述的方法,所述基于卷积神经网络的特征提取模型还包括输出层,所述输出层与所述全连接层连接;所述输出层,用于根据所述全连接层输出的所述待识别用户的人脸特征向量,生成人脸分类结果;4. The method according to claim 3, wherein the feature extraction model based on the convolutional neural network further comprises an output layer, the output layer is connected with the fully connected layer; the output layer is used for connecting the face feature vector of the user to be identified outputted by the connection layer to generate a face classification result;所述待识别用户的人脸特征向量为与所述输出层相邻的全连接层的输出向量。The face feature vector of the user to be identified is the output vector of the fully connected layer adjacent to the output layer.5.如权利要求1所述的方法,所述用户特征提取模型还包括用户匹配模型,所述用户匹配模型与所述特征提取模型连接;所述方法还包括:5. The method of claim 1, wherein the user feature extraction model further comprises a user matching model, the user matching model is connected with the feature extraction model; the method further comprises:所述用户匹配模型接收所述待识别用户的人脸特征向量及指定用户的人脸特征向量,并根据所述待识别用户的人脸特征向量和所述指定用户的人脸特征向量之间的向量距离,生成表示所述待识别用户是否为所述指定用户的输出信息,其中,所述指定用户的人脸特征向量为利用所述编码器及所述人脸特征提取模型对所述指定用户的人脸图像进行处理而得到的。The user matching model receives the face feature vector of the user to be identified and the face feature vector of the designated user, and according to the relationship between the face feature vector of the user to be identified and the face feature vector of the designated user vector distance, generating output information indicating whether the user to be identified is the designated user, wherein the face feature vector of the designated user is the use of the encoder and the face feature extraction model for the designated user. obtained by processing face images.6.一种针对用于隐私保护的用户特征提取模型的训练方法,所述方法包括:6. A training method for a user feature extraction model for privacy protection, the method comprising:获取第一训练样本集合,所述第一训练样本集合中的训练样本为人脸图像;其中人脸图像为多通道人脸图像,所述多通道人脸图像的各个通道的图像数据均与单通道图像数据相同;Obtain a first training sample set, and the training samples in the first training sample set are face images; wherein the face images are multi-channel face images, and the image data of each channel of the multi-channel face images are the same as the single-channel face images. 
The image data is the same;利用所述第一训练样本集合对初始自编码器进行训练,得到训练后的自编码器;Use the first training sample set to train the initial autoencoder to obtain the trained autoencoder;获取第二训练样本集合,所述第二训练样本集合中的训练样本为编码向量,所述编码向量为利用所述训练后的自编码器中的编码器对人脸图像进行特征化处理后得到的向量数据;Obtain a second training sample set, where the training samples in the second training sample set are encoding vectors, and the encoding vectors are obtained by using the encoder in the trained autoencoder to characterize the face image vector data;将所述第二训练样本集合中的训练样本输入初始人脸特征提取模型的解码器中,以便于利用所述解码器输出的重建人脸图像数据,对所述初始人脸特征提取模型中的基于卷积神经网络的初始特征提取模型进行训练,得到训练后的人脸特征提取模型;所述初始人脸特征提取模型是通过对所述解码器及所述初始特征提取模型进行加密锁定而得到的,所述解码器为所述训练后的自编码器中的解码器;The training samples in the second training sample set are input into the decoder of the initial face feature extraction model, so as to utilize the reconstructed face image data output by the decoder to extract the features in the initial face feature extraction model. Perform training based on the initial feature extraction model of the convolutional neural network to obtain a trained facial feature extraction model; the initial facial feature extraction model is obtained by encrypting and locking the decoder and the initial feature extraction model , the decoder is the decoder in the trained autoencoder;根据所述编码器及所述训练后的人脸特征提取模型,生成用于隐私保护的用户特征提取模型。According to the encoder and the trained face feature extraction model, a user feature extraction model for privacy protection is generated.7.如权利要求6所述的方法,所述利用所述第一训练样本集合对初始自编码器进行训练,得到训练后的自编码器,具体包括:7. The method according to claim 6, wherein the initial self-encoder is trained by using the first training sample set to obtain a trained self-encoder, specifically comprising:针对所述第一训练样本集合中的每个训练样本,将所述训练样本输入所述初始自编码器,得到重建人脸图像数据;For each training sample in the first training sample set, input the training sample into the initial autoencoder to obtain reconstructed face image data;以最小化图像重建损失为目标,对所述初始自编码器的模型参数进行优化,得到训练后的自编码器;所述图像重建损失为所述重建人脸图像数据与所述训练样本之间的差异值。With the goal of minimizing image reconstruction loss, the model parameters of the initial auto-encoder are optimized to obtain a trained auto-encoder; the image reconstruction loss is the difference between the reconstructed face image data and the training sample difference value.8.如权利要求6所述的方法,所述利用所述解码器输出的重建人脸图像数据,对所述初始人脸特征提取模型中的基于卷积神经网络的初始特征提取模型进行训练,具体包括:8. The method according to claim 6, wherein the reconstructed face image data output by the decoder is used to train an initial feature extraction model based on a convolutional neural network in the initial face feature extraction model, Specifically include:利用所述初始特征提取模型对所述重建人脸图像数据进行分类处理,得到所述重建人脸图像数据的类别标签预测值;Using the initial feature extraction model to classify the reconstructed face image data to obtain a predicted value of the category label of the reconstructed face image data;获取针对所述重建人脸图像数据的类别标签预设值;Obtaining the preset value of the category label for the reconstructed face image data;以最小化分类损失为目标,对所述初始特征提取模型的模型参数进行优化,所述分类损失为所述类别标签预测值与所述类别标签预设值之间的差异值。The model parameters of the initial feature extraction model are optimized with the goal of minimizing the classification loss, where the classification loss is the difference between the predicted value of the class label and the preset value of the class label.9.如权利要求6所述的方法,所述第一训练样本集合中的训练样本为已取得使用权限的人脸图像。9 . The method according to claim 6 , wherein the training samples in the first training sample set are face images for which use rights have been obtained. 10 .10.如权利要求6所述的方法,所述第二训练样本集合中的训练样本是利用所述编码器对需要进行隐私保护的用户的人脸图像进行特征化处理而得到的向量数据。10 . 
The method of claim 6 , wherein the training samples in the second training sample set are vector data obtained by using the encoder to characterize the face images of users who need privacy protection. 11 .11.如权利要求6所述的方法,所述生成用于隐私保护的用户特征提取模型之前,还包括:11. The method of claim 6, before the generating a user feature extraction model for privacy protection, further comprising:建立用户匹配模型,所述用户匹配模型用于根据待识别用户的第一人脸特征向量与指定用户的第二人脸特征向量之间的向量距离,生成表示所述待识别用户是否为所述指定用户的输出结果,所述第一人脸特征向量是利用所述编码器及所述训练后的人脸特征提取模型对所述待识别用户的人脸图像进行处理得到的,所述第二人脸特征向量是利用所述编码器及所述训练后的人脸特征提取模型对所述指定用户的人脸图像进行处理得到的;A user matching model is established, and the user matching model is used to generate an expression indicating whether the user to be identified is the The output result of the specified user, the first face feature vector is obtained by using the encoder and the trained face feature extraction model to process the face image of the user to be identified, and the second face feature vector is obtained. The face feature vector is obtained by using the encoder and the trained face feature extraction model to process the face image of the designated user;所述生成用于隐私保护的用户特征提取模型,具体包括:The generating a user feature extraction model for privacy protection specifically includes:生成由所述编码器、所述训练后的人脸特征提取模型及所述用户匹配模型构成的用于隐私保护的用户特征提取模型。Generate a user feature extraction model for privacy protection composed of the encoder, the trained face feature extraction model and the user matching model.12.一种人脸特征提取装置,所述装置使用了用于隐私保护的用户特征提取模型,所述用户特征提取模型包括:编码器及人脸特征提取模型,所述人脸特征提取模型是通过对解码器及基于卷积神经网络的特征提取模型进行加密锁定而得到的模型,其中,所述编码器和所述解码器组成自编码器;所述编码器与所述人脸特征提取模型中的解码器连接,所述解码器与所述特征提取模型连接;所述装置包括:12. A face feature extraction device, the device uses a user feature extraction model for privacy protection, the user feature extraction model comprises: an encoder and a face feature extraction model, and the face feature extraction model is A model obtained by encrypting and locking a decoder and a feature extraction model based on a convolutional neural network, wherein the encoder and the decoder form an auto-encoder; the encoder and the face feature extraction model The decoder in the device is connected, and the decoder is connected with the feature extraction model; the device includes:输入模块,用于将待识别用户的人脸图像输入所述编码器,得到所述编码器输出的所述人脸图像的编码向量,所述编码向量为对所述人脸图像进行特征化处理后得到的向量数据;其中,待识别用户的人脸图像为多通道人脸图像,所述待识别用户的多通道人脸图像的各个通道的图像数据均与单通道图像数据相同;The input module is used to input the face image of the user to be identified into the encoder, and obtain the encoding vector of the face image output by the encoder, where the encoding vector is used to characterize the face image. The vector data obtained later; wherein, the face image of the user to be identified is a multi-channel face image, and the image data of each channel of the multi-channel face image of the user to be identified is the same as the single-channel image data;人脸特征向量生成模块,用于令所述人脸特征提取模型中的解码器接收所述编码向量后,向所述特征提取模型输出重建人脸图像数据;以便于所述特征提取模型对所述重建人脸图像数据进行特征化处理后,输出所述待识别用户的人脸特征向量。The face feature vector generation module is used to make the decoder in the face feature extraction model output the reconstructed face image data to the feature extraction model after receiving the encoding vector; After characterizing the reconstructed face image data, the face feature vector of the user to be identified is output.13.如权利要求12所述的装置,所述编码器包括:输入层、第一隐藏层及瓶颈层,所述解码器包括:第二隐藏层及输出层;13. 
The apparatus of claim 12, the encoder comprising: an input layer, a first hidden layer and a bottleneck layer, the decoder comprising: a second hidden layer and an output layer;其中,所述编码器的输入层与所述第一隐藏层连接,所述第一隐藏层与所述瓶颈层连接,所述编码器的瓶颈层与所述解码器的第二隐藏层连接,所述第二隐藏层与所述输出层连接,所述输出层与所述特征提取模型连接;Wherein, the input layer of the encoder is connected to the first hidden layer, the first hidden layer is connected to the bottleneck layer, the bottleneck layer of the encoder is connected to the second hidden layer of the decoder, the second hidden layer is connected to the output layer, and the output layer is connected to the feature extraction model;所述输入层,用于接收所述待识别用户的人脸图像;the input layer, for receiving the face image of the user to be identified;所述第一隐藏层,用于对所述人脸图像进行编码处理,得到第一特征向量;The first hidden layer is used to encode the face image to obtain a first feature vector;所述瓶颈层,用于对所述第一特征向量进行降维处理,得到所述人脸图像的编码向量,所述编码向量的维度数量小于所述第一特征向量的维度数量;The bottleneck layer is used to perform dimensionality reduction processing on the first feature vector to obtain a coding vector of the face image, where the number of dimensions of the coding vector is less than the number of dimensions of the first feature vector;所述第二隐藏层,用于对所述编码向量进行解码处理,得到第二特征向量;The second hidden layer is used for decoding the encoding vector to obtain a second feature vector;所述输出层,用于根据所述第二特征向量生成重建人脸图像数据。The output layer is used for generating reconstructed face image data according to the second feature vector.14.如权利要求12所述的装置,所述基于卷积神经网络的特征提取模型包括:输入层、卷积层及全连接层;14. The apparatus according to claim 12, wherein the feature extraction model based on convolutional neural network comprises: an input layer, a convolutional layer and a fully connected layer;其中,所述输入层与所述解码器的输出连接,所述输入层还与所述卷积层连接,所述卷积层与所述全连接层连接;Wherein, the input layer is connected with the output of the decoder, the input layer is also connected with the convolutional layer, and the convolutional layer is connected with the fully connected layer;所述输入层,用于接收所述解码器输出的重建人脸图像数据;the input layer, for receiving the reconstructed face image data output by the decoder;所述卷积层,用于对所述重建人脸图像数据进行局部特征提取,得到所述待识别用户的人脸局部特征向量;The convolutional layer is used to extract local features from the reconstructed face image data to obtain a local feature vector of the face of the user to be identified;所述全连接层,用于根据所述人脸局部特征向量,生成所述待识别用户的人脸特征向量。The fully connected layer is configured to generate the face feature vector of the user to be identified according to the face local feature vector.15.如权利要求12所述的装置,所述用户特征提取模型还包括用户匹配模型,所述用户匹配模型与所述特征提取模型连接;所述装置还包括:15. The apparatus of claim 12, wherein the user feature extraction model further comprises a user matching model, the user matching model is connected with the feature extraction model; the apparatus further comprises:用户匹配模块,用于令所述用户匹配模型接收所述待识别用户的人脸特征向量及指定用户的人脸特征向量后,根据所述待识别用户的人脸特征向量和所述指定用户的人脸特征向量之间的向量距离,生成表示所述待识别用户是否为所述指定用户的输出信息,其中,所述指定用户的人脸特征向量为利用所述编码器及所述人脸特征提取模型对所述指定用户的人脸图像进行处理而得到的。The user matching module is used to make the user matching model receive the face feature vector of the user to be identified and the face feature vector of the designated user, and then perform the matching according to the face feature vector of the user to be identified and the designated user's face feature vector. 
The vector distance between the face feature vectors to generate output information indicating whether the user to be identified is the designated user, wherein the face feature vector of the designated user is obtained by using the encoder and the face feature The extraction model is obtained by processing the face image of the specified user.16.一种针对用于隐私保护的用户特征提取模型的训练装置,所述装置包括:16. A training device for a user feature extraction model for privacy protection, the device comprising:第一获取模块,用于获取第一训练样本集合,所述第一训练样本集合中的训练样本为人脸图像;其中人脸图像为多通道人脸图像,所述多通道人脸图像的各个通道的图像数据均与单通道图像数据相同;a first acquisition module, configured to acquire a first training sample set, where the training samples in the first training sample set are face images; wherein the face images are multi-channel face images, and each channel of the multi-channel face images The image data are the same as the single-channel image data;第一训练模块,用于利用所述第一训练样本集合对初始自编码器进行训练,得到训练后的自编码器;a first training module, configured to use the first training sample set to train an initial autoencoder to obtain a trained autoencoder;第二获取模块,用于获取第二训练样本集合,所述第二训练样本集合中的训练样本为编码向量,所述编码向量为利用所述训练后的自编码器中的编码器对人脸图像进行特征化处理后得到的向量数据;The second acquisition module is configured to acquire a second training sample set, where the training samples in the second training sample set are coding vectors, and the coding vectors are the use of the encoder in the trained auto-encoder to detect a face The vector data obtained after the image is characterized;第二训练模块,用于将所述第二训练样本集合中的训练样本输入初始人脸特征提取模型的解码器中,以便于利用所述解码器输出的重建人脸图像数据,对所述初始人脸特征提取模型中的基于卷积神经网络的初始特征提取模型进行训练,得到训练后的人脸特征提取模型;所述初始人脸特征提取模型是通过对所述解码器及所述初始特征提取模型进行加密锁定而得到的,所述解码器为所述训练后的自编码器中的解码器;The second training module is used to input the training samples in the second training sample set into the decoder of the initial face feature extraction model, so as to use the reconstructed face image data output by the decoder to The initial feature extraction model based on the convolutional neural network in the facial feature extraction model is trained to obtain a trained facial feature extraction model; the initial facial feature extraction model is obtained by comparing the decoder and the initial feature. The extraction model is obtained by encrypting and locking, and the decoder is the decoder in the trained self-encoder;用户特征提取模型生成模块,用于根据所述编码器及所述训练后的人脸特征提取模型,生成用于隐私保护的用户特征提取模型。The user feature extraction model generation module is configured to generate a user feature extraction model for privacy protection according to the encoder and the trained face feature extraction model.17.如权利要求16所述的装置,所述第一训练模块,具体用于:17. The apparatus of claim 16, wherein the first training module is specifically used for:针对所述第一训练样本集合中的每个训练样本,将所述训练样本输入所述初始自编码器,得到重建人脸图像数据;For each training sample in the first training sample set, input the training sample into the initial autoencoder to obtain reconstructed face image data;以最小化图像重建损失为目标,对所述初始自编码器的模型参数进行优化,得到训练后的自编码器;所述图像重建损失为所述重建人脸图像数据与所述训练样本之间的差异值。With the goal of minimizing image reconstruction loss, the model parameters of the initial auto-encoder are optimized to obtain a trained auto-encoder; the image reconstruction loss is the difference between the reconstructed face image data and the training sample difference value.18.如权利要求16所述的装置,所述利用所述解码器输出的重建人脸图像数据,对所述初始人脸特征提取模型中的基于卷积神经网络的初始特征提取模型进行训练,具体包括:18. 
The apparatus according to claim 16, wherein the reconstructed face image data output by the decoder is used to train an initial feature extraction model based on a convolutional neural network in the initial face feature extraction model, Specifically include:利用所述初始特征提取模型对所述重建人脸图像数据进行分类处理,得到所述重建人脸图像数据的类别标签预测值;Using the initial feature extraction model to classify the reconstructed face image data to obtain a predicted value of the category label of the reconstructed face image data;获取针对所述重建人脸图像数据的类别标签预设值;Obtaining the preset value of the category label for the reconstructed face image data;以最小化分类损失为目标,对所述初始特征提取模型的模型参数进行优化,所述分类损失为所述类别标签预测值与所述类别标签预设值之间的差异值。The model parameters of the initial feature extraction model are optimized with the goal of minimizing the classification loss, where the classification loss is the difference between the predicted value of the class label and the preset value of the class label.19.如权利要求16所述的装置,还包括:19. The apparatus of claim 16, further comprising:用户匹配模型建立模块,用于建立用户匹配模型,所述用户匹配模型用于根据待识别用户的第一人脸特征向量与指定用户的第二人脸特征向量之间的向量距离,生成表示所述待识别用户是否为所述指定用户的输出结果,所述第一人脸特征向量是利用所述编码器及所述训练后的人脸特征提取模型对所述待识别用户的人脸图像进行处理得到的,所述第二人脸特征向量是利用所述编码器及所述训练后的人脸特征提取模型对所述指定用户的人脸图像进行处理得到的;The user matching model building module is used to establish a user matching model, and the user matching model is used to generate a representation of the user according to the vector distance between the first face feature vector of the user to be identified and the second face feature vector of the specified user. Whether the user to be identified is the output result of the designated user, and the first facial feature vector is to use the encoder and the trained facial feature extraction model to perform a facial image of the user to be identified. obtained by processing, the second face feature vector is obtained by using the encoder and the trained face feature extraction model to process the face image of the designated user;所述用户特征提取模型生成模块,具体用于:The user feature extraction model generation module is specifically used for:生成由所述编码器、所述训练后的人脸特征提取模型及所述用户匹配模型构成的用于隐私保护的用户特征提取模型。Generate a user feature extraction model for privacy protection composed of the encoder, the trained face feature extraction model and the user matching model.20.一种客户端设备,包括:20. A client device comprising:至少一个处理器;以及,at least one processor; and,与所述至少一个处理器通信连接的存储器;其中,a memory communicatively coupled to the at least one processor; wherein,所述存储器存储有图像编码器以及可被所述至少一个处理器执行的指令,所述图像编码器为自编码器中的编码器,所述指令被所述至少一个处理器执行,以使所述至少一个处理器能够:The memory stores an image encoder and instructions executable by the at least one processor, the image encoder being an encoder in an autoencoder, the instructions being executed by the at least one processor to cause all the The at least one processor is capable of:将待识别用户的人脸图像输入所述图像编码器,得到所述图像编码器输出的所述人脸图像的编码向量,所述编码向量为对所述人脸图像进行特征化处理后得到的向量数据;其中,待识别用户的人脸图像为多通道人脸图像,所述待识别用户的多通道人脸图像的各个通道的图像数据均与单通道图像数据相同;Input the face image of the user to be identified into the image encoder, and obtain the encoding vector of the human face image output by the image encoder, where the encoding vector is obtained after characterizing the face image. 
Vector data; wherein, the face image of the user to be identified is a multi-channel face image, and the image data of each channel of the multi-channel face image of the user to be identified is the same as the single-channel image data;发送所述编码向量至服务端设备,以便于所述服务端设备利用人脸特征提取模型根据所述编码向量生成所述待识别用户的人脸特征向量,所述人脸特征提取模型是通过对所述自编码中的解码器以及基于卷积神经网络的特征提取模型进行加密锁定而得到的模型。Send the encoding vector to the server device, so that the server device can use the facial feature extraction model to generate the facial feature vector of the user to be recognized according to the encoding vector, and the facial feature extraction model is obtained by The decoder in the self-encoding and the model obtained by encrypting and locking the feature extraction model based on the convolutional neural network.21.一种服务端设备,包括:21. A server device, comprising:至少一个处理器;以及,at least one processor; and,与所述至少一个处理器通信连接的存储器;其中,a memory communicatively coupled to the at least one processor; wherein,所述存储器存储有人脸特征提取模型,所述人脸特征提取模型是通过对自编码中的解码器以及基于卷积神经网络的特征提取模型进行加密锁定而得到的模型,所述存储器还存储有可被所述至少一个处理器执行的指令,所述指令被所述至少一个处理器执行,以使所述至少一个处理器能够:The memory stores a face feature extraction model, which is a model obtained by encrypting and locking the decoder in the self-encoding and the feature extraction model based on the convolutional neural network, and the memory also stores instructions executable by the at least one processor, the instructions being executed by the at least one processor to enable the at least one processor to:获取待识别用户的人脸图像的编码向量,所述编码向量是利用所述自编码器中的编码器对所述人脸图像进行特征化处理而得到的向量数据;其中,待识别用户的人脸图像为多通道人脸图像,所述待识别用户的多通道人脸图像的各个通道的图像数据均与单通道图像数据相同;Obtain the encoding vector of the face image of the user to be identified, the encoding vector is vector data obtained by using the encoder in the self-encoder to characterize the face image; wherein, the person of the user to be identified The face image is a multi-channel face image, and the image data of each channel of the multi-channel face image of the user to be identified is the same as the single-channel image data;将所述编码向量输入所述人脸特征提取模型中的解码器后,所述解码器向所述特征提取模型输出重建人脸图像数据;以便于所述特征提取模型对所述重建人脸图像数据进行特征化处理后,输出所述待识别用户的人脸特征向量。After the encoding vector is input into the decoder in the face feature extraction model, the decoder outputs reconstructed face image data to the feature extraction model; After the data is characterized, the facial feature vector of the user to be recognized is output.22.一种针对用于隐私保护的人脸特征提取模型的训练设备,包括:22. A training device for a facial feature extraction model for privacy protection, comprising:至少一个处理器;以及,at least one processor; and,与所述至少一个处理器通信连接的存储器;其中,a memory communicatively coupled to the at least one processor; wherein,所述存储器存储有可被所述至少一个处理器执行的指令,所述指令被所述至少一个处理器执行,以使所述至少一个处理器能够:The memory stores instructions executable by the at least one processor, the instructions being executed by the at least one processor to enable the at least one processor to:获取第一训练样本集合,所述第一训练样本集合中的训练样本为人脸图像;其中人脸图像为多通道人脸图像,所述多通道人脸图像的各个通道的图像数据均与单通道图像数据相同;Obtain a first training sample set, and the training samples in the first training sample set are face images; wherein the face images are multi-channel face images, and the image data of each channel of the multi-channel face images are the same as the single-channel face images. 
22. A training device for a facial feature extraction model for privacy protection, comprising:

at least one processor; and

a memory communicatively connected to the at least one processor; wherein

the memory stores instructions executable by the at least one processor, and the instructions, when executed by the at least one processor, enable the at least one processor to:

obtain a first training sample set, the training samples in the first training sample set being face images, wherein the face images are multi-channel face images, and the image data of each channel of the multi-channel face images is identical to the single-channel image data;

train an initial autoencoder by using the first training sample set, to obtain a trained autoencoder;

obtain a second training sample set, the training samples in the second training sample set being encoding vectors, the encoding vectors being vector data obtained by characterizing face images with the encoder in the trained autoencoder;

input the training samples in the second training sample set into a decoder of an initial facial feature extraction model, so as to train an initial feature extraction model based on a convolutional neural network in the initial facial feature extraction model by using the reconstructed face image data output by the decoder, to obtain a trained facial feature extraction model, the initial facial feature extraction model being obtained by encrypting and locking the decoder and the initial feature extraction model, and the decoder being the decoder in the trained autoencoder; and

generate a user feature extraction model for privacy protection according to the encoder and the trained facial feature extraction model.
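Claim 22's training flow has two distinct stages: first train the autoencoder on raw face images, then freeze its decoder inside the feature extraction model and train only the CNN on the reconstructed images. A hedged sketch of that pipeline follows; the MSE reconstruction loss, Adam optimizer, and data-loader formats are assumptions, and freezing the decoder's parameters merely stands in for the claim's "encrypting and locking".

```python
# Two-stage training pipeline mirroring claim 22. Losses, optimizers and
# loader formats are assumptions; face batches are assumed pre-resized to
# the decoder's output size.
import torch
import torch.nn.functional as F

def train_autoencoder(encoder, decoder, face_loader, epochs=10, lr=1e-3):
    """Stage 1: train the initial autoencoder on the first training sample
    set (face images) with a reconstruction loss."""
    params = list(encoder.parameters()) + list(decoder.parameters())
    opt = torch.optim.Adam(params, lr=lr)
    for _ in range(epochs):
        for faces in face_loader:                 # batches of face images
            recon = decoder(encoder(faces))
            loss = F.mse_loss(recon, faces)
            opt.zero_grad()
            loss.backward()
            opt.step()

def train_feature_extractor(decoder, extractor, vector_loader, epochs=10, lr=1e-3):
    """Stage 2: feed encoding vectors (the second training sample set) through
    the now-frozen decoder and train only the CNN feature extractor on the
    reconstructed face images it outputs."""
    decoder.requires_grad_(False)                 # stand-in for "encrypting and locking"
    opt = torch.optim.Adam(extractor.parameters(), lr=lr)
    for _ in range(epochs):
        for vectors, labels in vector_loader:     # encoding vectors + preset labels
            with torch.no_grad():
                recon = decoder(vectors)          # reconstructed face image data
            _, logits = extractor(recon)
            loss = F.cross_entropy(logits, labels)
            opt.zero_grad()
            loss.backward()
            opt.step()
```

Training the extractor only on decoder reconstructions, rather than on raw images, is what lets the deployed model accept encoding vectors end to end.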
CN202010197694.8A | 2020-03-19 | 2020-03-19 | A face feature extraction method, device and device | Active | CN111401272B (en)

Priority Applications (3)

Application Number | Priority Date | Filing Date | Title
CN202010197694.8A (CN111401272B) | 2020-03-19 | 2020-03-19 | A face feature extraction method, device and device
CN202111156860.0A (CN113657352B) | 2020-03-19 | 2020-03-19 | A method, device and equipment for extracting facial features
PCT/CN2020/140574 (WO2021184898A1) | n/a | 2020-12-29 | Facial feature extraction method, apparatus and device

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202010197694.8A (CN111401272B) | 2020-03-19 | 2020-03-19 | A face feature extraction method, device and device

Related Child Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202111156860.0A (Division, CN113657352B) | A method, device and equipment for extracting facial features | 2020-03-19 | 2020-03-19

Publications (2)

Publication Number | Publication Date
CN111401272A (en) | 2020-07-10
CN111401272B (en) | 2021-08-24

Family ID: 71432637

Family Applications (2)

Application Number | Title | Priority Date | Filing Date
CN202111156860.0A (CN113657352B, Active) | A method, device and equipment for extracting facial features | 2020-03-19 | 2020-03-19
CN202010197694.8A (CN111401272B, Active) | A face feature extraction method, device and device | 2020-03-19 | 2020-03-19

Family Applications Before (1)

Application Number | Title | Priority Date | Filing Date
CN202111156860.0A (CN113657352B, Active) | A method, device and equipment for extracting facial features | 2020-03-19 | 2020-03-19

Country Status (2)

Country | Link
CN (2) | CN113657352B (en)
WO (1) | WO2021184898A1 (en)

Families Citing this family (20)

* Cited by examiner, † Cited by third party

Publication number | Priority date | Publication date | Assignee | Title
CN113657352B (en)* | 2020-03-19 | 2025-03-04 | 蚂蚁区块链科技(上海)有限公司 | A method, device and equipment for extracting facial features
CN111401273B (en)* | 2020-03-19 | 2022-04-29 | 支付宝(杭州)信息技术有限公司 | A user feature extraction system and device for privacy protection
CN111783965A (en) | 2020-08-14 | 2020-10-16 | 支付宝(杭州)信息技术有限公司 | Method, device and system for biometric identification and electronic equipment
CN112016480B (en)* | 2020-08-31 | 2024-05-28 | 中移(杭州)信息技术有限公司 | Face feature representing method, system, electronic device and storage medium
KR102767128B1 (en)* | 2021-02-02 | 2025-02-13 | 주식회사 딥브레인에이아이 | Apparatus and method for synthesizing image capable of improving image quality
CN112949545B (en)* | 2021-03-17 | 2022-12-30 | 中国工商银行股份有限公司 | Method, apparatus, computing device and medium for recognizing face image
CN113657350B (en)* | 2021-05-12 | 2024-06-14 | 支付宝(杭州)信息技术有限公司 | Face image processing method and device
CN113657498B (en)* | 2021-08-17 | 2023-02-10 | 展讯通信(上海)有限公司 | Biological feature extraction method, training method, authentication method, device and equipment
CN114118218B (en)* | 2021-11-01 | 2025-05-02 | 北京三快在线科技有限公司 | A method and device for model training
CN113989901A (en) | 2021-11-11 | 2022-01-28 | 卫盈联信息技术(深圳)有限公司 | Face recognition method, device, client and storage medium
CN113946858B (en)* | 2021-12-20 | 2022-03-18 | 湖南丰汇银佳科技股份有限公司 | Identity security authentication method and system based on data privacy calculation
CN114267064B (en)* | 2021-12-23 | 2024-12-06 | 成都阿加犀智能科技有限公司 | Face recognition method, device, electronic device and storage medium
CN114662144B (en)* | 2022-03-07 | 2025-07-15 | 支付宝(杭州)信息技术有限公司 | A biological detection method, device and equipment
CN114861236A (en) | 2022-03-08 | 2022-08-05 | 银保信科技(北京)有限公司 | Image data processing method and device, storage medium and terminal
CN114821751B (en)* | 2022-06-27 | 2022-09-27 | 北京瑞莱智慧科技有限公司 | Image recognition method, device, system and storage medium
CN114842544B (en)* | 2022-07-04 | 2022-09-06 | 江苏布罗信息技术有限公司 | Intelligent face recognition method and system suitable for facial paralysis patient
CN115190217B (en)* | 2022-07-07 | 2024-03-26 | 国家计算机网络与信息安全管理中心 | Data security encryption method and device integrating self-coding network
CN117151250B (en)* | 2023-08-10 | 2025-07-25 | 支付宝(杭州)信息技术有限公司 | Model training method, device, equipment and readable storage medium
CN116844217B (en)* | 2023-08-30 | 2023-11-14 | 成都睿瞳科技有限责任公司 | Image processing system and method for generating face data
CN118213048B (en)* | 2023-11-20 | 2024-11-05 | 清华大学 | Image processing methods, model training methods, equipment, media and products

Family Cites Families (15)

* Cited by examiner, † Cited by third party

Publication number | Priority date | Publication date | Assignee | Title
CN105518717B (en)* | 2015-10-30 | 2019-03-01 | 厦门中控智慧信息技术有限公司 | Face identification method and device
JP6318211B2 (en)* | 2016-10-03 | 2018-04-25 | 株式会社Preferred Networks | Data compression apparatus, data reproduction apparatus, data compression method, data reproduction method, and data transfer method
CN107679451A (en) | 2017-08-25 | 2018-02-09 | 百度在线网络技术(北京)有限公司 | Method, apparatus, device and computer storage medium for establishing a face recognition model
US10803347B2 (en)* | 2017-12-01 | 2020-10-13 | The University Of Chicago | Image transformation with a hybrid autoencoder and generative adversarial network machine learning architecture
US11171977B2 (en)* | 2018-02-19 | 2021-11-09 | Nec Corporation | Unsupervised spoofing detection from traffic data in mobile networks
CN108537120A (en) | 2018-03-06 | 2018-09-14 | 安徽电科恒钛智能科技有限公司 | Face identification method and system based on deep learning
AU2019320080B2 (en)* | 2018-08-10 | 2024-10-10 | Leidos Security Detection & Automation, Inc. | Systems and methods for image processing
CN109117801A (en) | 2018-08-20 | 2019-01-01 | 深圳壹账通智能科技有限公司 | Face recognition method, apparatus, terminal and computer readable storage medium
CN109769080B (en)* | 2018-12-06 | 2021-05-11 | 西北大学 | Encrypted image cracking method and system based on deep learning
CN109711546B (en)* | 2018-12-21 | 2021-04-06 | 深圳市商汤科技有限公司 | Neural network training method and device, electronic equipment and storage medium
CN110147721B (en)* | 2019-04-11 | 2023-04-18 | 创新先进技术有限公司 | Three-dimensional face recognition method, model training method and device
CN110598580A (en) | 2019-08-25 | 2019-12-20 | 南京理工大学 | Human face living body detection method
CN110717977B (en)* | 2019-10-23 | 2023-09-26 | 网易(杭州)网络有限公司 | Method, device, computer equipment and storage medium for processing game character face
CN110826056B (en)* | 2019-11-11 | 2024-01-30 | 南京工业大学 | A recommendation system attack detection method based on attention convolutional autoencoder
CN113657352B (en)* | 2020-03-19 | 2025-03-04 | 蚂蚁区块链科技(上海)有限公司 | A method, device and equipment for extracting facial features

Patent Citations (7)

* Cited by examiner, † Cited by third party

Publication number | Priority date | Publication date | Assignee | Title
CN104866900A (en)* | 2015-01-29 | 2015-08-26 | 北京工业大学 | Deconvolution neural network training method
CN107220594A (en)* | 2017-05-08 | 2017-09-29 | 桂林电子科技大学 | Face pose reconstruction and recognition method based on similarity-preserving stacked autoencoders
CN108664967A (en)* | 2018-04-17 | 2018-10-16 | 上海交通大学 | Multimedia page visual saliency prediction method and system
CN109495476A (en)* | 2018-11-19 | 2019-03-19 | 中南大学 | Data stream differential privacy protection method and system based on edge computing
CN110321777A (en)* | 2019-04-25 | 2019-10-11 | 重庆理工大学 | Face recognition method based on stacked convolutional sparse denoising autoencoders
CN110310351A (en)* | 2019-07-04 | 2019-10-08 | 北京信息科技大学 | A method for automatic generation of 3D human skeleton animation based on sketches
CN110766048A (en)* | 2019-09-18 | 2020-02-07 | 平安科技(深圳)有限公司 | Image content identification method and device, computer equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party

Title
"Reducing the Dimensionality of Data with Neural Networks"; G. E. Hinton et al.; Science; 2006-06-28; see pp. 504-507. *

Also Published As

Publication number | Publication date
CN113657352B (en) | 2025-03-04
WO2021184898A1 (en) | 2021-09-23
CN113657352A (en) | 2021-11-16
CN111401272A (en) | 2020-07-10

Similar Documents

Publication | Title
CN111401272B (en) | A face feature extraction method, device and device
CN111401273B (en) | A user feature extraction system and device for privacy protection
CN111368795B (en) | Face feature extraction method, device and equipment
CN113239852B (en) | Privacy image processing method, device and equipment based on privacy protection
CN115546908A (en) | A living body detection method, device and equipment
CN118428404A (en) | Knowledge distillation method, device and equipment for model
CN115048661B (en) | Model processing method, device and equipment
CN114445918B (en) | Living body detection method, device and equipment
CN114880706A (en) | Information processing method, device and equipment
CN112395448A (en) | Face retrieval method and device
CN117612269A (en) | Biological attack detection method, device and equipment
CN114662144B (en) | A biological detection method, device and equipment
CN115577336A (en) | Biological identification processing method, device and equipment
CN114238910B (en) | Data processing method, device and equipment
CN115358777A (en) | Advertisement placement processing method and device in virtual world
HK40033202B (en) | Face feature extraction method, device and equipment
HK40033202A (en) | Face feature extraction method, device and equipment
HK40032980B (en) | Face feature extraction method, device and equipment
HK40032980A (en) | Face feature extraction method, device and equipment
CN118898760A (en) | Image recognition method and device for network security
HK40033188B (en) | User feature extraction system and device for privacy protection
CN114882290B (en) | A certification method, training method, device and equipment
CN118211132B (en) | A method and device for generating three-dimensional human body surface data based on point cloud
CN114547682B (en) | Image processing method, device, equipment and storage medium based on privacy protection
CN115186278A (en) | Data processing method, device and equipment

Legal Events

Code | Title | Details
PB01 | Publication |
SE01 | Entry into force of request for substantive examination |
REG | Reference to a national code | Ref country code: HK; Ref legal event code: DE; Ref document number: 40032980; Country of ref document: HK
GR01 | Patent grant |
TR01 | Transfer of patent right | Effective date of registration: 2024-11-13. Patentee after: Ant blockchain Technology (Shanghai) Co.,Ltd. (Room 803, floor 8, No. 618 Wai Road, Huangpu District, Shanghai 200010, China). Patentee before: Alipay (Hangzhou) Information Technology Co.,Ltd. (310000 801-11 section B, 8th floor, 556 Xixi Road, Xihu District, Hangzhou City, Zhejiang Province, China).
