CN107392213B - Face portrait synthesis method based on depth map model feature learning - Google Patents


Info

Publication number
CN107392213B
CN107392213B · CN201710602696.9A
Authority
CN
China
Prior art keywords
photo
face
blocks
training
block
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710602696.9A
Other languages
Chinese (zh)
Other versions
CN107392213A (en)
Inventor
王楠楠
朱明瑞
李洁
高新波
查文锦
张玉倩
郝毅
曹兵
马卓奇
刘德成
辛经纬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Aimo Technology Co ltd
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University
Priority to CN201710602696.9A
Publication of CN107392213A
Publication of CN107392213B
Application granted
Legal status: Active
Anticipated expiration

Abstract


A face portrait synthesis method based on deep graph model feature learning. The steps are: (1) generate a sample set; (2) generate a set of image blocks; (3) extract depth features; (4) solve the face portrait block reconstruction coefficients; (5) reconstruct the face portrait blocks; (6) synthesize the face portrait. The invention uses a deep convolutional network to extract depth features of the face photo blocks, uses a Markov graph model to solve the depth feature map coefficients and the face portrait block reconstruction coefficients, computes each reconstructed face portrait block as the reconstruction-coefficient-weighted sum of the candidate face portrait blocks, and splices the reconstructed face portrait blocks to obtain the synthesized face portrait. Because the invention replaces the original pixel values of an image block with depth features extracted from a deep convolutional network, it is more robust to environmental noise such as illumination and can synthesize face portraits of very high quality.


Description

Face portrait synthesis method based on depth map model feature learning
Technical Field
The invention belongs to the technical field of image processing, and further relates to a face portrait synthesis method based on depth map model feature learning in the technical field of pattern recognition and computer vision. The invention can be used for face retrieval and identification in the field of public security.
Background
In criminal investigation, the public security department maintains a citizen photo database and combines it with face recognition technology to determine the identity of a criminal suspect. In practice, a photo of the suspect is difficult to obtain, but a sketch portrait of the suspect can be produced through the cooperation of an artist and a witness, enabling subsequent face retrieval and recognition. Because of the great difference between a portrait and an ordinary face photograph, directly applying traditional face recognition methods rarely yields a satisfactory recognition result. Synthesizing portraits from the photos in the citizen photo database effectively reduces the texture difference between the two modalities and thereby improves the recognition rate.
Gao et al., in their paper (X. Gao, J. Zhou, D. Tao, and X. Li, Neurocomputing, vol. 71, no. 10-12, pp. 1921-1930, Jun. 2008), propose using an embedded hidden Markov model to generate a pseudo-portrait. The method first divides the photos and portraits in the training library into blocks, then models the corresponding photo and portrait blocks with an embedded hidden Markov model. Given an arbitrary photo, the photo is divided into blocks, and for each block a selective-ensemble strategy chooses the models generated by some of the training blocks to produce candidate pseudo-portraits, which are then fused into the final pseudo-portrait. The disadvantage of this method is that the selective-ensemble technique takes a weighted average of the generated pseudo-portraits, so the background is not clean and the details are not sharp, which lowers the quality of the generated portrait.
Zhou et al., in their paper "Markov Weight Fields for Face Sketch Synthesis" (H. Zhou, Z. Kuang, and K. Wong, in Proc. IEEE Int. Conference on Computer Vision, pp. 1091-1097, 2012), propose a face sketch synthesis method based on a Markov weight field. The method uniformly divides the training images and the input test image into blocks and, for each test image block, searches for several neighbours to obtain candidate blocks of the modality to be synthesized. It then models the test image blocks, the neighbour blocks and the candidate blocks with a Markov graph model to obtain reconstruction weights. Finally, composite image blocks are reconstructed from the reconstruction weights and candidate blocks and spliced into the composite image. The disadvantage of this method is that the image block features use raw pixel information, whose representation capability is insufficient and which is strongly affected by environmental noise such as illumination.
A face portrait synthesis method based on a direction graph model is disclosed in the patent "Face portrait synthesis method based on a direction graph model" filed by Xidian University (application number CN201610171867.2, filing date 2016.03.24, publication number CN105869134A). The method uniformly divides the training images and the input test image into blocks and, for each test photo block, searches for several neighbouring photo blocks and the corresponding neighbouring portrait blocks. It then extracts direction features of the test photo block and the neighbouring photo blocks, models these direction features with a Markov graph model, and obtains the reconstruction weights with which the neighbouring portrait blocks reconstruct the synthesized portrait block. Finally, the synthesized portrait blocks are reconstructed from the reconstruction weights and the neighbouring portrait blocks and spliced into the synthesized portrait. The disadvantage of this method is that the image block features are hand-designed high-frequency features, whose adaptive capability is insufficient and which are not fully learned.
Disclosure of Invention
The present invention aims to overcome the above deficiencies of the prior art by providing a face portrait synthesis method based on depth map model feature learning that can synthesize high-quality portraits unaffected by environmental noise such as illumination.
The specific steps for realizing the purpose of the invention are as follows:
(1) generating a sample set:
(1a) M face photos are taken from the face photo sample set to form a training face photo sample set, where 2 ≤ M ≤ U-1 and U denotes the total number of face photos in the sample set;
(1b) forming a testing face photo set by the remaining face photos in the face photo sample set;
(1c) taking face pictures corresponding to the face photos of the training face photo sample set one by one from the face picture sample set to form a training face picture sample set;
(2) generating an image block set:
(2a) randomly selecting a test face photo from the test face photo set, dividing the test face photo into photo blocks with the same size and the same overlapping degree, and forming a test photo block set;
(2b) dividing each photo in the training face photo sample set into photo blocks with the same size and the same overlapping degree to form a training photo sample block set;
(2c) dividing each portrait in a training face portrait sample set into portrait blocks with the same size and the same overlapping degree to form a training portrait sample block set;
(3) extracting depth features:
(3a) inputting all photo blocks in the training photo block set and the test photo block set into a deep convolution network VGG for object recognition which is trained on an object recognition database ImageNet, and carrying out forward propagation;
(3b) taking the 128-layer feature map output by the intermediate layer of the deep convolutional network VGG as the depth feature of the photo block, where the coefficient of each layer of the feature map is u_{i,l} and

Σ_{l=1}^{128} u_{i,l} = 1,

in which Σ denotes the summation operation, i denotes the sequence number of a test photo block, i = 1, 2, ..., N, N denotes the total number of test photo blocks, l denotes the sequence number of a feature map, and l = 1, ..., 128;
(4) solving the face image block reconstruction coefficient:
(4a) using the K-nearest-neighbour search algorithm, finding in the training photo sample block set the 10 neighbouring training photo blocks most similar to each test photo block, and simultaneously selecting from the training portrait sample block set the 10 neighbouring training portrait blocks corresponding one-to-one to those neighbouring training photo blocks, where the coefficient of each neighbouring training portrait block is w_{i,k} and

Σ_{k=1}^{10} w_{i,k} = 1,

in which k denotes the sequence number of a training portrait block, k = 1, ..., 10;
(4b) using the Markov graph model formula to model the depth features of all test photo blocks, the depth features of all neighbouring training photo blocks, all neighbouring training portrait blocks, the coefficients u_{i,l} of the depth feature maps, and the coefficients w_{i,k} of the neighbouring training portrait blocks;
(4c) solving the Markov graph model formula to obtain the face portrait block reconstruction coefficients w_{i,k};
(5) reconstructing the face portrait blocks:
multiplying the 10 neighbouring training portrait blocks corresponding to each test photo block by their respective coefficients w_{i,k} and summing the products, the result being the reconstructed face portrait block corresponding to each test photo block;
(6) synthesizing the face portrait:
splicing the reconstructed face portrait blocks corresponding to all test photo blocks to obtain the synthesized face portrait.
Compared with the prior art, the invention has the following advantages:
1. Because depth features extracted from a deep convolutional network replace the original pixel value information of the image blocks, the invention solves the prior art's problems of insufficient feature representation capability and strong sensitivity to environmental noise such as illumination, and is therefore robust to such noise.
2. Because the invention uses a Markov graph model to jointly model the depth feature map coefficients and the face portrait block reconstruction coefficients, it overcomes the unclean background and unclear details of face portraits synthesized by the prior art, and the synthesized portraits have a clean background and clear details.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a diagram of simulation effect of the present invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings.
Referring to fig. 1, the specific steps of the present invention are as follows.
Step 1, generating a sample set.
M face photos are taken from the face photo sample set to form the training face photo sample set, where 2 ≤ M ≤ U-1 and U denotes the total number of face photos in the sample set.

The face photos remaining in the face photo sample set form the test face photo set.

The face portraits corresponding one-to-one to the photos of the training face photo sample set are taken from the face portrait sample set to form the training face portrait sample set.
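Step 1 amounts to a simple index split of the paired photo/portrait data. A minimal sketch (the function name and list-based representation are illustrative, not from the patent):

```python
def generate_sample_sets(photos, portraits, M):
    """Step 1: the first M photos, with their one-to-one corresponding
    portraits, form the training sets; the remaining U - M photos form
    the test photo set (the patent requires 2 <= M <= U - 1)."""
    U = len(photos)
    assert 2 <= M <= U - 1, "M must satisfy 2 <= M <= U - 1"
    train_photos, train_portraits = photos[:M], portraits[:M]
    test_photos = photos[M:]
    return train_photos, train_portraits, test_photos
```

Any deterministic split works here; taking the first M items keeps the photo-portrait correspondence trivially aligned.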
And 2, generating an image block set.
A test face photo is randomly selected from the test face photo set and divided into photo blocks of the same size and the same overlapping degree, forming the test photo block set.
Dividing each photo in the training face photo sample set into photo blocks with the same size and the same overlapping degree to form a training photo sample block set.
Each face image in the training face image sample set is divided into image blocks with the same size and the same overlapping degree to form a training image sample block set.
The overlapping degree means that the area of the overlapping area between two adjacent image blocks is 1/2 of the area of each image block.
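With the overlap fixed at half a block, the sampling stride is half the block size. A minimal numpy sketch of the block division of step 2 (the block size and image shape are illustrative; the patent fixes only the half-block overlap):

```python
import numpy as np

def divide_into_blocks(img, block=10):
    """Steps 2a-2c: cut an image into equally sized blocks whose
    adjacent blocks overlap by 1/2 of a block, i.e. stride = block // 2."""
    step = block // 2  # half-block overlap between adjacent blocks
    h, w = img.shape
    return [img[r:r + block, c:c + block]
            for r in range(0, h - block + 1, step)
            for c in range(0, w - block + 1, step)]
```

For a 20x20 image and 10x10 blocks this yields a 3x3 grid of overlapping blocks.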
And 3, extracting depth features.
All photo blocks in the training photo block set and the test photo block set are input into the deep convolutional network VGG for object recognition, trained on the object recognition database ImageNet, and forward propagation is carried out.

The 128-layer feature map output by the intermediate layer of the deep convolutional network VGG is taken as the depth feature of the photo block, where the coefficient of each layer of the feature map is u_{i,l} and Σ_{l=1}^{128} u_{i,l} = 1, in which Σ denotes the summation operation, i denotes the sequence number of a test photo block, i = 1, 2, ..., N, N denotes the total number of test photo blocks, l denotes the sequence number of a feature map, and l = 1, ..., 128.
The middle layer refers to the activation function layer of the deep convolutional network VGG.
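The patent extracts its depth features from the activation layer of a pretrained VGG; as a shape-only illustration, the toy stand-in below correlates a photo block with a filter bank and applies ReLU, producing one feature map per filter. The filter bank here is a placeholder assumption, not the learned VGG weights:

```python
import numpy as np

def toy_activation_features(patch, filters):
    """Illustrative stand-in for step 3: correlate a photo block with a
    bank of filters and apply the ReLU activation, giving one feature map
    per filter. The patent uses the activation layer of a VGG pretrained
    on ImageNet; this toy bank only demonstrates the resulting
    128-layer feature-map shape."""
    n, k, _ = filters.shape
    h, w = patch.shape
    out = np.empty((n, h - k + 1, w - k + 1))
    for l in range(n):  # one feature map per filter (valid correlation)
        for r in range(h - k + 1):
            for c in range(w - k + 1):
                out[l, r, c] = np.sum(patch[r:r + k, c:c + k] * filters[l])
    return np.maximum(out, 0.0)  # ReLU: negative responses are zeroed
```

With 128 filters, the output has one 2-D map per layer l, matching the u_{i,l}-weighted feature maps used later.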
And 4, solving the face image block reconstruction coefficient.
Using the K-nearest-neighbour search algorithm, the 10 neighbouring training photo blocks most similar to each test photo block are found in the training photo sample block set, and at the same time the 10 neighbouring training portrait blocks corresponding one-to-one to those neighbouring training photo blocks are selected from the training portrait sample block set, where the coefficient of each neighbouring training portrait block is w_{i,k} and

Σ_{k=1}^{10} w_{i,k} = 1,

in which k denotes the sequence number of a training portrait block, k = 1, ..., 10.
The K neighbor search algorithm comprises the following specific steps:
step one, calculating Euclidean distances between the depth feature vector of each test photo block and the depth feature vectors of all training photo blocks;
secondly, sequencing all the training photo blocks according to the sequence of the Euclidean distance values from small to large;
and thirdly, selecting the first 10 training photo blocks as neighbor training photo blocks.
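The three-step K-nearest-neighbour search above can be sketched directly in numpy (function name illustrative):

```python
import numpy as np

def nearest_training_blocks(test_feature, train_features, k=10):
    """Step 4a's K-nearest-neighbour search: compute Euclidean distances
    between depth feature vectors, sort ascending, keep the first k
    training block indices."""
    dists = np.linalg.norm(train_features - test_feature, axis=1)
    return np.argsort(dists)[:k]
```

The returned indices select both the neighbouring training photo blocks and, via the one-to-one correspondence, the neighbouring training portrait blocks.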
The Markov graph model formula is used to model the depth features of all test photo blocks, the depth features of all neighbouring training photo blocks, all neighbouring training portrait blocks, the coefficients u_{i,l} of the depth feature maps, and the coefficients w_{i,k} of the neighbouring training portrait blocks.
The formula of the Markov graph model is as follows:
min_{u,w} Σ_{i=1}^{N} || Σ_{l=1}^{128} u_{i,l} d_l(x_i) − Σ_{k=1}^{10} w_{i,k} Σ_{l=1}^{128} u_{i,l} d_l(x_{i,k}) ||² + Σ_{(i,j) adjacent} || Σ_{k=1}^{10} w_{i,k} o_{i,k} − Σ_{k=1}^{10} w_{j,k} o_{j,k} ||²

where min denotes the minimisation operation, Σ denotes the summation operation, || ||² denotes the squared modulus operation, w_{i,k} denotes the coefficient of the k-th neighbouring training portrait block of the i-th test photo block, o_{i,k} denotes the pixel value vector of the overlapping portion of the k-th neighbouring training portrait block of the i-th test photo block, w_{j,k} denotes the coefficient of the k-th neighbouring training portrait block of the j-th test photo block, o_{j,k} denotes the pixel value vector of the overlapping portion of the k-th neighbouring training portrait block of the j-th test photo block, u_{i,l} denotes the coefficient of the l-th layer of the depth feature map of the depth features of the i-th test photo block, d_l(x_i) denotes the l-th layer feature map of the depth features of the i-th test photo block, and d_l(x_{i,k}) denotes the l-th layer feature map of the depth features of the k-th neighbouring training photo block of the i-th test photo block.
The Markov graph model formula is solved to obtain the face portrait block reconstruction coefficients w_{i,k}.
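The full model couples all patches and jointly optimises u and w. For intuition only, the sketch below solves a heavily simplified per-patch sub-problem: least-squares reconstruction coefficients w minimising ||A w − b||² subject to sum(w) = 1, via the KKT linear system (A stacks the neighbour features column-wise, b is the test feature; the coupling between patches and the u-coefficients of the real model are omitted):

```python
import numpy as np

def reconstruction_coefficients(A, b):
    """Simplified sketch of step 4c: minimise ||A w - b||^2 subject to
    sum(w) = 1 by solving the KKT system
        [2 A^T A   1] [w]   [2 A^T b]
        [1^T       0] [L] = [1      ]
    (L is the Lagrange multiplier of the sum-to-one constraint)."""
    k = A.shape[1]
    G = A.T @ A
    KKT = np.block([[2 * G, np.ones((k, 1))],
                    [np.ones((1, k)), np.zeros((1, 1))]])
    rhs = np.append(2 * A.T @ b, 1.0)
    sol = np.linalg.solve(KKT, rhs)
    return sol[:k]  # discard the multiplier, keep the coefficients
```

The resulting coefficients always sum to 1, matching the constraint Σ_k w_{i,k} = 1 stated in step 4a.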
And 5, reconstructing the face image block.
The 10 neighbouring training portrait blocks corresponding to each test photo block are multiplied by their respective coefficients w_{i,k}, and the products are summed to obtain the reconstructed face portrait block corresponding to each test photo block.
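The weighted sum of step 5 is a single tensor contraction (function name illustrative):

```python
import numpy as np

def reconstruct_portrait_block(neighbour_blocks, w):
    """Step 5: the reconstructed portrait block is the
    coefficient-weighted sum of the neighbouring training portrait
    blocks; neighbour_blocks has shape (k, h, w), w has shape (k,)."""
    return np.tensordot(w, neighbour_blocks, axes=1)
```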
And 6, synthesizing the face portrait.
And splicing the reconstructed face image blocks corresponding to all the test image blocks to obtain a synthesized face image.
The method for splicing the reconstructed portrait blocks corresponding to all test photo blocks is as follows:
First, the reconstructed portrait blocks corresponding to all test photo blocks are placed at their respective positions in the portrait.
Second, the average of the pixel values of the overlapping portion between each pair of adjacent reconstructed face portrait blocks is computed.
Third, the pixel values of each overlapping portion between two adjacent reconstructed face portrait blocks are replaced with this average, yielding the synthesized face portrait.
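The three splicing steps above can be implemented with an accumulator and a count map, which averages every overlapping region in one pass (function name illustrative):

```python
import numpy as np

def stitch_blocks(blocks, positions, out_shape):
    """Step 6: place each reconstructed block at its (row, col) position
    and average pixel values wherever adjacent blocks overlap."""
    acc = np.zeros(out_shape)  # sum of block pixels at each position
    cnt = np.zeros(out_shape)  # how many blocks cover each pixel
    for blk, (r, c) in zip(blocks, positions):
        h, w = blk.shape
        acc[r:r + h, c:c + w] += blk
        cnt[r:r + h, c:c + w] += 1
    cnt[cnt == 0] = 1  # avoid division by zero in uncovered pixels
    return acc / cnt
```

Overlapping pixels are covered by more than one block, so the division yields exactly the average described in steps two and three.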
The effects of the present invention are further illustrated by the following simulation experiments.
1. Simulation experiment conditions are as follows:
The computing environment of the simulation experiment is an Intel(R) Core i7-4790 3.6 GHz CPU with 16 GB of memory running the Linux operating system; the programming language is Python, and the database is the CUHK student database of the Chinese University of Hong Kong.
The prior art comparison method used in the simulation experiment of the present invention includes the following two methods:
One is a method based on local linear embedding, denoted LLE in the experiments; the reference is Q. Liu, X. Tang, H. Jin, H. Lu, and S. Ma, "A Nonlinear Approach for Face Sketch Synthesis and Recognition" (in Proc. IEEE Int. Conference on Computer Vision, pp. 1005-1010, 2005);
The other is a method based on the Markov weight field model, denoted MWF in the experiments; the reference is H. Zhou, Z. Kuang, and K. Wong, "Markov Weight Fields for Face Sketch Synthesis" (in Proc. IEEE Int. Conference on Computer Vision, pp. 1091-1097, 2012).
2. Simulation experiment contents:
the invention has a group of simulation experiments.
And (3) synthesizing an image on a CUHK student database, and comparing the image with an image synthesized by a local linear embedded LLE and Markov weight field model MWF method.
3. Simulation experiment results and analysis:
the results of the simulation experiment of the present invention are shown in FIG. 2, in which FIG. 2(a) is a test photograph taken arbitrarily from a sample set of test photographs, FIG. 2(b) is a picture synthesized using the prior art local linear embedding LLE method, FIG. 2(c) is a picture synthesized using the prior art Markov weight field model MWF method, and FIG. 2(d) is a picture synthesized using the method of the present invention.
As can be seen from fig. 2, because the depth feature is used to replace the original pixel value information of the image block, the method has better robustness to environmental noise such as illumination, and therefore, for a picture greatly influenced by illumination, compared with the local linear embedding LLE and markov weight field model MWF methods, the synthesized picture has higher quality and less noise.

Claims (6)

Translated fromChinese
1. A face portrait synthesis method based on deep graph model feature learning, comprising the following steps:
(1) generating sample sets:
(1a) taking M face photos from the face photo sample set to form the training face photo sample set, where 2 ≤ M ≤ U-1 and U denotes the total number of face photos in the sample set;
(1b) forming the test face photo set from the face photos remaining in the face photo sample set;
(1c) taking from the face portrait sample set the face portraits corresponding one-to-one to the photos of the training face photo sample set, forming the training face portrait sample set;
(2) generating image block sets:
(2a) arbitrarily selecting a test face photo from the test face photo set and dividing it into photo blocks of the same size and the same overlapping degree, forming the test photo block set;
(2b) dividing each photo in the training face photo sample set into photo blocks of the same size and the same overlapping degree, forming the training photo sample block set;
(2c) dividing each portrait in the training face portrait sample set into portrait blocks of the same size and the same overlapping degree, forming the training portrait sample block set;
(3) extracting depth features:
(3a) inputting all photo blocks of the training photo block set and the test photo block set into the deep convolutional network VGG for object recognition trained on the object recognition database ImageNet, and performing forward propagation;
(3b) taking the 128-layer feature map output by the intermediate layer of the deep convolutional network VGG as the depth feature of the photo block, where the coefficient of each layer of the feature map is u_{i,l} and

Σ_{l=1}^{128} u_{i,l} = 1,

in which Σ denotes the summation operation, i denotes the sequence number of a test photo block, i = 1, 2, ..., N, N denotes the total number of test photo blocks, l denotes the sequence number of a feature map, and l = 1, ..., 128;
(4) solving the face portrait block reconstruction coefficients:
(4a) using the K-nearest-neighbour search algorithm, finding in the training photo sample block set the 10 neighbouring training photo blocks most similar to each test photo block, and simultaneously selecting from the training portrait sample block set the 10 neighbouring training portrait blocks corresponding one-to-one to those neighbouring training photo blocks, where the coefficient of each neighbouring training portrait block is w_{i,k} and

Σ_{k=1}^{10} w_{i,k} = 1,

in which k denotes the sequence number of a training portrait block, k = 1, ..., 10;
(4b) using the Markov graph model formula to model the depth features of all test photo blocks, the depth features of all neighbouring training photo blocks, all neighbouring training portrait blocks, the coefficients u_{i,l} of the depth feature maps, and the coefficients w_{i,k} of the neighbouring training portrait blocks;
(4c) solving the Markov graph model formula to obtain the face portrait block reconstruction coefficients w_{i,k};
(5) reconstructing the face portrait blocks:
multiplying the 10 neighbouring training portrait blocks corresponding to each test photo block by their respective coefficients w_{i,k} and summing the products, the result being the reconstructed face portrait block corresponding to each test photo block;
(6) synthesizing the face portrait:
splicing the reconstructed face portrait blocks corresponding to all test photo blocks to obtain the synthesized face portrait.

2. The face portrait synthesis method based on deep graph model feature learning according to claim 1, wherein the overlapping degree in steps (2a), (2b) and (2c) means that the area of the overlapping region between two adjacent image blocks is 1/2 of the area of each image block.

3. The face portrait synthesis method based on deep graph model feature learning according to claim 1, wherein the intermediate layer in step (3b) is the activation function layer of the deep convolutional network VGG.

4. The face portrait synthesis method based on deep graph model feature learning according to claim 1, wherein the K-nearest-neighbour search algorithm in step (4a) comprises: first, computing the Euclidean distance between the depth feature vector of each test photo block and the depth feature vectors of all training photo blocks; second, sorting all training photo blocks in ascending order of Euclidean distance; third, selecting the first 10 training photo blocks as the neighbouring training photo blocks.

5. The face portrait synthesis method based on deep graph model feature learning according to claim 1, wherein the Markov graph model formula in step (4b) is:

min_{u,w} Σ_{i=1}^{N} || Σ_{l=1}^{128} u_{i,l} d_l(x_i) − Σ_{k=1}^{10} w_{i,k} Σ_{l=1}^{128} u_{i,l} d_l(x_{i,k}) ||² + Σ_{(i,j) adjacent} || Σ_{k=1}^{10} w_{i,k} o_{i,k} − Σ_{k=1}^{10} w_{j,k} o_{j,k} ||²

where min denotes the minimisation operation, Σ denotes the summation operation, || ||² denotes the squared modulus operation, w_{i,k} denotes the coefficient of the k-th neighbouring training portrait block of the i-th test photo block, o_{i,k} denotes the pixel value vector of the overlapping portion of the k-th neighbouring training portrait block of the i-th test photo block, w_{j,k} denotes the coefficient of the k-th neighbouring training portrait block of the j-th test photo block, o_{j,k} denotes the pixel value vector of the overlapping portion of the k-th neighbouring training portrait block of the j-th test photo block, u_{i,l} denotes the coefficient of the l-th layer of the depth feature map of the depth features of the i-th test photo block, d_l(x_i) denotes the l-th layer feature map of the depth features of the i-th test photo block, and d_l(x_{i,k}) denotes the l-th layer feature map of the depth features of the k-th neighbouring training photo block of the i-th test photo block.

6. The face portrait synthesis method based on deep graph model feature learning according to claim 1, wherein splicing the reconstructed portrait blocks corresponding to all test photo blocks in step (6) comprises: first, placing the reconstructed portrait blocks corresponding to all test photo blocks at their respective positions in the portrait; second, computing the average of the pixel values of the overlapping portion between each pair of adjacent reconstructed face portrait blocks; third, replacing the pixel values of each overlapping portion between two adjacent reconstructed face portrait blocks with this average, yielding the synthesized face portrait.
CN201710602696.9A, filed 2017-07-21 · Face portrait synthesis method based on depth map model feature learning · Active · CN107392213B (en)


Publications (2)

CN107392213A — published 2017-11-24
CN107392213B — granted 2020-04-07






Legal Events

PB01 — Publication
SE01 — Entry into force of request for substantive examination
GR01 — Patent grant
TR01 — Transfer of patent right (effective date of registration: 2022-07-11)
Address after: 518057, 2304, block A, building 2, Shenzhen International Innovation Valley, Dashi 1st Road, Xili community, Xili street, Nanshan District, Shenzhen, Guangdong Province
Patentee after: SHENZHEN AIMO TECHNOLOGY Co., Ltd.
Address before: No. 2 Taibai South Road, Yanta District, Xi'an, Shaanxi Province, 710071
Patentee before: XIDIAN University
