CN102360513A - Object illumination moving method based on gradient operation - Google Patents

Object illumination moving method based on gradient operation

Info

Publication number
CN102360513A
CN102360513A
Authority
CN
China
Prior art keywords
target object
layer
image
gradient
reference object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2011102997162A
Other languages
Chinese (zh)
Other versions
CN102360513B (en)
Inventor
陈小武
赵沁平
金鑫
陈萌萌
吴洪宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beihang University
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University
Priority to CN 201110299716
Publication of CN102360513A
Application granted
Publication of CN102360513B
Expired - Fee Related (current legal status)
Anticipated expiration

Abstract

The invention is an object illumination transfer method based on gradient operations, and provides a new technical solution for generating the illumination effects of virtual scenes built from video material. The invention comprises: constructing the overall pipeline of the gradient-operation-based object illumination transfer method, namely image alignment, image layering, weighted least squares filtering, gradient operations, Poisson integration, and image recombination to produce the illumination transfer result; improving the weighted least squares filter so that its parameters are computed adaptively from the local non-smoothness of the image, applying stronger filtering in non-smooth regions and thereby preserving more detail information in the detail layer; and adding a grey-level constraint to the gradient operation so that the overall grey level of the transfer result stays as close as possible to that of the reference object. The invention can be widely applied to interactive digital entertainment, film and television production, art design and creation, and related fields.

Description

Object Illumination Transfer Method Based on Gradient Operations

Technical Field

The present invention relates to the fields of virtual reality and computer vision, and in particular to an object illumination transfer method based on gradient operations.

Background

Virtual-real fusion scene generation from video material is an important component of virtual reality and a research hotspot at the intersection of virtual reality, augmented reality, computer vision, and related fields. Because the video scene and the scene objects that make up a virtual scene often come from different video sources, their lighting effects can differ considerably; yet a video-based virtual scene requires all scene objects to exhibit consistent illumination, and current illumination fusion methods for video material fall short of this requirement. The illumination transfer problem for video scene objects is therefore: how to transfer the lighting effect of a reference object in the destination video scene onto a scene object, generating that object's appearance under the destination scene's lighting conditions.

At present, several video-based methods exist for synthesizing lighting effects on face images, all of which require complex light-capture equipment. In 2000, Debevec et al. at the University of Southern California proposed an illumination transfer method for static scenes under a fixed viewpoint: static face images are captured under 2048 point-light-source lighting conditions, and the captured image data are linearly combined to render the static face under novel lighting. The method is limited to synthesizing lighting effects for static objects at a fixed viewpoint. In 2007, Peers et al., also at the University of Southern California, proposed a quotient-image method for face illumination transfer. By capturing the reflectance field of a static reference face under different lighting conditions, and taking the ratio between the face image under a given lighting condition and the face image under uniform frontal lighting, a material-property map of the object under that lighting condition, called the quotient image, is constructed. The quotient image corresponding to the desired lighting is then warped and transferred onto the target face to generate the lighting effect for the target image or video scene. The method is used for post-production illumination transfer on face images and videos. Its limitation is that the target face and the database faces must share similar geometry and material properties, so for now it can only handle illumination transfer for faces, not for the whole human body.

There are also illumination transfer methods based on geometric estimation or geometric assumptions, which typically assume a Lambertian lighting model. In 2007, Yang Wang et al. at Carnegie Mellon University proposed a face-image illumination transfer method that requires neither face geometry nor albedo information; its input is a single face image taken under arbitrarily harsh lighting. A deformable face model estimates the face geometry, spherical harmonic bases represent the lighting conditions, and a Markov random field models the statistical distribution of sub-regions and the spatial consistency of the face texture. Because it estimates the lighting of a face image with a morphable face model, the method can handle complex lighting as well as over- and under-exposed face images, and greatly improves face-recognition accuracy. However, it assumes a Lambertian face surface and fits a deformable face model to the face image to estimate geometry, so the generated face images lack some realism compared with real photographs.

Other methods generate illumination transfer results by decomposing and recombining images, separating each image into an illumination-dependent part and an illumination-independent part. In 2009, Qing Li et al. at the University of Houston proposed a face illumination transfer technique based on a logarithmic total variation model that can transfer the illumination of a reference face onto a target face without knowing the face's geometry or lighting information. The face image is decomposed into illumination information and illumination-independent facial features, and the target object's illumination information is replaced by that of the reference object to achieve the transfer. The algorithm proceeds as follows: first the reference face is warped with an image deformation technique; then both the target face and the warped reference face are decomposed into illumination-dependent and illumination-independent parts; finally the illumination-dependent parts of the two face images are exchanged to obtain the target object's illumination transfer result. The method needs only a single reference object for the transfer and can handle both color and grayscale images. Its limitations are: 1. it assumes that the target face and the reference face have similar complexity; 2. the transfer result is strongly affected by the geometric differences between the target face and the reference face.

Summary of the Invention

In view of the above practical needs and key problems, the purpose of the present invention is to propose an object illumination transfer method based on gradient operations that requires neither an estimate of, nor assumptions about, the target object's geometry, makes no assumptions about the lighting model, needs only a single reference object, and transfers the reference object's illumination information onto the target object in the gradient domain.

The present invention processes only the object of interest in an image, referred to simply as an object, for example the target object and the reference object. An object is the set of pixels that make up that physical object in the image, i.e. a part of the image.

The technical solution adopted by the present invention constructs an object illumination transfer method based on gradient operations, consisting mainly of the following steps. Image alignment: feature points are detected automatically or marked by hand, and an image deformation method aligns the target object with the reference object to find the mapping between them. Image layering: the image is converted from the RGB color space to the CIELAB color space and split into a lightness layer and a chroma layer (the layering is applied to the object regions, i.e. the target object and the reference object are each split into a lightness layer and a chroma layer); only the lightness layer is processed, while the chroma layer is kept unchanged. Weighted least squares filtering: a weighted least squares filter decomposes the lightness layer into a large-scale layer and a detail layer, so that illumination-related information stays in the large-scale layer and illumination-independent information stays in the detail layer. Gradient operation: the gradients caused by illumination changes are transferred from the gradient domain of the reference object's large-scale layer to that of the target object's large-scale layer. Poisson integration: the Poisson equation is solved under a grey-level constraint, turning the gradient image back into a grey image and yielding the target object's new large-scale layer. Image combination: the new large-scale layer is blended with the target object's detail layer to obtain the target object's new lightness layer, which is then combined with the target object's chroma layer to obtain the illumination transfer result.

Image alignment addresses possible inconsistencies in geometry, pose, and expression between the reference object image and the target object image; it consists of two steps, feature point localization and image deformation. First an active contour model produces rough feature point coordinates, which the user then adjusts interactively; the reference object is subsequently warped using the feature points of the reference and target objects. The present invention uses an image deformation method based on affine transformations. First, the target object is triangulated over its feature points; then, for each triangle in the target object, the corresponding triangle in the reference object is found, and the affine transformation matrix between the two triangles is computed from their vertex coordinates in the target and reference objects. Next, backward image warping finds, for every pixel in the target object, the corresponding coordinates in the reference object. Finally, each pixel of the target object takes the value at its corresponding coordinates in the reference object, yielding the deformed reference object.

For color layering, the present invention chooses the CIELAB color space and decomposes the color image into a lightness channel L and color channels a and b, where the L channel carries the lightness information and the a and b channels carry the color information. This avoids the color disharmony that easily arises when the image is processed separately on the three RGB channels.
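As an illustration of this layering step, a minimal sketch using scikit-image (the helper names are ours, not the patent's; the patent does not prescribe a library):

```python
import numpy as np
from skimage import color

def to_lab_layers(rgb):
    """Split an RGB image (float, range [0, 1]) into CIELAB layers:
    L (lightness, 0..100) is the layer that gets processed; the a and b
    chroma layers are kept unchanged."""
    lab = color.rgb2lab(rgb)
    return lab[..., 0], lab[..., 1], lab[..., 2]

def from_lab_layers(L, a, b):
    """Reassemble the three layers into an RGB image in [0, 1]."""
    return color.lab2rgb(np.dstack([L, a, b]))
```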

For detail layering, a weighted least squares filter decomposes the lightness layer into a large-scale layer, and the detail layer is obtained by division. The detail layer can be regarded as illumination-independent, the large-scale layer as illumination-dependent. During the decomposition the weighted least squares filter's parameters are computed adaptively, applying different levels of smoothing in different image regions so that the large-scale layer captures more illumination-related information and the detail layer keeps more illumination-independent feature information. The filter parameter is computed as follows: in non-flat image regions (e.g. beards and eyebrows) a larger smoothing value is set; in flat regions a smaller one. The non-flatness of the image is computed from the image gradient by counting, within each local window of the gradient image, the number of pixels whose gradient value exceeds a threshold, and normalizing this count.

When processing the gradient domain, the reference object's large-scale layer is first bilaterally filtered with the target object's large-scale layer as guide, preserving some of the target object's edge information, and the filtered large-scale layer is then transformed into the gradient domain. To distinguish contour regions from non-contour regions and thus handle contour gradients more carefully, the present invention generates a mask image that fades gradually near contours; this lets the reference object's larger non-contour gradients blend smoothly with the target object near its contours.

The present invention processes contour and non-contour regions separately: in non-contour regions, the larger gradients of the reference object are transferred to the target object; in contour regions, only the gradients caused by illumination are transferred.

Image combination: the processed large-scale layer is blended with the target object's detail layer to obtain the lightness layer of the transfer result, which is then combined with the target object's chroma layer to obtain the final illumination transfer result.
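A minimal sketch of this recombination, assuming the detail layer was obtained by dividing the lightness layer by the large-scale layer (the division described above) and reusing the hypothetical `from_lab_layers` helper from the layering sketch:

```python
import numpy as np

def recompose(s_new, detail, a, b):
    """New large-scale layer (0..100) times detail layer gives the new
    lightness L; the target's a/b chroma layers pass through untouched."""
    L_new = np.clip(s_new * detail, 0.0, 100.0)
    return from_lab_layers(L_new, a, b)
```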

Compared with existing techniques, the present invention is characterized as follows. 1. The image is split into a lightness layer and a chroma layer, and the lightness layer is further split by a weighted least squares filter into a large-scale layer and a detail layer, so that illumination-related information stays in the large-scale layer while the object's detail information stays in the detail layer. 2. The gradient changes caused by illumination in the reference object are transferred to the target object, with boundary and non-boundary regions processed separately, making the generated result more realistic. 3. The proposed illumination transfer method needs only a single reference object and requires neither estimates of nor assumptions about the target object's geometry, nor any assumption about the lighting model, yet generates realistic illumination transfer results.

Description of Drawings

Fig. 1 is the main flowchart of the present invention;

Fig. 2 is a flowchart of the user marking feature points on the target object;

Fig. 3 is a schematic diagram of the image affine transformation of the present invention;

Fig. 4(a) is a schematic diagram of the feature point cloud for the affine-transformation-based triangulation and pixel mapping of the present invention; Fig. 4(b) is a schematic diagram of the triangulation; Fig. 4(c) is a schematic diagram of the pixel mapping;

Fig. 5 is a flowchart of the affine-transformation-based image deformation method of the present invention;

Fig. 6 is a flowchart of the gradient operation of the present invention.

Detailed Description

The present invention is described in detail below with reference to the accompanying drawings.

Referring to Fig. 1, the main flowchart of the present invention, the gradient-operation-based object illumination transfer method comprises the following basic steps. First, an active-contour-model face localization tool and an image deformation method align the reference object (the part of the input image belonging to the reference object region) to the target object (the part of the input image belonging to the target object region). Both the reference and target objects are then decomposed into a lightness layer and a chroma layer, and a weighted least squares filter splits each lightness layer into a large-scale layer and a detail layer; all subsequent operations are performed on the large-scale layers. The large-scale layers of the reference and target objects are converted from the grey domain to the gradient domain, and the illumination-induced gradient changes of the reference object are transferred to the target object, with different regions handled in different ways. Poisson integration then restores the large-scale layer from the gradient domain to the grey domain, giving the target object's new large-scale layer, which is recombined with the target object's chroma and detail layers to obtain the target object's illumination transfer result under the reference object's lighting conditions.

Referring to Fig. 2, the flowchart of the user marking feature points on the target object: when the image object is a face, its feature coordinates are obtained automatically or semi-automatically; when it is not a face, the user defines the feature points entirely by hand. The present invention uses an active-contour-model face localization tool to locate the feature points of a face image; the tool obtains the coordinates of 116 feature points on the face image and also provides manual adjustment of feature point positions, so the user can fine-tune the coordinates of every feature point.

Referring to Fig. 3, image affine transformation: the present invention deforms each triangle in the reference object according to its corresponding triangle in the target object. For each pixel coordinate (x, y) in the target object, the coordinates (x′, y′) of its corresponding pixel in the reference object are computed; the correspondence (x′, y′) = f(x, y) is determined by the vertices of the triangles containing the two pixels. Let the three pairs of corresponding vertex coordinates of the two triangles be (x₁, y₁) and (x′₁, y′₁), (x₂, y₂) and (x′₂, y′₂), and (x₃, y₃) and (x′₃, y′₃); the affine transformation matrix T_affine is defined as:

$$T_{\mathrm{affine}} = \begin{bmatrix} x'_1 & x'_2 & x'_3 \\ y'_1 & y'_2 & y'_3 \\ 1 & 1 & 1 \end{bmatrix} \begin{bmatrix} x_1 & x_2 & x_3 \\ y_1 & y_2 & y_3 \\ 1 & 1 & 1 \end{bmatrix}^{-1}$$

For a pixel coordinate (x, y) in the target object, the affine transformation matrix gives the coordinates (x′, y′) of the corresponding pixel in the reference object:

$$[x' \ y' \ 1]^{T} = T_{\mathrm{affine}}\,[x \ y \ 1]^{T}$$

Every pixel coordinate of the target object I_B is assigned the pixel at the corresponding coordinates in the reference object I_A, i.e. I′_A(x, y) = I_A(f(x, y)), yielding the deformed reference object.
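The two formulas above translate directly into a few lines of numpy; this is an illustrative sketch (the function names are ours):

```python
import numpy as np

def affine_from_triangles(tri_target, tri_reference):
    """T_affine mapping target coords (x, y) to reference coords (x', y').
    Both arguments are 3x2 arrays of corresponding triangle vertices."""
    dst = np.vstack([np.asarray(tri_target, float).T, np.ones(3)])     # [x; y; 1]
    src = np.vstack([np.asarray(tri_reference, float).T, np.ones(3)])  # [x'; y'; 1]
    return src @ np.linalg.inv(dst)

# map one target pixel into the reference object
T = affine_from_triangles([(0, 0), (10, 0), (0, 10)],
                          [(2, 1), (13, 2), (1, 12)])
xp, yp, _ = T @ np.array([4.0, 4.0, 1.0])
```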

Referring to Figs. 4 and 5, the affine-transformation-based image deformation method of the present invention is described. The feature points are initially unconnected, as shown in Fig. 4(a). First, a Delaunay triangulation of the target object's feature points is computed, so that the resulting triangles do not overlap and cover the whole point cloud, as shown in Fig. 4(b). Then, for each triangle in the target object, the corresponding triangle in the reference object is found, as shown in Fig. 4(c), and the affine transformation matrix of the two triangles is computed from their vertex coordinates in the target and reference objects. Next, backward image warping finds, for every pixel (x, y) in the target object, the corresponding coordinates (x′, y′) in the reference object. Finally, each pixel (x, y) of the target object takes the value at the corresponding coordinates (x′, y′) in the reference object, yielding the deformed reference object.
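The whole triangulate-and-warp loop can be sketched with scipy's Delaunay tools; the version below (our own illustrative code, single-channel images, nearest-neighbour sampling for brevity) uses the per-triangle barycentric transform that scipy precomputes, which is equivalent to applying the affine matrices triangle by triangle:

```python
import numpy as np
from scipy.spatial import Delaunay

def backward_warp(ref_img, pts_target, pts_reference):
    """Warp ref_img so that pts_reference land on pts_target.
    pts_*: (n, 2) arrays of matching (x, y) feature points."""
    tri = Delaunay(pts_target)
    h, w = ref_img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)
    simplex = tri.find_simplex(pix)
    ok = simplex >= 0                         # pixels covered by a triangle
    T = tri.transform[simplex[ok]]            # (m, 3, 2) barycentric transforms
    b2 = np.einsum('mij,mj->mi', T[:, :2, :], pix[ok] - T[:, 2, :])
    bary = np.column_stack([b2, 1.0 - b2.sum(axis=1)])
    verts = np.asarray(pts_reference)[tri.simplices[simplex[ok]]]
    mapped = np.einsum('mi,mij->mj', bary, verts)        # reference coords
    xr = np.clip(np.rint(mapped[:, 0]).astype(int), 0, w - 1)
    yr = np.clip(np.rint(mapped[:, 1]).astype(int), 0, h - 1)
    out = np.zeros_like(ref_img)
    out.ravel()[np.flatnonzero(ok)] = ref_img[yr, xr]
    return out
```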

The present invention uses a weighted least squares filter to decompose the lightness image into a large-scale layer s and a detail layer. The weighted least squares filter preserves the detail information of the lightness layer well; it is solved by minimizing the following energy function:

$$E = |l - s|^2 + \lambda\,H(\nabla s, \nabla l)$$

$$H(\nabla s, \nabla l) = \sum_{p}\left(\frac{(\partial s/\partial x)_p^2}{(\partial l/\partial x)_p^{\alpha} + \epsilon} + \frac{(\partial s/\partial y)_p^2}{(\partial l/\partial y)_p^{\alpha} + \epsilon}\right)$$

where s is the large-scale image to be solved; |l - s|² is the data term that keeps l and s as close as possible; H(∇s, ∇l) is the regularization term that keeps s as smooth as possible by minimizing its partial derivatives; and the subscript p denotes the spatial position of a pixel in the image. Minimizing H causes a nonlinear scaling of the gradients of the input image l: regions with large gradients are scaled little, while regions with small gradients are scaled more heavily. α controls the overall affinity of this nonlinear scaling, i.e. its sensitivity to the magnitude of the gradients of l; increasing α makes the result preserve sharper edges. ε is a small number (typically 0.0001) that prevents division by zero in flat regions of l. λ is the balance factor between the data term and the regularization term; increasing λ makes the filtered image smoother. During weighted least squares filtering the present invention applies different levels of smoothing in different image regions by designing different λ values for different regions.

To compute λ adaptively, the gradient image of the lightness layer is computed first, and for every pixel the number of pixels within its local window whose gradient exceeds a threshold is counted; this count is then normalized and used to compute the filter parameter. Specifically, different λ values are set for different image regions: first, the horizontal and vertical gradients ∂l/∂x and ∂l/∂y of the lightness layer l are computed and a threshold t₁ is given; then, for each pixel p, the number of pixels within the local window w_p centred on p whose gradient magnitude exceeds the threshold is counted:

$$\gamma(p) = \sum_{i \in w_p}\left[\,(\partial l/\partial x)_i^2 + (\partial l/\partial y)_i^2 \ge t_1\,\right]$$

After normalizing γ(p) to the range 0 to 1, the λ value is set as:

$$\lambda(p) = \lambda_s + (\lambda_l - \lambda_s)\,\gamma(p)$$

where λ_s and λ_l are the minimum and maximum λ values, corresponding to the lowest and highest levels of smoothing applied to the image. In our implementation, α = 1.2, the local window radius is set to 8, λ_s = 1, λ_l = 4, and the threshold t₁ = 0.02.
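An illustrative computation of this parameter map under the stated settings; scipy's `uniform_filter` returns the windowed mean of the indicator image, which is exactly the normalized count γ(p):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def adaptive_lambda(l, radius=8, t1=0.02, lam_s=1.0, lam_l=4.0):
    """Per-pixel lambda for the WLS filter: gamma(p) is the normalized
    count of pixels in the local window whose squared gradient magnitude
    reaches t1; lambda blends from lam_s (flat) to lam_l (non-flat)."""
    gy, gx = np.gradient(l)
    busy = ((gx ** 2 + gy ** 2) >= t1).astype(float)
    gamma = uniform_filter(busy, size=2 * radius + 1)   # mean = count / area
    return lam_s + (lam_l - lam_s) * gamma
```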

Filtering an image with the weighted least squares filter proceeds as follows: first compute the gradient image and the affinities between each pixel and its neighbours; then assemble the spatially inhomogeneous Laplacian matrix these affinities define and construct a sparse linear system; finally solve the sparse linear system to obtain the filtered result.
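Minimizing the energy above reduces to one sparse linear solve, (I + L_lam) s = l, where L_lam is the gradient-weighted graph Laplacian of the pixel grid. The sketch below is our own compact implementation of that formulation, not code from the patent; `lam` may be a scalar or the adaptive map from the previous example:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def wls_filter(l, lam, alpha=1.2, eps=1e-4):
    """Weighted least squares smoothing of a lightness image l in [0, 1]."""
    h, w = l.shape
    n = h * w
    idx = np.arange(n).reshape(h, w)
    lam_map = lam * np.ones((h, w))
    log_l = np.log(l + eps)

    # edge weights lam / (|grad l|^alpha + eps) on the 4-neighbour grid
    wv = 0.5 * (lam_map[:-1] + lam_map[1:]) \
         / (np.abs(np.diff(log_l, axis=0)) ** alpha + eps)
    wh = 0.5 * (lam_map[:, :-1] + lam_map[:, 1:]) \
         / (np.abs(np.diff(log_l, axis=1)) ** alpha + eps)

    i = np.concatenate([idx[:-1].ravel(), idx[:, :-1].ravel()])
    j = np.concatenate([idx[1:].ravel(), idx[:, 1:].ravel()])
    W = sp.coo_matrix((np.concatenate([wv.ravel(), wh.ravel()]), (i, j)),
                      shape=(n, n))
    W = W + W.T
    L_lap = sp.diags(np.asarray(W.sum(axis=1)).ravel()) - W
    s = spsolve((sp.eye(n) + L_lap).tocsr(), l.ravel())
    return s.reshape(h, w)
```

The detail layer then follows by division, detail = l / (s + eps), matching the decomposition described earlier.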

Referring to the gradient operation flowchart in Fig. 6, the gradient operations on the image are performed as follows:

When processing the gradient domain, the reference object's large-scale layer is first bilaterally filtered with the target object's large-scale layer as guide, preserving some of the target object's edge information. The present invention uses the following bilateral filter:

$$B(I)_p = \frac{1}{W_p}\sum_{q \in \omega} G_{\sigma_d}(\|p - q\|)\,G_{\sigma_r}(|I_p - I_q|)\,I_q$$

$$W_p = \sum_{q \in \omega} G_{\sigma_d}(\|p - q\|)\,G_{\sigma_r}(|I_p - I_q|)$$

$$G_{\sigma}(x) = \frac{1}{\sigma\sqrt{2\pi}}\exp\!\left(-\frac{x^2}{2\sigma^2}\right)$$

where I is the input image; p is the image coordinate index; B(I) is the filtered result; ω is the window centred on pixel p and q is a pixel within ω; G_{σ_d} is the spatial proximity function with window size σ_d; G_{σ_r} is the grey-level similarity function with window size σ_r; and W_p is the normalization term that keeps every pixel value of B(I) between 0 and 1.
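Because the range weights come from the target's large-scale layer while the reference's layer is being filtered, this is a joint (cross) bilateral filter. A brute-force numpy sketch (our own illustrative code, intended for small images):

```python
import numpy as np

def joint_bilateral(src, guide, radius=5, sigma_d=3.0, sigma_r=0.1):
    """Filter src (reference large-scale layer) with range weights taken
    from guide (target large-scale layer), so the guide's edges survive."""
    h, w = src.shape
    src_p = np.pad(src, radius, mode='edge')
    gui_p = np.pad(guide, radius, mode='edge')
    acc = np.zeros_like(src)
    wsum = np.zeros_like(src)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            s_q = src_p[radius + dy:radius + dy + h,
                        radius + dx:radius + dx + w]
            g_q = gui_p[radius + dy:radius + dy + h,
                        radius + dx:radius + dx + w]
            wgt = (np.exp(-(dx * dx + dy * dy) / (2 * sigma_d ** 2))
                   * np.exp(-((guide - g_q) ** 2) / (2 * sigma_r ** 2)))
            acc += wgt * s_q
            wsum += wgt
    return acc / wsum
```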

To distinguish contour regions from non-contour regions, the present invention generates a mask image β that fades gradually at contours; because β varies smoothly near contour regions, the larger gradients of the reference object's non-contour regions can blend smoothly with the target object near its contours.

The β image is computed as follows: define a contour image k that takes the value 0 in the skin region, 1 in the eyebrow, eye, lip contour, and nose contour regions, and 2 in the nostril region. The mask image β is then computed as:

$$\beta(p) = \min_{q \in \omega}\left(1 - k(q)\,e^{-\frac{(p-q)^2}{2\sigma^2}}\right)$$

where p is the image coordinate index, ω is the window centred on pixel p, q is a pixel index within the window, and σ is the window size.
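An illustrative computation of β from the contour image k defined above (the window radius and σ values are our assumptions; the patent only states that σ is the window size):

```python
import numpy as np

def contour_mask(k, radius=7, sigma=3.0):
    """beta(p) = min over the window of 1 - k(q) * exp(-|p-q|^2 / (2 sigma^2)).
    k: contour image, 0 on skin, 1 on eyebrow/eye/lip/nose contours,
    2 at the nostrils."""
    h, w = k.shape
    k_p = np.pad(k.astype(float), radius, mode='constant')
    beta = np.ones((h, w))
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            g = np.exp(-(dx * dx + dy * dy) / (2 * sigma ** 2))
            cand = 1.0 - g * k_p[radius + dy:radius + dy + h,
                                 radius + dx:radius + dx + w]
            beta = np.minimum(beta, cand)
    return beta
```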

The filtered large-scale layer is transformed into the gradient domain, where contour and non-contour regions are treated differently: for non-contour regions, a threshold is set, gradients above it are taken to be caused by illumination, and those gradients of the reference object's non-contour regions are transferred to the target object; for contour regions, the mask image β is used to transfer only the gradient changes in the reference object's contour regions that are not caused by contours themselves.

Given the target object s_B and the reference object s_A with their corresponding β image, the gradient-domain operation follows the formula:

$$\nabla s'_B(p) = \begin{cases}\nabla s_A(p), & \text{if } \beta(p)\,\|\nabla s_A(p)\| > \|\nabla s_B(p)\| \\ \nabla s_B(p), & \text{otherwise}\end{cases}$$

where p is the image coordinate index; ∇s_B, ∇s_A, and ∇s′_B are the large-scale-layer gradient images of the target object, the reference object, and the target object after illumination transfer, respectively; the comparison β(p)‖∇s_A(p)‖ > ‖∇s_B(p)‖ tests whether the reference object's gradient is caused by a contour; and β(p) is the mask image, used to avoid transferring the gradients formed by the reference object's main contours onto the target object.
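The selection rule is a per-pixel test on masked gradient magnitudes and maps directly onto numpy (illustrative names):

```python
import numpy as np

def transfer_gradients(gx_b, gy_b, gx_a, gy_a, beta):
    """grad s'_B = grad s_A wherever beta * |grad s_A| > |grad s_B|;
    otherwise the target object's own gradient is kept."""
    take_ref = beta * np.hypot(gx_a, gy_a) > np.hypot(gx_b, gy_b)
    return np.where(take_ref, gx_a, gx_b), np.where(take_ref, gy_a, gy_b)
```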

The present invention uses Poisson integration to transform the gradient image back into a grey image, adding a grey-level constraint to the solution so that the target object's large-scale layer comes closer to the reference object's large-scale layer in overall brightness. The Poisson integration minimizes the following energy function:

$$E(f) = \sum_{p \in I}\left[E_d(p) + E_g(p)\right]$$

where I is the input image and p is the image coordinate index; the grey-level constraint term E_d(p) and the gradient constraint term E_g(p) are:

$$E_d(p) = \lambda\,[f(p) - d(p)]^2$$

$$E_g(p) = [f_x(p) - g_x(p)]^2 + [f_y(p) - g_y(p)]^2$$

where f(p) is the large-scale layer to be solved and d(p) is the reference object's large-scale layer; f_x(p) and f_y(p) are the horizontal and vertical gradient images of the layer being solved; g_x(p) and g_y(p) are the horizontal and vertical gradient images of the processed large-scale layer, i.e. the two components of ∇s′_B; and the parameter λ controls the influence of the grey-level constraint on the integration result.
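With the grey-level term the integration becomes a screened Poisson problem: setting the derivative of E(f) to zero yields the normal equations (lam*I + Dx^T Dx + Dy^T Dy) f = lam*d + Dx^T g_x + Dy^T g_y, which a sparse solver handles directly. A sketch of this formulation with forward differences (our own code, assuming gx and gy are forward-difference images the size of d):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def fwd_diff(m):
    """(m-1) x m forward-difference operator."""
    return sp.diags([-np.ones(m - 1), np.ones(m - 1)], [0, 1],
                    shape=(m - 1, m))

def screened_poisson(d, gx, gy, lam=0.1):
    """Integrate the transferred gradient field (gx, gy) back to a grey
    image, tied in overall brightness to the reference large-scale
    layer d. The last column of gx and the last row of gy are unused."""
    h, w = d.shape
    Dy = sp.kron(fwd_diff(h), sp.eye(w), format='csr')   # vertical diffs
    Dx = sp.kron(sp.eye(h), fwd_diff(w), format='csr')   # horizontal diffs
    A = lam * sp.eye(h * w) + Dx.T @ Dx + Dy.T @ Dy
    b = (lam * d.ravel()
         + Dx.T @ gx[:, :-1].ravel()
         + Dy.T @ gy[:-1, :].ravel())
    return spsolve(sp.csr_matrix(A), b).reshape(h, w)
```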

Once the target object's large-scale layer has been obtained by solving the grey-level-constrained Poisson equation, it is combined with the target object's detail layer and chroma layer to obtain the target object image after illumination transfer.

The above is only a basic description of the present invention; any equivalent transformation made according to the technical solution of the present invention shall fall within its scope of protection.

Claims (6)

1. An object illumination transfer method based on gradient operations, characterized in that the method comprises the following steps:
(1) aligning the reference object to the target object according to the pixel-by-pixel correspondence between the target object and the reference object;
(2) decomposing both the target object and the reference object into a chroma layer and a lightness layer;
(3) filtering the lightness layer with a weighted least squares filter with adaptive parameters to obtain a large-scale layer, and obtaining a detail layer from the lightness layer and the large-scale layer;
(4) transferring the illumination information from the gradient image of the reference object's large-scale layer to the gradient image of the target object's large-scale layer;
(5) transforming the large-scale layer from the gradient domain to the grey domain to obtain the target object's new large-scale layer, and combining it with the target object's detail layer to obtain the target object's new lightness layer;
(6) blending the target object's new lightness layer with the target object's chroma layer to obtain the illumination transfer result.

2. The object illumination transfer method based on gradient operations according to claim 1, characterized in that the image alignment of step (1) comprises the following steps:
(1.1) obtaining rough feature point coordinates with an active contour model, then optimizing the feature point positions through user interaction;
(1.2) deforming the reference object according to the target object, using the feature points of the reference object and the target object as control points.

3. The object illumination transfer method based on gradient operations according to claim 2, characterized in that the deformation of step (1.2) comprises the following steps:
(1.2.1) performing a Delaunay triangulation of the target object over its marked points;
(1.2.2) for each triangle in the target object, finding its corresponding triangle in the reference object, and computing the affine transformation matrix of the two triangles from their vertex coordinates in the target object and the reference object;
(1.2.3) using backward image warping to find, for each pixel in the target object, the corresponding coordinates in the reference object;
(1.2.4) taking, for each pixel in the target object, the value at the corresponding coordinates in the reference object, to obtain the deformed reference object.

4. The object illumination transfer method based on gradient operations according to claim 1, characterized in that decomposing the target object and the reference object into a chroma layer and a lightness layer in step (2) is accomplished by converting the image from the RGB color space to the CIELAB color space.

5. The object illumination transfer method based on gradient operations according to claim 1, characterized in that transferring the illumination information from the gradient image of the reference object's large-scale layer to the gradient image of the target object's large-scale layer in step (4) comprises the following steps:
(4.1) using the feature points to generate a face region image and a contour region image, and generating a mask image;
(4.2) bilaterally filtering the reference object with the target object as guide, and transforming the filtering result into the gradient domain;
(4.3) transferring the gradients of the reference object's non-contour region that exceed a specific threshold to the target object's non-contour region;
(4.4) transferring the gradients of the reference object's contour region that exceed a specific threshold to the target object's contour region.

6. The object illumination transfer method based on gradient operations according to claim 1, characterized in that blending with the target object's chroma layer in step (6) means combining, in the CIELAB color space, the new lightness layer with the chroma layer and converting from the CIELAB color space to the RGB color space.
CN 201110299716 (priority 2011-09-30, filed 2011-09-30): Object Illumination Migration Method Based on Gradient Operation. Expired - Fee Related. Granted publication: CN102360513B (en).

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN 201110299716 (CN102360513B) | 2011-09-30 | 2011-09-30 | Object Illumination Migration Method Based on Gradient Operation

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN 201110299716 (CN102360513B) | 2011-09-30 | 2011-09-30 | Object Illumination Migration Method Based on Gradient Operation

Publications (2)

Publication Number | Publication Date
CN102360513A | 2012-02-22
CN102360513B (en) | 2013-02-06

Family

ID=45585838

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN 201110299716 (Expired - Fee Related; CN102360513B) | Object Illumination Migration Method Based on Gradient Operation | 2011-09-30 | 2011-09-30

Country Status (1)

Country | Link
CN | CN102360513B (en)

Cited By (10)

* Cited by examiner, † Cited by third party

Publication Number | Priority Date | Publication Date | Assignee | Title
CN102665062A * | 2012-03-16 | 2012-09-12 | Huawei Technologies Co., Ltd. | Method and device for stabilizing target object image in video
CN103337088A * | 2013-07-10 | 2013-10-02 | Beihang University | Human face image light and shadow editing method based on edge preserving
CN103839220A * | 2014-02-28 | 2014-06-04 | Zhejiang University | Seamless image cloning method based on matrix decomposition
CN105447829A * | 2015-11-25 | 2016-03-30 | Xiaomi Inc. | Image processing method and device
CN107527334A * | 2017-08-28 | 2017-12-29 | Beijing Xiaomi Mobile Software Co., Ltd. | Human face light moving method and device
CN107833198A * | 2017-11-09 | 2018-03-23 | 中共中央办公厅电子科技学院 | Outdoor scene relighting method based on large-scale decomposition
CN109300170A * | 2018-10-18 | 2019-02-01 | Yunnan University | Portrait photo shadow transmission method
CN110175255A * | 2019-05-29 | 2019-08-27 | Tencent Technology (Shenzhen) Co., Ltd. | Image annotation method, and annotation display method and device based on pathological image
CN111063017A * | 2018-10-15 | 2020-04-24 | Huawei Technologies Co., Ltd. | Illumination estimation method and apparatus
CN112384928A * | 2018-08-01 | 2021-02-19 | Samsung Electronics Co., Ltd. | Method and apparatus for performing object illumination manipulation on an image


Patent Citations (4)

* Cited by examiner, † Cited by third party

Publication Number | Priority Date | Publication Date | Assignee | Title
US20090046099A1 * | 2006-11-13 | 2009-02-19 | Bunkspeed | Real-time display system
KR20080114379A * | 2007-06-27 | 2008-12-31 | Sungkyunkwan University Industry-Academic Cooperation Foundation | Lighting normalization method and device
CN101441774A * | 2008-12-26 | 2009-05-27 | Beihang University | Dynamic scene real time double face refraction drafting method based on image mapping space
CN101950352A * | 2010-05-31 | 2011-01-19 | 北京智安邦科技有限公司 | Target detection method capable of removing illumination influence and device thereof

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party

Xiaowu Chen, Mengmeng Chen, Xin Jin and Qinping Zhao, "Face illumination transfer through edge-preserving filters", IEEE Conference on Computer Vision and Pattern Recognition 2011. *
Xin Jin et al., "Learning artistic lighting template from portrait photographs", ECCV 2010. *

Cited By (18)

* Cited by examiner, † Cited by third party

Publication Number | Priority Date | Publication Date | Assignee | Title
WO2013135067A1 * | 2012-03-16 | 2013-09-19 | Huawei Technologies Co., Ltd. | Method and apparatus for stabilizing target object image in video
CN102665062B * | 2012-03-16 | 2016-03-30 | Huawei Technologies Co., Ltd. | Method and device for stabilizing target object image in video
CN102665062A * | 2012-03-16 | 2012-09-12 | Huawei Technologies Co., Ltd. | Method and device for stabilizing target object image in video
CN103337088A * | 2013-07-10 | 2013-10-02 | Beihang University | Human face image light and shadow editing method based on edge preserving
CN103337088B * | 2013-07-10 | 2016-07-06 | Beihang University | Facial image light and shadow editing method based on edge preserving
CN103839220A * | 2014-02-28 | 2014-06-04 | Zhejiang University | Seamless image cloning method based on matrix decomposition
CN105447829B * | 2015-11-25 | 2018-06-08 | Xiaomi Inc. | Image processing method and device
CN105447829A * | 2015-11-25 | 2016-03-30 | Xiaomi Inc. | Image processing method and device
CN107527334A * | 2017-08-28 | 2017-12-29 | Beijing Xiaomi Mobile Software Co., Ltd. | Human face light moving method and device
CN107833198A * | 2017-11-09 | 2018-03-23 | 中共中央办公厅电子科技学院 | Outdoor scene relighting method based on large-scale decomposition
CN107833198B * | 2017-11-09 | 2021-06-01 | 中共中央办公厅电子科技学院 | Outdoor scene relighting method based on large-scale decomposition
CN112384928A * | 2018-08-01 | 2021-02-19 | Samsung Electronics Co., Ltd. | Method and apparatus for performing object illumination manipulation on an image
CN111063017A * | 2018-10-15 | 2020-04-24 | Huawei Technologies Co., Ltd. | Illumination estimation method and apparatus
CN111063017B * | 2018-10-15 | 2022-04-12 | Huawei Technologies Co., Ltd. | Illumination estimation method and apparatus
CN109300170A * | 2018-10-18 | 2019-02-01 | Yunnan University | Portrait photo shadow transmission method
CN110175255A * | 2019-05-29 | 2019-08-27 | Tencent Technology (Shenzhen) Co., Ltd. | Image annotation method, and annotation display method and device based on pathological image
CN110175255B * | 2019-05-29 | 2022-04-05 | Tencent Healthcare (Shenzhen) Co., Ltd. | Image annotation method, and annotation display method and device based on pathological image
US12046056B2 | 2019-05-29 | 2024-07-23 | Tencent Technology (Shenzhen) Company Limited | Image annotation method and apparatus, annotation presentation method and apparatus, device, and storage medium

Also Published As

Publication Number | Publication Date
CN102360513B (en) | 2013-02-06

Similar Documents

Publication | Publication Date | Title
CN102360513B (en) | 2013-02-06 | Object Illumination Migration Method Based on Gradient Operation
Zhuang et al. | | Underwater image enhancement with hyper-Laplacian reflectance priors
CN109859098B (en) | | Face image fusion method, device, computer equipment and readable storage medium
CN109003325B (en) | | Three-dimensional reconstruction method, medium, device and computing equipment
CN109584353B (en) | | Method for reconstructing three-dimensional facial expression model based on monocular video
CN108154550B (en) | | RGBD camera-based real-time three-dimensional face reconstruction method
CN105427385B (en) | | High-fidelity face three-dimensional reconstruction method based on a multilayer deformation model
CN101916454B (en) | | Method for reconstructing high-resolution human face based on grid deformation and continuous optimization
CN108805090B (en) | | Virtual makeup test method based on a plane grid model
CN111932678B (en) | | Multi-view real-time human motion, gesture, expression and texture reconstruction system
CN109829930A (en) | | Face image processing method, device, computer equipment and readable storage medium
CN116583878A (en) | | Method and system for personalizing 3D head model deformations
CN102509346A (en) | | Object illumination migration method based on edge retaining
JP7251003B2 (en) | | Face mesh deformation with fine wrinkles
CN106530240B (en) | | Image defogging method based on multiscale fusion and total variation optimization
CN106952221A (en) | | Three-dimensional Beijing opera facial makeup automatic makeup method
CN110660076A (en) | | Face exchange method
WO2022143354A1 (en) | | Face generation method and apparatus for virtual object, and device and readable storage medium
CN106056650A (en) | | Facial expression synthesis method based on rapid expression information extraction and Poisson image fusion
CN110853064A (en) | | Image cooperative segmentation method based on minimum fuzzy divergence
CN107194995A (en) | | Method for rapidly building a true three-dimensional person model
WO2024055379A1 (en) | | Video processing method and system based on character avatar model, and related device
CN118196307A (en) | | Monocular human digital avatar modeling method based on drivable Gaussian point cloud
CN103337088B (en) | | Facial image light and shadow editing method based on edge preserving
CN106228590A (en) | | Human body attitude editing method in an image

Legal Events

Code | Title | Description
C06 | Publication |
PB01 | Publication |
C10 | Entry into substantive examination |
SE01 | Entry into force of request for substantive examination |
C14 | Grant of patent or utility model |
GR01 | Patent grant |
CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 2013-02-06; Termination date: 2015-09-30
EXPY | Termination of patent right or utility model |
