Technical Field
The invention relates to a shadow detection and removal algorithm based on image segmentation, used for detecting shadows in images and removing penumbra regions, and belongs to the technical field of image processing.
Background Art
Shadow detection has long been one of the research hotspots in image processing. Because shadows increase the difficulty of algorithms such as object recognition and video segmentation, detecting and removing them can significantly improve the performance of many image-processing algorithms.
Many shadow detection methods are based on illumination models or colour models, for example the shadow detection method in the HSI colour space, which uses the ratio of H to I to detect shadows. That method, however, is better suited to aerial images or images with pronounced shadows; its performance on complex scenes is unsatisfactory. Moreover, because it is hard to decide whether a pixel belongs to a shadow or merely to a dark non-shadow surface, fully automatic shadow detection remains a major challenge.
To date, a large number of shadow detection algorithms exist in the image-processing field. According to the detection means employed, they can be divided into edge-based shadow detection and learning-based shadow detection.
Edge-based shadow detection first requires a colour image of the scene and an illumination-invariant grayscale image obtained with a calibrated camera; shadows are detected by comparing the edges of the grayscale image with those of the original image (G. D. Finlayson, 2006). This approach works very well on high-quality images but only moderately on ordinary ones. Learning-based shadow detection, in view of the complexity of shadow edges, instead introduces data-driven methods from an empirical standpoint: for example, a conditional random field (CRF) built on cues such as illumination intensity and gradients is used to judge whether a region is a shadow, or whether an edge is a shadow edge. Although CRF-based methods can detect shadows well under certain conditions, their training process is lengthy and depends heavily on the training set. To this day, shadow detection remains a very challenging problem under the influence of factors such as illumination, object reflectance and shadow geometry.
Summary of the Invention
In view of the prior art above, the purpose of the present invention is to provide a shadow detection and removal algorithm based on image segmentation that can effectively separate shadow from non-shadow regions and can better detect and remove both self-shadows and cast shadows in a scene.
To solve the above technical problems, the present invention adopts the following technical solution:
A shadow detection and removal algorithm based on image segmentation, characterised by comprising the following steps:
S100: using texture and brightness features and combining local and global information, estimate for every pixel the probability that it lies on a shadow edge;
S200: segment the image with the watershed algorithm using the contour information gPb;
S300: separate the shadow and non-shadow regions of the image with an edge-based region-merging algorithm, dividing each of them into several sub-regions; then train one SVM classifier on single-region information and another on matched-region information to identify shadows; finally, solve the shadow-detection energy equation with the graph-cut algorithm to obtain the final detection result.
S400: based on the shadow detection result, compute shadow labels with a matting algorithm, then use the obtained labels to relight the shadow regions, restoring their illumination so that it matches that of the surrounding non-shadow regions.
Step S100 mainly consists of the following sub-steps:
S101: build the shadow edge detector Pb from the oriented gradient information G(x, y, θ); the detector computes G(x, y, θ) separately on two channels, brightness and textons. It is constructed by drawing a circle of radius r centred at a point (x, y) of the image; a diameter at orientation θ splits the circle into two half-discs;
S102: the oriented gradient G is obtained by computing the χ² distance between the histograms of the two half-discs:

χ²(g, h) = ½ Σ_i (g(i) − h(i))² / (g(i) + h(i))  (1)
where g and h denote the histograms of the two half-discs, and i runs over the histogram bins;
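As a concrete illustration of steps S101–S102, the half-disc comparison can be sketched in a few lines of Python. This is a minimal sketch, not the patent's implementation: the 32-bin histogramming and the boolean half-disc mask are illustrative assumptions.

```python
import numpy as np

def chi2_distance(g, h, eps=1e-10):
    # 0.5 * sum_i (g_i - h_i)^2 / (g_i + h_i), formula (1)
    g, h = np.asarray(g, float), np.asarray(h, float)
    return 0.5 * float(np.sum((g - h) ** 2 / (g + h + eps)))

def oriented_gradient(patch, half_mask, n_bins=32):
    """G(x, y, theta) at the patch centre: histogram the two half-discs
    selected by the boolean mask and compare the histograms with chi^2."""
    lo, hi = float(patch.min()), float(patch.max())
    if hi <= lo:                        # flat patch -> no gradient
        return 0.0
    g, _ = np.histogram(patch[half_mask], bins=n_bins, range=(lo, hi))
    h, _ = np.histogram(patch[~half_mask], bins=n_bins, range=(lo, hi))
    g = g / max(g.sum(), 1)             # normalise to unit mass
    h = h / max(h.sum(), 1)
    return chi2_distance(g, h)
```

A patch that is dark on one half-disc and bright on the other yields a gradient near 1, while a uniform patch yields 0, which is exactly the behaviour the detector Pb relies on.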
S103: combine the computed detectors Pb across scales and channels into the local cues mPb:

mPb(x, y, θ) = Σ_s Σ_i α_{i,s} · G_{i,σ(i,s)}(x, y, θ)  (2)
where s indexes the circle radius and i the feature channel (brightness, textons); G_{i,σ(i,s)}(x, y, θ) measures the difference between the two half-discs of radius σ(i,s) centred at (x, y) and split at orientation θ; α_{i,s} is the weight of each gradient signal;
At every point, the maximum of the gradient information G over orientations is taken as that point's mPb value, which represents the final local cue:

mPb(x, y) = max_θ mPb(x, y, θ)  (3)
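The weighted combination of (2) followed by the orientation maximum of (3) reduces to a few array operations; the sketch below assumes a hypothetical data layout with one oriented-gradient stack per (channel, scale) pair.

```python
import numpy as np

def mpb(gradients, alphas):
    """Local cue mPb: weighted sum of oriented gradients over channels
    and scales (formula (2)), then a maximum over orientations (3).

    gradients: dict (channel, scale) -> array of shape (n_theta, H, W)
    alphas:    dict (channel, scale) -> scalar weight
    """
    combined = sum(alphas[k] * gradients[k] for k in gradients)
    return combined.max(axis=0)         # max over theta
```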
Step S104: build a sparse affinity matrix from mPb and obtain the required global cues by computing its eigenvalues and eigenvectors; the sparse matrix links every pair of pixels i, j within a radius of r = 5 pixels:

Wij = exp( −max_{p∈ij} mPb(p) / ρ )  (4)
where Wij encodes the affinity between pixels i and j, the maximum being taken over the pixels p on the line segment joining them, and ρ = 0.1; defining Dii = Σ_j Wij, the system
(D − W)v = λDv  (5)
is solved to obtain the eigenvectors {v0, v1, …, vn} and eigenvalues 0 = λ0 ≤ λ1 ≤ … ≤ λn.
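The generalised eigenproblem (5) can be reduced to an ordinary symmetric one with the substitution u = D^{1/2}v and solved with plain numpy. This is a dense sketch for a toy affinity matrix; real images need sparse eigensolvers, and it assumes every pixel has at least one neighbour (so D is invertible).

```python
import numpy as np

def spectral_vectors(W, n_vec=4):
    """Solve (D - W) v = lambda D v for the n_vec smallest eigenvalues.
    W is a symmetric affinity matrix, D its diagonal degree matrix.
    With u = D^{1/2} v the problem becomes the ordinary symmetric
    eigenproblem D^{-1/2}(D - W)D^{-1/2} u = lambda u."""
    W = np.asarray(W, dtype=float)
    d = W.sum(axis=1)
    d_isqrt = 1.0 / np.sqrt(d)
    L = np.diag(d) - W
    L_sym = d_isqrt[:, None] * L * d_isqrt[None, :]
    lam, U = np.linalg.eigh(L_sym)      # ascending eigenvalues
    V = d_isqrt[:, None] * U            # map back: v = D^{-1/2} u
    return lam[:n_vec], V[:, :n_vec]
```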
Step S105: regard each eigenvector of step S104 as an image and obtain its oriented information by Gaussian directional filtering; then linearly superpose the oriented information of the different eigenvectors to obtain the global cue sPb:

sPb(x, y, θ) = Σ_{k=1}^{n} (1/√λ_k) · ∇_θ v_k(x, y)  (6)
Step S106: combine the local cue mPb and the global cue sPb into the image contour information gPb:

gPb(x, y, θ) = Σ_s Σ_i β_{i,s} · G_{i,σ(i,s)}(x, y, θ) + γ · sPb(x, y, θ)  (7)
where β_{i,s} and γ are the coefficients of mPb and sPb, respectively.
Step S200 mainly consists of the following sub-steps:
Step S201: estimate the probability that any point (x, y) of the image is a contour point at orientation θ, and take the maximum of the contour response at that point:

E(x, y) = max_θ E(x, y, θ)  (8)
Step S202: using mathematical morphology, compute each region from the local minima of E(x, y) as "catchment basins"; each catchment basin corresponds to one region, denoted P0, and the locus where two basins meet is a "watershed", denoted K0;
Step S203: the watershed algorithm over-segments, i.e. it marks as watershed places that should not be edges; a region-merging algorithm is used to resolve this over-segmentation;
The region-merging algorithm is as follows: define an undirected graph G = (P0, K0, W(K0), E(P0)), where W(K0) is the weight of each watershed arc, given by the total energy of the points on the arc divided by the number of points on it; E(P0) is the energy of each catchment basin, initially zero for every basin; W(K0) describes the dissimilarity between two adjacent regions. The watershed arcs are placed in a queue ordered by weight, from smallest to largest.
The region-merging algorithm comprises the following steps:
1. Find the edge with the smallest weight, C* = argmin_C W(C).
Suppose R1 and R2 are separated by the edge C*, and let R = R1 ∪ R2. If min{E(R1), E(R2)} ≠ 0, decide whether to merge; the merging condition is

W(C*) ≤ τ · min{E(R1), E(R2)}  (9)
or min{E(R1), E(R2)} = 0  (10),
where τ is a constant;
2. If the regions are merged, update E(R), P0 and K0 as follows:
E(R) = max{E(R1), E(R2), W(C*)}  (11)
P0 ← P0 \ {R1, R2} ∪ R  (12)
K0 ← K0 \ {C*}  (13)
Further, the merging condition can be tuned by adjusting τ, thereby controlling the size of the final regions: the larger τ, the larger the final merged regions.
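The merging loop above (conditions (9)–(10), updates (11)–(13)) can be sketched with a heap and a union-find structure; the region ids and edge weights in the example are hypothetical.

```python
import heapq

def merge_regions(edges, regions, tau=1.5):
    """Greedy region merging over the watershed graph.

    edges:   list of (weight, region_a, region_b) watershed arcs
    regions: iterable of initial region ids (catchment basins), E = 0
    Merge R1, R2 across arc C* when min(E1, E2) == 0 (formula (10)) or
    W(C*) <= tau * min(E1, E2) (formula (9)); afterwards
    E(R) = max(E1, E2, W(C*)) per formula (11).
    Returns a dict mapping each initial region to its merged label."""
    parent = {r: r for r in regions}
    energy = {r: 0.0 for r in regions}

    def find(r):                        # union-find root, path-compressed
        while parent[r] != r:
            parent[r] = parent[parent[r]]
            r = parent[r]
        return r

    heap = list(edges)
    heapq.heapify(heap)                 # pop arcs by increasing weight
    while heap:
        w, a, b = heapq.heappop(heap)
        ra, rb = find(a), find(b)
        if ra == rb:
            continue
        lo = min(energy[ra], energy[rb])
        if lo == 0.0 or w <= tau * lo:
            parent[rb] = ra             # merge R2 into R1
            energy[ra] = max(energy[ra], energy[rb], w)
    return {r: find(r) for r in regions}
```

Because every basin starts with E = 0, condition (10) guarantees each basin merges at least once, matching the premise stated in the embodiment; a heavy arc between two already-merged groups is then rejected by condition (9).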
In step S300, the final shadow detection result is obtained by minimising, with the graph-cut algorithm, an energy over the region labels of the form

min_y  Σ_i c_i(y_i) + Σ_{{i,j}∈Esame} c_ij^same · 1[y_i ≠ y_j] + Σ_{{i,j}∈Ediff} c_ij^diff · 1[y_i = y_j]

where c_ij^same is the matched-region classifier's estimate that the two regions share the same illumination, c_ij^diff its estimate that their illumination differs, and c_i(y_i) the single-region classifier's estimate of whether region i is a shadow; {i,j} ∈ Esame denotes a pair of regions under the same illumination and {i,j} ∈ Ediff a pair under different illumination; y ∈ {−1, 1}^n, and y_i = 1 indicates that region i is a shadow region.
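Since the coefficients of the energy are learned, a tiny brute-force minimiser makes its structure concrete. This is an illustrative stand-in for the graph-cut solver, and the unary and pairwise costs in the example are made-up numbers.

```python
from itertools import product

def solve_energy(unary, same_pairs, diff_pairs, w_same=1.0, w_diff=1.0):
    """Exhaustive minimiser for a tiny instance of the shadow-labelling
    energy (a stand-in for graph cuts, feasible only for few regions).

    unary:      list of (cost_if_lit, cost_if_shadow) per region
    same_pairs: region pairs the matcher says share illumination
                (penalised when their labels differ)
    diff_pairs: pairs the matcher says differ in illumination
                (penalised when their labels agree)
    Returns the labelling y (1 = shadow, -1 = lit) of least energy."""
    n = len(unary)
    best, best_y = float("inf"), None
    for y in product((-1, 1), repeat=n):
        e = sum(unary[i][0] if y[i] == -1 else unary[i][1] for i in range(n))
        e += sum(w_same for i, j in same_pairs if y[i] != y[j])
        e += sum(w_diff for i, j in diff_pairs if y[i] == y[j])
        if e < best:
            best, best_y = e, y
    return list(best_y)
```

With one clearly lit region, one clearly shadowed region, and an ambiguous one, the pairwise terms pull the ambiguous region toward the label consistent with its matches, which is precisely the role of Esame and Ediff.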
Step S400 is further described as follows: shadow labels are computed with a matting algorithm, which assumes that an image Ii is a blend of a foreground Fi and a background Bi:
Ii = ki·Fi + (1 − ki)·Bi  (18)
= ki·(Ld·Ri + Le·Ri) + (1 − ki)·Le·Ri  (19)
where Ld is the direct light, Le the ambient light, ki the shadow label, and Ri the reflectance of point i.
The foreground is marked as non-shadow and the background as shadow, and ki is obtained by minimising the following energy equation:

E(k) = kᵀLk + λ(k − ŷ)ᵀD(k − ŷ)  (20)

ki, the label of point i, is obtained from (20); k is the vector formed by the ki, so computing k yields every ki.
Here kᵀ is the transpose of k, and λ is a large constant whose value is set in practice. ŷ is the vector of labels yk produced by the energy equation of step S300; each of its elements equals the yk obtained there and is either 0 or 1. Note that this formula serves to compute k: the result k is a vector whose elements are the labels of the individual pixels, but the value of each element now ranges over [0, 1]. In other words, for some pixels the label changes from 0 or 1 to a value between 0 and 1: labels that remain 0 or 1 mark the shadow and non-shadow regions, while label values strictly between 0 and 1 mark the penumbra region.
where L is the matting Laplacian and D(i, i) is a diagonal matrix: D(i, i) = 1 indicates that pixel i lies on the edge of a shadow region, and D(i, i) = 0 denotes all other points;
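Minimising the quadratic energy (20) amounts to one linear solve, since the minimiser satisfies (L + λD)k = λDŷ. The dense sketch below uses a path-graph Laplacian over a hypothetical 5-pixel strip as a stand-in for the matting Laplacian.

```python
import numpy as np

def solve_shadow_matte(L, d, y_hat, lam=100.0):
    """Minimise k^T L k + lam * (k - y_hat)^T D (k - y_hat) via its
    normal equations (L + lam * D) k = lam * D y_hat.

    L:     matting Laplacian (any symmetric PSD smoothness matrix here)
    d:     diagonal of D (1 where the detected label is trusted, 0 at
           shadow-edge pixels left free to take fractional values)
    y_hat: hard labels from detection (0 = shadow, 1 = lit)."""
    L = np.asarray(L, float)
    D = np.diag(np.asarray(d, float))
    y_hat = np.asarray(y_hat, float)
    return np.linalg.solve(L + lam * D, lam * D @ y_hat)
```

In the test, the middle pixel is unconstrained (its D entry is 0) and the smoothness term drives it to an intermediate value, which is exactly the fractional penumbra label described above.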
The obtained labels are then used to relight the shadow regions, restoring their illumination:
According to the shadow model, if a pixel is relit, then

Ii^(shadow-free) = ((r + 1) / (ki·r + 1)) · Ii  (22)
where r = Ld/Le is the ratio of direct light Ld to ambient light Le, and Ii is the original value of pixel i; hence the shadow can be removed once r has been computed;
Given
Ii = (ki·Ld + Le)·Ri  (24)
Ij = (kj·Ld + Le)·Rj  (25)
if the two pixels have the same reflectance, i.e. Ri = Rj, then

Ii / Ij = (ki·r + 1) / (kj·r + 1)  (26)
The purpose of formulas (24) and (25) is to compute r. The idea is to find two points i and j of the same material (equal reflectance, Ri = Rj) under the same Ld and Le but with different shadow labels k; the mathematical relation between them then yields r. Ij denotes the pixel value of such a point j.
Compared with the prior art, the present invention has the following beneficial effects:
1. By combining image segmentation and shadow detection through the same shadow features, shadow and non-shadow regions are well separated, making the segmentation more accurate and the shadow-region detection more accurate and effective.
2. The shadow contour is detected fairly completely, and the detection algorithm is robust against complex backgrounds. The shadow-free image not only preserves good texture inside the former shadow region but is also smooth at the shadow edges, showing that the algorithm achieves a good illumination transition in the penumbra region.
3. A region-merging algorithm driven by contour information is proposed, and the size of the merged regions can be controlled conveniently through the merging parameter; together these advantages improve the shadow-removal result.
Brief Description of the Drawings
Fig. 1 is the flow chart of the algorithm of the present invention;
(a) is the original image; (b) the segmented image; (c) the shadow and non-shadow regions detected by the single-region classifier, white marking shadow; (d) the shadow and non-shadow regions detected by the matched-region classifier; (e) the grayscale map of the shadow labels ki computed by the matting algorithm; (f) the image after shadow removal.
Fig. 2 is the flow chart of the region-merging algorithm;
Fig. 3 is the region-matching diagram; red lines mark matched regions under the same illumination, blue lines matched regions under different illumination.
Detailed Description of the Embodiments
The present invention is further described below with reference to the accompanying drawings and specific embodiments.
Based on the characteristics of shadows, the invention fuses shadow edge detection with shadow region detection to segment shadow and non-shadow regions. The algorithm has two stages: shadow edge detection and shadow segmentation.
Embodiment: shadow edge detection
First, the shadow edge detector Pb is built from the oriented gradient information G(x, y, θ). It is constructed by drawing a circle of radius r centred at a point (x, y) of the image; a diameter at orientation θ splits the circle into two half-discs. Finally, the oriented gradient G is obtained as the χ² distance between the histograms of the two half-discs:

χ²(g, h) = ½ Σ_i (g(i) − h(i))² / (g(i) + h(i))  (1)
where g and h denote the histograms of the two half-discs, and i runs over the histogram bins.
The detector computes the gradient information G(x, y, θ) separately on two channels, brightness and textons. The textons are computed by filtering with even and odd Gaussian-derivative filters at 16 orientations; the filter responses are then clustered into 32 groups with the k-means algorithm.
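The texton pipeline (oriented odd/even filtering, then clustering the per-pixel response vectors) can be sketched as follows. The 5×5 filter shapes, the circular FFT convolution and the tiny k-means are simplifications of the 16-orientation, 32-texton setup described above, chosen only to keep the sketch self-contained.

```python
import numpy as np

def kmeans(X, k, iters=10, seed=0):
    """Plain k-means, a stand-in for the 32-texton clustering."""
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centres[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centres[j] = X[labels == j].mean(axis=0)
    return labels

def texton_map(image, thetas=8, k=4):
    """Assign each pixel a texton id: filter with oriented odd/even
    pairs (circular FFT convolution for brevity), then cluster the
    per-pixel response vectors with k-means."""
    H, W = image.shape
    yy, xx = np.mgrid[-2:3, -2:3]
    responses = []
    for t in range(thetas):
        a = np.pi * t / thetas
        u = xx * np.cos(a) + yy * np.sin(a)
        env = np.exp(-(xx ** 2 + yy ** 2) / 2.0)
        for f in (u * env, (u ** 2 - 1) * env):   # odd / even pair
            kern = np.zeros((H, W))
            kern[:5, :5] = f
            r = np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(kern)))
            responses.append(r.ravel())
    X = np.stack(responses, axis=1)
    return kmeans(X, k).reshape(H, W)
```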
Next, the computed detectors Pb are combined across scales and channels into the local cues mPb:

mPb(x, y, θ) = Σ_s Σ_i α_{i,s} · G_{i,σ(i,s)}(x, y, θ)  (2)
where s indexes the circle radius and i the feature channel (brightness, textons); G_{i,σ(i,s)}(x, y, θ) measures the difference between the two half-discs of radius σ(i,s) centred at (x, y) and split at orientation θ, and α_{i,s} is the weight of each gradient signal.
At every point, the maximum of the gradient information G over orientations is taken as that point's mPb value, representing the final local cue:

mPb(x, y) = max_θ mPb(x, y, θ)  (3)
Then, to extract contour information more effectively, global cues are obtained with the standard normalised-cuts method. A sparse affinity matrix is built from mPb, and the required global cues are obtained by computing its eigenvalues and eigenvectors. The sparse matrix links every pair of pixels within a radius of r = 5 pixels:

Wij = exp( −max_{p∈ij} mPb(p) / ρ )  (4)
where Wij encodes the affinity between pixels i and j, the maximum being taken over the pixels p on the line segment joining them, and ρ = 0.1; defining Dii = Σ_j Wij, the system
(D − W)v = λDv  (5)
is solved to obtain the eigenvectors {v0, v1, …, vn} and eigenvalues 0 = λ0 ≤ λ1 ≤ … ≤ λn.
Although normalised cuts do not segment the image well on their own, they reflect its contours well. Each eigenvector is therefore regarded as an image, its oriented information is obtained by Gaussian directional filtering, and the oriented information of the different eigenvectors is linearly superposed:

sPb(x, y, θ) = Σ_{k=1}^{n} (1/√λ_k) · ∇_θ v_k(x, y)  (6)
mPb and sPb carry different edge information: mPb covers all edges and is a local cue, whereas sPb expresses the most salient edges and is a global cue. Combining the two therefore analyses the image contour information effectively; the invention defines the combination as gPb:

gPb(x, y, θ) = Σ_s Σ_i β_{i,s} · G_{i,σ(i,s)}(x, y, θ) + γ · sPb(x, y, θ)  (7)
where β_{i,s} and γ are the coefficients of mPb and sPb, respectively.
Embodiment: image segmentation
gPb represents contours effectively, but these contours are not fully closed and therefore cannot segment the image by themselves. For better segmentation, the watershed algorithm is applied to the contour information gPb to produce a region partition.
To describe the watershed algorithm, first consider an arbitrary contour detector E(x, y, θ). It estimates the probability that a point (x, y) is a contour point at orientation θ; the larger the value, the more likely the point is a contour. For each point, the maximum contour response is taken:

E(x, y) = max_θ E(x, y, θ)  (8)
Next, using mathematical morphology, each region is computed from the local minima of E(x, y) as "catchment basins". Each basin corresponds to one region, denoted P0; the locus where two basins meet is a "watershed", denoted K0. The watershed algorithm, however, over-segments, marking as watershed places that should not be edges. To resolve this, a new region-merging algorithm is used. Its premise is that the watershed algorithm always over-segments, i.e. every initial catchment basin needs to be merged at least once.
The region-merging algorithm is described as follows: define an undirected graph G = (P0, K0, W(K0), E(P0)), where W(K0) is the weight of each watershed arc, given by the total energy of the points on the arc divided by the number of points on it; E(P0) is the energy of each catchment basin, initially zero for every basin. Note that in the graph every watershed arc separates exactly two regions, and W(K0) describes the dissimilarity between two adjacent regions. The arcs are stored in a queue ordered by weight from smallest to largest; the flow chart of the algorithm is shown in Fig. 2.
Suppose R1 and R2 are separated by the edge C*, and let R = R1 ∪ R2. The merging condition is: if min{E(R1), E(R2)} ≠ 0, then
W(C*) ≤ τ · min{E(R1), E(R2)}  (9)
or min{E(R1), E(R2)} = 0  (10)
where τ is a constant. Adjusting τ tunes the merging condition and thereby the size of the final regions: the larger τ, the larger the final merged regions.
E(R), P0 and K0 are updated as follows:
E(R) = max{E(R1), E(R2), W(C*)}  (11)
P0 ← P0 \ {R1, R2} ∪ R  (12)
K0 ← K0 \ {C*}  (13)
Embodiment: shadow detection
Single-region recognition: an SVM classifier is trained to estimate the probability that a single region is a shadow. The shadow parts of the training set are labelled by hand; the classifier classifies using the brightness and texton features of the shadow and outputs the probability that the region is a shadow.
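A self-contained stand-in for the single-region classifier is sketched below: a tiny Pegasos-style subgradient trainer for a linear SVM on a toy 1-D brightness feature. The real classifier uses brightness and texton histograms and an off-the-shelf SVM; the feature, labels and hyperparameters here are hypothetical.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=300, seed=0):
    """Subgradient (Pegasos-style) training of a linear SVM.
    X: (n, d) region features; y: labels in {+1, -1} (+1 = shadow)."""
    rng = np.random.default_rng(seed)
    Xa = np.hstack([X, np.ones((len(X), 1))])   # fold the bias into w
    w, t = np.zeros(Xa.shape[1]), 0
    for _ in range(epochs):
        for i in rng.permutation(len(Xa)):
            t += 1
            eta = 1.0 / (lam * t)
            grad = lam * w                      # regulariser subgradient
            if y[i] * (Xa[i] @ w) < 1:          # hinge-loss violation
                grad = grad - y[i] * Xa[i]
            w = w - eta * grad
    return w

def shadow_score(w, x):
    """Signed score; > 0 classifies the region as shadow here."""
    return float(np.append(x, 1.0) @ w)
```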
Matched-region recognition: to decide whether a region is a shadow, it should be compared with regions of similar texture. If the two regions have similar brightness, they are under the same illumination intensity; if their brightness differs, the darker region is taken to be the shadow.
The present invention trains the classifier on four features to judge shadow regions:
① the χ² distance of the brightness and texton histograms;
② the average RGB ratio
When the matched regions compared are of the same material, the non-shadow region has the higher values in all three channels. The ratios are computed as follows:

ρR = Ravg1 / Ravg2,  ρG = Gavg1 / Gavg2,  ρB = Bavg1 / Bavg2  (14)–(16)
where Ravg1 denotes the mean of the R channel over the first region (Ravg2, Gavg1, Gavg2, Bavg1 and Bavg2 are defined analogously).
③ colour alignment
For a shadow/non-shadow pair of the same material, the colours stay aligned in RGB space. This feature is obtained by computing ρR/ρG and ρG/ρB.
④ normalised region distance
Since whether matched regions share the same material is not necessarily related to whether they are adjacent, this parameter is also included among the training features. It is computed from the Euclidean distance between the matched regions.
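Features ②–④ reduce to simple per-region statistics. The sketch below computes them for one candidate pair; the region samples, centroids and image-diagonal length are hypothetical, and the χ² histogram feature of ① is omitted for brevity.

```python
import numpy as np

def pair_features(region1, region2, c1, c2, diag):
    """Feature vector for a candidate shadow/lit region pair.

    region1/region2: (n, 3) RGB samples of the two regions
    c1, c2:          region centroids (x, y); diag: image diagonal
    Returns [rhoR, rhoG, rhoB, rhoR/rhoG, rhoG/rhoB, norm_dist]."""
    m1 = np.asarray(region1, float).mean(axis=0)
    m2 = np.asarray(region2, float).mean(axis=0)
    rho = m1 / m2                       # per-channel average ratio (②)
    align = np.array([rho[0] / rho[1], rho[1] / rho[2]])  # alignment (③)
    dist = np.linalg.norm(np.asarray(c1, float) - np.asarray(c2, float)) / diag
    return np.concatenate([rho, align, [dist]])           # distance (④)
```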
The following energy equation is constructed, and the graph-cut algorithm is used to obtain the final shadow detection result:

min_y  Σ_i c_i(y_i) + Σ_{{i,j}∈Esame} c_ij^same · 1[y_i ≠ y_j] + Σ_{{i,j}∈Ediff} c_ij^diff · 1[y_i = y_j]

where c_ij^same is the matched-region classifier's estimate that the two regions share the same illumination, c_ij^diff its estimate that their illumination differs, and c_i(y_i) the single-region classifier's estimate of whether region i is a shadow; {i,j} ∈ Esame denotes a pair of regions under the same illumination and {i,j} ∈ Ediff a pair under different illumination; y ∈ {−1, 1}^n, and y_i = 1 indicates that region i is a shadow region.
Embodiment: shadow removal
For effective shadow removal, an appropriate shadow model must be established. The illumination of the shadow model is determined jointly by direct light and ambient light:
Ii = (ki·Ld + Le)·Ri  (17)
For pixel i, ki denotes the shadow label. ki = 0 means the pixel lies inside a shadow region; ki = 1 means it lies in a non-shadow region; 0 < ki < 1 means it lies in the penumbra region.
Although shadow detection has already assigned each pixel a label (0 or 1), in real scenes the shadow edge is a gradual transition from non-shadow to shadow. To remove shadow edges better, the matting algorithm is used to compute the penumbra region:
Ii = ki·Fi + (1 − ki)·Bi  (18)
= ki·(Ld·Ri + Le·Ri) + (1 − ki)·Le·Ri  (19)
The foreground is marked as non-shadow and the background as shadow, and ki is obtained by minimising the energy

E(k) = kᵀLk + λ(k − ŷ)ᵀD(k − ŷ)  (20)

whose minimiser satisfies (L + λD)k = λDŷ  (21)
where L is the matting Laplacian and D(i, i) is a diagonal matrix. In the experiments, D(i, i) = 1 indicates that pixel i lies on the edge of a shadow region, and D(i, i) = 0 denotes all other points.
According to the shadow model, if a pixel is relit, then

Ii^(shadow-free) = ((r + 1) / (ki·r + 1)) · Ii  (22)
where r = Ld/Le is the ratio of direct light Ld to ambient light Le, and Ii is the original value of pixel i. The shadow can therefore be removed once r has been computed.
Given
Ii = (ki·Ld + Le)·Ri  (24)
Ij = (kj·Ld + Le)·Rj  (25)
if the two pixels have the same reflectance, i.e. Ri = Rj, then

Ii / Ij = (ki·r + 1) / (kj·r + 1)  (26)
The most similar pixels are selected on the two sides of adjacent shadow and non-shadow regions, and the value of r is then computed from formula (26).
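Formulae (24)–(26) and the relighting relation (22) can be exercised end to end on hypothetical pixel values: a fully shadowed pixel (k = 0) paired with a lit twin of the same material (k = 1) determines r, after which any pixel, including penumbra pixels with fractional k, can be relit.

```python
def estimate_r(I_i, k_i, I_j, k_j):
    """Solve I_i / I_j = (k_i*r + 1) / (k_j*r + 1), i.e. formula (26),
    for r = Ld/Le, given two same-material pixels with known labels."""
    return (I_j - I_i) / (I_i * k_j - I_j * k_i)

def relight(I, k, r):
    """Lift pixel I with matte label k back to full illumination,
    I_free = ((r + 1) / (k*r + 1)) * I, per formula (22)."""
    return (r + 1.0) / (k * r + 1.0) * I

# hypothetical example: shadowed pixel 40 (k=0), lit twin 120 (k=1)
r = estimate_r(40.0, 0.0, 120.0, 1.0)   # direct/ambient ratio
```

Note that the same r relights a penumbra pixel consistently: with r = 2, a half-shadowed twin (k = 0.5) of the same material has value 80 and is lifted to the lit value 120.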
The above is only a detailed description of the present invention in combination with specific solutions, and the specific implementation of the invention shall not be deemed limited to these descriptions. For a person of ordinary skill in the art to which the present invention belongs, simple deductions and substitutions made without departing from the concept of the present invention shall all be regarded as falling within the protection scope of the present invention.
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201410675195.XA CN104463853A (en) | 2014-11-22 | 2014-11-22 | Shadow detection and removal algorithm based on image segmentation |
| Publication Number | Publication Date |
|---|---|
| CN104463853A true | 2015-03-25 |
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201410675195.XA | Pending CN104463853A (en) | 2014-11-22 | 2014-11-22 |
| Country | Link |
|---|---|
| CN (1) | CN104463853A (en) |
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN103295013A (en)* | 2013-05-13 | 2013-09-11 | Tianjin University | Paired-region based single-image shadow detection method |
| Title |
|---|
| Pablo Arbeláez et al., "Contour Detection and Hierarchical Image Segmentation", IEEE Transactions on Pattern Analysis and Machine Intelligence* |
| Ruiqi Guo et al., "Single-Image Shadow Detection and Removal using Paired Regions", Computer Vision and Pattern Recognition (CVPR)* |
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN106408648A (en)* | 2015-08-03 | 2017-02-15 | Qingdao Hisense Medical Equipment Co., Ltd. | Three-dimensional reconstruction method and device for medical tissue slice images |
| CN106488180A (en)* | 2015-08-31 | 2017-03-08 | Shanghai Ulucu Electronic Technology Co., Ltd. | Video shadow detection method |
| CN105447501A (en)* | 2015-11-02 | 2016-03-30 | Beijing Megvii Technology Co., Ltd. | Clustering-based license image shadow detection method and device |
| CN105447501B (en)* | 2015-11-02 | 2019-03-01 | Xuzhou Megvii Data Technology Co., Ltd. | Clustering-based license image shadow detection method and device |
| CN106023113A (en)* | 2016-05-27 | 2016-10-12 | Harbin Institute of Technology | Non-local-sparsity-based shadow region restoration method for high-resolution satellite images |
| CN106023113B (en)* | 2016-05-27 | 2018-12-14 | Harbin Institute of Technology | Non-local-sparsity-based shadow region restoration method for high-resolution satellite images |
| CN106295570B (en)* | 2016-08-11 | 2019-09-13 | Beijing Baofeng Mojing Technology Co., Ltd. | Interactive occlusion filtering system and method |
| CN106295570A (en)* | 2016-08-11 | 2017-01-04 | Beijing Baofeng Mojing Technology Co., Ltd. | Interactive occlusion filtering system and method |
| CN107507146B (en)* | 2017-08-28 | 2021-04-16 | Wuhan University | Method for removing soft shadows in natural images |
| CN107507146A (en)* | 2017-08-28 | 2017-12-22 | Wuhan University | Method for removing soft shadows in natural images |
| US10504282B2 (en) | 2018-03-21 | 2019-12-10 | Zoox, Inc. | Generating maps without shadows using geometry |
| US10699477B2 (en)* | 2018-03-21 | 2020-06-30 | Zoox, Inc. | Generating maps without shadows |
| CN109493406B (en)* | 2018-11-02 | 2022-11-11 | Sichuan University | Fast percentage-closer soft shadow rendering method |
| CN109493406A (en)* | 2018-11-02 | 2019-03-19 | Sichuan University | Fast percentage-closer soft shadow rendering method |
| WO2020119618A1 (en)* | 2018-12-12 | 2020-06-18 | Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences | Image inpainting detection method employing texture feature fusion |
| CN111526263B (en)* | 2019-02-01 | 2022-03-18 | Lite-On Electronics (Guangzhou) Co., Ltd. | Image processing method, device and computer system |
| CN111526263A (en)* | 2019-02-01 | 2020-08-11 | Lite-On Electronics (Guangzhou) Co., Ltd. | Image processing method, device and computer system |
| CN114175098A (en)* | 2019-07-30 | 2022-03-11 | Golfzon Co., Ltd. | Golf club detection method and sensing device using the same |
| CN110427950B (en)* | 2019-08-01 | 2021-08-27 | Chongqing Normal University | Shadow detection method for purple soil images |
| CN110427950A (en)* | 2019-08-01 | 2019-11-08 | Chongqing Normal University | Shadow detection method for purple soil images |
| CN110765875A (en)* | 2019-09-20 | 2020-02-07 | Zhejiang Dahua Technology Co., Ltd. | Method, device and apparatus for detecting the boundary of a traffic target |
| CN110765875B (en)* | 2019-09-20 | 2022-04-19 | Zhejiang Dahua Technology Co., Ltd. | Method, device and apparatus for detecting the boundary of a traffic target |
| WO2021147408A1 (en)* | 2020-01-22 | 2021-07-29 | Tencent Technology (Shenzhen) Company Limited | Pixel point identification method and apparatus, illumination rendering method and apparatus, electronic device and storage medium |
| JP7381738B2 (en) | 2020-01-22 | 2023-11-15 | Tencent Technology (Shenzhen) Company Limited | Pixel point recognition and light rendering methods, devices, electronic equipment and computer programs |
| US12039662B2 (en) | 2020-01-22 | 2024-07-16 | Tencent Technology (Shenzhen) Company Limited | Method and apparatus for recognizing pixel point, illumination rendering method and apparatus, electronic device, and storage medium |
| JP2023501102A (en)* | 2020-01-22 | 2023-01-18 | Tencent Technology (Shenzhen) Company Limited | Pixel point recognition and light rendering method, apparatus, electronic device and computer program |
| CN111738931A (en)* | 2020-05-12 | 2020-10-02 | Hebei University | Shadow removal method for UAV aerial images of photovoltaic arrays |
| CN112598592A (en)* | 2020-12-24 | 2021-04-02 | Guangdong Bozhilin Robot Co., Ltd. | Image shadow removal method and device, electronic device and storage medium |
| CN113256666A (en)* | 2021-07-19 | 2021-08-13 | Guangzhou ZWSOFT Co., Ltd. | Model-shadow-based contour line generation method, system, device and storage medium |
| CN114359684A (en)* | 2021-12-17 | 2022-04-15 | Zhejiang Dahua Technology Co., Ltd. | Image shadow evaluation method and device |
| CN114359684B (en)* | 2021-12-17 | 2025-09-05 | Zhejiang Dahua Technology Co., Ltd. | Image shadow evaluation method and device |
| CN114742836B (en)* | 2022-06-13 | 2022-09-09 | Zhejiang Taimei Medical Technology Co., Ltd. | Medical image processing method and device and computer equipment |
| CN114742836A (en)* | 2022-06-13 | 2022-07-12 | Zhejiang Taimei Medical Technology Co., Ltd. | Medical image processing method and device and computer equipment |
| Publication | Publication Date | Title |
|---|---|---|
| CN104463853A (en) | Shadow detection and removal algorithm based on image segmentation | |
| CN106548463B (en) | Automatic dehazing method and system for sea fog images based on dark channel and Retinex | |
| CN103942794B (en) | Confidence-based collaborative image matting method | |
| CN110268420B (en) | Computer-implemented method of detecting a foreign object on a background object in an image, device for detecting a foreign object on a background object in an image, and computer program product | |
| CN101840577B (en) | Image automatic segmentation method based on graph cut | |
| CN104834898B (en) | Quality classification method for portrait photographs | |
| CN104050471B (en) | Natural scene character detection method and system | |
| CN101916370A (en) | Image processing method of non-feature region in face detection | |
| CN104778453B (en) | Night-time pedestrian detection method based on infrared pedestrian brightness statistical features | |
| CN105809121A (en) | Multi-feature collaborative traffic sign detection and recognition method | |
| CN108009518A (en) | Hierarchical traffic sign recognition method based on fast binary convolutional neural networks | |
| CN110268442B (en) | Computer-implemented method of detecting foreign objects on background objects in an image, apparatus for detecting foreign objects on background objects in an image, and computer program product | |
| CN106981068B (en) | An Interactive Image Segmentation Method Combined with Pixels and Superpixels | |
| CN104318262A (en) | Method and system for skin replacement based on face photos | |
| CN103902985B (en) | High-robustness real-time lane detection algorithm based on ROI | |
| WO2021098163A1 (en) | Corner-based aerial target detection method | |
| CN103119625B (en) | Method and device for video character segmentation | |
| Niu et al. | Image segmentation algorithm for disease detection of wheat leaves | |
| CN104599271A (en) | CIE Lab color space based gray threshold segmentation method | |
| CN105654440B (en) | Fast single-image defogging algorithm and system based on regression model | |
| CN106156777A (en) | Textual image detection method and device | |
| CN112200746A (en) | Dehazing method and device for foggy traffic scene images | |
| CN104318266A (en) | Early warning method based on intelligent image analysis and processing | |
| CN115601358B (en) | Tongue picture image segmentation method under natural light environment | |
| CN111489330A (en) | Weak and small target detection method based on multi-source information fusion |
| Date | Code | Title | Description |
|---|---|---|---|
| C06 | Publication | ||
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| C53 | Correction of patent of invention or patent application | ||
| CB03 | Change of inventor or designer information | Inventors after: Liu Yanli; Chen Zhuo. Inventors before: Liu Yanli; Chen Zhuo | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 2015-03-25 |