Technical Field
The invention relates to the field of image fusion quality evaluation, and in particular to a method for evaluating the fusion quality of infrared and visible light images.
Background
Infrared and visible light image fusion is an important branch of image fusion and a focus of current image fusion research. Infrared sensors image by thermal radiation, which helps highlight the target regions in a scene but does not characterise scene detail well; visible light sensors image by the light reflected from objects and can provide detailed information about the scene containing the target. A fused infrared and visible light image therefore carries the strong target features of the infrared image while preserving the detail of the visible light image. Compared with the rapid development of image fusion techniques, fused image quality evaluation has developed relatively slowly. Quality evaluation not only compares the performance of fusion methods but also guides their improvement, so an evaluation method for infrared and visible light image fusion quality is essential for the subsequent application of fused images.
A search of the prior art found that Chinese patent publication CN103049893A, published on 2013-04-17, discloses a method and device for image fusion quality evaluation comprising the following steps: Step 1) acquire the source images and their fused image; Step 2) segment each source image by fuzzy clustering and merge the segmented images into one overall segmentation map; Step 3) obtain the visual variance saliency map of each source image, compute a weight map from the visual variance saliency maps, and compute the saliency coefficient of each region of the source images and the fused image from the visual variance saliency maps and the overall segmentation map; Step 4) from the overall segmentation map, the weight map and the saliency coefficients, compute the weighted structural similarity between the fused image and the source images over each region; Step 5) sum the weighted structural similarities of all regions to obtain the evaluation index of fused image quality.
Chinese patent publication CN104008543A, published on 2014-08-27, discloses an image fusion quality evaluation method comprising the following steps: Step 1) acquire source images that are synchronised in time and cover the same area in space; Step 2) preprocess the images using quadratic polynomial registration and nearest-neighbour resampling; Step 3) fuse the source images with methods combining multi-scale analysis and component substitution, obtaining fused images under different fusion methods; Step 4) evaluate fusion quality: compute the cross-entropy and the structural similarity between the fused image and the source images, build a weighted function model of cross-entropy and structural similarity, and compute the overall fusion quality score under preset weights.
Although both of the above techniques can evaluate fused image quality, they perform poorly on infrared and visible light image fusion, for the following reasons:
1. Both techniques evaluate fused image quality by computing the similarity between the source images and the fused image, and both ignore the detail features of the fused image itself;
2. One of the main purposes of infrared and visible light image fusion is to preserve the target features of the infrared source image, yet neither technique evaluates the target features of the fused image;
3. The method and device of CN103049893A segment the infrared image by fuzzy clustering, but that segmentation fails on images with complex background interference or a low signal-to-noise ratio, which degrades the subsequent evaluation results;
4. Both CN104008543A and CN103049893A use structural similarity to measure the similarity between the fused image and the source images and evaluate fused image quality with it. When both images are relatively blurred, however, the evaluation results obtained with these techniques agree poorly with subjective evaluation.
Therefore, a method for evaluating infrared and visible light image fusion quality that reflects both the global detail features and the local target features of the fused image, and that agrees well with subjective evaluation, is essential for judging the quality of infrared and visible light image fusion.
Summary of the Invention
To address the above deficiencies of the prior art, the present invention proposes a method for evaluating the fusion quality of infrared and visible light images.
To solve the above technical problem, the present invention adopts the following technical solution:
A method for evaluating the fusion quality of infrared and visible light images, comprising the following steps:
Step 1: acquire the infrared source image A, the visible light source image B and the fused image F;
Step 2: compute the global detail feature index of the fused image;
Step 3: compute the local target feature index of the fused image;
Step 4: compute the weighted sum of the global detail feature index and the local target feature index of the fused image to obtain the fusion quality evaluation index.
As a further refinement of the method, the computation of the global detail feature index of the fused image in step 2 proceeds as follows:
Step 2.1: Compute the gradient magnitude of the fused image F.
For the fused image F, compute its horizontal gradient ∇Fx(x,y) and vertical gradient ∇Fy(x,y). Here, the sum of the two is taken as the gradient:
∇F(x,y) = |∇Fx(x,y)| + |∇Fy(x,y)|
Step 2.2: Compute the information entropy of the gradient magnitude map of the fused image and take it as the global detail feature index Rglobal:
Rglobal = -Σ pi·log2(pi), i = 0, 1, …, L-1
where pi is the probability that a pixel with grey value i appears in the gradient map of the fused image and L is the number of grey levels of the image.
As a further refinement of the method, the computation of the local target feature index of the fused image in step 3 proceeds as follows:
Step 3.1: Region division of the infrared source image and the fused image.
A frequency-domain salient region extraction method divides the infrared source image into a target region and a background region. Its steps are as follows.
A Gaussian band-pass filter extracts the salient features of the infrared source image. The filter is defined as a difference of Gaussians:
G(x,y) = 1/(2π)·[(1/σ1²)·exp(-(x²+y²)/(2σ1²)) - (1/σ2²)·exp(-(x²+y²)/(2σ2²))]
where σ1 and σ2 (σ1 > σ2) are the standard deviations of the two Gaussian kernels. To retain all frequency values in the low band as far as possible, σ1 is set to infinity; to remove the high-frequency noise and texture of the image, the discrete Gaussian values are first fitted with a small Gaussian kernel filter.
The saliency map S of the image is then computed as:
S(x,y) = |Aμ - Awhc(x,y)|
where Aμ is the mean grey value of the infrared source image, Awhc(x,y) is the infrared source image after Gaussian filtering, and |·| is the L1 norm.
The region growing method then separates the target of the infrared source image from the background, as follows: 1) select the point with the largest grey value in the saliency map as the seed point; 2) with the seed point as the centre, examine its 4-neighbourhood pixels and merge those that satisfy the growth rule; the difference between a neighbouring pixel and the grey-level mean of the already-segmented region serves as the similarity measure, and the neighbouring point with the smallest difference is merged into the segmented region; 3) growth stops when the similarity measure exceeds the segmentation threshold.
Step 3.2: Compute the edge structural similarity ESSIM(Alocal,Flocal) between the target region of the fused image and the target region of the infrared source image:
ESSIM(Alocal,Flocal) = [l(Alocal,Flocal)]^α·[c(Alocal,Flocal)]^β·[s(Alocal,Flocal)]^γ·[e(Alocal,Flocal)]^η
l(Alocal,Flocal) = (2μAμF + c1)/(μA² + μF² + c1)
c(Alocal,Flocal) = (2σAσF + c2)/(σA² + σF² + c2)
s(Alocal,Flocal) = (σAF + c3)/(σAσF + c3)
e(Alocal,Flocal) = (σAFᵉ + c4)/(σAᵉσFᵉ + c4)
where Alocal and Flocal are the target region of the infrared source image and the target region of the fused image; l(Alocal,Flocal), c(Alocal,Flocal), s(Alocal,Flocal) and e(Alocal,Flocal) are the luminance, contrast, structure and edge comparison components of Alocal and Flocal; the parameters α, β, γ, η are their weights, usually α=β=γ=η=1; μA and μF are the pixel means of Alocal and Flocal; σA² and σF² are their pixel variances; σAF is the covariance between Alocal and Flocal; σAᵉ² and σFᵉ² are the grey-level variances of the edge images of the two regions and σAFᵉ is their grey-level covariance, the edge images being obtained with the Sobel edge detector; c1, c2, c3, c4 are constants introduced to avoid instability when a denominator approaches zero.
Step 3.3: Compute the contrast between the target and the background of the fused image.
First, compute the mean-subtracted contrast-normalised (MSCN) coefficients of the fused image:
F̂(x,y) = [F(x,y) - μ(x,y)] / [σ(x,y) + C]
μ(x,y) = Σk Σl w(k,l)·F(x+k,y+l),  k = -K…K, l = -L…L
σ(x,y) = sqrt( Σk Σl w(k,l)·[F(x+k,y+l) - μ(x,y)]² )
where the constant C avoids instability when the denominator tends to zero in flat image regions, and w = {w(k,l)} is a two-dimensional circularly symmetric Gaussian weighting function with K = L = 3.
Next, compute the Weber contrast between the target and the background: Cw = |Lt - Lb| / Lb, where Lt and Lb are the means of the MSCN coefficients of the pixels in the target region and in the adjacent background region, respectively.
Step 3.4: Compute the local target feature evaluation index of the fused image.
Add the edge structural similarity ESSIM(Alocal,Flocal) between the target regions of the fused image and the infrared source image to the target-background contrast Cw of the fused image to obtain the local target feature evaluation index Rlocal:
Rlocal = ESSIM(Alocal,Flocal) + Cw
Compared with the prior art, the above technical solution of the present invention has the following beneficial effects:
1. The invention considers both the global detail features of the infrared and visible light fusion and the target features of the fused image, reflecting the purpose of infrared and visible light image fusion;
2. The invention divides the infrared source image into a target region and a background region with a frequency-domain salient region extraction method, which segments correctly even images with complex background interference or a low signal-to-noise ratio;
3. The invention first computes the MSCN coefficients of the fused image to model the contrast gain masking process of human vision, and then computes the Weber contrast between target and background, giving an effective objective evaluation of the perceived target-background contrast of the fused image;
4. The invention uses the edge structural similarity to measure the correlation between the target regions of the fused image and the infrared source image, which to some extent allows an effective quality evaluation even of relatively blurred infrared and visible light fusions;
5. Simulation experiments verify that the proposed evaluation method agrees well with subjective evaluation and better reflects the quality of infrared and visible light image fusion.
Brief Description of the Drawings
Fig. 1 is a flow chart of the infrared and visible light image fusion quality evaluation method provided by the invention;
Fig. 2 shows a fused image together with its gradient magnitude map and gradient magnitude histogram;
Fig. 3 shows the segmentation of the infrared source image and the fused images;
Fig. 4 shows the first group of source images and fused images;
Fig. 5 shows the second group of source images and fused images;
Fig. 6 shows the third group of source images and fused images;
Fig. 7 shows the fourth group of source images and fused images.
Detailed Description
To make the objectives, technical solutions and advantages of the present invention clearer, the invention is described in detail below with reference to the accompanying drawings. It should be understood that the specific embodiments described here only explain the invention and do not limit it.
With reference to Fig. 1, the invention discloses a method for evaluating the fusion quality of infrared and visible light images, comprising the following steps.
Step 1: acquire the infrared source image A, the visible light source image B and the fused image F.
Step 2: compute the global detail feature index Rglobal of the fused image.
The gradient information of an image is content to which the human eye is highly sensitive, so it can serve as a feature characterising the detail information of the image.
For the fused image F, compute its horizontal gradient ∇Fx(x,y) and vertical gradient ∇Fy(x,y), and take their sum as the gradient of the fused image:
∇F(x,y) = |∇Fx(x,y)| + |∇Fy(x,y)|
Fig. 2 shows a fused image together with its gradient magnitude map and gradient magnitude histogram. Fig. 2a is a fusion by the averaging method (AVE); its gradient magnitude map and histogram are shown in Figs. 2b and 2c. Fig. 2d is a fusion by the Laplacian pyramid (LAP); its gradient magnitude map and histogram are shown in Figs. 2e and 2f.
Analysis of Fig. 2 shows that Fig. 2d has rich texture, high resolution and clear contrast, and its quality is better than that of Fig. 2a. Correspondingly, although the gradient histograms of both fused images are narrow single peaks with the maximum near zero gradient, their widths and dynamic ranges differ markedly: 1) the peak in Fig. 2f is flatter than the peak in Fig. 2c; 2) the peak heights differ: the highest peak in Fig. 2c is 14000 while that in Fig. 2f is below 8000; 3) the grey-level dynamic ranges differ: the range in Fig. 2f is wider than that in Fig. 2c, so it carries richer information.
The gradient histogram therefore allows a quantitative evaluation of how rich the details of the fused image are. The histogram studies the image gradient from a probabilistic point of view, and the richness of detail is quantified by the information entropy of the fused image gradient. The global detail feature index of the fused image is:
Rglobal = -Σ pi·log2(pi), i = 0, 1, …, L-1
where pi is the probability that a pixel with grey value i appears in the gradient map of the fused image and L is the number of grey levels of the image.
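For illustration only, the following minimal NumPy sketch computes Rglobal under stated assumptions: forward differences stand in for the horizontal and vertical gradients (the text does not fix a particular discretisation), and the gradient map is rounded and clipped to L grey levels before the entropy is taken.

```python
import numpy as np

def global_detail_index(F, L=256):
    """Global detail index Rglobal: entropy of the gradient magnitude map.

    F: 2-D grayscale fused image; L: number of grey levels used for the
    gradient histogram. Forward differences approximate the gradients,
    which is an assumption rather than something fixed by the text.
    """
    F = F.astype(np.float64)
    gx = np.abs(np.diff(F, axis=1))[:-1, :]   # horizontal gradient
    gy = np.abs(np.diff(F, axis=0))[:, :-1]   # vertical gradient
    g = gx + gy                               # sum of the two as the gradient
    g = np.clip(np.round(g), 0, L - 1).astype(int)
    p = np.bincount(g.ravel(), minlength=L) / g.size   # probability p_i
    p = p[p > 0]                              # 0*log(0) terms contribute nothing
    return float(-np.sum(p * np.log2(p)))     # Rglobal = -sum p_i log2 p_i
```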
Step 3: compute the local target feature index Rlocal of the fused image.
The specific steps for computing Rlocal are as follows.
Step 3.1: Region division of the infrared source image and the fused image.
A frequency-domain salient region extraction method divides the infrared source image into a target region and a background region, as follows.
A Gaussian band-pass filter extracts the salient features of the infrared source image. The filter is defined as a difference of Gaussians:
G(x,y) = 1/(2π)·[(1/σ1²)·exp(-(x²+y²)/(2σ1²)) - (1/σ2²)·exp(-(x²+y²)/(2σ2²))]
where σ1 and σ2 (σ1 > σ2) are the standard deviations of the two Gaussian kernels. To retain all frequency values in the low band as far as possible, σ1 is set to infinity; to remove the high-frequency noise and texture of the image, the discrete Gaussian values are first fitted with a small Gaussian kernel filter.
The saliency map S of the image is then computed as:
S(x,y) = |Aμ - Awhc(x,y)|
where Aμ is the mean grey value of the infrared source image, Awhc(x,y) is the infrared source image after Gaussian filtering, and |·| is the L1 norm.
The region growing method then separates the target of the infrared source image from the background, as follows: 1) select the point with the largest grey value in the saliency map as the seed point; 2) with the seed point as the centre, examine its 4-neighbourhood pixels and merge those that satisfy the growth rule; the difference between a neighbouring pixel and the grey-level mean of the already-segmented region serves as the similarity measure, and the neighbouring point with the smallest difference is merged into the segmented region; 3) growth stops when the similarity measure exceeds the segmentation threshold.
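A minimal sketch of the saliency extraction and the seeded region growing is given below. With σ1 set to infinity, the band-pass response reduces to the difference between the image mean and a small-kernel Gaussian blur, which is what the sketch computes. The kernel width `sigma` and the growth threshold `thresh` are assumed values, and the growth rule is simplified to merging every 4-neighbour that stays within the threshold of the running region mean.

```python
import numpy as np
from collections import deque
from scipy.ndimage import gaussian_filter

def saliency_map(A, sigma=1.0):
    """S(x,y) = |Amu - Awhc(x,y)|; with sigma1 -> infinity the DoG band-pass
    degenerates to the image mean minus a small-kernel Gaussian blur."""
    A = A.astype(np.float64)
    return np.abs(A.mean() - gaussian_filter(A, sigma))

def region_grow(S, thresh):
    """Grow the target region from the brightest point of the saliency map.

    A 4-neighbour is merged while its value stays within `thresh` of the
    running mean of the grown region; this is a simplified version of the
    growth rule described in the text.
    """
    h, w = S.shape
    seed = np.unravel_index(np.argmax(S), S.shape)   # largest grey value
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    total, n = float(S[seed]), 1
    frontier = deque([seed])
    while frontier:
        y, x = frontier.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and not mask[ny, nx]
                    and abs(S[ny, nx] - total / n) <= thresh):
                mask[ny, nx] = True
                total += S[ny, nx]
                n += 1
                frontier.append((ny, nx))
    return mask   # True on the target region, False on the background
```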
Fig. 3 shows the segmentation of the infrared source image and the fused images: Figs. 3a to 3c are the infrared source image, the AVE fused image (obtained by the averaging method) and the LAP fused image (obtained by the Laplacian pyramid), and Figs. 3d to 3f are their respective region segmentation maps.
As Fig. 3 shows, the method effectively extracts the target region of the infrared image (Fig. 3d). Mapping this target division onto the AVE and LAP fused images gives the target segmentations shown in Figs. 3e and 3f. Visual inspection shows that although both fused images reveal the thermal target, the LAP fusion preserves the target features of the infrared image better, so its fusion quality is superior to that of the AVE fusion.
Step 3.2: Compute the edge structural similarity between the target region of the fused image and the target region of the infrared source image:
ESSIM(Alocal,Flocal) = [l(Alocal,Flocal)]^α·[c(Alocal,Flocal)]^β·[s(Alocal,Flocal)]^γ·[e(Alocal,Flocal)]^η
l(Alocal,Flocal) = (2μAμF + c1)/(μA² + μF² + c1)
c(Alocal,Flocal) = (2σAσF + c2)/(σA² + σF² + c2)
s(Alocal,Flocal) = (σAF + c3)/(σAσF + c3)
e(Alocal,Flocal) = (σAFᵉ + c4)/(σAᵉσFᵉ + c4)
where Alocal and Flocal are the target region of the infrared source image and the target region of the fused image; l(Alocal,Flocal), c(Alocal,Flocal), s(Alocal,Flocal) and e(Alocal,Flocal) are the luminance, contrast, structure and edge comparison components of Alocal and Flocal; the parameters α, β, γ, η are their weights, usually α=β=γ=η=1; μA and μF are the pixel means of Alocal and Flocal; σA² and σF² are their pixel variances; σAF is the covariance between Alocal and Flocal; σAᵉ² and σFᵉ² are the grey-level variances of the edge images of the two regions and σAFᵉ is their grey-level covariance, the edge images being obtained with the Sobel edge detector; c1, c2, c3, c4 are constants introduced to avoid instability when a denominator approaches zero.
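The sketch below computes ESSIM over two equally sized target regions with the product form and unit exponents α = β = γ = η = 1. The stabilising constants c1 to c4 are assumed values (the text only says they are small constants), and SciPy's Sobel operator stands in for the edge detection step.

```python
import numpy as np
from scipy.ndimage import sobel

def essim(A_local, F_local, c1=6.5, c2=58.5, c3=29.3, c4=29.3):
    """Edge structural similarity of two target regions of the same shape.

    Product of luminance, contrast, structure and edge comparison terms with
    unit exponents; c1..c4 are assumed stabilising constants.
    """
    A = A_local.astype(np.float64)
    F = F_local.astype(np.float64)
    mu_a, mu_f = A.mean(), F.mean()
    sd_a, sd_f = A.std(), F.std()
    cov_af = ((A - mu_a) * (F - mu_f)).mean()
    l = (2 * mu_a * mu_f + c1) / (mu_a**2 + mu_f**2 + c1)   # luminance
    c = (2 * sd_a * sd_f + c2) / (sd_a**2 + sd_f**2 + c2)   # contrast
    s = (cov_af + c3) / (sd_a * sd_f + c3)                  # structure
    # Edge term: the same structure-style comparison on Sobel edge maps.
    Ea = np.hypot(sobel(A, axis=0), sobel(A, axis=1))
    Ef = np.hypot(sobel(F, axis=0), sobel(F, axis=1))
    cov_e = ((Ea - Ea.mean()) * (Ef - Ef.mean())).mean()
    e = (cov_e + c4) / (Ea.std() * Ef.std() + c4)           # edge
    return l * c * s * e
```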
Step 3.3: Compute the contrast between the target and the background of the fused image.
First, compute the mean-subtracted contrast-normalised (MSCN) coefficients of the fused image:
F̂(x,y) = [F(x,y) - μ(x,y)] / [σ(x,y) + C]
μ(x,y) = Σk Σl w(k,l)·F(x+k,y+l),  k = -K…K, l = -L…L
σ(x,y) = sqrt( Σk Σl w(k,l)·[F(x+k,y+l) - μ(x,y)]² )
where the constant C avoids instability when the denominator tends to zero in flat image regions, and w = {w(k,l)} is a two-dimensional circularly symmetric Gaussian weighting function with K = L = 3.
Next, compute the Weber contrast between the target and the background: Cw = |Lt - Lb| / Lb, where Lt and Lb are the means of the MSCN coefficients of the pixels in the target region and in the adjacent background region, respectively.
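The following sketch computes the MSCN coefficients with a Gaussian-weighted local mean and deviation over roughly a 7×7 window (matching K = L = 3) and then the Weber contrast of the mean MSCN values over a target mask and a background mask. The Gaussian width `sigma` and the constant `C` are assumed values.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mscn(F, sigma=7.0 / 6.0, C=1.0):
    """MSCN coefficients (F - mu) / (sd + C), with mu and sd the
    Gaussian-weighted local mean and standard deviation; truncate=3.0
    limits the kernel to roughly a 7x7 window, matching K = L = 3."""
    F = F.astype(np.float64)
    mu = gaussian_filter(F, sigma, truncate=3.0)
    var = gaussian_filter(F * F, sigma, truncate=3.0) - mu * mu
    sd = np.sqrt(np.maximum(var, 0.0))
    return (F - mu) / (sd + C)

def weber_contrast(mscn_map, target_mask, background_mask):
    """Weber contrast Cw = |Lt - Lb| / Lb over mean MSCN coefficients."""
    Lt = mscn_map[target_mask].mean()
    Lb = mscn_map[background_mask].mean()
    return abs(Lt - Lb) / abs(Lb)   # abs() guards against a negative mean
```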
Step 3.4: Compute the local target feature evaluation index Rlocal of the fused image.
Add the edge structural similarity ESSIM(Alocal,Flocal) of the target regions to the target-background contrast Cw of the fused image to obtain the local target feature evaluation index: Rlocal = ESSIM(Alocal,Flocal) + Cw.
Step 4: compute the weighted sum of the global detail feature index and the local target feature index of the fused image to obtain the fusion quality evaluation index: R = w1·Rglobal + w2·Rlocal
where w1 and w2 are the weights of the global detail feature Rglobal and the local target feature Rlocal, usually w1 = 0.6 and w2 = 0.4.
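Tying the helper sketches above together, a hypothetical end-to-end computation of R might look as follows; the bounding box of the grown mask stands in for the segmented target region, and thresh = 20.0 is an assumed value.

```python
import numpy as np

def fusion_quality(A, F, thresh=20.0, w1=0.6, w2=0.4):
    """Overall index R = w1*Rglobal + w2*Rlocal; larger means better fusion.

    Reuses global_detail_index, saliency_map, region_grow, essim, mscn and
    weber_contrast from the sketches above.
    """
    target = region_grow(saliency_map(A), thresh)    # target/background split
    ys, xs = np.where(target)
    box = (slice(ys.min(), ys.max() + 1), slice(xs.min(), xs.max() + 1))
    R_global = global_detail_index(F)
    R_local = essim(A[box], F[box]) + weber_contrast(mscn(F), target, ~target)
    return w1 * R_global + w2 * R_local
```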
Evaluation criterion: the larger the evaluation index R, the better the fusion quality of the infrared and visible light images; conversely, the smaller R, the worse the fusion quality.
Simulation results obtained under given simulation conditions illustrate the beneficial effects of the technical solution of the invention.
Four groups of infrared and visible light source images were first fused with the averaging method (AVE), principal component analysis (PCA), the Laplacian pyramid (LAP) and the discrete wavelet transform (DWT), and the fused images were then evaluated with the proposed method.
Four representative groups of infrared and visible light source images and their fused images are shown in Figs. 4 to 7, where a and b are the strictly registered infrared and visible light source images and c to f are the fused images obtained with AVE, PCA, LAP and DWT, respectively.
Table 1 gives the evaluation values for the above four groups of infrared and visible light image fusions.
Table 1. Fusion quality evaluation values of the four groups of infrared and visible light images
Table 1 shows that: 1) in Fig. 4, the LAP fusion has the best quality, followed by the DWT fusion; 2) in Fig. 5, the DWT fusion has the best quality, followed by the LAP fusion — although the fused image obtained by PCA has rich detail, its local target features are poor, so its overall quality is poor; 3) in Figs. 6 and 7, the DWT fusion is best, followed by the LAP fusion.
Overall, the fused images obtained with DWT or LAP are of better quality. On the one hand, wavelet analysis accounts for multi-resolution characteristics and is therefore superior to the AVE and PCA fusions both in algorithmic soundness and in subjective visual evaluation; on the other hand, Figs. 4 to 7 show that the DWT and LAP fusions are better than the AVE and PCA fusions. The proposed method is therefore effective, and its results agree well with subjective evaluation.
Table 1 also shows that the invention can evaluate the global detail features and local target features of infrared and visible light fused images as well as the overall features of the fused image, meeting both the generality and the specificity requirements of practical evaluation, and it provides guidance for further improving infrared and visible light image fusion methods.