










技术领域technical field
本发明涉及遥感技术领域,具体涉及一种引力模型优化条件随机场的多尺度融合图像变化检测方法。The invention relates to the field of remote sensing technology, in particular to a multi-scale fusion image change detection method for a random field of a gravity model optimization condition.
背景技术Background technique
变化检测是遥感领域的研究热点。随着遥感影像空间分辨率的提高,遥感影像的细节信息更加丰富,但也使得光谱的类内方差增大,类间方差减小。基于像元的变化检测技术难以适应高分辨率遥感影像,面向对象变化检测应运而生。然而,面向对象变化检测往往只考虑遥感影像的单一尺度特征,不能很好的利用高分辨率遥感影像的多尺度特性,限制了变化检测精度的进一步提高。针对此问题,本发明提出一种引力模型优化条件随机场的多尺度融合变化检测技术。所提出技术能够融合高分影像的多尺度信息,并在融合过程中有效地利用空间信息,有望得到较优的高分辨率遥感影像变化检测结果。Change detection is a research hotspot in the field of remote sensing. With the improvement of the spatial resolution of remote sensing images, the detailed information of remote sensing images is more abundant, but it also increases the intra-class variance of the spectrum and reduces the inter-class variance. The pixel-based change detection technology is difficult to adapt to high-resolution remote sensing images, and object-oriented change detection emerges as the times require. However, object-oriented change detection often only considers the single-scale features of remote sensing images, and cannot make good use of the multi-scale characteristics of high-resolution remote sensing images, which limits the further improvement of change detection accuracy. Aiming at this problem, the present invention proposes a multi-scale fusion change detection technology for a gravitational model optimization conditional random field. The proposed technology can fuse multi-scale information of high-resolution images, and effectively utilize spatial information in the fusion process, which is expected to obtain better high-resolution remote sensing image change detection results.
现有技术中有对遥感图像进行变化检测的技术,专利文献CN 102496154A记载了一种基于马尔科夫随机场的多时相遥感图像变化检测方法,包括差值图像生成步骤、EM参数估计步骤、差值图像边缘检测步骤、自适应权重计算步骤以及马尔科夫随机场标记步骤。本发明利用差值图像的相邻像素大小自动调整马尔科夫随机场的权值大小,有效提高变化检测精度。There is a technology for detecting changes in remote sensing images in the prior art. Patent document CN 102496154A describes a method for detecting changes in multi-temporal remote sensing images based on Markov random fields, which includes a difference image generation step, an EM parameter estimation step, and a difference image generation step. Value image edge detection step, adaptive weight calculation step and Markov random field labeling step. The present invention automatically adjusts the size of the weight of the Markov random field by using the size of the adjacent pixels of the difference image, thereby effectively improving the accuracy of change detection.
发明内容SUMMARY OF THE INVENTION
本发明所要解决的技术问题是提供一种引力模型优化条件随机场的多尺度融合图像变化检测方法,能够快速得出遥感图像的变化检测结果。The technical problem to be solved by the present invention is to provide a multi-scale fusion image change detection method based on a random field of a gravitational model optimization condition, which can quickly obtain a change detection result of a remote sensing image.
为解决上述技术问题,本发明所采用的技术方案是:For solving the above-mentioned technical problems, the technical scheme adopted in the present invention is:
引力模型优化条件随机场的多尺度融合图像变化检测方法,包括以下步骤:The multi-scale fusion image change detection method of gravity model optimization conditional random field includes the following steps:
Step1、将不同时期对同一目标摄取的两期高分辨率遥感影像进行预处理,预处理包括影像配准和相对辐射校正;Step1. Preprocess the two-phase high-resolution remote sensing images captured by the same target in different periods. The preprocessing includes image registration and relative radiometric correction;
Step2、采用简单线性迭代聚类方法分别对两幅影像进行超像元分割,采用多分辨率分割方法分别对两幅影像进行面向对象分割,加上原始影像,共获得像元级、超像元级和对象级三个空间尺度的影像;Step 2. Use the simple linear iterative clustering method to perform superpixel segmentation on the two images respectively, and use the multi-resolution segmentation method to perform object-oriented segmentation on the two images respectively, and add the original image to obtain a total of pixel-level and superpixel-level segmentation. images at three spatial scales, level and object level;
Step3、使用变化矢量分析法分别生成两期影像的像元级、超像元级和对象级三种空间尺度的差分影像;
Step4、对Step3中三种空间尺度的差分影像进行模糊聚类,得到三种空间尺度差分影像的模糊隶属度函数;Step 4. Perform fuzzy clustering on the differential images of the three spatial scales in
Step5、使用决策级融合方法融合Step4中所获取的三组模糊隶属度函数,获得初步融合结果;
Step6、利用引力模型优化的条件随机场模型优化Step5中的初步融合结果,得到最终变化检测图。Step 6. Use the conditional random field model optimized by the gravitational model to optimize the initial fusion result in
上述的Step2中将两期高分辨率遥感影像摄取时间定义为T1时刻和T2时刻,对应的遥感影像分别为X1和X2,采用简单线性迭代聚类方法对X1和X2进行分割,得到超像元分割图S1和S2,采用多分辨率分割对X1和X2进行分割,得到对象分割图O1和O2;In the above Step 2, the two-phase high-resolution remote sensing image capture time is defined as time T1 and time T2 , and the corresponding remote sensing images are X1 and X2 respectively. The simple linear iterative clustering method is used to analyze X1 and X2 Segmentation to obtain superpixel segmentation maps S1 and S2 , and use multi-resolution segmentation to segment X1 and X2 to obtain object segmentation maps O1 and O2 ;
上述的Step3中,像元级,超像元级,对象级差分影像分别记为和三组差分影像的计算公式如下:In the above Step3, the pixel-level, super-pixel-level, and object-level differential images are respectively recorded as and The calculation formulas of the three sets of differential images are as follows:
其中,表示t时刻像元级影像第b波段的第i个像元,表示t时刻超像元级影像第b波段的第i个像元,表示t时刻对象级影像第b波段的第i个像元,t=1,2,b=1,2,…,B,B为波段数。in, represents the i-th pixel of the b-th band of the pixel-level image at time t, represents the i-th pixel of the b-th band of the superpixel-level image at time t, Indicates the i-th pixel of the b-th band of the object-level image at time t, t=1, 2, b=1, 2, ..., B, B is the number of bands.
上述的Step4中,对和三种空间尺度的差分影像分别进行模糊C均值聚类,得到三种空间尺度差分影像的模糊隶属度函数,得到的像元级、超像元级和对象级三种空间尺度差分影像的模糊隶属度函数分别记作和其中k∈{wc,wu},wc和wu分别表示变化类和未变化类,表示像元级影像第i个像元属于第k类的隶属度,表示超像元级影像第i个像元属于第k类的隶属度,表示对象级影像第i个像元属于第k类的隶属度。In the above Step4, right and The difference images of the three spatial scales are respectively clustered by fuzzy C-means, and the fuzzy membership functions of the difference images of the three spatial scales are obtained. The degree function is written as and where k∈{wc ,wu },wc and wu represent the changed class and the unchanged class, respectively, represents the membership degree of the i-th pixel of the pixel-level image belonging to the k-th class, represents the membership degree of the ith pixel of the superpixel-level image belonging to the kth class, Indicates the membership degree of the i-th pixel of the object-level image belonging to the k-th class.
上述的Step5中,将三种空间尺度差分影像的模糊隶属度函数视作三组证据,采用证据理论对三组证据进行决策级融合,像元级证据m1由像元级模糊隶属度函数得到:其中k∈{wc,wu},wc和wu分别表示变化类和未变化类,表示像元级证据中第i个像元的质量函数;In the
超像元级证据m2由超像元级模糊隶属度函数得到:其中表示超像元级证据中第i个像元的质量函数;The superpixel-level evidencem2 is obtained from the superpixel-level fuzzy membership function: in represents the quality function of the ith pixel in the superpixel-level evidence;
对象级证据m3由对象级模糊隶属度函数得到:其中表示对象级证据中第i个像元的质量函数。三组证据的融合证据用m表示。在证据理论的融合框架下,第i个像元的归一化常数Ki可由下式计算得到:The object-level evidencem3 is obtained by the object-level fuzzy membership function: in Represents the mass function of the ith cell in the object-level evidence. The fusion evidence of the three sets of evidence is denoted by m. Under the fusion framework of evidence theory, the normalization constant Ki of the ith pixel can be calculated by the following formula:
融合证据m在第i个像元处关于变化类wc的置信度可由下式计算得到:Confidence of the fusion evidence m about the change class wc at the ith pixel It can be calculated by the following formula:
(5)融合证据m在第i个像元处关于未变化类wu的置信度可由下式得到:(5) The confidence of the fusion evidence m about the unchanged classwu at the ith pixel It can be obtained by the following formula:
利用最大置信度原则获取初步融合结果,具体的,用li表示第i个像元在初步融合结果中的类别,li可通过下式得到:Use the principle of maximum confidence to obtain the preliminary fusion result. Specifically, use li to represent the category of the i-th pixel in the preliminary fusion result, and li can be obtained by the following formula:
wc和wu分别表示变化类和未变化类。wc and wu represent the changed class and the unchanged class, respectively.
上述的Step6中,条件随机场将初步融合结果视作场模型,利用空间上下文信息对初步融合结果进行优化,具体的,引力模型优化的条件随机场通过下式(8)给出的改进能量函数来优化初步融合结果:In the above Step6, the conditional random field regards the preliminary fusion result as a field model, and uses the spatial context information to optimize the preliminary fusion result. Specifically, the conditional random field for the optimization of the gravitational model is an improved energy function given by the following formula (8). to optimize the initial fusion results:
其中ψ1表示一元势函数,其考虑单个像元的观测信息,ψ2表示二元势函数,二元势函数是考虑像元与其邻域像元之间的相互关系,λ是调节ψ1和ψ2平衡的参数,n表示研究区的像元总数,Ni表示像元i的邻域,li和lj分别表示像元i和其邻域像元j的类别标签;where ψ1 represents a univariate potential function, which considers the observation information of a single pixel, ψ2 represents a binary potential function, which considers the relationship between a pixel and its neighboring pixels, and λ is the adjustment between ψ1 and ψ2 is the balance parameter, n represents the total number of pixels in the study area, Ni represents the neighborhood of pixel i, and li and lj represent the category labels of pixel i and its neighbor pixelj , respectively;
所述一元势详细定义如下:The unary potential is defined in detail as follows:
其中ψ1(li)表示将像元i分配给类别wu或wc的惩罚系数,ln表示自然对数比算子,表示融合证据m在第i个像元处关于未变化类wu的置信度,通过式(6)计算得到,表示融合证据m在第i个像元处关于变化类wc的置信度,通过式(5)计算得到,li表示像元i的类别标签;where ψ1 (lii ) denotes the penalty coefficient for assigning pixel i to the class wu or wc , ln denotes the natural log ratio operator, represents the confidence of the fusion evidence m about the unchanged class wu at the ith pixel, which is calculated by formula (6), represents the confidence of the fusion evidence m about the change class wc at the ith pixel, which is calculated by formula (5), and li represents the category label of the pixeli ;
所述二元势函数详细定义如下:The binary potential function is defined in detail as follows:
其中ψ2(li,lj)表示像元i与其邻域像元j之间的相互作用,θ是调节sij与1-vij两项之间的平衡因子,li和lj分别表示像元i和其邻域像元j的类别标签,sij表示经典条件随机场中的二元势函数项,通过下式计算:where ψ2 (lii ,lj ) represents the interaction between pixel i and its neighboring pixel j, θ is the balance factor that adjusts the two terms of sij and 1-vij , andli and l jrespectively represents the category label of the pixel i and its neighbor pixel j, and sij represents the binary potential function term in the classical conditional random field, which is calculated by the following formula:
其中d(i,j)表示像元i和其邻域像元j在空间域的欧氏距离,xi表示由三组差分影像和在像元i处灰度值构成的光谱矢量,即xj表示由三组差分影像和在像元j处灰度值构成的光谱矢量,即||xi-xj||2表示矢量xi和xj的欧式距离的平方,<>运算符用于计算像元i及其所有邻域像元j的||xi-xj||2的平均值,vij表示空间引力模型,用于优化传统条件随机场的二元势函数项sij,通过下式定义:where d(i, j) represents the Euclidean distance between pixel i and its neighbor pixel j in the spatial domain, andxi represents the difference between three groups of differential images and The spectral vector formed by the gray value at pixel i, namely xj represents three sets of differential images and The spectral vector composed of gray values at pixel j, that is ||xi -xj ||2 represents the square of the Euclidean distance between the vectors xi and xj , and the <> operator is used to calculate the ||xi -xj | The mean value of |2 , vij represents the space gravity model, and is used to optimize the binary potential function term sij of the traditional conditional random field, which is defined by the following formula:
其中li和lj分别表示像元i和其邻域像元j的类别标签,d(i,j)表示像元i和邻域像元j在空间域的欧氏距离,表示融合证据m在第i个像元处关于未变化类wu的置信度,通过式(6)计算得到,表示融合证据m在第i个像元处关于变化类wc的置信度,通过式(5)计算得到,表示融合证据m在第j个像元处关于未变化类wu的置信度,通过式(6)计算得到,表示融合证据m在第j个像元处关于变化类wc的置信度,通过式(5)计算得到。where li and lj represent the category labels of pixel i and its neighbor pixel j, respectively, d(i,j) represents the Euclidean distance between pixel i and neighbor pixel j in the spatial domain, represents the confidence of the fusion evidence m about the unchanged class wu at the ith pixel, which is calculated by formula (6), represents the confidence of the fusion evidence m about the change class wc at the ith pixel, which is calculated by formula (5), Represents the confidence of the fusion evidence m about the unchanged class wu at the jth pixel, which is calculated by formula (6), Represents the confidence of the fusion evidence m about the change class wc at the jth pixel, which is calculated by formula (5).
本发明提供的一种引力模型优化条件随机场的多尺度融合图像变化检测方法,将多尺度融合技术、引力模型和条件随机场集成并应用到变化检测中,分别考虑影像像元级、超像元级和对象级三个空间尺度,解决单尺度面向对象变化检测技术检测效果不佳的问题,通过融合高分辨率遥感影像的多尺度信息,并在融合过程中有效地利用空间信息,能够得到较优变化检测结果。The invention provides a multi-scale fusion image change detection method for gravitational model optimization conditional random field, which integrates and applies multi-scale fusion technology, gravitational model and conditional random field to change detection, and considers image pixel level and super-image respectively. Three spatial scales, element level and object level, solve the problem of poor detection effect of single-scale object-oriented change detection technology. By fusing multi-scale information of high-resolution remote sensing images and effectively using spatial information in the fusion process, we can get Better change detection results.
附图说明Description of drawings
下面结合附图和实施例对本发明作进一步说明:Below in conjunction with accompanying drawing and embodiment, the present invention will be further described:
图1是本发明实施例的实验数据T1时刻影像的第1波段;Fig.1 is the first waveband of the image at time T1 of experimental data according to an embodiment of the present invention;
图2是本发明实施例的实验数据T2时刻影像的第1波段;2 is the first waveband of the image at time T2 of experimental data according to an embodiment of the present invention;
图3是本发明实施例的实验数据的变化参考图;Fig. 3 is the variation reference diagram of the experimental data of the embodiment of the present invention;
图4是本发明实施例的流程图;4 is a flowchart of an embodiment of the present invention;
图5是基于模糊C均值聚类算法的变化检测结果图;Fig. 5 is the change detection result graph based on fuzzy C-means clustering algorithm;
图6是基于优化的模糊局部信息C均值算法的变化检测结果图;Fig. 6 is the change detection result graph based on the optimized fuzzy local information C-mean algorithm;
图7是基于数据级多尺度融合法的变化检测结果图;Fig. 7 is the change detection result graph based on the data-level multi-scale fusion method;
图8是基于马氏距离集成盒须图的变化检测结果图;Fig. 8 is the change detection result graph based on Mahalanobis distance integrated box-and-whisker diagram;
图9是基于DS证据理论的变化检测结果图;Fig. 9 is the change detection result graph based on DS evidence theory;
图10是基于K均值聚类集成自适应投票法的变化检测结果图;Figure 10 is a graph of the change detection result based on the K-means clustering integrated adaptive voting method;
图11是本发明实施例的变化检测结果图。FIG. 11 is a graph of a change detection result according to an embodiment of the present invention.
具体实施方式Detailed ways
以下结合附图和实施例详细说明本发明技术方案。The technical solutions of the present invention will be described in detail below with reference to the accompanying drawings and embodiments.
本实施例采用SPOT 5影像做实验,所用影像空间分辨率为2.5米,大小为445×546像元,影像获取时间分别是2008年4月(T1)和2009年2月(T2),对应位置是中国北方某地区。图1、图2和图3分别展示T1和T2时刻影像及其变化参考图,其中变化参考图通过目视解译得到。In this example,
图4是本发明提出技术的流程图,本发明所采用的技术方案引力模型优化条件随机场的多尺度融合变化检测包括以下步骤:Fig. 4 is the flow chart of the technology proposed by the present invention. The technical solution adopted by the present invention is the multi-scale fusion change detection of the gravitational model optimization condition random field, which includes the following steps:
Step1、对两期高分辨率遥感影像进行预处理,所述的预处理包括影像配准和相对辐射校正等;本实施例参考T1时期的影像,对T2时期的影像进行影像配准和相对辐射校正。Step1. Preprocess the two-phase high-resolution remote sensing images. The preprocessing includes image registration and relative radiometric correction.In this embodiment, referring to the images in the T1 period, image registration and image registration are performedon the T2 period images. Relative radiometric correction.
Step2、设置相同的超像元分割参数和对象分割参数分别对两幅影像实施分割,分别获得两期影像的超像元级和对象级两个空间尺度影像,包含原始影像像元级尺度,共得到两期影像的三种空间尺度影像。Step 2. Set the same superpixel segmentation parameters and object segmentation parameters to segment the two images respectively, and obtain two spatial scale images of the superpixel level and object level of the two images respectively, including the pixel level scale of the original image. Three spatial scale images of two phase images were obtained.
本实施例中,设T1时刻和T2时刻的高分辨率遥感影像分别为X1和X2,采用简单线性迭代聚类方法SLIC对X1和X2进行分割,得到超像元分割图S1和S2,采用多分辨率分割方法对X1和X2进行分割,得到对象分割图O1和O2。In this embodiment, the high-resolution remote sensing images at time T1 and time T2 are set as X1 and X2 respectively, and the simple linear iterative clustering method SLIC is used to segment X1 and X2 to obtain a superpixel segmentation map S1 and S2 , adopt a multi-resolution segmentation method to segment X1 and X2 to obtain object segmentation maps O1 and O2 .
Step3、利用变化矢量分析技术来生成两期影像的差分影像;Step3, use the change vector analysis technology to generate the differential image of the two-phase images;
本步骤中,像元级,超像元级,对象级差分影像分别记为和所述三组差分影像的计算过程具体实施如下:In this step, pixel-level, super-pixel-level, and object-level differential images are respectively recorded as and The specific implementation of the calculation process of the three groups of differential images is as follows:
其中,表示t时刻像元级影像第b波段的第i个像元,表示t时刻超像元级影像第b波段的第i个像元,表示t时刻对象级影像第b波段的第i个像元,t=1,2,b=1,2,…,B,B为波段数,本实施例中B=3,像元级、超像元级和对象级差分影像分别由式(1)、式(2)和式(3)计算得到;in, represents the i-th pixel of the b-th band of the pixel-level image at time t, represents the i-th pixel of the b-th band of the superpixel-level image at time t, Indicates the i-th pixel of the b-th band of the object-level image at time t, t=1, 2, b=1, 2, ..., B, B is the number of bands, in this embodiment B=3, pixel-level, super The pixel-level and object-level differential images are calculated by formula (1), formula (2) and formula (3) respectively;
Step4、对和三种空间尺度的差分影像分别进行模糊C均值聚类,得到三种空间尺度差分影像的模糊隶属度函数;在本步中,得到的像元级、超像元级和对象级三种空间尺度差分影像的模糊隶属度函数分别记作和其中k∈{wc,wu},wc和wu分别表示变化类和未变化类,表示像元级影像第i个像元属于第k类的隶属度,表示超像元级影像第i个像元属于第k类的隶属度,表示对象级影像第i个像元属于第k类的隶属度。Step4, yes and The difference images of the three spatial scales are respectively subjected to fuzzy C-means clustering, and the fuzzy membership functions of the difference images of the three spatial scales are obtained; in this step, the obtained three spatial scales of pixel level, superpixel level and object level The fuzzy membership function of the difference image is written as and where k∈{wc ,wu },wc and wu represent the changed class and the unchanged class, respectively, represents the membership degree of the i-th pixel of the pixel-level image belonging to the k-th class, represents the membership degree of the ith pixel of the superpixel-level image belonging to the kth class, Indicates the membership degree of the i-th pixel of the object-level image belonging to the k-th class.
Step5、应用决策融级融合技术融合三种空间尺度差分影像的模糊隶属度函数,获得初步的融合结果;
具体的,本实施例将三种空间尺度差分影像的模糊隶属度函数视作三组证据,采用证据理论对三组证据进行决策级融合。像元级证据m1由像元级模糊隶属度函数得到:其中k∈{wc,wu},wc和wu分别表示变化类和未变化类,表示像元级证据中第i个像元的质量函数。超像元级证据m2由超像元级模糊隶属度函数得到:其中表示超像元级证据中第i个像元的质量函数。对象级证据m3由对象级模糊隶属度函数得到:其中表示对象级证据中第i个像元的质量函数。三组证据的融合证据用m表示。在证据理论的融合框架下,第i个像元的归一化常数Ki可由下式计算得到:Specifically, in this embodiment, the fuzzy membership functions of the three spatial scale difference images are regarded as three sets of evidence, and the evidence theory is used to perform decision-level fusion of the three sets of evidence. The pixel-level evidence m1 is obtained from the pixel-level fuzzy membership function: where k∈{wc ,wu },wc and wu represent the changed class and the unchanged class, respectively, Represents the quality function of the ith cell in the cell-level evidence. The superpixel-level evidencem2 is obtained from the superpixel-level fuzzy membership function: in Represents the mass function of the ith pixel in superpixel-level evidence. The object-level evidencem3 is obtained by the object-level fuzzy membership function: in Represents the mass function of the ith cell in the object-level evidence. The fusion evidence of the three sets of evidence is denoted by m. Under the fusion framework of evidence theory, the normalization constant Ki of the ith pixel can be calculated by the following formula:
融合证据m在第i个像元处关于变化类wc的置信度可由下式计算得到:Confidence of the fusion evidence m about the change class wc at the ith pixel It can be calculated by the following formula:
融合证据m在第i个像元处关于未变化类wu的置信度可由下式得到:Confidence of the fused evidence m about the unchanged class wu at the ith pixel It can be obtained by the following formula:
利用最大置信度原则获取初步融合结果,具体的,用li表示第i个像元在初步融合结果中的类别,li可通过下式得到:Use the principle of maximum confidence to obtain the preliminary fusion result. Specifically, use li to represent the category of the i-th pixel in the preliminary fusion result, and li can be obtained by the following formula:
wc和wu分别表示变化类和未变化类。wc and wu represent the changed class and the unchanged class, respectively.
Step6、利用引力模型优化的条件随机场模型优化所述初步融合结果,得到最终变化检测图;Step6, using the conditional random field model optimized by the gravitational model to optimize the preliminary fusion result to obtain the final change detection map;
在本步中,条件随机场将初步融合结果视作场模型,利用空间上下文信息对初步融合结果进行优化。具体的,引力模型优化的条件随机场通过式(8)给出的改进能量函数来优化初步融合结果:In this step, the conditional random field treats the preliminary fusion result as a field model, and uses the spatial context information to optimize the preliminary fusion result. Specifically, the conditional random field for the optimization of the gravitational model optimizes the initial fusion result through the improved energy function given by equation (8):
其中ψ1表示一元势函数,其主要考虑单个像元的观测信息,ψ2表示二元势函数,二元势函数是主要考虑像元与其邻域像元之间的相互关系。λ是调节ψ1和ψ2平衡的参数,在本实施例中λ=0.09,n表示研究区的像元总数,Ni表示像元i的邻域,本实施例采用5×5的邻域窗口,li和lj分别表示像元i和其邻域像元j的类别标签。Among them, ψ1 represents a univariate potential function, which mainly considers the observation information of a single pixel, ψ2 represents a binary potential function, and the binary potential function mainly considers the relationship between a pixel and its neighboring pixels. λ is a parameter for adjusting the balance of ψ1 and ψ2. In this embodiment, λ=0.09, n represents the total number of pixels in the study area, and Ni represents the neighborhood of pixel i. This embodiment uses a 5×5 neighborhood Window, li and lj represent the class labels of cell i and its neighbor cell j, respectively.
所述一元势详细定义如下:The unary potential is defined in detail as follows:
其中ψ1(li)表示将像元i分配给类别wu或wc的惩罚系数,ln表示自然对数比算子,表示融合证据m在第i个像元处关于未变化类wu的置信度,通过式(6)计算得到,表示融合证据m在第i个像元处关于变化类wc的置信度,通过式(5)计算得到,li表示像元i的类别标签。所述二元势函数详细定义如下:where ψ1 (lii ) denotes the penalty coefficient for assigning pixel i to the class wu or wc , ln denotes the natural log ratio operator, represents the confidence of the fusion evidence m about the unchanged class wu at the ith pixel, which is calculated by formula (6), Represents the confidence of the fusion evidence m about the change class wc at the ith pixel, which is calculated by formula (5), and li represents the category label of the pixeli . The binary potential function is defined in detail as follows:
其中ψ2(li,lj)表示像元i与其邻域像元j之间的相互作用,θ是调节sij与1-vij两项之间的平衡因子,本实施例中θ=2,li和lj分别表示像元i和其邻域像元j的类别标签,sij表示经典条件随机场中的二元势函数项,通过下式计算:where ψ2 (lii ,lj ) represents the interaction between pixel i and its neighboring pixel j, and θ is the balance factor that adjusts between sij and 1-vij . In this embodiment, θ= 2, li and lj represent the category labels of pixel i and its neighbor pixel j, respectively, and sij represents the binary potential function term in the classical conditional random field, which is calculated by the following formula:
其中d(i,j)表示像元i和其邻域像元j在空间域的欧氏距离,xi表示由三组差分影像和在像元i处灰度值构成的光谱矢量,即xj表示由三组差分影像和在像元j处灰度值构成的光谱矢量,即||xi-xj||2表示矢量xi和xj的欧式距离的平方,<>运算符用于计算像元i及其所有邻域像元j的||xi-xj||2的平均值,vij表示空间引力模型,用于优化传统条件随机场的二元势函数项sij,通过下式定义:where d(i, j) represents the Euclidean distance between pixel i and its neighbor pixel j in the spatial domain, andxi represents the difference between three groups of differential images and The spectral vector formed by the gray value at pixel i, namely xj represents three sets of differential images and The spectral vector composed of gray values at pixel j, that is ||xi -xj ||2 represents the square of the Euclidean distance between the vectors xi and xj , and the <> operator is used to calculate the ||xi -xj | The mean value of |2 , vij represents the space gravity model, and is used to optimize the binary potential function term sij of the traditional conditional random field, which is defined by the following formula:
其中li和lj分别表示像元i和其邻域像元j的类别标签,d(i,j)表示像元i和邻域像元j在空间域的欧氏距离,表示融合证据m在第i个像元处关于未变化类wu的置信度,通过式(6)计算得到,表示融合证据m在第i个像元处关于变化类wc的置信度,通过式(5)计算得到,表示融合证据m在第j个像元处关于未变化类wu的置信度,通过式(6)计算得到,表示融合证据m在第j个像元处关于变化类wc的置信度,通过式(5)计算得到。where li and lj represent the category labels of pixel i and its neighbor pixel j, respectively, d(i,j) represents the Euclidean distance between pixel i and neighbor pixel j in the spatial domain, represents the confidence of the fusion evidence m about the unchanged class wu at the ith pixel, which is calculated by formula (6), represents the confidence of the fusion evidence m about the change class wc at the ith pixel, which is calculated by formula (5), Represents the confidence of the fusion evidence m about the unchanged class wu at the jth pixel, which is calculated by formula (6), Represents the confidence of the fusion evidence m about the change class wc at the jth pixel, which is calculated by formula (5).
可以采用不同的技术理论对式(8)给出的能量函数进行最小化,从而实现对初步融合结果的优化改进,在本实施例中,采用最大流算法来最小化式(8)给出的能量函数。Different technical theories can be used to minimize the energy function given by equation (8), so as to realize the optimization and improvement of the preliminary fusion result. In this embodiment, the maximum flow algorithm is used to minimize the energy function given by equation (8). energy function.
如图5-11所示,分别给出模糊C均值聚类算法、优化的模糊局部信息C均值算法、数据级多尺度融合法、马氏距离集成盒须图、DS证据理论、K均值聚类集成自适应多数投票法和本发明的变化检测图,表1给出上述不同变化检测方法变化检测图的统计结果。As shown in Figure 5-11, the fuzzy C-means clustering algorithm, the optimized fuzzy local information C-means algorithm, the data-level multi-scale fusion method, the Mahalanobis distance integrated box-and-whisker plot, DS evidence theory, and K-means clustering are given respectively. Integrating the adaptive majority voting method and the change detection map of the present invention, Table 1 shows the statistical results of the change detection maps of the above-mentioned different change detection methods.
表1不同变化检测方法结果的统计比较Table 1 Statistical comparison of results of different change detection methods
对比图5-11给出的不同变化检测方法的变化检测图和表1给出的统计结果可知,本发明的变化检测效果明显优于其它对比变化检测算法,本发明的检测结果同时具有最小的总体错误和最高的Kappa系数,本发明检测结果的总体错误为15359,分别比模糊C均值聚类算法、优化的模糊局部信息C均值算法、数据级多尺度融合法、马氏距离集成盒须图、DS证据理论、K均值聚类集成自适应多数投票法减少22771、10343、11083、9126、18878和6684个像元,本发明检测结果的Kappa系数为0.74,分别比模糊C均值聚类算法、优化的模糊局部信息C均值算法、数据级多尺度融合法、马氏距离集成盒须图、DS证据理论、K均值聚类集成自适应多数投票法提高26%、15%、18%、16%、22%和10%。Comparing the change detection diagrams of different change detection methods given in Figures 5-11 and the statistical results given in Table 1, it can be seen that the change detection effect of the present invention is obviously better than other comparative change detection algorithms, and the detection result of the present invention has the smallest The overall error and the highest Kappa coefficient, the overall error of the detection result of the present invention is 15359, which are respectively higher than the fuzzy C-means clustering algorithm, the optimized fuzzy local information C-means algorithm, the data-level multi-scale fusion method, and the Mahalanobis distance integrated box-and-whisker plot. , DS evidence theory, K-means clustering integrated adaptive majority voting method reduces 22771, 10343, 11083, 9126, 18878 and 6684 pixels, the Kappa coefficient of the detection result of the present invention is 0.74, which is respectively better than the fuzzy C-means clustering algorithm, Optimized fuzzy local information C-means algorithm, data-level multi-scale fusion method, Mahalanobis distance integrated box-and-whisker plot, DS evidence theory, K-means clustering integrated adaptive majority voting method improved by 26%, 15%, 18%, 16% , 22% and 10%.
本发明提出的变化检测技术--引力模型优化条件随机场的多尺度融合变化检测,能够有效融合高分辨率遥感影像多种尺度信息,并在融合过程中充分利用影像的上下文信息,从而能够很大程度上解决单尺度面向对象变化检测不能很好利用高分辨率遥感影像多尺度特征的不足,提升变化检测的精度,取得较优的变化检测结果。The change detection technology proposed in the present invention, the multi-scale fusion change detection of the random field of the gravitational model optimization condition, can effectively fuse various scale information of high-resolution remote sensing images, and make full use of the context information of the images in the fusion process, so that the To a large extent, it solves the problem that single-scale object-oriented change detection cannot make good use of multi-scale features of high-resolution remote sensing images, improves the accuracy of change detection, and achieves better change detection results.
以上仅为本发明的一个实施例,并非用于限定本发明的保护范围,因此,凡在本发明的精神和原则之内所作的任何修改、等同替换、改进等,均应包含在本发明的保护范围之内。The above is only an embodiment of the present invention and is not intended to limit the protection scope of the present invention. Therefore, any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention shall be included in the scope of the present invention. within the scope of protection.
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202110111647.1ACN112767376B (en) | 2021-01-27 | 2021-01-27 | Multi-scale fusion image change detection method based on conditional random field optimized by gravity model |
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202110111647.1ACN112767376B (en) | 2021-01-27 | 2021-01-27 | Multi-scale fusion image change detection method based on conditional random field optimized by gravity model |
| Publication Number | Publication Date |
|---|---|
| CN112767376Atrue CN112767376A (en) | 2021-05-07 |
| CN112767376B CN112767376B (en) | 2023-07-11 |
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202110111647.1AActiveCN112767376B (en) | 2021-01-27 | 2021-01-27 | Multi-scale fusion image change detection method based on conditional random field optimized by gravity model |
| Country | Link |
|---|---|
| CN (1) | CN112767376B (en) |
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN113240689A (en)* | 2021-06-01 | 2021-08-10 | 安徽建筑大学 | Method for rapidly extracting flood disaster area |
| CN113298137A (en)* | 2021-05-21 | 2021-08-24 | 青岛星科瑞升信息科技有限公司 | Hyperspectral image classification method based on local similarity data gravitation |
| CN114091508A (en)* | 2021-09-03 | 2022-02-25 | 三峡大学 | A Change Detection Method for Unsupervised Remote Sensing Image Based on Higher-Order Conditional Random Fields |
| CN114359693A (en)* | 2021-12-10 | 2022-04-15 | 三峡大学 | Change detection method of high-resolution remote sensing image based on fuzzy clustering of superpixels |
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20080101678A1 (en)* | 2006-10-25 | 2008-05-01 | Agfa Healthcare Nv | Method for Segmenting Digital Medical Image |
| US20090316988A1 (en)* | 2008-06-18 | 2009-12-24 | Samsung Electronics Co., Ltd. | System and method for class-specific object segmentation of image data |
| CN104361589A (en)* | 2014-11-12 | 2015-02-18 | 河海大学 | High-resolution remote sensing image segmentation method based on inter-scale mapping |
| CN107085708A (en)* | 2017-04-20 | 2017-08-22 | 哈尔滨工业大学 | Change detection method for high-resolution remote sensing images based on multi-scale segmentation and fusion |
| CN109389571A (en)* | 2017-08-03 | 2019-02-26 | 香港理工大学深圳研究院 | A kind of remote sensing image variation detection method, device and terminal |
| CN109409389A (en)* | 2017-08-16 | 2019-03-01 | 香港理工大学深圳研究院 | A kind of object-oriented change detecting method merging multiple features |
| CN109903274A (en)* | 2019-01-31 | 2019-06-18 | 兰州交通大学 | A high-resolution remote sensing image change detection method and system |
| CN110516754A (en)* | 2019-08-30 | 2019-11-29 | 大连海事大学 | Hyperspectral image classification method based on multi-scale superpixel segmentation |
| CN110738672A (en)* | 2019-10-18 | 2020-01-31 | 西安交通大学深圳研究院 | image segmentation method based on hierarchical high-order conditional random field |
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20080101678A1 (en)* | 2006-10-25 | 2008-05-01 | Agfa Healthcare Nv | Method for Segmenting Digital Medical Image |
| US20090316988A1 (en)* | 2008-06-18 | 2009-12-24 | Samsung Electronics Co., Ltd. | System and method for class-specific object segmentation of image data |
| CN104361589A (en)* | 2014-11-12 | 2015-02-18 | 河海大学 | High-resolution remote sensing image segmentation method based on inter-scale mapping |
| CN107085708A (en)* | 2017-04-20 | 2017-08-22 | 哈尔滨工业大学 | Change detection method for high-resolution remote sensing images based on multi-scale segmentation and fusion |
| CN109389571A (en)* | 2017-08-03 | 2019-02-26 | 香港理工大学深圳研究院 | A kind of remote sensing image variation detection method, device and terminal |
| CN109409389A (en)* | 2017-08-16 | 2019-03-01 | 香港理工大学深圳研究院 | A kind of object-oriented change detecting method merging multiple features |
| CN109903274A (en)* | 2019-01-31 | 2019-06-18 | 兰州交通大学 | A high-resolution remote sensing image change detection method and system |
| CN110516754A (en)* | 2019-08-30 | 2019-11-29 | 大连海事大学 | Hyperspectral image classification method based on multi-scale superpixel segmentation |
| CN110738672A (en)* | 2019-10-18 | 2020-01-31 | 西安交通大学深圳研究院 | image segmentation method based on hierarchical high-order conditional random field |
| Title |
|---|
| SUSMITA GHOSH, ET.AL: "Unsupervised change detection of remotely sensed images using fuzzy clustering", 《2009 SEVENTH INTERNATIONAL CONFERENCE ON ADVANCES IN PATTERN RECOGNITION》* |
| SUSMITA GHOSH, ET.AL: "Unsupervised change detection of remotely sensed images using fuzzy clustering", 《2009 SEVENTH INTERNATIONAL CONFERENCE ON ADVANCES IN PATTERN RECOGNITION》, 13 February 2009 (2009-02-13)* |
| YULIYA TARABALKA, ET.AL: "SVM- and MRF-based method for accurate classification of hyperspectral images", 《IEEE GEOSCIENCE AND REMOTE SENSING LETTERS》* |
| YULIYA TARABALKA, ET.AL: "SVM- and MRF-based method for accurate classification of hyperspectral images", 《IEEE GEOSCIENCE AND REMOTE SENSING LETTERS》, vol. 7, no. 4, 31 October 2010 (2010-10-31), XP011309056* |
| 张华等: "《遥感数据可靠性分类方法研究》", 31 March 2016, 测绘出版社, pages: 95 - 99* |
| 赵婕: "《图像特征提取与语义分析》", 31 July 2015, 重庆大学出版社, pages: 169 - 172* |
| 邵攀: "非监督遥感变化检测模糊方法研究", 《中国优秀博硕士学位论文全文数据库(博士)基础科学辑》* |
| 邵攀: "非监督遥感变化检测模糊方法研究", 《中国优秀博硕士学位论文全文数据库(博士)基础科学辑》, no. 1, 31 January 2020 (2020-01-31), pages 82 - 83* |
| 金秋含等: "自适应空间信息MRF的FCM遥感图像聚类", 《计算机工程与设计》* |
| 金秋含等: "自适应空间信息MRF的FCM遥感图像聚类", 《计算机工程与设计》, vol. 40, no. 8, 31 August 2019 (2019-08-31)* |
| 龚健雅: "《对地观测数据处理与分析研究进展》", 31 December 2017, 武汉大学出版社, pages: 151 - 152* |
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN113298137A (en)* | 2021-05-21 | 2021-08-24 | 青岛星科瑞升信息科技有限公司 | Hyperspectral image classification method based on local similarity data gravitation |
| CN113240689A (en)* | 2021-06-01 | 2021-08-10 | 安徽建筑大学 | Method for rapidly extracting flood disaster area |
| CN114091508A (en)* | 2021-09-03 | 2022-02-25 | 三峡大学 | A Change Detection Method for Unsupervised Remote Sensing Image Based on Higher-Order Conditional Random Fields |
| CN114359693A (en)* | 2021-12-10 | 2022-04-15 | 三峡大学 | Change detection method of high-resolution remote sensing image based on fuzzy clustering of superpixels |
| Publication number | Publication date |
|---|---|
| CN112767376B (en) | 2023-07-11 |
| Publication | Publication Date | Title |
|---|---|---|
| CN112767376A (en) | Multi-scale fusion image change detection method for gravity model optimization conditional random field | |
| Zhou et al. | Scale adaptive image cropping for UAV object detection | |
| CN112906531B (en) | Multi-source remote sensing image space-time fusion method and system based on non-supervision classification | |
| Qu et al. | Cycle-refined multidecision joint alignment network for unsupervised domain adaptive hyperspectral change detection | |
| CN109829519B (en) | Remote sensing image classification method and system based on self-adaptive spatial information | |
| US9449395B2 (en) | Methods and systems for image matting and foreground estimation based on hierarchical graphs | |
| CN109376641A (en) | A moving vehicle detection method based on UAV aerial video | |
| Chen et al. | Change detection in multi-temporal VHR images based on deep Siamese multi-scale convolutional networks | |
| CN114202694A (en) | Small sample remote sensing scene image classification method based on manifold mixed interpolation and contrast learning | |
| Han et al. | The edge-preservation multi-classifier relearning framework for the classification of high-resolution remotely sensed imagery | |
| Lu et al. | A novel synergetic classification approach for hyperspectral and panchromatic images based on self-learning | |
| Zhao et al. | Point based weakly supervised deep learning for semantic segmentation of remote sensing images | |
| Lv et al. | Iterative sample generation and balance approach for improving hyperspectral remote sensing imagery classification with deep learning network | |
| Wu et al. | Improved mask R-CNN-based cloud masking method for remote sensing images | |
| CN109002771A (en) | A kind of Classifying Method in Remote Sensing Image based on recurrent neural network | |
| Zhang et al. | Spatial contextual superpixel model for natural roadside vegetation classification | |
| Chang et al. | Unsupervised multi-view graph contrastive feature learning for hyperspectral image classification | |
| CN119206191B (en) | Built-in lightweight new energy self-adaptive detection method | |
| Zhang et al. | Locally homogeneous covariance matrix representation for hyperspectral image classification | |
| Derivaux et al. | Watershed segmentation of remotely sensed images based on a supervised fuzzy pixel classification | |
| CN109242885B (en) | Correlation filtering video tracking method based on space-time non-local regularization | |
| Dong et al. | Multilevel spatial feature-based manifold metric learning for domain adaptation in remote sensing image classification | |
| CN115456942A (en) | Change Detection Method Based on Label Constrained Superpixel Conditional Random Field | |
| CN116704378A (en) | Homeland mapping data classification method based on self-growing convolution neural network | |
| Zhao et al. | Sample Augmentation and Balance Approach for Improving Classification Performance with High-Resolution Remote Sensed Image |
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||
| GR01 | Patent grant |