Movatterモバイル変換


[0]ホーム

URL:


CN112767376A - Multi-scale fusion image change detection method for gravity model optimization conditional random field - Google Patents

Multi-scale fusion image change detection method for gravity model optimization conditional random field
Download PDF

Info

Publication number
CN112767376A
CN112767376ACN202110111647.1ACN202110111647ACN112767376ACN 112767376 ACN112767376 ACN 112767376ACN 202110111647 ACN202110111647 ACN 202110111647ACN 112767376 ACN112767376 ACN 112767376A
Authority
CN
China
Prior art keywords
pixel
level
images
fusion
evidence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110111647.1A
Other languages
Chinese (zh)
Other versions
CN112767376B (en
Inventor
邵攀
衣云琪
任东
董婷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Three Gorges University CTGU
Original Assignee
China Three Gorges University CTGU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Three Gorges University CTGUfiledCriticalChina Three Gorges University CTGU
Priority to CN202110111647.1ApriorityCriticalpatent/CN112767376B/en
Publication of CN112767376ApublicationCriticalpatent/CN112767376A/en
Application grantedgrantedCritical
Publication of CN112767376BpublicationCriticalpatent/CN112767376B/en
Activelegal-statusCriticalCurrent
Anticipated expirationlegal-statusCritical

Links

Images

Classifications

Landscapes

Abstract

Translated fromChinese

本发明公开一种引力模型优化条件随机场的多尺度融合图像变化检测方法,采用简单线性迭代聚类方法SLIC分别对两幅影像进行超像元分割,共获得像元级、超像元级和对象级三个空间尺度的影像;利用变化矢量分析技术分别生成两期影像的像元级、超像元级和对象级三种空间尺度的差分影像;对三种空间尺度的差分影像进行模糊聚类,得到三种空间尺度差分影像的模糊隶属度函数;应用决策级融合方法融合所获取的三组模糊隶属度函数,获得初步融合结果;利用引力模型优化的条件随机场模型优化所述初步融合结果,得到最终变化检测图。本发明通过融合高分辨率遥感影像的多尺度信息,并在融合过程中有效地利用空间信息,能够得到较优变化检测结果。

Figure 202110111647

The invention discloses a multi-scale fusion image change detection method based on a random field of a gravitational model optimization condition. The simple linear iterative clustering method SLIC is used to separately perform superpixel segmentation on two images, and a total of pixel-level, super-pixel-level and Object-level images at three spatial scales; using change vector analysis technology to generate pixel-level, super-pixel-level and object-level differential images of the two phases of images; fuzzy clustering of the differential images at the three spatial scales The fuzzy membership functions of the three spatial scale difference images are obtained; the three sets of fuzzy membership functions obtained are fused by the decision-level fusion method, and the preliminary fusion results are obtained; the conditional random field model optimized by the gravity model is used to optimize the preliminary fusion. As a result, the final change detection map is obtained. The present invention can obtain better change detection results by fusing multi-scale information of high-resolution remote sensing images and effectively utilizing spatial information in the fusion process.

Figure 202110111647

Description

Translated fromChinese
引力模型优化条件随机场的多尺度融合图像变化检测方法A Gravity Model Optimization Conditional Random Field Method for Multiscale Fusion Image Change Detection

技术领域technical field

本发明涉及遥感技术领域,具体涉及一种引力模型优化条件随机场的多尺度融合图像变化检测方法。The invention relates to the field of remote sensing technology, in particular to a multi-scale fusion image change detection method for a random field of a gravity model optimization condition.

背景技术Background technique

变化检测是遥感领域的研究热点。随着遥感影像空间分辨率的提高,遥感影像的细节信息更加丰富,但也使得光谱的类内方差增大,类间方差减小。基于像元的变化检测技术难以适应高分辨率遥感影像,面向对象变化检测应运而生。然而,面向对象变化检测往往只考虑遥感影像的单一尺度特征,不能很好的利用高分辨率遥感影像的多尺度特性,限制了变化检测精度的进一步提高。针对此问题,本发明提出一种引力模型优化条件随机场的多尺度融合变化检测技术。所提出技术能够融合高分影像的多尺度信息,并在融合过程中有效地利用空间信息,有望得到较优的高分辨率遥感影像变化检测结果。Change detection is a research hotspot in the field of remote sensing. With the improvement of the spatial resolution of remote sensing images, the detailed information of remote sensing images is more abundant, but it also increases the intra-class variance of the spectrum and reduces the inter-class variance. The pixel-based change detection technology is difficult to adapt to high-resolution remote sensing images, and object-oriented change detection emerges as the times require. However, object-oriented change detection often only considers the single-scale features of remote sensing images, and cannot make good use of the multi-scale characteristics of high-resolution remote sensing images, which limits the further improvement of change detection accuracy. Aiming at this problem, the present invention proposes a multi-scale fusion change detection technology for a gravitational model optimization conditional random field. The proposed technology can fuse multi-scale information of high-resolution images, and effectively utilize spatial information in the fusion process, which is expected to obtain better high-resolution remote sensing image change detection results.

现有技术中有对遥感图像进行变化检测的技术,专利文献CN 102496154A记载了一种基于马尔科夫随机场的多时相遥感图像变化检测方法,包括差值图像生成步骤、EM参数估计步骤、差值图像边缘检测步骤、自适应权重计算步骤以及马尔科夫随机场标记步骤。本发明利用差值图像的相邻像素大小自动调整马尔科夫随机场的权值大小,有效提高变化检测精度。There is a technology for detecting changes in remote sensing images in the prior art. Patent document CN 102496154A describes a method for detecting changes in multi-temporal remote sensing images based on Markov random fields, which includes a difference image generation step, an EM parameter estimation step, and a difference image generation step. Value image edge detection step, adaptive weight calculation step and Markov random field labeling step. The present invention automatically adjusts the size of the weight of the Markov random field by using the size of the adjacent pixels of the difference image, thereby effectively improving the accuracy of change detection.

发明内容SUMMARY OF THE INVENTION

本发明所要解决的技术问题是提供一种引力模型优化条件随机场的多尺度融合图像变化检测方法,能够快速得出遥感图像的变化检测结果。The technical problem to be solved by the present invention is to provide a multi-scale fusion image change detection method based on a random field of a gravitational model optimization condition, which can quickly obtain a change detection result of a remote sensing image.

为解决上述技术问题,本发明所采用的技术方案是:For solving the above-mentioned technical problems, the technical scheme adopted in the present invention is:

引力模型优化条件随机场的多尺度融合图像变化检测方法,包括以下步骤:The multi-scale fusion image change detection method of gravity model optimization conditional random field includes the following steps:

Step1、将不同时期对同一目标摄取的两期高分辨率遥感影像进行预处理,预处理包括影像配准和相对辐射校正;Step1. Preprocess the two-phase high-resolution remote sensing images captured by the same target in different periods. The preprocessing includes image registration and relative radiometric correction;

Step2、采用简单线性迭代聚类方法分别对两幅影像进行超像元分割,采用多分辨率分割方法分别对两幅影像进行面向对象分割,加上原始影像,共获得像元级、超像元级和对象级三个空间尺度的影像;Step 2. Use the simple linear iterative clustering method to perform superpixel segmentation on the two images respectively, and use the multi-resolution segmentation method to perform object-oriented segmentation on the two images respectively, and add the original image to obtain a total of pixel-level and superpixel-level segmentation. images at three spatial scales, level and object level;

Step3、使用变化矢量分析法分别生成两期影像的像元级、超像元级和对象级三种空间尺度的差分影像;Step 3. Use the variation vector analysis method to generate differential images of three spatial scales of pixel level, super-pixel level and object level of the two-phase images respectively;

Step4、对Step3中三种空间尺度的差分影像进行模糊聚类,得到三种空间尺度差分影像的模糊隶属度函数;Step 4. Perform fuzzy clustering on the differential images of the three spatial scales inStep 3 to obtain the fuzzy membership functions of the differential images of the three spatial scales;

Step5、使用决策级融合方法融合Step4中所获取的三组模糊隶属度函数,获得初步融合结果;Step 5. Use the decision-level fusion method to fuse the three sets of fuzzy membership functions obtained in Step 4 to obtain a preliminary fusion result;

Step6、利用引力模型优化的条件随机场模型优化Step5中的初步融合结果,得到最终变化检测图。Step 6. Use the conditional random field model optimized by the gravitational model to optimize the initial fusion result inStep 5, and obtain the final change detection map.

上述的Step2中将两期高分辨率遥感影像摄取时间定义为T1时刻和T2时刻,对应的遥感影像分别为X1和X2,采用简单线性迭代聚类方法对X1和X2进行分割,得到超像元分割图S1和S2,采用多分辨率分割对X1和X2进行分割,得到对象分割图O1和O2In the above Step 2, the two-phase high-resolution remote sensing image capture time is defined as time T1 and time T2 , and the corresponding remote sensing images are X1 and X2 respectively. The simple linear iterative clustering method is used to analyze X1 and X2 Segmentation to obtain superpixel segmentation maps S1 and S2 , and use multi-resolution segmentation to segment X1 and X2 to obtain object segmentation maps O1 and O2 ;

上述的Step3中,像元级,超像元级,对象级差分影像分别记为

Figure BDA0002919362070000021
Figure BDA0002919362070000022
三组差分影像的计算公式如下:In the above Step3, the pixel-level, super-pixel-level, and object-level differential images are respectively recorded as
Figure BDA0002919362070000021
and
Figure BDA0002919362070000022
The calculation formulas of the three sets of differential images are as follows:

Figure BDA0002919362070000023
Figure BDA0002919362070000023

Figure BDA0002919362070000024
Figure BDA0002919362070000024

Figure BDA0002919362070000025
Figure BDA0002919362070000025

其中,

Figure BDA0002919362070000026
表示t时刻像元级影像第b波段的第i个像元,
Figure BDA0002919362070000027
表示t时刻超像元级影像第b波段的第i个像元,
Figure BDA0002919362070000028
表示t时刻对象级影像第b波段的第i个像元,t=1,2,b=1,2,…,B,B为波段数。in,
Figure BDA0002919362070000026
represents the i-th pixel of the b-th band of the pixel-level image at time t,
Figure BDA0002919362070000027
represents the i-th pixel of the b-th band of the superpixel-level image at time t,
Figure BDA0002919362070000028
Indicates the i-th pixel of the b-th band of the object-level image at time t, t=1, 2, b=1, 2, ..., B, B is the number of bands.

上述的Step4中,对

Figure BDA0002919362070000029
Figure BDA00029193620700000210
三种空间尺度的差分影像分别进行模糊C均值聚类,得到三种空间尺度差分影像的模糊隶属度函数,得到的像元级、超像元级和对象级三种空间尺度差分影像的模糊隶属度函数分别记作
Figure BDA00029193620700000211
Figure BDA00029193620700000212
Figure BDA00029193620700000213
其中k∈{wc,wu},wc和wu分别表示变化类和未变化类,
Figure BDA0002919362070000031
表示像元级影像第i个像元属于第k类的隶属度,
Figure BDA0002919362070000032
表示超像元级影像第i个像元属于第k类的隶属度,
Figure BDA0002919362070000033
表示对象级影像第i个像元属于第k类的隶属度。In the above Step4, right
Figure BDA0002919362070000029
and
Figure BDA00029193620700000210
The difference images of the three spatial scales are respectively clustered by fuzzy C-means, and the fuzzy membership functions of the difference images of the three spatial scales are obtained. The degree function is written as
Figure BDA00029193620700000211
Figure BDA00029193620700000212
and
Figure BDA00029193620700000213
where k∈{wc ,wu },wc and wu represent the changed class and the unchanged class, respectively,
Figure BDA0002919362070000031
represents the membership degree of the i-th pixel of the pixel-level image belonging to the k-th class,
Figure BDA0002919362070000032
represents the membership degree of the ith pixel of the superpixel-level image belonging to the kth class,
Figure BDA0002919362070000033
Indicates the membership degree of the i-th pixel of the object-level image belonging to the k-th class.

上述的Step5中,将三种空间尺度差分影像的模糊隶属度函数视作三组证据,采用证据理论对三组证据进行决策级融合,像元级证据m1由像元级模糊隶属度函数得到:

Figure BDA0002919362070000034
其中k∈{wc,wu},wc和wu分别表示变化类和未变化类,
Figure BDA0002919362070000035
表示像元级证据中第i个像元的质量函数;In theabove Step 5, the fuzzy membership functions of the three spatial scale difference images are regarded as three sets of evidence, and the three sets of evidence are fused by the evidence theory at the decision level. The pixel-level evidence m1 is obtained from the pixel-level fuzzy membership function. :
Figure BDA0002919362070000034
where k∈{wc ,wu },wc and wu represent the changed class and the unchanged class, respectively,
Figure BDA0002919362070000035
Represents the quality function of the ith pixel in the pixel-level evidence;

超像元级证据m2由超像元级模糊隶属度函数得到:

Figure BDA0002919362070000036
其中
Figure BDA0002919362070000037
表示超像元级证据中第i个像元的质量函数;The superpixel-level evidencem2 is obtained from the superpixel-level fuzzy membership function:
Figure BDA0002919362070000036
in
Figure BDA0002919362070000037
represents the quality function of the ith pixel in the superpixel-level evidence;

对象级证据m3由对象级模糊隶属度函数得到:

Figure BDA0002919362070000038
其中
Figure BDA0002919362070000039
表示对象级证据中第i个像元的质量函数。三组证据的融合证据用m表示。在证据理论的融合框架下,第i个像元的归一化常数Ki可由下式计算得到:The object-level evidencem3 is obtained by the object-level fuzzy membership function:
Figure BDA0002919362070000038
in
Figure BDA0002919362070000039
Represents the mass function of the ith cell in the object-level evidence. The fusion evidence of the three sets of evidence is denoted by m. Under the fusion framework of evidence theory, the normalization constant Ki of the ith pixel can be calculated by the following formula:

Figure BDA00029193620700000310
Figure BDA00029193620700000310

融合证据m在第i个像元处关于变化类wc的置信度

Figure BDA00029193620700000311
可由下式计算得到:Confidence of the fusion evidence m about the change class wc at the ith pixel
Figure BDA00029193620700000311
It can be calculated by the following formula:

Figure BDA00029193620700000312
Figure BDA00029193620700000312

(5)融合证据m在第i个像元处关于未变化类wu的置信度

Figure BDA00029193620700000313
可由下式得到:(5) The confidence of the fusion evidence m about the unchanged classwu at the ith pixel
Figure BDA00029193620700000313
It can be obtained by the following formula:

Figure BDA00029193620700000314
Figure BDA00029193620700000314

利用最大置信度原则获取初步融合结果,具体的,用li表示第i个像元在初步融合结果中的类别,li可通过下式得到:Use the principle of maximum confidence to obtain the preliminary fusion result. Specifically, use li to represent the category of the i-th pixel in the preliminary fusion result, and li can be obtained by the following formula:

Figure BDA00029193620700000315
Figure BDA00029193620700000315

wc和wu分别表示变化类和未变化类。wc and wu represent the changed class and the unchanged class, respectively.

上述的Step6中,条件随机场将初步融合结果视作场模型,利用空间上下文信息对初步融合结果进行优化,具体的,引力模型优化的条件随机场通过下式(8)给出的改进能量函数来优化初步融合结果:In the above Step6, the conditional random field regards the preliminary fusion result as a field model, and uses the spatial context information to optimize the preliminary fusion result. Specifically, the conditional random field for the optimization of the gravitational model is an improved energy function given by the following formula (8). to optimize the initial fusion results:

Figure BDA0002919362070000041
Figure BDA0002919362070000041

其中ψ1表示一元势函数,其考虑单个像元的观测信息,ψ2表示二元势函数,二元势函数是考虑像元与其邻域像元之间的相互关系,λ是调节ψ1和ψ2平衡的参数,n表示研究区的像元总数,Ni表示像元i的邻域,li和lj分别表示像元i和其邻域像元j的类别标签;where ψ1 represents a univariate potential function, which considers the observation information of a single pixel, ψ2 represents a binary potential function, which considers the relationship between a pixel and its neighboring pixels, and λ is the adjustment between ψ1 and ψ2 is the balance parameter, n represents the total number of pixels in the study area, Ni represents the neighborhood of pixel i, and li and lj represent the category labels of pixel i and its neighbor pixelj , respectively;

所述一元势详细定义如下:The unary potential is defined in detail as follows:

Figure BDA0002919362070000042
Figure BDA0002919362070000042

其中ψ1(li)表示将像元i分配给类别wu或wc的惩罚系数,ln表示自然对数比算子,

Figure BDA0002919362070000043
表示融合证据m在第i个像元处关于未变化类wu的置信度,通过式(6)计算得到,
Figure BDA0002919362070000044
表示融合证据m在第i个像元处关于变化类wc的置信度,通过式(5)计算得到,li表示像元i的类别标签;where ψ1 (lii ) denotes the penalty coefficient for assigning pixel i to the class wu or wc , ln denotes the natural log ratio operator,
Figure BDA0002919362070000043
represents the confidence of the fusion evidence m about the unchanged class wu at the ith pixel, which is calculated by formula (6),
Figure BDA0002919362070000044
represents the confidence of the fusion evidence m about the change class wc at the ith pixel, which is calculated by formula (5), and li represents the category label of the pixeli ;

所述二元势函数详细定义如下:The binary potential function is defined in detail as follows:

Figure BDA0002919362070000045
Figure BDA0002919362070000045

其中ψ2(li,lj)表示像元i与其邻域像元j之间的相互作用,θ是调节sij与1-vij两项之间的平衡因子,li和lj分别表示像元i和其邻域像元j的类别标签,sij表示经典条件随机场中的二元势函数项,通过下式计算:where ψ2 (lii ,lj ) represents the interaction between pixel i and its neighboring pixel j, θ is the balance factor that adjusts the two terms of sij and 1-vij , andli and l jrespectively represents the category label of the pixel i and its neighbor pixel j, and sij represents the binary potential function term in the classical conditional random field, which is calculated by the following formula:

Figure BDA0002919362070000046
Figure BDA0002919362070000046

其中d(i,j)表示像元i和其邻域像元j在空间域的欧氏距离,xi表示由三组差分影像

Figure BDA0002919362070000047
Figure BDA0002919362070000048
在像元i处灰度值构成的光谱矢量,即
Figure BDA0002919362070000049
Figure BDA0002919362070000051
xj表示由三组差分影像
Figure BDA0002919362070000052
Figure BDA0002919362070000053
在像元j处灰度值构成的光谱矢量,即
Figure BDA0002919362070000054
||xi-xj||2表示矢量xi和xj的欧式距离的平方,<>运算符用于计算像元i及其所有邻域像元j的||xi-xj||2的平均值,vij表示空间引力模型,用于优化传统条件随机场的二元势函数项sij,通过下式定义:where d(i, j) represents the Euclidean distance between pixel i and its neighbor pixel j in the spatial domain, andxi represents the difference between three groups of differential images
Figure BDA0002919362070000047
and
Figure BDA0002919362070000048
The spectral vector formed by the gray value at pixel i, namely
Figure BDA0002919362070000049
Figure BDA0002919362070000051
xj represents three sets of differential images
Figure BDA0002919362070000052
and
Figure BDA0002919362070000053
The spectral vector composed of gray values at pixel j, that is
Figure BDA0002919362070000054
||xi -xj ||2 represents the square of the Euclidean distance between the vectors xi and xj , and the <> operator is used to calculate the ||xi -xj | The mean value of |2 , vij represents the space gravity model, and is used to optimize the binary potential function term sij of the traditional conditional random field, which is defined by the following formula:

Figure BDA0002919362070000055
Figure BDA0002919362070000055

其中li和lj分别表示像元i和其邻域像元j的类别标签,d(i,j)表示像元i和邻域像元j在空间域的欧氏距离,

Figure BDA0002919362070000056
Figure BDA0002919362070000057
表示融合证据m在第i个像元处关于未变化类wu的置信度,通过式(6)计算得到,
Figure BDA0002919362070000058
表示融合证据m在第i个像元处关于变化类wc的置信度,通过式(5)计算得到,
Figure BDA0002919362070000059
Figure BDA00029193620700000510
表示融合证据m在第j个像元处关于未变化类wu的置信度,通过式(6)计算得到,
Figure BDA00029193620700000511
表示融合证据m在第j个像元处关于变化类wc的置信度,通过式(5)计算得到。where li and lj represent the category labels of pixel i and its neighbor pixel j, respectively, d(i,j) represents the Euclidean distance between pixel i and neighbor pixel j in the spatial domain,
Figure BDA0002919362070000056
Figure BDA0002919362070000057
represents the confidence of the fusion evidence m about the unchanged class wu at the ith pixel, which is calculated by formula (6),
Figure BDA0002919362070000058
represents the confidence of the fusion evidence m about the change class wc at the ith pixel, which is calculated by formula (5),
Figure BDA0002919362070000059
Figure BDA00029193620700000510
Represents the confidence of the fusion evidence m about the unchanged class wu at the jth pixel, which is calculated by formula (6),
Figure BDA00029193620700000511
Represents the confidence of the fusion evidence m about the change class wc at the jth pixel, which is calculated by formula (5).

本发明提供的一种引力模型优化条件随机场的多尺度融合图像变化检测方法,将多尺度融合技术、引力模型和条件随机场集成并应用到变化检测中,分别考虑影像像元级、超像元级和对象级三个空间尺度,解决单尺度面向对象变化检测技术检测效果不佳的问题,通过融合高分辨率遥感影像的多尺度信息,并在融合过程中有效地利用空间信息,能够得到较优变化检测结果。The invention provides a multi-scale fusion image change detection method for gravitational model optimization conditional random field, which integrates and applies multi-scale fusion technology, gravitational model and conditional random field to change detection, and considers image pixel level and super-image respectively. Three spatial scales, element level and object level, solve the problem of poor detection effect of single-scale object-oriented change detection technology. By fusing multi-scale information of high-resolution remote sensing images and effectively using spatial information in the fusion process, we can get Better change detection results.

附图说明Description of drawings

下面结合附图和实施例对本发明作进一步说明:Below in conjunction with accompanying drawing and embodiment, the present invention will be further described:

图1是本发明实施例的实验数据T1时刻影像的第1波段;Fig.1 is the first waveband of the image at time T1 of experimental data according to an embodiment of the present invention;

图2是本发明实施例的实验数据T2时刻影像的第1波段;2 is the first waveband of the image at time T2 of experimental data according to an embodiment of the present invention;

图3是本发明实施例的实验数据的变化参考图;Fig. 3 is the variation reference diagram of the experimental data of the embodiment of the present invention;

图4是本发明实施例的流程图;4 is a flowchart of an embodiment of the present invention;

图5是基于模糊C均值聚类算法的变化检测结果图;Fig. 5 is the change detection result graph based on fuzzy C-means clustering algorithm;

图6是基于优化的模糊局部信息C均值算法的变化检测结果图;Fig. 6 is the change detection result graph based on the optimized fuzzy local information C-mean algorithm;

图7是基于数据级多尺度融合法的变化检测结果图;Fig. 7 is the change detection result graph based on the data-level multi-scale fusion method;

图8是基于马氏距离集成盒须图的变化检测结果图;Fig. 8 is the change detection result graph based on Mahalanobis distance integrated box-and-whisker diagram;

图9是基于DS证据理论的变化检测结果图;Fig. 9 is the change detection result graph based on DS evidence theory;

图10是基于K均值聚类集成自适应投票法的变化检测结果图;Figure 10 is a graph of the change detection result based on the K-means clustering integrated adaptive voting method;

图11是本发明实施例的变化检测结果图。FIG. 11 is a graph of a change detection result according to an embodiment of the present invention.

具体实施方式Detailed ways

以下结合附图和实施例详细说明本发明技术方案。The technical solutions of the present invention will be described in detail below with reference to the accompanying drawings and embodiments.

本实施例采用SPOT 5影像做实验,所用影像空间分辨率为2.5米,大小为445×546像元,影像获取时间分别是2008年4月(T1)和2009年2月(T2),对应位置是中国北方某地区。图1、图2和图3分别展示T1和T2时刻影像及其变化参考图,其中变化参考图通过目视解译得到。In this example,SPOT 5 images are used for experiments. The spatial resolution of the images used is 2.5 meters and the size is 445×546 pixels. The image acquisition times are April 2008 (T1 ) and February 2009 (T2 ), respectively. The corresponding location is a certain region in northern China. Figure1 , Figure2 and Figure 3 respectively show the images at time T1 and T2 and their change reference maps, wherein the change reference maps are obtained by visual interpretation.

图4是本发明提出技术的流程图,本发明所采用的技术方案引力模型优化条件随机场的多尺度融合变化检测包括以下步骤:Fig. 4 is the flow chart of the technology proposed by the present invention. The technical solution adopted by the present invention is the multi-scale fusion change detection of the gravitational model optimization condition random field, which includes the following steps:

Step1、对两期高分辨率遥感影像进行预处理,所述的预处理包括影像配准和相对辐射校正等;本实施例参考T1时期的影像,对T2时期的影像进行影像配准和相对辐射校正。Step1. Preprocess the two-phase high-resolution remote sensing images. The preprocessing includes image registration and relative radiometric correction.In this embodiment, referring to the images in the T1 period, image registration and image registration are performedon the T2 period images. Relative radiometric correction.

Step2、设置相同的超像元分割参数和对象分割参数分别对两幅影像实施分割,分别获得两期影像的超像元级和对象级两个空间尺度影像,包含原始影像像元级尺度,共得到两期影像的三种空间尺度影像。Step 2. Set the same superpixel segmentation parameters and object segmentation parameters to segment the two images respectively, and obtain two spatial scale images of the superpixel level and object level of the two images respectively, including the pixel level scale of the original image. Three spatial scale images of two phase images were obtained.

本实施例中,设T1时刻和T2时刻的高分辨率遥感影像分别为X1和X2,采用简单线性迭代聚类方法SLIC对X1和X2进行分割,得到超像元分割图S1和S2,采用多分辨率分割方法对X1和X2进行分割,得到对象分割图O1和O2In this embodiment, the high-resolution remote sensing images at time T1 and time T2 are set as X1 and X2 respectively, and the simple linear iterative clustering method SLIC is used to segment X1 and X2 to obtain a superpixel segmentation map S1 and S2 , adopt a multi-resolution segmentation method to segment X1 and X2 to obtain object segmentation maps O1 and O2 .

Step3、利用变化矢量分析技术来生成两期影像的差分影像;Step3, use the change vector analysis technology to generate the differential image of the two-phase images;

本步骤中,像元级,超像元级,对象级差分影像分别记为

Figure BDA0002919362070000061
Figure BDA0002919362070000062
所述三组差分影像的计算过程具体实施如下:In this step, pixel-level, super-pixel-level, and object-level differential images are respectively recorded as
Figure BDA0002919362070000061
and
Figure BDA0002919362070000062
The specific implementation of the calculation process of the three groups of differential images is as follows:

Figure BDA0002919362070000071
Figure BDA0002919362070000071

Figure BDA0002919362070000072
Figure BDA0002919362070000072

Figure BDA0002919362070000073
Figure BDA0002919362070000073

其中,

Figure BDA0002919362070000074
表示t时刻像元级影像第b波段的第i个像元,
Figure BDA0002919362070000075
表示t时刻超像元级影像第b波段的第i个像元,
Figure BDA0002919362070000076
表示t时刻对象级影像第b波段的第i个像元,t=1,2,b=1,2,…,B,B为波段数,本实施例中B=3,像元级、超像元级和对象级差分影像分别由式(1)、式(2)和式(3)计算得到;in,
Figure BDA0002919362070000074
represents the i-th pixel of the b-th band of the pixel-level image at time t,
Figure BDA0002919362070000075
represents the i-th pixel of the b-th band of the superpixel-level image at time t,
Figure BDA0002919362070000076
Indicates the i-th pixel of the b-th band of the object-level image at time t, t=1, 2, b=1, 2, ..., B, B is the number of bands, in this embodiment B=3, pixel-level, super The pixel-level and object-level differential images are calculated by formula (1), formula (2) and formula (3) respectively;

Step4、对

Figure BDA0002919362070000077
Figure BDA0002919362070000078
三种空间尺度的差分影像分别进行模糊C均值聚类,得到三种空间尺度差分影像的模糊隶属度函数;在本步中,得到的像元级、超像元级和对象级三种空间尺度差分影像的模糊隶属度函数分别记作
Figure BDA0002919362070000079
Figure BDA00029193620700000710
Figure BDA00029193620700000711
其中k∈{wc,wu},wc和wu分别表示变化类和未变化类,
Figure BDA00029193620700000712
表示像元级影像第i个像元属于第k类的隶属度,
Figure BDA00029193620700000713
表示超像元级影像第i个像元属于第k类的隶属度,
Figure BDA00029193620700000714
表示对象级影像第i个像元属于第k类的隶属度。Step4, yes
Figure BDA0002919362070000077
and
Figure BDA0002919362070000078
The difference images of the three spatial scales are respectively subjected to fuzzy C-means clustering, and the fuzzy membership functions of the difference images of the three spatial scales are obtained; in this step, the obtained three spatial scales of pixel level, superpixel level and object level The fuzzy membership function of the difference image is written as
Figure BDA0002919362070000079
Figure BDA00029193620700000710
and
Figure BDA00029193620700000711
where k∈{wc ,wu },wc and wu represent the changed class and the unchanged class, respectively,
Figure BDA00029193620700000712
represents the membership degree of the i-th pixel of the pixel-level image belonging to the k-th class,
Figure BDA00029193620700000713
represents the membership degree of the ith pixel of the superpixel-level image belonging to the kth class,
Figure BDA00029193620700000714
Indicates the membership degree of the i-th pixel of the object-level image belonging to the k-th class.

Step5、应用决策融级融合技术融合三种空间尺度差分影像的模糊隶属度函数,获得初步的融合结果;Step 5. Apply the decision fusion level fusion technology to fuse the fuzzy membership functions of the three spatial scale difference images to obtain preliminary fusion results;

具体的,本实施例将三种空间尺度差分影像的模糊隶属度函数视作三组证据,采用证据理论对三组证据进行决策级融合。像元级证据m1由像元级模糊隶属度函数得到:

Figure BDA00029193620700000715
其中k∈{wc,wu},wc和wu分别表示变化类和未变化类,
Figure BDA00029193620700000716
表示像元级证据中第i个像元的质量函数。超像元级证据m2由超像元级模糊隶属度函数得到:
Figure BDA00029193620700000717
其中
Figure BDA00029193620700000718
表示超像元级证据中第i个像元的质量函数。对象级证据m3由对象级模糊隶属度函数得到:
Figure BDA00029193620700000719
其中
Figure BDA00029193620700000720
表示对象级证据中第i个像元的质量函数。三组证据的融合证据用m表示。在证据理论的融合框架下,第i个像元的归一化常数Ki可由下式计算得到:Specifically, in this embodiment, the fuzzy membership functions of the three spatial scale difference images are regarded as three sets of evidence, and the evidence theory is used to perform decision-level fusion of the three sets of evidence. The pixel-level evidence m1 is obtained from the pixel-level fuzzy membership function:
Figure BDA00029193620700000715
where k∈{wc ,wu },wc and wu represent the changed class and the unchanged class, respectively,
Figure BDA00029193620700000716
Represents the quality function of the ith cell in the cell-level evidence. The superpixel-level evidencem2 is obtained from the superpixel-level fuzzy membership function:
Figure BDA00029193620700000717
in
Figure BDA00029193620700000718
Represents the mass function of the ith pixel in superpixel-level evidence. The object-level evidencem3 is obtained by the object-level fuzzy membership function:
Figure BDA00029193620700000719
in
Figure BDA00029193620700000720
Represents the mass function of the ith cell in the object-level evidence. The fusion evidence of the three sets of evidence is denoted by m. Under the fusion framework of evidence theory, the normalization constant Ki of the ith pixel can be calculated by the following formula:

Figure BDA0002919362070000081
Figure BDA0002919362070000081

融合证据m在第i个像元处关于变化类wc的置信度

Figure BDA0002919362070000082
可由下式计算得到:Confidence of the fusion evidence m about the change class wc at the ith pixel
Figure BDA0002919362070000082
It can be calculated by the following formula:

Figure BDA0002919362070000083
Figure BDA0002919362070000083

融合证据m在第i个像元处关于未变化类wu的置信度

Figure BDA0002919362070000084
可由下式得到:Confidence of the fused evidence m about the unchanged class wu at the ith pixel
Figure BDA0002919362070000084
It can be obtained by the following formula:

Figure BDA0002919362070000085
Figure BDA0002919362070000085

利用最大置信度原则获取初步融合结果,具体的,用li表示第i个像元在初步融合结果中的类别,li可通过下式得到:Use the principle of maximum confidence to obtain the preliminary fusion result. Specifically, use li to represent the category of the i-th pixel in the preliminary fusion result, and li can be obtained by the following formula:

Figure BDA0002919362070000086
Figure BDA0002919362070000086

wc和wu分别表示变化类和未变化类。wc and wu represent the changed class and the unchanged class, respectively.

Step6、利用引力模型优化的条件随机场模型优化所述初步融合结果,得到最终变化检测图;Step6, using the conditional random field model optimized by the gravitational model to optimize the preliminary fusion result to obtain the final change detection map;

在本步中,条件随机场将初步融合结果视作场模型,利用空间上下文信息对初步融合结果进行优化。具体的,引力模型优化的条件随机场通过式(8)给出的改进能量函数来优化初步融合结果:In this step, the conditional random field treats the preliminary fusion result as a field model, and uses the spatial context information to optimize the preliminary fusion result. Specifically, the conditional random field for the optimization of the gravitational model optimizes the initial fusion result through the improved energy function given by equation (8):

Figure BDA0002919362070000087
Figure BDA0002919362070000087

其中ψ1表示一元势函数,其主要考虑单个像元的观测信息,ψ2表示二元势函数,二元势函数是主要考虑像元与其邻域像元之间的相互关系。λ是调节ψ1和ψ2平衡的参数,在本实施例中λ=0.09,n表示研究区的像元总数,Ni表示像元i的邻域,本实施例采用5×5的邻域窗口,li和lj分别表示像元i和其邻域像元j的类别标签。Among them, ψ1 represents a univariate potential function, which mainly considers the observation information of a single pixel, ψ2 represents a binary potential function, and the binary potential function mainly considers the relationship between a pixel and its neighboring pixels. λ is a parameter for adjusting the balance of ψ1 and ψ2. In this embodiment, λ=0.09, n represents the total number of pixels in the study area, and Ni represents the neighborhood of pixel i. This embodiment uses a 5×5 neighborhood Window, li and lj represent the class labels of cell i and its neighbor cell j, respectively.

所述一元势详细定义如下:The unary potential is defined in detail as follows:

Figure BDA0002919362070000091
Figure BDA0002919362070000091

其中ψ1(li)表示将像元i分配给类别wu或wc的惩罚系数,ln表示自然对数比算子,

Figure BDA0002919362070000092
表示融合证据m在第i个像元处关于未变化类wu的置信度,通过式(6)计算得到,
Figure BDA0002919362070000093
表示融合证据m在第i个像元处关于变化类wc的置信度,通过式(5)计算得到,li表示像元i的类别标签。所述二元势函数详细定义如下:where ψ1 (lii ) denotes the penalty coefficient for assigning pixel i to the class wu or wc , ln denotes the natural log ratio operator,
Figure BDA0002919362070000092
represents the confidence of the fusion evidence m about the unchanged class wu at the ith pixel, which is calculated by formula (6),
Figure BDA0002919362070000093
Represents the confidence of the fusion evidence m about the change class wc at the ith pixel, which is calculated by formula (5), and li represents the category label of the pixeli . The binary potential function is defined in detail as follows:

Figure BDA0002919362070000094
Figure BDA0002919362070000094

其中ψ2(li,lj)表示像元i与其邻域像元j之间的相互作用,θ是调节sij与1-vij两项之间的平衡因子,本实施例中θ=2,li和lj分别表示像元i和其邻域像元j的类别标签,sij表示经典条件随机场中的二元势函数项,通过下式计算:where ψ2 (lii ,lj ) represents the interaction between pixel i and its neighboring pixel j, and θ is the balance factor that adjusts between sij and 1-vij . In this embodiment, θ= 2, li and lj represent the category labels of pixel i and its neighbor pixel j, respectively, and sij represents the binary potential function term in the classical conditional random field, which is calculated by the following formula:

Figure BDA0002919362070000095
Figure BDA0002919362070000095

其中d(i,j)表示像元i和其邻域像元j在空间域的欧氏距离,xi表示由三组差分影像

Figure BDA0002919362070000096
Figure BDA0002919362070000097
在像元i处灰度值构成的光谱矢量,即
Figure BDA0002919362070000098
Figure BDA0002919362070000099
xj表示由三组差分影像
Figure BDA00029193620700000910
Figure BDA00029193620700000911
在像元j处灰度值构成的光谱矢量,即
Figure BDA00029193620700000912
||xi-xj||2表示矢量xi和xj的欧式距离的平方,<>运算符用于计算像元i及其所有邻域像元j的||xi-xj||2的平均值,vij表示空间引力模型,用于优化传统条件随机场的二元势函数项sij,通过下式定义:where d(i, j) represents the Euclidean distance between pixel i and its neighbor pixel j in the spatial domain, andxi represents the difference between three groups of differential images
Figure BDA0002919362070000096
and
Figure BDA0002919362070000097
The spectral vector formed by the gray value at pixel i, namely
Figure BDA0002919362070000098
Figure BDA0002919362070000099
xj represents three sets of differential images
Figure BDA00029193620700000910
and
Figure BDA00029193620700000911
The spectral vector composed of gray values at pixel j, that is
Figure BDA00029193620700000912
||xi -xj ||2 represents the square of the Euclidean distance between the vectors xi and xj , and the <> operator is used to calculate the ||xi -xj | The mean value of |2 , vij represents the space gravity model, and is used to optimize the binary potential function term sij of the traditional conditional random field, which is defined by the following formula:

Figure BDA00029193620700000913
Figure BDA00029193620700000913

其中li和lj分别表示像元i和其邻域像元j的类别标签,d(i,j)表示像元i和邻域像元j在空间域的欧氏距离,

Figure BDA00029193620700000914
Figure BDA00029193620700000915
表示融合证据m在第i个像元处关于未变化类wu的置信度,通过式(6)计算得到,
Figure BDA0002919362070000101
表示融合证据m在第i个像元处关于变化类wc的置信度,通过式(5)计算得到,
Figure BDA0002919362070000102
Figure BDA0002919362070000103
表示融合证据m在第j个像元处关于未变化类wu的置信度,通过式(6)计算得到,
Figure BDA0002919362070000104
表示融合证据m在第j个像元处关于变化类wc的置信度,通过式(5)计算得到。where li and lj represent the category labels of pixel i and its neighbor pixel j, respectively, d(i,j) represents the Euclidean distance between pixel i and neighbor pixel j in the spatial domain,
Figure BDA00029193620700000914
Figure BDA00029193620700000915
represents the confidence of the fusion evidence m about the unchanged class wu at the ith pixel, which is calculated by formula (6),
Figure BDA0002919362070000101
represents the confidence of the fusion evidence m about the change class wc at the ith pixel, which is calculated by formula (5),
Figure BDA0002919362070000102
Figure BDA0002919362070000103
Represents the confidence of the fusion evidence m about the unchanged class wu at the jth pixel, which is calculated by formula (6),
Figure BDA0002919362070000104
Represents the confidence of the fusion evidence m about the change class wc at the jth pixel, which is calculated by formula (5).

可以采用不同的技术理论对式(8)给出的能量函数进行最小化,从而实现对初步融合结果的优化改进,在本实施例中,采用最大流算法来最小化式(8)给出的能量函数。Different technical theories can be used to minimize the energy function given by equation (8), so as to realize the optimization and improvement of the preliminary fusion result. In this embodiment, the maximum flow algorithm is used to minimize the energy function given by equation (8). energy function.

如图5-11所示,分别给出模糊C均值聚类算法、优化的模糊局部信息C均值算法、数据级多尺度融合法、马氏距离集成盒须图、DS证据理论、K均值聚类集成自适应多数投票法和本发明的变化检测图,表1给出上述不同变化检测方法变化检测图的统计结果。As shown in Figure 5-11, the fuzzy C-means clustering algorithm, the optimized fuzzy local information C-means algorithm, the data-level multi-scale fusion method, the Mahalanobis distance integrated box-and-whisker plot, DS evidence theory, and K-means clustering are given respectively. Integrating the adaptive majority voting method and the change detection map of the present invention, Table 1 shows the statistical results of the change detection maps of the above-mentioned different change detection methods.

表1不同变化检测方法结果的统计比较Table 1 Statistical comparison of results of different change detection methods

Figure BDA0002919362070000105
Figure BDA0002919362070000105

对比图5-11给出的不同变化检测方法的变化检测图和表1给出的统计结果可知,本发明的变化检测效果明显优于其它对比变化检测算法,本发明的检测结果同时具有最小的总体错误和最高的Kappa系数,本发明检测结果的总体错误为15359,分别比模糊C均值聚类算法、优化的模糊局部信息C均值算法、数据级多尺度融合法、马氏距离集成盒须图、DS证据理论、K均值聚类集成自适应多数投票法减少22771、10343、11083、9126、18878和6684个像元,本发明检测结果的Kappa系数为0.74,分别比模糊C均值聚类算法、优化的模糊局部信息C均值算法、数据级多尺度融合法、马氏距离集成盒须图、DS证据理论、K均值聚类集成自适应多数投票法提高26%、15%、18%、16%、22%和10%。Comparing the change detection diagrams of different change detection methods given in Figures 5-11 and the statistical results given in Table 1, it can be seen that the change detection effect of the present invention is obviously better than other comparative change detection algorithms, and the detection result of the present invention has the smallest The overall error and the highest Kappa coefficient, the overall error of the detection result of the present invention is 15359, which are respectively higher than the fuzzy C-means clustering algorithm, the optimized fuzzy local information C-means algorithm, the data-level multi-scale fusion method, and the Mahalanobis distance integrated box-and-whisker plot. , DS evidence theory, K-means clustering integrated adaptive majority voting method reduces 22771, 10343, 11083, 9126, 18878 and 6684 pixels, the Kappa coefficient of the detection result of the present invention is 0.74, which is respectively better than the fuzzy C-means clustering algorithm, Optimized fuzzy local information C-means algorithm, data-level multi-scale fusion method, Mahalanobis distance integrated box-and-whisker plot, DS evidence theory, K-means clustering integrated adaptive majority voting method improved by 26%, 15%, 18%, 16% , 22% and 10%.

本发明提出的变化检测技术--引力模型优化条件随机场的多尺度融合变化检测,能够有效融合高分辨率遥感影像多种尺度信息,并在融合过程中充分利用影像的上下文信息,从而能够很大程度上解决单尺度面向对象变化检测不能很好利用高分辨率遥感影像多尺度特征的不足,提升变化检测的精度,取得较优的变化检测结果。The change detection technology proposed in the present invention, the multi-scale fusion change detection of the random field of the gravitational model optimization condition, can effectively fuse various scale information of high-resolution remote sensing images, and make full use of the context information of the images in the fusion process, so that the To a large extent, it solves the problem that single-scale object-oriented change detection cannot make good use of multi-scale features of high-resolution remote sensing images, improves the accuracy of change detection, and achieves better change detection results.

以上仅为本发明的一个实施例,并非用于限定本发明的保护范围,因此,凡在本发明的精神和原则之内所作的任何修改、等同替换、改进等,均应包含在本发明的保护范围之内。The above is only an embodiment of the present invention and is not intended to limit the protection scope of the present invention. Therefore, any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention shall be included in the scope of the present invention. within the scope of protection.

Claims (5)

1. The multi-scale fusion image change detection method for the gravity model optimization conditional random field is characterized by comprising the following steps of:
step1, preprocessing two-stage high-resolution remote sensing images shot by the same target at different periods, wherein the preprocessing comprises image registration and relative radiation correction;
step2, respectively carrying out superpixel segmentation on the two images by adopting a simple linear iterative clustering method, respectively carrying out object-oriented segmentation on the two images by adopting a multi-resolution segmentation method, and adding the original images to obtain images with three spatial scales of a pixel level, a superpixel level and an object level;
step3, respectively generating pixel-level, super-pixel-level and object-level differential images of the two-stage image by using a change vector analysis method;
step4, carrying out fuzzy clustering on the differential images of the three spatial scales in Step3 to obtain fuzzy membership functions of the differential images of the three spatial scales;
step5, fusing the three groups of fuzzy membership functions obtained in Step4 by using a decision-level fusion method to obtain a primary fusion result;
and Step6, optimizing the preliminary fusion result in Step5 by using the conditional random field model optimized by the gravity model to obtain a final change detection diagram.
2. The method for detecting changes in multi-scale fusion images of a gravity model optimized conditional random field according to claim 1, wherein the Step2 defines a two-stage high-resolution remote sensing image capturing time as T1Time and T2At each moment, the corresponding remote sensing images are X1And X2Using simple linear iterative clustering method to X1And X2Carrying out segmentation to obtain a superpixel segmentation graph S1And S2Using multi-resolution division method to X1And X2Performing segmentation to obtain an object segmentation graph O1And O2
In Step3, pixel-level, super-pixel-level, and object-level differential images are respectively recorded as
Figure FDA0002919362060000011
And
Figure FDA0002919362060000012
the calculation formula of the three groups of difference images is as follows:
Figure FDA0002919362060000013
Figure FDA0002919362060000014
Figure FDA0002919362060000021
wherein,
Figure FDA0002919362060000022
the ith pixel of the b-th wave band of the image at the pixel level at the time t,
Figure FDA0002919362060000023
the ith pixel of the b-th wave band of the super pixel level image at the time t,
Figure FDA0002919362060000024
and an ith pixel which represents the B-th wave band of the object-level image at the time t, wherein t is 1,2, B is 1,2, …, and B is the wave band number.
3. The gravity model optimized conditional random field multi-scale fusion image change detection method according to claim 2, wherein Step4 is applied to
Figure FDA0002919362060000025
And
Figure FDA0002919362060000026
fuzzy C-means clustering is respectively carried out on the differential images of the three spatial scales to obtain fuzzy membership functions of the differential images of the three spatial scales, and the obtained fuzzy membership functions of the differential images of the pixel level, the super pixel level and the object level are respectively recorded as
Figure FDA0002919362060000027
And
Figure FDA0002919362060000028
where k ∈ { w ∈ }c,wu},wcAnd wuRespectively representing a changed class and an unchanged class,
Figure FDA0002919362060000029
the membership degree of the ith pixel of the pixel-level image belonging to the kth class is represented,
Figure FDA00029193620600000210
the membership degree of the ith pixel of the super pixel level image belonging to the kth class is represented,
Figure FDA00029193620600000211
and the membership degree of the ith pixel of the object-level image belonging to the kth class is represented.
4. The method for detecting multi-scale fusion image change of a gravity model optimized conditional random field according to claim 3, wherein Step5 is implemented by taking fuzzy membership functions of three spatial scale difference images as three groups of evidences, performing decision-level fusion on the three groups of evidences by adopting an evidence theory, and performing pixel-level evidence m1The fuzzy membership function of the pixel level is obtained as follows:
Figure FDA00029193620600000212
where k ∈ { w ∈ }c,wu},wcAnd wuRespectively representing a changed class and an unchanged class,
Figure FDA00029193620600000213
representing a quality function of an ith pixel in the pixel-level evidence;
superpixel level evidence m2The fuzzy membership function of the super pixel level is obtained by:
Figure FDA00029193620600000214
wherein
Figure FDA00029193620600000215
Representing a quality function of the ith pixel in the super pixel level evidence;
object-level evidence m3The objective-level fuzzy membership function yields:
Figure FDA00029193620600000216
wherein
Figure FDA00029193620600000217
Representing the quality function of the ith pixel in the object-level evidence. The fusion evidence of the three sets of evidence is denoted by m. Under the fusion framework of evidence theoryNormalized constant K of the ith pixeliCan be calculated from the following formula:
Figure FDA00029193620600000218
fusion evidence m about change class w at the ith pixel elementcDegree of confidence of
Figure FDA0002919362060000031
Can be calculated from the following formula:
Figure FDA0002919362060000032
(5) fused evidence m about unchanged class w at the ith pixel elementuDegree of confidence of
Figure FDA0002919362060000033
Can be obtained by the following formula:
Figure FDA0002919362060000034
obtaining the preliminary fusion result by using the maximum confidence principle, specifically, using liIndicates the category of the ith pixel in the preliminary fusion result, liCan be obtained by the following formula:
Figure FDA0002919362060000035
wcand wuRespectively representing a changed class and an unchanged class.
5. The gravity model optimized conditional random field multi-scale fusion image change detection method according to claim 4, wherein in Step6, the conditional random field regards the preliminary fusion result as a field model, and optimizes the preliminary fusion result by using spatial context information, and specifically, the gravity model optimized conditional random field optimizes the preliminary fusion result by using an improved energy function given by the following formula (8):
Figure FDA0002919362060000036
wherein psi1Representing a univariate potential function which takes into account the observation information of the individual picture elements, #2Representing a binary potential function, the binary potential function being such as to take into account the interrelationship between a picture element and its neighbourhood picture elements, and λ being the adjustment ψ1And psi2A parameter of balance, N representing the total number of picture elements of the investigation region, NiRepresenting the neighborhood of the pixel i, liAnd ljRespectively representing the category labels of the pixel i and the neighborhood pixel j;
the univariate potential is defined in detail as follows:
Figure FDA0002919362060000037
wherein psi1(li) Indicating the assignment of a pel i to a class wuOr wcThe penalty coefficient of (1), ln represents a natural logarithmic ratio operator,
Figure FDA0002919362060000038
indicating that the fused evidence m is about the unchanged class w at the ith pixel elementuThe confidence of (2) is calculated by the formula (6),
Figure FDA0002919362060000039
indicating that the fusion evidence m is about the change class w at the ith pixel elementcIs calculated by the formula (5) and liA category label representing the pixel i;
the binary potential function is defined in detail as follows:
Figure FDA0002919362060000041
wherein psi2(li,lj) Representing the interaction between a pixel i and its neighbor pixel j, θ is the adjustment sijAnd 1-vijBalance factor between two terms, /)iAnd ljClass labels, s, representing picture elements i and its neighborhood picture elements j, respectivelyijRepresenting a binary potential function term in a classical conditional random field, calculated by:
Figure FDA0002919362060000042
where d (i, j) represents the Euclidean distance of pixel i and its neighborhood pixel j in the spatial domain, xiRepresenting a three-group difference image
Figure FDA0002919362060000043
And
Figure FDA0002919362060000044
spectral vectors formed by grey values at pixel i, i.e.
Figure FDA0002919362060000045
Figure FDA0002919362060000046
xjRepresenting a three-group difference image
Figure FDA0002919362060000047
And
Figure FDA0002919362060000048
spectral vector formed by gray values at pixel j, i.e.
Figure FDA0002919362060000049
||xi-xj||2Expression vectorQuantity xiAnd xjThe square of the euclidean distance of (c),<>the operator is used for calculating | x of the pixel i and all neighborhood pixels j thereofi-xj||2Average value of vijRepresenting a spatial gravity model for optimizing a binary potential function term s of a conventional conditional random fieldijDefined by the formula:
Figure FDA00029193620600000410
wherein liAnd ljRespectively representing the class labels of the pixel i and the neighborhood pixel j thereof, d (i, j) represents the Euclidean distance of the pixel i and the neighborhood pixel j in a space domain,
Figure FDA00029193620600000411
Figure FDA00029193620600000412
indicating that the fused evidence m is about the unchanged class w at the ith pixel elementuThe confidence of (2) is calculated by the formula (6),
Figure FDA00029193620600000413
indicating that the fusion evidence m is about the change class w at the ith pixel elementcThe confidence of (2) is calculated by the formula (5),
Figure FDA0002919362060000051
Figure FDA0002919362060000052
indicating that the fused evidence m is about the unchanged class w at the jth pixel elementuThe confidence of (2) is calculated by the formula (6),
Figure FDA0002919362060000053
indicating that the fusion evidence m is about the change class w at the jth pixel elementcThe confidence of (3) is calculated by the formula (5).
CN202110111647.1A2021-01-272021-01-27 Multi-scale fusion image change detection method based on conditional random field optimized by gravity modelActiveCN112767376B (en)

Priority Applications (1)

Application NumberPriority DateFiling DateTitle
CN202110111647.1ACN112767376B (en)2021-01-272021-01-27 Multi-scale fusion image change detection method based on conditional random field optimized by gravity model

Applications Claiming Priority (1)

Application NumberPriority DateFiling DateTitle
CN202110111647.1ACN112767376B (en)2021-01-272021-01-27 Multi-scale fusion image change detection method based on conditional random field optimized by gravity model

Publications (2)

Publication NumberPublication Date
CN112767376Atrue CN112767376A (en)2021-05-07
CN112767376B CN112767376B (en)2023-07-11

Family

ID=75706137

Family Applications (1)

Application NumberTitlePriority DateFiling Date
CN202110111647.1AActiveCN112767376B (en)2021-01-272021-01-27 Multi-scale fusion image change detection method based on conditional random field optimized by gravity model

Country Status (1)

CountryLink
CN (1)CN112767376B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication numberPriority datePublication dateAssigneeTitle
CN113240689A (en)*2021-06-012021-08-10安徽建筑大学Method for rapidly extracting flood disaster area
CN113298137A (en)*2021-05-212021-08-24青岛星科瑞升信息科技有限公司Hyperspectral image classification method based on local similarity data gravitation
CN114091508A (en)*2021-09-032022-02-25三峡大学 A Change Detection Method for Unsupervised Remote Sensing Image Based on Higher-Order Conditional Random Fields
CN114359693A (en)*2021-12-102022-04-15三峡大学 Change detection method of high-resolution remote sensing image based on fuzzy clustering of superpixels

Citations (9)

* Cited by examiner, † Cited by third party
Publication numberPriority datePublication dateAssigneeTitle
US20080101678A1 (en)*2006-10-252008-05-01Agfa Healthcare NvMethod for Segmenting Digital Medical Image
US20090316988A1 (en)*2008-06-182009-12-24Samsung Electronics Co., Ltd.System and method for class-specific object segmentation of image data
CN104361589A (en)*2014-11-122015-02-18河海大学High-resolution remote sensing image segmentation method based on inter-scale mapping
CN107085708A (en)*2017-04-202017-08-22哈尔滨工业大学 Change detection method for high-resolution remote sensing images based on multi-scale segmentation and fusion
CN109389571A (en)*2017-08-032019-02-26香港理工大学深圳研究院A kind of remote sensing image variation detection method, device and terminal
CN109409389A (en)*2017-08-162019-03-01香港理工大学深圳研究院A kind of object-oriented change detecting method merging multiple features
CN109903274A (en)*2019-01-312019-06-18兰州交通大学 A high-resolution remote sensing image change detection method and system
CN110516754A (en)*2019-08-302019-11-29大连海事大学Hyperspectral image classification method based on multi-scale superpixel segmentation
CN110738672A (en)*2019-10-182020-01-31西安交通大学深圳研究院image segmentation method based on hierarchical high-order conditional random field

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication numberPriority datePublication dateAssigneeTitle
US20080101678A1 (en)*2006-10-252008-05-01Agfa Healthcare NvMethod for Segmenting Digital Medical Image
US20090316988A1 (en)*2008-06-182009-12-24Samsung Electronics Co., Ltd.System and method for class-specific object segmentation of image data
CN104361589A (en)*2014-11-122015-02-18河海大学High-resolution remote sensing image segmentation method based on inter-scale mapping
CN107085708A (en)*2017-04-202017-08-22哈尔滨工业大学 Change detection method for high-resolution remote sensing images based on multi-scale segmentation and fusion
CN109389571A (en)*2017-08-032019-02-26香港理工大学深圳研究院A kind of remote sensing image variation detection method, device and terminal
CN109409389A (en)*2017-08-162019-03-01香港理工大学深圳研究院A kind of object-oriented change detecting method merging multiple features
CN109903274A (en)*2019-01-312019-06-18兰州交通大学 A high-resolution remote sensing image change detection method and system
CN110516754A (en)*2019-08-302019-11-29大连海事大学Hyperspectral image classification method based on multi-scale superpixel segmentation
CN110738672A (en)*2019-10-182020-01-31西安交通大学深圳研究院image segmentation method based on hierarchical high-order conditional random field

Non-Patent Citations (11)

* Cited by examiner, † Cited by third party
Title
SUSMITA GHOSH, ET.AL: "Unsupervised change detection of remotely sensed images using fuzzy clustering", 《2009 SEVENTH INTERNATIONAL CONFERENCE ON ADVANCES IN PATTERN RECOGNITION》*
SUSMITA GHOSH, ET.AL: "Unsupervised change detection of remotely sensed images using fuzzy clustering", 《2009 SEVENTH INTERNATIONAL CONFERENCE ON ADVANCES IN PATTERN RECOGNITION》, 13 February 2009 (2009-02-13)*
YULIYA TARABALKA, ET.AL: "SVM- and MRF-based method for accurate classification of hyperspectral images", 《IEEE GEOSCIENCE AND REMOTE SENSING LETTERS》*
YULIYA TARABALKA, ET.AL: "SVM- and MRF-based method for accurate classification of hyperspectral images", 《IEEE GEOSCIENCE AND REMOTE SENSING LETTERS》, vol. 7, no. 4, 31 October 2010 (2010-10-31), XP011309056*
张华等: "《遥感数据可靠性分类方法研究》", 31 March 2016, 测绘出版社, pages: 95 - 99*
赵婕: "《图像特征提取与语义分析》", 31 July 2015, 重庆大学出版社, pages: 169 - 172*
邵攀: "非监督遥感变化检测模糊方法研究", 《中国优秀博硕士学位论文全文数据库(博士)基础科学辑》*
邵攀: "非监督遥感变化检测模糊方法研究", 《中国优秀博硕士学位论文全文数据库(博士)基础科学辑》, no. 1, 31 January 2020 (2020-01-31), pages 82 - 83*
金秋含等: "自适应空间信息MRF的FCM遥感图像聚类", 《计算机工程与设计》*
金秋含等: "自适应空间信息MRF的FCM遥感图像聚类", 《计算机工程与设计》, vol. 40, no. 8, 31 August 2019 (2019-08-31)*
龚健雅: "《对地观测数据处理与分析研究进展》", 31 December 2017, 武汉大学出版社, pages: 151 - 152*

Cited By (4)

* Cited by examiner, † Cited by third party
Publication numberPriority datePublication dateAssigneeTitle
CN113298137A (en)*2021-05-212021-08-24青岛星科瑞升信息科技有限公司Hyperspectral image classification method based on local similarity data gravitation
CN113240689A (en)*2021-06-012021-08-10安徽建筑大学Method for rapidly extracting flood disaster area
CN114091508A (en)*2021-09-032022-02-25三峡大学 A Change Detection Method for Unsupervised Remote Sensing Image Based on Higher-Order Conditional Random Fields
CN114359693A (en)*2021-12-102022-04-15三峡大学 Change detection method of high-resolution remote sensing image based on fuzzy clustering of superpixels

Also Published As

Publication numberPublication date
CN112767376B (en)2023-07-11

Similar Documents

PublicationPublication DateTitle
CN112767376A (en)Multi-scale fusion image change detection method for gravity model optimization conditional random field
Zhou et al.Scale adaptive image cropping for UAV object detection
CN112906531B (en)Multi-source remote sensing image space-time fusion method and system based on non-supervision classification
Qu et al.Cycle-refined multidecision joint alignment network for unsupervised domain adaptive hyperspectral change detection
CN109829519B (en)Remote sensing image classification method and system based on self-adaptive spatial information
US9449395B2 (en)Methods and systems for image matting and foreground estimation based on hierarchical graphs
CN109376641A (en) A moving vehicle detection method based on UAV aerial video
Chen et al.Change detection in multi-temporal VHR images based on deep Siamese multi-scale convolutional networks
CN114202694A (en)Small sample remote sensing scene image classification method based on manifold mixed interpolation and contrast learning
Han et al.The edge-preservation multi-classifier relearning framework for the classification of high-resolution remotely sensed imagery
Lu et al.A novel synergetic classification approach for hyperspectral and panchromatic images based on self-learning
Zhao et al.Point based weakly supervised deep learning for semantic segmentation of remote sensing images
Lv et al.Iterative sample generation and balance approach for improving hyperspectral remote sensing imagery classification with deep learning network
Wu et al.Improved mask R-CNN-based cloud masking method for remote sensing images
CN109002771A (en)A kind of Classifying Method in Remote Sensing Image based on recurrent neural network
Zhang et al.Spatial contextual superpixel model for natural roadside vegetation classification
Chang et al.Unsupervised multi-view graph contrastive feature learning for hyperspectral image classification
CN119206191B (en)Built-in lightweight new energy self-adaptive detection method
Zhang et al.Locally homogeneous covariance matrix representation for hyperspectral image classification
Derivaux et al.Watershed segmentation of remotely sensed images based on a supervised fuzzy pixel classification
CN109242885B (en)Correlation filtering video tracking method based on space-time non-local regularization
Dong et al.Multilevel spatial feature-based manifold metric learning for domain adaptation in remote sensing image classification
CN115456942A (en) Change Detection Method Based on Label Constrained Superpixel Conditional Random Field
CN116704378A (en)Homeland mapping data classification method based on self-growing convolution neural network
Zhao et al.Sample Augmentation and Balance Approach for Improving Classification Performance with High-Resolution Remote Sensed Image

Legal Events

DateCodeTitleDescription
PB01Publication
PB01Publication
SE01Entry into force of request for substantive examination
SE01Entry into force of request for substantive examination
GR01Patent grant
GR01Patent grant

[8]ページ先頭

©2009-2025 Movatter.jp