CN107292328A - Multi-scale multi-feature fusion method and system for remote sensing image shadow detection and extraction - Google Patents

Multi-scale multi-feature fusion method and system for remote sensing image shadow detection and extraction

Info

Publication number
CN107292328A
Authority
CN
China
Prior art keywords
shadow
remote sensing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610202779.4A
Other languages
Chinese (zh)
Inventor
邵振峰
罗晖
李德仁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University WHU
Original Assignee
Wuhan University WHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University (WHU)
Priority to CN201610202779.4A
Publication of CN107292328A
Legal status: Pending

Abstract

Translated from Chinese

The present invention provides a multi-scale, multi-feature fusion method and system for shadow detection and extraction from remote sensing images. The method comprises: segmenting the original high-resolution remote sensing image into objects at multiple scales to obtain initial image objects; transforming the image into the RGB and HSI color spaces to obtain three invariant shadow color features and the corresponding binary candidate shadow images; for the object segmentation result at each scale, performing object-oriented shadow extraction by fusing the invariant shadow spectral features extracted in step b with D-S evidence theory; and performing decision fusion of the shadow extraction results obtained at the different scales to obtain the final shadow regions. Because the invention jointly considers multiple color features of shadows and their spatial characteristics at different scales, it achieves high detection accuracy for shadow regions of widely varying sizes while preserving the completeness of the extracted regions, so that the shadow detection results obtained are of higher quality and greater practical value.

Description

Translated from Chinese
Method and system for remote sensing image shadow detection and extraction based on multi-scale and multi-feature fusion

Technical Field

The invention belongs to the technical field of data preprocessing for remote sensing image processing, and relates to a remote sensing image shadow detection and extraction method and system based on multi-scale and multi-feature fusion.

Background Art

High-resolution remote sensing images (generally with a resolution finer than 5 meters) offer clearer object shapes, richer texture information, and more distinct spatial distributions, and therefore play an important role in Earth observation. However, such images contain large numbers of shadows, which adversely affect subsequent interpretation and analysis; on the other hand, shadows can also serve as auxiliary information that provides geometric and shape cues about ground objects. Shadow extraction is therefore an important step in high-resolution remote sensing image processing and an active research topic. Shadow extraction based on invariant color feature models is one of the main directions in this area, and such methods fall into two categories: pixel-based and object-oriented. Pixel-based methods use only spectral features; although simple and efficient, their results suffer from poor shape integrity and salt-and-pepper noise. Object-oriented methods, by contrast, are efficient for shadow extraction from high-resolution imagery: spatial information is fully exploited, and because segmentation groups large numbers of homogeneous pixels into candidate shadow regions, the shape integrity of the extracted shadows is superior to that of pixel-oriented methods.

Segmentation is a key step in object-oriented image processing, and its result strongly influences the subsequent extraction. The scale of the image information determines the granularity of the extracted target information: a large scale yields coarser, more macroscopic information, while a small scale yields finer, more detailed extraction. Scale is likewise an important parameter of the segmentation itself, determining the size of the extracted objects. Shadows in high-resolution remote sensing images do not share a uniform scale; for example, building shadows cover large areas while tree shadows cover small ones. When the same class of ground objects appears at different sizes, segmentation at a single scale leads to over-segmentation and under-segmentation. A shadow detection and extraction method based on multi-scale, multi-feature fusion can therefore achieve better extraction accuracy on high-resolution remote sensing images.

Summary of the Invention

The purpose of the present invention is to address the shortcomings and deficiencies of the prior art by providing a high-resolution remote sensing image shadow detection and extraction method and system based on multi-scale, multi-feature fusion. Object segmentation at multiple scales provides shadow object information at different scales; object-based shadow extraction is performed using the spectral features of shadows in invariant color models; and the shadow extraction results from the different scales are finally combined to obtain the final shadow distribution map. The extraction method is applicable to the field of high-resolution remote sensing image processing.

The technical solution adopted by the present invention provides a multi-scale, multi-feature fusion remote sensing image shadow detection and extraction method, where the remote sensing image is a high-resolution remote sensing image, comprising the following steps:

Step a: segment the original high-resolution remote sensing image into objects at multiple scales to obtain the initial image objects;

Step b: transform the remote sensing image into the RGB and HSI color spaces, obtain three invariant shadow color features, and binarize each of the three features with K-means clustering, yielding three binary candidate shadow images, one per shadow color feature;

Step c: for the object segmentation result at each scale obtained in step a, perform object-oriented shadow extraction by fusing the invariant shadow spectral features extracted in step b with D-S evidence theory, implemented as follows:

Let the three binary candidate shadow images F1, F2, F3 obtained in step b serve as three pieces of evidence, and let the hypothesis set be Θ = {h0, h1}, where h0 denotes shadow-region objects and h1 denotes non-shadow regions. The non-empty subsets of 2^Θ are then {h0}, {h1}, and {h0, h1}. Suppose the image is partitioned by the multi-scale segmentation algorithm into objects O_j, j = 1, 2, ..., k.

Let the basic probability assignment functions of the three non-empty sets within the j-th object under F_i be m_j^i({h0}), m_j^i({h1}), m_j^i({h0, h1}), i = 1, 2, 3, given as follows:

m_j^i({h0}) = p_i · n_j^i / N_j^i, m_j^i({h1}) = p_i · (1 − n_j^i / N_j^i), m_j^i({h0, h1}) = 1 − p_i

where p_i is the weight of feature F_i, and n_j^i and N_j^i denote the number of candidate shadow pixels and the total number of pixels in object j under feature F_i, respectively;

The masses fused over the three features are denoted m_j(h0), m_j(h1), and m_j(h0, h1); object j is extracted as a shadow region when m_j(h0) > m_j(h1) and m_j(h0) > m_j(h0, h1).

Step d: perform decision fusion of the shadow extraction results obtained at the different scales to obtain the final shadow regions.

Moreover, in step a, multi-scale segmentation results at three scales are used.

Moreover, each scale is segmented with a bottom-up region-growing method based on a heterogeneity criterion.

Moreover, in step d, decision fusion is performed by voting.

Correspondingly, the present invention also provides a multi-scale, multi-feature fusion remote sensing image shadow detection and extraction system, where the remote sensing image is a high-resolution remote sensing image, comprising the following modules:

a first module, configured to segment the original high-resolution remote sensing image into objects at multiple scales to obtain the initial image objects;

a second module, configured to transform the remote sensing image into the RGB and HSI color spaces, obtain three invariant shadow color features, and binarize each feature with K-means clustering, yielding three binary candidate shadow images, one per shadow color feature;

a third module, configured to perform, for the object segmentation result at each scale produced by the first module, object-oriented shadow extraction by fusing the invariant shadow spectral features extracted by the second module with D-S evidence theory, implemented as follows:

letting the three binary candidate shadow images F1, F2, F3 obtained by the second module serve as three pieces of evidence, with hypothesis set Θ = {h0, h1}, where h0 denotes shadow-region objects and h1 denotes non-shadow regions, so that the non-empty subsets of 2^Θ are {h0}, {h1}, and {h0, h1}; the image being partitioned by the multi-scale segmentation algorithm into objects O_j, j = 1, 2, ..., k;

letting the basic probability assignment functions of the three non-empty sets within the j-th object under F_i be m_j^i({h0}), m_j^i({h1}), m_j^i({h0, h1}), i = 1, 2, 3, given as follows:

m_j^i({h0}) = p_i · n_j^i / N_j^i, m_j^i({h1}) = p_i · (1 − n_j^i / N_j^i), m_j^i({h0, h1}) = 1 − p_i

where p_i is the weight of feature F_i, and n_j^i and N_j^i denote the number of candidate shadow pixels and the total number of pixels in object j under feature F_i, respectively;

the masses fused over the three features being denoted m_j(h0), m_j(h1), and m_j(h0, h1); object j is extracted as a shadow region when m_j(h0) > m_j(h1) and m_j(h0) > m_j(h0, h1); and

a fourth module, configured to perform decision fusion of the shadow extraction results obtained at the different scales to obtain the final shadow regions.

Moreover, in the first module, multi-scale segmentation results at three scales are used.

Moreover, each scale is segmented with a bottom-up region-growing method based on a heterogeneity criterion.

Moreover, in the fourth module, decision fusion is performed by voting.

The technical solution provided by the present invention has the following beneficial effects: initial object units are obtained by multi-scale segmentation of the high-resolution remote sensing image; multiple shadow spectral features insensitive to imaging angle and illumination are obtained in invariant color spaces; these invariant features are fused per object with D-S evidence theory to extract candidate shadow regions; and the candidate regions obtained at the different segmentation scales according to the preceding steps are finally fused at the decision level. By combining spatial information at multiple local scales with multiple shadow spectral features, the method greatly improves the accuracy of shadow extraction from remote sensing images, supporting the subsequent remote sensing image processing pipeline and yielding shadow detection results of higher quality and greater practical value.

Brief Description of the Drawings

Fig. 1 is a flowchart of an embodiment of the present invention.

Detailed Description

For a better understanding of the technical solution of the present invention, the invention is described in further detail below with reference to the accompanying drawing and embodiments.

Step a: multi-scale segmentation of the high-resolution remote sensing image

The original high-resolution remote sensing image is segmented into objects at multiple scales to obtain the initial segmented objects. The segmentation technique adopted in the present invention is a classic high-resolution remote sensing image segmentation algorithm: a bottom-up region-growing method based on a heterogeneity criterion. For ease of implementation reference, it is described as follows.

The segmentation algorithm starts from single pixels and obtains the final result through multiple iterations of optimization. In each iteration, the pair of neighboring regions with the smallest increase in heterogeneity (f_m) is selected among all adjacent regions; if f_m does not exceed a preset heterogeneity threshold f, the two adjacent regions are merged into a new region; conversely, if f_m exceeds f, the merging process stops. In the present invention, f is set to the square of the image scale s, and the scale determines the minimum size range of region objects in the segmentation result. The heterogeneity f_m is computed as a weighted combination of spectral and shape heterogeneity, as detailed below.

1) Spectral heterogeneity criterion

The spectral heterogeneity h_color is computed as in formula (1):

h_color = Σ_c w_c · σ_c    (1)

where c indexes the image bands, σ_c is the spectral standard deviation of band c, and w_c is the weight of the band (layer).

If the variances and areas of two adjacent regions are σ_c^obj1, n_obj1 and σ_c^obj2, n_obj2, respectively, the spectral difference measure after merging is given by formula (2):

h_color = Σ_c w_c · ( n_Merge · σ_c^Merge − ( n_obj1 · σ_c^obj1 + n_obj2 · σ_c^obj2 ) )    (2)

where w_c is the weight of a band participating in the merge, and n_Merge and σ_c^Merge are the area and standard deviation of the merged region, respectively. w_R, w_G, and w_B are set as the weights of the red, green, and blue bands of the remote sensing image.

2) Shape heterogeneity criterion

Two parameters characterize the shape: smoothness h_smooth and compactness h_compact. Smoothness characterizes how smooth the merged region is, while compactness ensures that the merged region is more compact.

These two parameters are expressed by formulas (3) and (4):

h_smooth = l / b    (3)

h_compact = l / √n    (4)

Here l is the perimeter of the region, b is the perimeter of the region's minimal bounding rectangle, and n is the area of the region. If the shape parameters of two adjacent regions are l_obj1, b_obj1 and l_obj2, b_obj2, and the merged region's shape parameters are the compactness h_compact and smoothness h_smooth, these two shape difference measures are given by formulas (5) and (6):

h_compact = n_merge · l_merge / √n_merge − ( n_obj1 · l_obj1 / √n_obj1 + n_obj2 · l_obj2 / √n_obj2 )    (5)

h_smooth = n_merge · l_merge / b_merge − ( n_obj1 · l_obj1 / b_obj1 + n_obj2 · l_obj2 / b_obj2 )    (6)

where n_merge, l_merge, and b_merge are the area, perimeter, and minimal-bounding-rectangle perimeter of the merged region, respectively.

The shape difference measure h_shape is obtained from the smoothness and compactness parameters above, as in formula (7):

h_shape = w_compact × h_compact + (1 − w_compact) × h_smooth    (7)

In formula (7), w_compact is the weight of compactness in the shape difference measure.

3) Combination of the spectral and shape heterogeneity criteria

The above regional merging heterogeneity criteria are combined into a single comprehensive heterogeneity measure, given by formula (8):

f_m = w_color × h_color + (1 − w_color) × h_shape    (8)

where h_color and h_shape are the spectral and shape difference measures, respectively, and w_color is the weight of the spectral measure in the combined criterion.

After a high-resolution remote sensing image is input, it is segmented according to preset parameters. In a specific implementation, those skilled in the art can set, based on experience, the initial scale parameter s, the scale interval Δs, and the various weights, including w_R, w_G, w_B, w_compact, and w_color. Multi-scale segmentation is then performed at scales s+Δs, s, and s−Δs with the region-growing method described above, producing three initial segmentation results.
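The merge criterion of formulas (1)-(8) can be sketched as follows. This is a minimal illustration with a single-band spectral term; the region statistics, weights, and function names are hypothetical stand-ins, not the patent's implementation:

```python
import math

def spectral_diff(w_c, n1, s1, n2, s2, n_m, s_m):
    # Formula (2) for one band: increase in area-weighted std. dev. after merging
    return w_c * (n_m * s_m - (n1 * s1 + n2 * s2))

def shape_diff(n1, l1, b1, n2, l2, b2, n_m, l_m, b_m, w_compact):
    # Formulas (5)-(7): compactness l/sqrt(n) and smoothness l/b, area-weighted
    h_compact = n_m * l_m / math.sqrt(n_m) - (n1 * l1 / math.sqrt(n1) + n2 * l2 / math.sqrt(n2))
    h_smooth = n_m * l_m / b_m - (n1 * l1 / b1 + n2 * l2 / b2)
    return w_compact * h_compact + (1 - w_compact) * h_smooth

def merge_cost(color_part, shape_part, w_color):
    # Formula (8): weighted combination of spectral and shape heterogeneity
    return w_color * color_part + (1 - w_color) * shape_part

# A candidate merge is accepted while f_m <= f, where f = s**2 for scale parameter s
s = 20
f = s ** 2
```

Two identical 10x10 regions merging into a 20x10 strip, for instance, produce zero spectral increase and a small positive shape cost, well under f for s = 20, so the merge would be accepted.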

Step b: shadow spectral feature extraction based on invariant color models

The present invention uses three shadow spectral features, extracted in the RGB and HSI color spaces, that do not change with imaging conditions such as the imaging angle and illumination of the remote sensing image.

1) F1: the hue (Hue) component of the HSI color space, computed as in formula (9), where R, G, and B are the values of the red, green, and blue bands of the remote sensing image:

θ = arccos( ( (R − G) + (R − B) ) / ( 2 · √( (R − G)² + (R − B)(G − B) ) ) ), with H = θ if B ≤ G, otherwise H = 360° − θ    (9)

The hue values of shadow and non-shadow regions in remote sensing images generally differ greatly, so hue is chosen as one of the spectral features for shadow extraction.

where θ denotes an angle.

2) F2: the difference between the blue and green bands, i.e., F2 = B − G. Because direct light is occluded, shadow regions in remote sensing images appear spectrally bluish; this feature measures the degree of bluishness and provides a reference for distinguishing shadow from non-shadow regions.

3) F3: the difference between the intensity (I) component and the saturation (S) component of the HSI color space, as in formula (10):

F3 = I − S    (10)

The three steps above yield three shadow spectral feature images. K-means binary clustering is then applied to each feature image, finally producing three binary candidate shadow images, one per shadow color feature, where a value of 0 marks a candidate shadow region and a value of 1 marks a candidate non-shadow region.
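The feature extraction and binarization of step b can be sketched as follows. This assumes a float RGB array in [0, 1]; the tiny 1-D two-class K-means below stands in for any K-means implementation (it labels the lower-mean cluster 0), and which cluster actually corresponds to shadow depends on the feature:

```python
import numpy as np

def shadow_features(rgb):
    """Compute the three invariant features: F1 (hue), F2 = B - G, F3 = I - S."""
    R, G, B = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    eps = 1e-12
    # Hue, formula (9)
    num = 0.5 * ((R - G) + (R - B))
    den = np.sqrt((R - G) ** 2 + (R - B) * (G - B)) + eps
    theta = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
    F1 = np.where(B <= G, theta, 360.0 - theta)
    F2 = B - G
    # HSI intensity and saturation, then F3 = I - S (formula (10))
    I = (R + G + B) / 3.0
    S = 1.0 - np.minimum(np.minimum(R, G), B) / (I + eps)
    F3 = I - S
    return F1, F2, F3

def kmeans2_binarize(feature, iters=20):
    """1-D two-class K-means; returns 0 for the lower-mean cluster, 1 otherwise."""
    x = feature.ravel().astype(float)
    c0, c1 = x.min(), x.max()
    for _ in range(iters):
        near0 = np.abs(x - c0) <= np.abs(x - c1)
        if near0.all() or (~near0).all():
            break
        c0, c1 = x[near0].mean(), x[~near0].mean()
    return np.where(np.abs(feature - c0) <= np.abs(feature - c1), 0, 1)
```

For F3 = I − S, dark low-intensity pixels fall in the low cluster, matching the convention that 0 marks a candidate shadow region.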

Step c: object-oriented multi-feature fusion shadow extraction based on Dempster-Shafer (D-S) evidence theory

Because of the complexity of real ground objects, shadow extraction using only a single spectral feature, or the intersection image of the three features of step b, is not very accurate. The present invention fuses multiple features with D-S evidence theory to obtain an object-level shadow extraction criterion for high-resolution remote sensing images. The core of D-S evidence theory is the basic probability assignment function (BPAF). For a finite set Θ, its power set is 2^Θ; for a non-empty set A in the power set, m(A) denotes the BPAF of A. The set function m: 2^Θ → [0, 1] satisfies two conditions: m(∅) = 0, where ∅ is the empty set, and Σ_{A ⊆ Θ} m(A) = 1. For q independent pieces of evidence and g classes A_r (A_r ⊆ Θ, r = 1, 2, ..., g), m_n(B_n) denotes the BPAF computed from the n-th piece of evidence (1 ≤ n ≤ q, q ≥ 3), where B_n ∈ {A_1, A_2, ..., A_g}. Then m(A), the probability obtained by fusing the q pieces of evidence, is defined by formula (11):

m(A) = (1/K) · Σ_{B_1 ∩ ... ∩ B_q = A} Π_{n=1}^{q} m_n(B_n), where K = Σ_{B_1 ∩ ... ∩ B_q ≠ ∅} Π_{n=1}^{q} m_n(B_n)    (11)

In this embodiment, the three pieces of evidence are the three features obtained in step b, i.e., the binary candidate shadow images F1, F2, F3. Let Θ = {h0, h1}, where h0 denotes shadow-region objects and h1 denotes non-shadow regions, so the non-empty subsets of 2^Θ are {h0}, {h1}, {h0, h1}. After step a, the image is partitioned into objects O_j (j = 1, 2, ..., k). The BPAFs of the three non-empty sets within the j-th object under F_i are m_j^i({h0}), m_j^i({h1}), m_j^i({h0, h1}), given by formula (12), where p_i (i = 1, 2, 3) are the preset weights of the three features F1, F2, F3 of step b (in a specific implementation, those skilled in the art can preset the values, e.g., empirically), and n_j^i and N_j^i denote the number of candidate shadow pixels and the total number of pixels in object j under feature F_i:

m_j^i({h0}) = p_i · n_j^i / N_j^i, m_j^i({h1}) = p_i · (1 − n_j^i / N_j^i), m_j^i({h0, h1}) = 1 − p_i    (12)

The masses fusing the three shadow spectral features of step b, m_j(h0), m_j(h1), and m_j(h0, h1), are all computed with formulas (11) and (12). Object j is extracted as a shadow region when formula (13) is satisfied:

m_j(h0) > m_j(h1) and m_j(h0) > m_j(h0, h1)    (13)
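A minimal numeric sketch of the D-S fusion for one object follows. The discounted-BPA form for formula (12) and the max-mass decision for formula (13) are assumptions on my part (the patent's exact formulas were not recoverable), so treat the details as illustrative:

```python
from itertools import product

def bpa(p_i, n_shadow, n_total):
    # Assumed formula (12): discounted BPA over {h0}, {h1}, {h0,h1} for one feature
    ratio = n_shadow / n_total
    return {("h0",): p_i * ratio, ("h1",): p_i * (1 - ratio), ("h0", "h1"): 1 - p_i}

def ds_combine(masses):
    # Formula (11): Dempster's rule of combination over q pieces of evidence
    fused, norm = {}, 0.0
    for combo in product(*[m.items() for m in masses]):
        inter = set(combo[0][0])
        weight = 1.0
        for subset, mass in combo:
            inter &= set(subset)
            weight *= mass
        if inter:  # conflicting (empty) intersections are excluded, then renormalized
            key = tuple(sorted(inter))
            fused[key] = fused.get(key, 0.0) + weight
            norm += weight
    return {k: v / norm for k, v in fused.items()}

def is_shadow(fused):
    # Assumed formula (13): the shadow mass must dominate both alternatives
    m0 = fused.get(("h0",), 0.0)
    return m0 > fused.get(("h1",), 0.0) and m0 > fused.get(("h0", "h1"), 0.0)
```

With the paper's weights (p1 = 0.79, p2 = 0.77, p3 = 0.88), an object whose three candidate masks are mostly shadow pixels is classified as shadow, and one with few candidate pixels is not.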

Step d: decision fusion of the multi-scale shadow extraction results

In step a, multi-scale segmentation is performed at scales s+Δs, s, and s−Δs, producing three initial segmentation results. Each of the three results is used as input to the subsequent step c, finally yielding three object-oriented shadow extraction results at different segmentation scales. Step d then takes a pixel-by-pixel vote over these three results: each pixel is assigned the state (shadow or non-shadow) that receives the most votes, finally producing a higher-accuracy shadow extraction result.
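The pixel-wise majority vote of step d can be sketched as follows (binary masks with 1 = shadow here, purely for illustration):

```python
import numpy as np

def vote_fusion(masks):
    """Pixel-wise majority vote over an odd number of binary shadow masks (1 = shadow)."""
    stacked = np.stack(masks)
    # A pixel is shadow when more than half of the per-scale results say so
    return (stacked.sum(axis=0) * 2 > stacked.shape[0]).astype(np.uint8)
```

With three scales, a pixel is kept as shadow when at least two of the three scale-specific results mark it as shadow.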

In summary, the present invention first applies multi-scale segmentation to obtain object results that make full use of the spatial information of high-resolution remote sensing images, extracts several shadow spectral features that do not change with imaging conditions, uses D-S evidence theory to obtain accurate object-oriented shadow extraction results for high-resolution remote sensing images, and finally performs decision fusion of the multi-scale results.

In a specific implementation, the method provided by the present invention can be run as an automatic software pipeline, or the corresponding system can be implemented in a modular fashion. An embodiment of the present invention accordingly also provides a multi-scale, multi-feature fusion remote sensing image shadow detection and extraction system, where the remote sensing image is a high-resolution remote sensing image, comprising the following modules:

a first module, configured to segment the original high-resolution remote sensing image into objects at multiple scales to obtain the initial image objects;

a second module, configured to transform the remote sensing image into the RGB and HSI color spaces, obtain three invariant shadow color features, and binarize each feature with K-means clustering, yielding three binary candidate shadow images, one per shadow color feature;

a third module, configured to perform, for the object segmentation result at each scale produced by the first module, object-oriented shadow extraction by fusing the invariant shadow spectral features extracted by the second module with D-S evidence theory;

a fourth module, configured to perform decision fusion of the shadow extraction results obtained at the different scales to obtain the final shadow regions.

For the specific implementation of each module, refer to the corresponding steps; details are not repeated here.

The effectiveness of the present invention is verified by the following experiment.

Experiment 1: a WILD 15/4 UAGA-F aerial photograph of urban Beijing, with a resolution of about 0.5 m, red, green, and blue bands, and an image size of 512×512 pixels. The classic Successive Thresholding Scheme (STS) shadow detection algorithm and the Morphological Building/Shadow Index (MBSI) method were chosen for comparison.

Shadow extraction evaluation metric: the shadow areas in the image are first delineated manually by visual interpretation, and this result serves as the reference image for accuracy assessment. Accuracy is reported as the Kappa coefficient of the two-class (shadow vs. non-shadow) classification.

The Kappa coefficient is an authoritative metric for evaluating classification problems; the larger the Kappa coefficient, the higher the accuracy. In shadow extraction, the detection result can be viewed as a binary classification problem (shadow vs. non-shadow). In this experiment, the Kappa coefficients achieved by the two classic comparison methods and by the method of the present invention are used to evaluate the shadow extraction ability of each method.

The Kappa coefficient is computed as follows.

The confusion matrix is obtained from the samples; see Table 1.

Table 1. Confusion matrix

                    Detected non-shadow | Detected shadow | Row total
Actual non-shadow   Nnn                 | Nnc             | Nnp
Actual shadow       Ncn                 | Ncc             | Ncp
Column total        Npn                 | Npc             | N

In Table 1, Nnn is the number of non-shadow samples detected as non-shadow; Nnc is the number of non-shadow samples detected as shadow; Ncn is the number of shadow samples detected as non-shadow; and Ncc is the number of shadow samples detected as shadow. Nnp is the sum of Nnn and Nnc, Ncp is the sum of Ncn and Ncc, Npn is the sum of Nnn and Ncn, Npc is the sum of Nnc and Ncc, and N is the total number of samples.

The Kappa coefficient is then computed from Table 1 as:

Kappa = (N·(Nnn + Ncc) − (Nnp·Npn + Ncp·Npc)) / (N² − (Nnp·Npn + Ncp·Npc))

that is, Kappa = (Po − Pe) / (1 − Pe), where Po = (Nnn + Ncc)/N is the observed agreement and Pe = (Nnp·Npn + Ncp·Npc)/N² is the chance agreement.
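The Kappa computation from the Table 1 confusion matrix can be sketched in code. This is a minimal illustration; the sample counts used below are made-up examples, not the counts from the experiments in this patent:

```python
# Cohen's Kappa from the two-class confusion matrix of Table 1.
# Subscript convention: n = non-shadow, c = shadow, p = marginal total.

def kappa(n_nn, n_nc, n_cn, n_cc):
    """Kappa coefficient for shadow (c) vs. non-shadow (n) detection."""
    n = n_nn + n_nc + n_cn + n_cc             # total samples N
    n_np, n_cp = n_nn + n_nc, n_cn + n_cc     # row totals (actual classes)
    n_pn, n_pc = n_nn + n_cn, n_nc + n_cc     # column totals (detected classes)
    p_o = (n_nn + n_cc) / n                   # observed agreement Po
    p_e = (n_np * n_pn + n_cp * n_pc) / n**2  # chance agreement Pe
    return (p_o - p_e) / (1 - p_e)

# Perfect detection yields Kappa = 1; chance-level detection yields ~0.
print(kappa(80, 0, 0, 20))   # → 1.0
print(kappa(70, 10, 5, 15))
```

Higher Kappa means better agreement with the manually extracted reference, which is exactly how Table 2 ranks the compared methods.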

实验结果:Experimental results:

The experimental parameters were: s=20, Δ=10, wR=wG=wB=1/3, wcompact=0.5, wcolor=0.5, f=400, p1=0.79, p2=0.77, p3=0.88. Judged subjectively, the shape of the extracted shadow regions is well preserved and the result is essentially free of salt-and-pepper noise, whereas both comparison methods (STS and MBSI) exhibit salt-and-pepper noise to varying degrees. Table 2 compares the method of the present invention with the STS and MBSI methods in terms of the Kappa coefficient.

Table 2 Comparison of experimental results

Method                     Kappa coefficient
Method of the invention    0.784
STS                        0.293
MBSI                       0.704

In summary, compared with classic shadow detection methods and extraction systems, the detection method and extraction system of the present invention show clear advantages both in subjective visual quality and in objective evaluation indices. The shadow extraction results better preserve the spatial detail of high-resolution imagery, making this a feasible shadow detection method for high-resolution remote sensing images that achieves accurate shadow extraction.
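Step d of the method fuses the per-scale shadow masks by voting (claims 4 and 8). A minimal sketch of per-pixel majority voting over binary masks; the three small masks below are illustrative toy data, not segmentation output:

```python
# Majority-vote decision fusion of per-scale shadow masks (step d).
# Each mask marks a pixel 1 (shadow) or 0 (non-shadow); a pixel is kept
# as shadow when more than half of the scales agree.

def vote_fusion(masks):
    """Fuse equally sized binary masks by per-pixel majority vote."""
    n_scales = len(masks)
    fused = []
    for pixel_votes in zip(*masks):  # iterate each pixel across all scales
        fused.append(1 if sum(pixel_votes) * 2 > n_scales else 0)
    return fused

scale_a = [1, 1, 0, 0, 1]
scale_b = [1, 0, 0, 1, 1]
scale_c = [1, 1, 0, 0, 0]
print(vote_fusion([scale_a, scale_b, scale_c]))  # → [1, 1, 0, 0, 1]
```

With three scales the rule reduces to "at least two scales agree", which suppresses spurious detections that appear at only one segmentation scale.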

Claims (8)

1. A multi-scale, multi-feature fusion shadow detection and extraction method for remote sensing images, the remote sensing images being high-resolution remote sensing images, characterized in that it comprises the following steps:

Step a: segmenting the original high-resolution remote sensing image into objects at multiple scales to obtain the initial image objects;

Step b: transforming the remote sensing image into the RGB and HSI color spaces, obtaining three invariant shadow color features, and binarizing each of the three shadow color features with the K-means clustering method to obtain three binary candidate shadow images, one per shadow color feature;

Step c: for the object segmentation result at each scale obtained in step a, performing object-oriented shadow extraction by fusing, with D-S evidence theory, the invariant shadow spectral features extracted in step b, implemented as follows:

let the three binary candidate shadow images F1, F2, F3 obtained in step b be three pieces of evidence, and let the hypothesis set be Θ = {h0, h1}, where h0 represents objects in shadow regions and h1 represents non-shadow regions; the non-empty subsets of 2^Θ are then {h0}, {h1} and {h0, h1}; let the image be partitioned by the multi-scale segmentation algorithm into objects Oj, j = 1, 2, ..., k;

for evidence Fi, the basic probability assignment over the three non-empty subsets within the j-th object is, for i = 1, 2, 3:

m_i^j(h0) = (N_ShadowCount,i^j / N_TotalCount,i^j) · p_i
m_i^j(h1) = (1 − N_ShadowCount,i^j / N_TotalCount,i^j) · p_i
m_i^j(h0, h1) = 1 − p_i

where p_i is the weight of feature Fi, and N_ShadowCount,i^j and N_TotalCount,i^j are, respectively, the number of candidate shadow pixels and the total number of pixels within object j under feature Fi;

the masses obtained by fusing the three pieces of evidence are m^j(h0), m^j(h1) and m^j(h0, h1); object j is extracted as a shadow region when the following conditions are satisfied:

m^j(h0) > m^j(h1)
m^j(h0) > m^j(h0, h1)

Step d: performing decision fusion of the shadow extraction results obtained at the different scales to obtain the final shadow regions.

2. The multi-scale, multi-feature fusion remote sensing image shadow detection and extraction method according to claim 1, characterized in that in step a, multi-scale segmentation results at three scales are used.

3. The multi-scale, multi-feature fusion remote sensing image shadow detection and extraction method according to claim 1, characterized in that each scale is segmented with a bottom-up region-growing approach based on a heterogeneity criterion.

4. The multi-scale, multi-feature fusion remote sensing image shadow detection and extraction method according to claim 1, 2 or 3, characterized in that in step d, decision fusion is performed by voting.

5. A multi-scale, multi-feature fusion shadow detection and extraction system for remote sensing images, the remote sensing images being high-resolution remote sensing images, characterized in that it comprises the following modules:

a first module for segmenting the original high-resolution remote sensing image into objects at multiple scales to obtain the initial image objects;

a second module for transforming the remote sensing image into the RGB and HSI color spaces, obtaining three invariant shadow color features, and binarizing each of the three shadow color features with the K-means clustering method to obtain three binary candidate shadow images, one per shadow color feature;

a third module for performing, for the object segmentation result at each scale obtained by the first module, object-oriented shadow extraction by fusing, with D-S evidence theory, the invariant shadow spectral features extracted by the second module, implemented as follows:

let the three binary candidate shadow images F1, F2, F3 obtained by the second module be three pieces of evidence, and let the hypothesis set be Θ = {h0, h1}, where h0 represents objects in shadow regions and h1 represents non-shadow regions; the non-empty subsets of 2^Θ are then {h0}, {h1} and {h0, h1}; let the image be partitioned by the multi-scale segmentation algorithm into objects Oj, j = 1, 2, ..., k;

for evidence Fi, the basic probability assignment over the three non-empty subsets within the j-th object is, for i = 1, 2, 3:

m_i^j(h0) = (N_ShadowCount,i^j / N_TotalCount,i^j) · p_i
m_i^j(h1) = (1 − N_ShadowCount,i^j / N_TotalCount,i^j) · p_i
m_i^j(h0, h1) = 1 − p_i

where p_i is the weight of feature Fi, and N_ShadowCount,i^j and N_TotalCount,i^j are, respectively, the number of candidate shadow pixels and the total number of pixels within object j under feature Fi;

the masses obtained by fusing the three pieces of evidence are m^j(h0), m^j(h1) and m^j(h0, h1); object j is extracted as a shadow region when the following conditions are satisfied:

m^j(h0) > m^j(h1)
m^j(h0) > m^j(h0, h1)

a fourth module for performing decision fusion of the shadow extraction results obtained at the different scales to obtain the final shadow regions.

6. The multi-scale, multi-feature fusion remote sensing image shadow detection and extraction system according to claim 5, characterized in that in the first module, multi-scale segmentation results at three scales are used.

7. The multi-scale, multi-feature fusion remote sensing image shadow detection and extraction system according to claim 5, characterized in that each scale is segmented with a bottom-up region-growing approach based on a heterogeneity criterion.

8. The multi-scale, multi-feature fusion remote sensing image shadow detection and extraction system according to claim 5, 6 or 7, characterized in that in the fourth module, decision fusion is performed by voting.
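The basic probability assignments and decision rule of claim 1 (step c) can be sketched in code. This is a minimal illustration under stated assumptions, not the patented implementation: the per-feature candidate-shadow fractions and weights below are made-up example values, and the three pieces of evidence are combined pairwise with Dempster's rule over the frame {h0, h1}:

```python
# Per-object D-S fusion of three binary shadow evidences (step c of claim 1).
# Masses are assigned over {h0 (shadow), h1 (non-shadow), Theta = {h0, h1}}.

def bpa(shadow_fraction, p):
    """Basic probability assignment (m(h0), m(h1), m(Theta)) for one feature,
    where shadow_fraction = N_ShadowCount / N_TotalCount and p is the weight."""
    return (shadow_fraction * p, (1.0 - shadow_fraction) * p, 1.0 - p)

def dempster(ma, mb):
    """Combine two mass functions over {h0, h1, Theta} with Dempster's rule."""
    a0, a1, at = ma
    b0, b1, bt = mb
    conflict = a0 * b1 + a1 * b0          # mass assigned to the empty set
    norm = 1.0 - conflict                 # normalization factor
    return ((a0 * b0 + a0 * bt + at * b0) / norm,
            (a1 * b1 + a1 * bt + at * b1) / norm,
            (at * bt) / norm)

def is_shadow(shadow_fractions, weights):
    """Decision rule: object is shadow if m(h0) > m(h1) and m(h0) > m(Theta)."""
    masses = [bpa(f, p) for f, p in zip(shadow_fractions, weights)]
    m = masses[0]
    for other in masses[1:]:
        m = dempster(m, other)
    m_h0, m_h1, m_theta = m
    return m_h0 > m_h1 and m_h0 > m_theta

# An object mostly flagged as shadow by all three features, and one that is not:
print(is_shadow([0.9, 0.9, 0.9], [0.79, 0.77, 0.88]))  # → True
print(is_shadow([0.1, 0.2, 0.1], [0.79, 0.77, 0.88]))  # → False
```

The term 1 − p_i keeps mass on the full frame Θ, so a feature with a lower weight contributes correspondingly more ignorance and less committed evidence to the fused decision.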
CN201610202779.4A (priority 2016-03-31, filed 2016-03-31): The remote sensing image shadow Detection extracting method and system of multiple dimensioned multiple features fusion (status: Pending)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201610202779.4A | 2016-03-31 | 2016-03-31 | CN107292328A (en): The remote sensing image shadow Detection extracting method and system of multiple dimensioned multiple features fusion


Publications (1)

Publication Number | Publication Date
CN107292328A | 2017-10-24

Family

ID=60088229

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201610202779.4A (Pending) | CN107292328A (en): The remote sensing image shadow Detection extracting method and system of multiple dimensioned multiple features fusion | 2016-03-31 | 2016-03-31

Country Status (1)

Country | Link
CN (1) | CN107292328A (en)


Patent Citations (3)

* Cited by examiner, † Cited by third party

Publication number | Priority date | Publication date | Assignee | Title
CN103236063A (en)* | 2013-05-03 | 2013-08-07 | Hohai University | Multi-scale spectral clustering and decision fusion-based oil spillage detection method for synthetic aperture radar (SAR) images
CN103606154A (en)* | 2013-11-22 | 2014-02-26 | Hohai University | Multiple-dimensioned offshore oil-spill SAR image segmentation method based on JSEG and spectrum clustering
CN104751478A (en)* | 2015-04-20 | 2015-07-01 | Wuhan University | Object-oriented building change detection method based on multi-feature fusion

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
HUI LUO et al.: "Development of a multi-scale object-based shadow detection method for high spatial resolution image", Remote Sensing Letters *

Cited By (14)

* Cited by examiner, † Cited by third party

Publication number | Priority date | Publication date | Assignee | Title
CN108051371A (en)* | 2017-12-01 | 2018-05-18 | Institute of Geographical Sciences, Hebei Academy of Sciences | A kind of shadow extraction method of ecology-oriented environment parameter remote-sensing inversion
CN108051371B (en)* | 2017-12-01 | 2018-10-02 | Institute of Geographical Sciences, Hebei Academy of Sciences | A kind of shadow extraction method of ecology-oriented environment parameter remote-sensing inversion
CN107977968A (en)* | 2017-12-22 | 2018-05-01 | Changjiang Institute of Survey, Planning, Design and Research Co., Ltd. | The building layer detection method excavated based on buildings shadow information
CN110390267B (en)* | 2019-06-25 | 2021-06-01 | Southeast University | Mountain landscape building extraction method and device based on high-resolution remote sensing image
CN110390267A (en)* | 2019-06-25 | 2019-10-29 | Southeast University | A method and device for extracting mountain landscape buildings based on high-resolution remote sensing images
CN111415357A (en)* | 2020-03-19 | 2020-07-14 | Chang Guang Satellite Technology Co., Ltd. | Portable shadow extraction method based on color image
CN111415357B (en)* | 2020-03-19 | 2023-04-07 | Chang Guang Satellite Technology Co., Ltd. | Portable shadow extraction method based on color image
CN112163599A (en)* | 2020-09-08 | 2021-01-01 | Beijing Technology and Business University | Image classification method based on multi-scale and multi-level fusion
CN112163599B (en)* | 2020-09-08 | 2023-09-01 | Beijing Technology and Business University | An image classification method based on multi-scale and multi-level fusion
CN113763410A (en)* | 2021-09-30 | 2021-12-07 | Jiangsu Tianhui Spatial Information Research Institute Co., Ltd. | Image shadow detection method based on HIS combined with spectral feature detection condition
CN113763410B (en)* | 2021-09-30 | 2022-08-02 | Jiangsu Tianhui Spatial Information Research Institute Co., Ltd. | Image shadow detection method based on HIS combined with spectral feature detection conditions
CN114219823A (en)* | 2021-12-16 | 2022-03-22 | China Siwei Surveying and Mapping Technology Co., Ltd. | A method and computer equipment for extracting roof photovoltaic distribution images
CN114219823B (en)* | 2021-12-16 | 2025-09-02 | China Siwei Surveying and Mapping Technology Co., Ltd. | A method and computer equipment for extracting rooftop photovoltaic distribution images
CN115131676A (en)* | 2022-06-28 | 2022-09-30 | Nanjing University of Information Science and Technology | A detection method for building changes in high-resolution remote sensing images

Similar Documents

Publication | Publication Date | Title
CN110263717B (en) A land-use category determination method incorporating street view imagery
CN107292328A (en)The remote sensing image shadow Detection extracting method and system of multiple dimensioned multiple features fusion
CN103559500B (en)A kind of multi-spectral remote sensing image terrain classification method based on spectrum Yu textural characteristics
CN110309781B (en)House damage remote sensing identification method based on multi-scale spectrum texture self-adaptive fusion
CN101840581B (en)Method for extracting profile of building from satellite remote sensing image
CN104331698B (en)Remote sensing type urban image extracting method
CN111191628B (en)Remote sensing image earthquake damage building identification method based on decision tree and feature optimization
CN108573276A (en) A Change Detection Method Based on High Resolution Remote Sensing Image
CN104573685B (en)A kind of natural scene Method for text detection based on linear structure extraction
CN106503739A (en)The target in hyperspectral remotely sensed image svm classifier method and system of combined spectral and textural characteristics
CN103077515B (en) A Method for Building Change Detection in Multispectral Images
CN107832797B (en)Multispectral image classification method based on depth fusion residual error network
Wuest et al.Region based segmentation of QuickBird multispectral imagery through band ratios and fuzzy comparison
CN103218832B (en)Based on the vision significance algorithm of global color contrast and spatial distribution in image
CN106971397B (en)Based on the city high-resolution remote sensing image dividing method for improving JSEG algorithms
CN111666900B (en) Method and device for acquiring land cover classification map based on multi-source remote sensing images
CN108830243A (en)Hyperspectral image classification method based on capsule network
CN105427313B (en)SAR image segmentation method based on deconvolution network and adaptive inference network
CN104951765B (en)Remote Sensing Target dividing method based on shape priors and visual contrast
CN104217440B (en)A kind of method extracting built-up areas from remote sensing images
CN107564016B (en) A multi-band remote sensing image segmentation and labeling method integrating spectral information of ground objects
CN108764330A (en)SAR image sorting technique based on super-pixel segmentation and convolution deconvolution network
CN110070545B (en) A Method for Automatically Extracting Urban Built-up Areas from Urban Texture Feature Density
CN116051983A (en) A multi-spectral remote sensing image water body extraction method for multi-service system fusion
CN120198453A (en) A remote sensing image segmentation method based on visual basis model SAM and double edge correction

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
RJ01 | Rejection of invention patent application after publication

Application publication date: 2017-10-24

