CN116757938A - Remote sensing image full-color sharpening method and device based on contrast learning - Google Patents

Remote sensing image full-color sharpening method and device based on contrast learning

Info

Publication number
CN116757938A
Authority
CN
China
Prior art keywords: image, full, color, contrast, sharpening
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310420311.2A
Other languages
Chinese (zh)
Inventor
满旺
王超
杜晓凤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiamen University of Technology
Original Assignee
Xiamen University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiamen University of Technology
Priority to CN202310420311.2A
Publication of CN116757938A
Legal status: Pending (current)


Abstract

The invention relates to the technical field of remote sensing images and provides a pan-sharpening method for remote sensing images based on contrastive learning, comprising the following steps: acquiring a panchromatic image set and a multispectral image set; establishing a pan-sharpening model; introducing a contrastive loss function into the generative adversarial network for contrastive learning, to obtain the pan-sharpening model; verifying the pan-sharpening model; and inputting the multispectral image to be sharpened together with the panchromatic image into the pan-sharpening model for pan-sharpening, to obtain a high-resolution multispectral image. The method applies contrastive learning to the field of pan-sharpening of remote sensing images: it exploits the correlations and differences between data to learn the essential features of objects, fuses the panchromatic image and the low-resolution spectral image through contrastive learning with a contrastive loss function in the generator network, and thereby improves the spatial resolution of the fused image.

Description

Translated from Chinese
A method and device for pan-sharpening of remote sensing images based on contrastive learning

Technical Field

The invention relates to the technical field of remote sensing images, and in particular to a method and device for pan-sharpening of remote sensing images based on contrastive learning.

Background Art

Pan-sharpening is a technique for fusing multispectral (MS) and panchromatic (PAN) images; it aims to use the panchromatic image to improve the spatial resolution of the multispectral image. Pan-sharpening is a basic and important preprocessing step in remote sensing tasks. The fused high-spatial-resolution multispectral image carries both the spatial structure of the panchromatic image and the spectral characteristics of the multispectral image, compensating for the limitations and inaccuracies of a single image, enriching the information content of the image, and enabling wider use in fields such as the military, agriculture, and environmental monitoring.

Although research on the pan-sharpening problem over recent decades has been rich and varied, existing methods are still limited by various shortcomings and struggle to meet the growing demand for accuracy. Several classes of pan-sharpening algorithms for remote sensing images and their main problems are introduced below.

There are three traditional families of methods: methods based on component substitution, methods based on multi-resolution analysis, and methods based on variational optimization frameworks. However, these methods suffer from spectral distortion, degradation of spatial information, insufficient generalization ability, and unsatisfactory time efficiency, and they cannot improve the spatial resolution of the fused image; improvements are therefore needed.

Summary of the Invention

To address the inability of the above prior art to improve the spatial resolution of fused images, the invention provides a pan-sharpening method for remote sensing images based on contrastive learning, which includes:

S100: Acquire the panchromatic image set I_pan and the multispectral image set I_MS;

S200: Establish the pan-sharpening model. Preprocess the original multispectral images in the multispectral image set I_MS, then randomly pair the panchromatic images in the panchromatic image set I_pan with the preprocessed multispectral images and group the pairs into a training set and a test set. Input the training set into a generative adversarial network (GAN) and introduce a contrastive loss function L_cl into the GAN for contrastive learning, to obtain the pan-sharpening model;

S300: Input the test set into the pan-sharpening model for verification;

S400: Input the multispectral image to be sharpened and the panchromatic image into the pan-sharpening model for pan-sharpening, to obtain a fused image.

In one embodiment, preprocessing the original multispectral images of the multispectral image set I_MS means converting them into low-resolution multispectral images.

In one embodiment, the generative adversarial network includes a generator network and a discriminator network. The steps of inputting the training set into the GAN and introducing the contrastive loss function L_cl for contrastive learning are:

S210: Input the training set into the generator network and perform contrastive learning by introducing the contrastive loss function L_cl into the generator network, to output the generated image I_fake;

S220: Input the generated image I_fake and the original multispectral image into the discriminator network and train the discriminator.

In one embodiment, the generator network includes an encoder and a decoder. The steps of performing contrastive learning with the contrastive loss function L_cl in the generator network to output the generated image I_fake are:

S211: Use the two subnetworks of the generator network to extract features from the low-resolution multispectral image and the panchromatic image in the training set, obtaining the spliced image x;

S212: Perform contrastive learning on the spliced image x and the spliced image x_fake in the generator network using the contrastive loss function L_cl, to obtain the generated image I_fake.

In one embodiment, the first four convolutional layers of the generator network extract features from the spliced image x and the spliced image x_fake respectively, yielding a first feature map for x and a second feature map for x_fake; the extracted first and second feature maps are then compared using the contrastive loss function L_cl.

In one embodiment, the contrastive loss function L_cl is defined as follows.

Comparing the first feature map and the second feature map with the contrastive loss function L_cl, corresponding positions in the two maps form positive samples v+, and pixels at other positions form negative samples v−, with N = 256, m = 1, 2, 3, …, N, and temperature τ = 0.07.
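The formula itself is not reproduced in the text; based on the quantities just described (a query v, its positive v+, N negative samples v−, and temperature τ), a standard InfoNCE-style form consistent with this description would be:

```latex
\mathcal{L}_{cl} = -\log
  \frac{\exp(v \cdot v^{+} / \tau)}
       {\exp(v \cdot v^{+} / \tau) + \sum_{m=1}^{N} \exp(v \cdot v^{-}_{m} / \tau)}
```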

In one embodiment, the generated image and the real image are iterated adversarially to produce the fused image, correcting the spectral and spatial information; the formula for obtaining the final fused image is:

D_k = G(D_{k-1}), k = 1, 2, 3, …, K

where G is the generator network, D is the discriminator network, and D_k denotes the k-th of K discrimination passes of the image through the discriminator network.

The invention also provides a device for pan-sharpening of remote sensing images based on contrastive learning, including:

a data acquisition module, which acquires the panchromatic image set I_pan and the multispectral image set I_MS;

a training module, which establishes a supervised pan-sharpening model: it preprocesses the multispectral images I_MS, randomly pairs the panchromatic images I_pan with the preprocessed multispectral images, and groups the pairs into a training set and a test set; the training set is input into a generative adversarial network, and the contrastive loss function L_cl is introduced into the GAN for contrastive learning, to obtain the pan-sharpening model;

a test-set module, which inputs the test set into the pan-sharpening model for verification;

a pan-sharpening module, which inputs the multispectral image to be sharpened and the panchromatic image into the pan-sharpening model for pan-sharpening, to obtain a high-resolution multispectral image.

The invention also provides a computer-readable storage medium storing computer instructions which, when executed by a processor, implement the contrastive-learning-based pan-sharpening method for remote sensing images described in any of the above embodiments.

The invention also provides an electronic device including at least one processor and a memory communicatively connected to the processor, the memory storing instructions executable by the at least one processor; the instructions are executed by the at least one processor to cause the processor to perform the contrastive-learning-based pan-sharpening method for remote sensing images described in any of the above embodiments.

In summary, compared with the prior art, the pan-sharpening method for remote sensing images provided by the invention applies contrastive learning to the field of pan-sharpening of remote sensing images. It exploits the correlations and differences between data to learn the essential features of objects, and fuses the panchromatic image and the low-resolution spectral image through contrastive learning with a contrastive loss function in the generator network, improving the spatial resolution of the fused image.

Other features and benefits of the invention will be set forth in the following description, will in part be apparent from the description, or may be learned by practicing the invention. The objectives and other benefits of the invention may be realized and obtained by the structures particularly pointed out in the specification, claims, and drawings.

Brief Description of the Drawings

To explain the embodiments of the invention or the technical solutions of the prior art more clearly, the drawings needed for describing the embodiments or the prior art are briefly introduced below. The drawings described below are obviously only some embodiments of the invention; a person of ordinary skill in the art can derive other drawings from them without creative effort. Unless otherwise specified, the positional relationships described below are based on the orientations of the components as drawn in the figures.

Figure 1 is a flow chart of the contrastive-learning-based pan-sharpening method for remote sensing images provided by an embodiment of the invention;

Figure 2 is a flow diagram of the contrastive-learning-based pan-sharpening method for remote sensing images provided by an embodiment of the invention;

Figure 3 is a schematic diagram of the structure of the generator network G of the invention;

Figure 4 is a schematic diagram of the contrastive loss of the invention;

Figure 5 is a schematic diagram of the structure of the discriminator network D of the invention;

Figure 6 is an original multispectral image of the invention without preprocessing;

Figure 7 is a panchromatic image of the invention;

Figure 8 is a low-resolution multispectral image of the invention after preprocessing;

Figure 9 is the fused image finally generated by the pan-sharpening model of the invention.

Detailed Description of the Embodiments

To make the purpose, technical solutions, and advantages of the embodiments of the invention clearer, the technical solutions in the embodiments are described below clearly and completely with reference to the drawings. The described embodiments are obviously only some, not all, of the embodiments of the invention. The technical features designed in the different embodiments described below may be combined with one another as long as they do not conflict. Based on the embodiments of the invention, all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the scope of protection of the invention.

In describing the invention, it should be noted that all terms used herein (including technical and scientific terms) have the same meanings as commonly understood by a person of ordinary skill in the art to which the invention belongs and are not to be construed as limiting the invention. It should further be understood that the terms used herein are to be interpreted as having meanings consistent with their meanings in the context of this specification and the relevant fields, and are not to be interpreted in an idealized or overly formal sense unless expressly so defined herein.

Referring to Figure 1, a schematic flow chart of a contrastive-learning-based pan-sharpening method for remote sensing images provided by an embodiment of the invention, the method includes:

S100: Acquire the panchromatic image set I_pan and the multispectral image set I_MS;

S200: Establish the pan-sharpening model. Preprocess the original multispectral images in the multispectral image set I_MS, then randomly pair the panchromatic images in the panchromatic image set I_pan with the preprocessed multispectral images and group the pairs into a training set and a test set. Input the training set into a generative adversarial network and introduce the contrastive loss function L_cl into the GAN for contrastive learning, to obtain the pan-sharpening model;

S300: Input the test set into the pan-sharpening model for verification;

S400: Input the multispectral image to be sharpened and the panchromatic image into the pan-sharpening model for pan-sharpening, to obtain a fused image.

In a specific implementation, the panchromatic image set I_pan and the multispectral image set I_MS are acquired.

The panchromatic image set I_pan and the multispectral image set I_MS used in this embodiment come from the QuickBird and WorldView-2 satellites. The QuickBird data set contains 9 pairs of multispectral and panchromatic images, with sizes ranging from 558×2080×4 to 3162×2142×4 and from 2232×8320×1 to 12648×28568×1; for a multispectral image of size w×h there is a corresponding 4w×4h panchromatic image at the same location. In the WorldView-2 data set, the multispectral image is 6096×6255×4 and the panchromatic image is 24384×25020×1. Since no real high-resolution multispectral image is available as a reference, the multispectral image set I_MS and the panchromatic image set I_pan are downsampled by a factor r (r = 4) following Wald's protocol, and the original multispectral images are then used as the reference images.
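As a concrete illustration of the Wald-protocol preparation described here, the sketch below degrades both inputs by the factor r = 4 so that the original multispectral image can serve as the reference. This is a minimal PyTorch sketch; the choice of bicubic filtering is an assumption, since the patent does not name the filter.

```python
import torch
import torch.nn.functional as F

def wald_degrade(ms: torch.Tensor, pan: torch.Tensor, r: int = 4):
    """Wald-protocol sample preparation.

    ms:  (B, b, w, h)     original multispectral image (kept as the reference)
    pan: (B, 1, r*w, r*h) original panchromatic image
    Returns the downsampled (lr_ms, lr_pan) used as training inputs.
    """
    lr_ms = F.interpolate(ms, scale_factor=1.0 / r, mode="bicubic", align_corners=False)
    lr_pan = F.interpolate(pan, scale_factor=1.0 / r, mode="bicubic", align_corners=False)
    return lr_ms, lr_pan
```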

The establishment of the pan-sharpening model is shown in Figure 2: the original multispectral image and the panchromatic image are taken as input to generate an end-to-end fused image. The output fused image should be as close as possible to the ideal real image P. When training the pan-sharpening model, the collected original multispectral image (a high-resolution multispectral image) is preprocessed into a low-resolution multispectral image, which is then used to train the model; the fused image output at the end of the training stage should therefore be as similar as possible to the original multispectral image without preprocessing, so the real image P refers to the original, unpreprocessed multispectral image. A tensor of size w×h×b describes the original multispectral image, rw×rh×1 describes the panchromatic image, and rw×rh×b describes the fused image and the real image, where r is the spatial-resolution ratio between the panchromatic image set I_pan and the multispectral image set I_MS (r = 4) and b is the number of bands. The ultimate goal of pan-sharpening takes the form:

P̂ = f(I_pan, I_MS; θ)

where P̂ is the fused image finally obtained through contrastive learning with the pan-sharpening model, f(·) is the pan-sharpening model that takes the panchromatic image and the original multispectral image as input and produces the desired P̂, and θ is the parameter set of the pan-sharpening model. The loss function is defined as:

E_n = G((I_MS, I_pan)_n; θ_G),  θ_G = {W_1, W_2, …, W_i; B_1, B_2, …, B_i}

where E_n is the loss function, G is the generator network, n denotes the n-th image, (I_MS, I_pan)_n denotes the n-th pair of high-resolution multispectral and panchromatic images collected by the satellite, θ_G is the set of all parameters in the generator network, W_i is the weight matrix of the i-th layer, B_i is the bias of the i-th layer, and i is the total number of layers in the generator network model.

Specifically, the steps of constructing the pan-sharpened image are: preprocess the original multispectral images in the multispectral image set I_MS, then randomly pair the panchromatic images in the panchromatic image set I_pan with the preprocessed multispectral images and group them into a training set and a test set; input the training set into the generative adversarial network and introduce the contrastive loss function L_cl into the GAN for contrastive learning, to obtain the pan-sharpening model.

First, the original multispectral images are preprocessed by converting them into low-resolution multispectral images via downsampling. Pairs are then randomly cropped from the sampled multispectral image set I_MS and panchromatic image set I_pan to form multiple training samples, with 90% used as the training set and 10% as the test set.
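A minimal sketch of the random cropping and 90/10 split described above; the patch size and the number of crops are assumptions, since the patent does not state them.

```python
import random

def make_pairs(lr_ms, lr_pan, ref_ms, patch=64, n_crops=1000, r=4, train_frac=0.9):
    """Randomly crop aligned (LR-MS, LR-PAN, reference-MS) patches, then split 90/10.

    lr_ms:  (b, w/r, h/r) degraded multispectral input
    lr_pan: (1, w, h)     degraded panchromatic input (r x the LR-MS size)
    ref_ms: (b, w, h)     original multispectral image used as the reference
    """
    _, h, w = lr_ms.shape
    samples = []
    for _ in range(n_crops):
        y = random.randint(0, h - patch)
        x = random.randint(0, w - patch)
        samples.append((
            lr_ms[:, y:y + patch, x:x + patch],
            lr_pan[:, r * y:r * (y + patch), r * x:r * (x + patch)],
            ref_ms[:, r * y:r * (y + patch), r * x:r * (x + patch)],
        ))
    random.shuffle(samples)
    k = int(train_frac * len(samples))
    return samples[:k], samples[k:]
```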

Next, the training set is input into the generative adversarial network, which includes a generator network and a discriminator network; the contrastive loss function L_cl is then introduced into the GAN for contrastive learning, as follows:

S210: Input the training set into the generator network and perform contrastive learning by introducing the contrastive loss function L_cl into the generator network, to output the generated image I_fake;

As shown in Figure 2, in a specific implementation the original multispectral image and the panchromatic image are input into the pan-sharpening model, which first converts the original multispectral image into a low-resolution multispectral image by downsampling it by a factor of 4.

S211: Use the two subnetworks of the generator network to extract features from the low-resolution multispectral image and the panchromatic image in the training set, obtaining the spliced image x;

In a specific implementation, the low-resolution multispectral image and the panchromatic image first pass through two subnetworks of the generator network, which extract the image features of the low-resolution multispectral image and of the panchromatic image respectively. One subnetwork takes the 4-band MS image as input and extracts its spectral information; the other subnetwork takes the single-band panchromatic image as input and extracts the geometric spatial information it contains. The feature maps extracted by the two subnetworks are then concatenated (concat) to obtain the spliced image x.
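A minimal PyTorch sketch of the two-subnetwork feature extraction and concatenation just described; the layer counts, widths, and kernel sizes are assumptions, since the patent does not specify them, and the sketch assumes the LR MS image has been upsampled to the PAN size so the two feature maps align spatially.

```python
import torch
import torch.nn as nn

class TwoStreamEncoder(nn.Module):
    """Two subnets extract spectral (MS) and spatial (PAN) features, then
    concatenate them along the channel axis to form the 'spliced image' x."""

    def __init__(self, ms_bands: int = 4, feat: int = 32):
        super().__init__()
        self.ms_net = nn.Sequential(  # spectral branch: 4-band MS input
            nn.Conv2d(ms_bands, feat, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(feat, feat, 3, padding=1), nn.LeakyReLU(0.2),
        )
        self.pan_net = nn.Sequential(  # spatial branch: single-band PAN input
            nn.Conv2d(1, feat, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(feat, feat, 3, padding=1), nn.LeakyReLU(0.2),
        )

    def forward(self, lr_ms_up: torch.Tensor, pan: torch.Tensor) -> torch.Tensor:
        # lr_ms_up: LR MS upsampled to PAN size, (B, 4, H, W); pan: (B, 1, H, W)
        return torch.cat([self.ms_net(lr_ms_up), self.pan_net(pan)], dim=1)
```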

S212: Perform contrastive learning on the spliced image x and the spliced image x_fake in the generator network using the contrastive loss function L_cl, to obtain the generated image I_fake.

In a specific implementation, as shown in Figures 3 and 5, the spliced image x first passes through feature reconstruction to yield an initial generated image I_fake; since it has not yet undergone contrastive learning, this initial I_fake is not ready for direct output. The generated image I_fake and the panchromatic image then undergo feature extraction and fusion once more, producing the spliced image x_fake. After x_fake is obtained, features are extracted from the spliced image x and the spliced image x_fake in the first four convolutional layers of the generator network, and the extracted features of the two are compared using the contrastive loss function L_cl; x_fake is iteratively corrected through this final contrastive learning, and the image finally determined is the final generated image I_fake.
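Tying these pieces together, a hedged sketch of the encode, decode, and re-encode flow that produces x, I_fake, and x_fake, reusing the TwoStreamEncoder sketch above; the decoder layout is an assumption.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Encode (MS, PAN) into the spliced image x, decode x into I_fake,
    then re-encode (I_fake, PAN) into x_fake for the contrastive comparison."""

    def __init__(self, ms_bands: int = 4, feat: int = 32):
        super().__init__()
        self.encoder = TwoStreamEncoder(ms_bands, feat)  # sketch above
        self.decoder = nn.Sequential(
            nn.Conv2d(2 * feat, feat, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(feat, ms_bands, 3, padding=1),
        )

    def forward(self, lr_ms_up: torch.Tensor, pan: torch.Tensor):
        x = self.encoder(lr_ms_up, pan)      # spliced image x
        i_fake = self.decoder(x)             # initial generated image I_fake
        x_fake = self.encoder(i_fake, pan)   # spliced image x_fake
        return i_fake, x, x_fake
```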

The first four convolutional layers of the generator network extract features from the spliced image x and the spliced image x_fake respectively, and the extracted features of the two are then compared using the contrastive loss function L_cl.

Contrastive learning is mainly applied in the two-stream extraction of corresponding features between the spliced image x and the spliced image x_fake. Its core idea is to use the correlations and differences between data to learn the essential features of objects. For a given anchor sample, contrastive learning compares a "query" with its positive sample (semantically similar) and with negative samples from the data set (semantically different); it reduces the distance to the positive sample and increases the distance to the negative samples, so that the similarity between the positive sample and the anchor is far greater than that between the negative samples and the anchor, recovering the true distances of their original spatial distribution. For example, when contrastive learning is applied to an input remote sensing image, buildings that are all high-rises have similar features, so after contrastive learning the association between high-rise buildings should be stronger; conversely, if the image contains both ocean and green space, whose features differ, the association between the two should be weaker after contrastive learning.

Four convolutional layers are set up in the generator network; the first four convolutional layers extract features from the spliced image x and the spliced image x_fake respectively, yielding the first feature map of x and the second feature map of x_fake, which are then compared using the contrastive loss function L_cl. Specifically, the spliced image x is first reshaped into R^{c×hw}, and 256 positions are randomly sampled along the hw dimension of the reshaped x (here hw is the size of each layer); the resulting c×256 vectors are then passed through a two-layer MLP attached to the convolutional layers to obtain 256×256 feature vectors, on which contrastive learning is performed.
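A hedged sketch of the sampling-and-projection step just described: reshape the feature map, sample 256 spatial positions, and project each through a two-layer MLP. Sharing the sampled indices between x and x_fake is an assumption needed so positives can be matched position by position.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from typing import Optional

class PatchProjector(nn.Module):
    """Sample n_patches spatial positions from a (B, C, H, W) feature map and
    project each C-dim feature through a two-layer MLP to an out_dim vector."""

    def __init__(self, in_ch: int, out_dim: int = 256, n_patches: int = 256):
        super().__init__()
        self.n_patches = n_patches
        self.mlp = nn.Sequential(
            nn.Linear(in_ch, out_dim), nn.ReLU(), nn.Linear(out_dim, out_dim)
        )

    def forward(self, feat: torch.Tensor, ids: Optional[torch.Tensor] = None):
        b, c, h, w = feat.shape
        flat = feat.flatten(2).permute(0, 2, 1)   # R^{c x hw}, transposed to (B, HW, C)
        if ids is None:                           # reuse ids so x and x_fake share positions
            ids = torch.randperm(h * w, device=feat.device)[: self.n_patches]
        z = self.mlp(flat[:, ids, :])             # (B, 256, 256) feature vectors
        return F.normalize(z, dim=-1), ids
```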

The encoder G is used for image feature extraction, and its feature stack is readily available; each layer and spatial position in that stack represents a patch of the input image, with deeper layers corresponding to larger patches. The first feature map and the second feature map are passed through the two-layer MLP network m to produce the feature stack {z_n}_N = {m(G_l(x))}_N, where G_l denotes the output of the l-th selected layer. We index the layers n ∈ {1, 2, …, 4} and the spatial positions s ∈ {1, …, S_n}, where S_n is the number of spatial positions in layer n. We denote the feature at the corresponding position by z_n^s ∈ R^C and the features at the other positions by z_n^{S\s}, where C is the number of channels of each layer; the spliced image x_fake is encoded analogously as ẑ_n^s. The final goal is to match corresponding input and output patches at specific positions, using the other patches of the input as negative samples.

The contrastive learning loss is the function L_cl defined above.

For a query taken from the spliced image x at the sampled positions of the selected four convolutional layers, the pixel positions in the spliced image x_fake that correspond to x form the positive samples v+, while pixels at the other positions form the negative samples v−, with N = 256, m = 1, 2, 3, …, N, and temperature τ = 0.07.
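A hedged PyTorch sketch of the loss just described: position-matched embeddings are positives, the other N−1 sampled positions act as negatives, and τ = 0.07. Implementing InfoNCE as cross-entropy over a similarity matrix is a standard choice, not something the patent specifies.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(z_fake: torch.Tensor, z_real: torch.Tensor, tau: float = 0.07):
    """z_fake, z_real: (B, N, D) L2-normalized patch embeddings taken from x_fake
    and x at the same N sampled positions. Position m of z_real is the positive
    v+ for position m of z_fake; the other N-1 positions act as negatives v-."""
    b, n, _ = z_fake.shape
    logits = torch.bmm(z_fake, z_real.transpose(1, 2)) / tau     # (B, N, N) similarities
    labels = torch.arange(n, device=z_fake.device).expand(b, n)  # diagonal = positives
    return F.cross_entropy(logits.reshape(b * n, n), labels.reshape(b * n))
```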

S220: Input the generated image I_fake and the original multispectral image into the discriminator network and train the discriminator.

Further, a loss function L1 + L_GAN is set between the low-resolution multispectral image and the generated image I_fake. The loss L1 + L_GAN is used in the training stage of the pan-sharpening model: after each batch of training data is fed into the model, the predicted value is output through forward propagation, and the loss function then computes the difference between the predicted value and the true value, that is, the loss value. Given the loss value, the model updates its parameters through backpropagation to reduce the gap between the true and predicted values, moving the model's predictions toward the true values and thereby achieving learning. The smaller the loss function, the more robust the model.

The loss function L1 + L_GAN, combined with the contrastive loss function L_cl and trained on a large number of samples, lets the generator's generative ability and the discriminator's discriminative ability improve step by step through their adversarial contest, so that the discriminator can better distinguish generated samples from real samples. The ultimate aim is to make the generated image I_fake that the generator finally outputs to the discriminator closer to the original multispectral image; if the image input to the discriminator is close to the real image, its judgment is drawn closer, and if it is not, its judgment is pushed further away, so as to minimize the loss function L1 + L_GAN.
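Written out, the generator objective combines the three terms discussed; the weighting coefficients λ are assumptions, as the patent only names the sum L1 + L_GAN alongside L_cl:

```latex
\mathcal{L}_G = \lVert I_{fake} - I_{ref} \rVert_1
              + \lambda_{adv}\,\mathcal{L}_{GAN}
              + \lambda_{cl}\,\mathcal{L}_{cl}
```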

In a specific implementation, a label is first defined: valid = 1, fake = 0; two optimizers are then defined for training the generator and the discriminator.

The generator is trained first. While training the generator, the discriminator is frozen and its weights are not updated. The panchromatic image and the low-resolution multispectral image are fed into the generator, which outputs the generated image I_fake. I_fake is sent to the initial discriminator for judgment; during this process, the loss function L1 + L_GAN computes g_loss, the gap between the discriminator's output and valid. Making g_loss smaller and smaller means making the probability the discriminator assigns to I_fake approach valid, that is, making I_fake more similar to the original multispectral image; backpropagation then updates the generator's weights.

After the generator has been trained once, its weights are frozen and the discriminator is trained. The generated image I_fake and the original multispectral image are fed into the discriminator: real_loss is the loss of the discriminator judging the original multispectral image, pushing that output closer to valid; fake_loss is the loss of the discriminator judging the generated image I_fake, pushing that output closer to fake; and d_loss is the average of the two. The loss is propagated backward in order to drive d_loss toward 0. That is:

real_loss → 0: make the probability the discriminator assigns to the original multispectral image approach valid;

fake_loss → 0: make the probability the discriminator assigns to the generated image I_fake approach fake.

In other words, the discriminator updates its parameters according to different requirements depending on whether the input is the generated image I_fake or the original multispectral image. The smaller g_loss is, the closer the discriminator's output probability for the generator's I_fake is to valid, that is, the more I_fake resembles the original multispectral image; the smaller d_loss is, the better the discriminator can tell the original multispectral image apart from I_fake. The end goal is therefore a smaller g_loss and a larger d_loss.
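A minimal alternating-update sketch of the procedure described above, with valid = 1, fake = 0 and d_loss as the average of real_loss and fake_loss. The BCE-with-logits losses, the L1 weight, and the discriminator output shape are assumptions, and it reuses the Generator sketch above; the contrastive term L_cl from the earlier sketches would be added to g_loss in the same way.

```python
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()
l1 = nn.L1Loss()

def train_step(G, D, opt_G, opt_D, lr_ms_up, pan, ref_ms, lam=100.0):
    b = ref_ms.size(0)
    valid = torch.ones(b, 1, device=ref_ms.device)   # label valid = 1
    fake = torch.zeros(b, 1, device=ref_ms.device)   # label fake = 0

    # --- train the generator (discriminator frozen for this step) ---
    opt_G.zero_grad()
    i_fake, _, _ = G(lr_ms_up, pan)
    g_loss = bce(D(i_fake), valid) + lam * l1(i_fake, ref_ms)  # pull D(I_fake) toward valid
    g_loss.backward()
    opt_G.step()

    # --- train the discriminator (generator frozen for this step) ---
    opt_D.zero_grad()
    real_loss = bce(D(ref_ms), valid)           # judge the original MS as valid
    fake_loss = bce(D(i_fake.detach()), fake)   # judge the generated I_fake as fake
    d_loss = 0.5 * (real_loss + fake_loss)      # average of the two, as described
    d_loss.backward()
    opt_D.step()
    return g_loss.item(), d_loss.item()
```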

Finally, to obtain the final fused image, the generated image and the real image are iterated adversarially through the generator network and the discriminator network to produce the fused image; the formula for correcting the spectral and spatial information is:

D_k = G(D_{k-1}), k = 1, 2, 3, …, K

where G is the generator network, D is the discriminator network, and D_k denotes the k-th of K discrimination passes of the image through the discriminator network.

After the discriminator has been updated k times in a loop, the generator is updated once, so that the discriminator can distinguish real from fake as little as possible. After many update iterations, ideally the final discriminator cannot tell whether an image comes from the real training set or from the generator; the discrimination probability is then 0.5 and training is complete. That is, the best result is reached when d_loss finally equals 0.5: real_loss = 0, all generated images I_fake are misidentified so that fake_loss = 1, and the final d_loss = 0.5.

S300: Input the test set into the pan-sharpening model for verification;

After the pan-sharpening model has been trained on the training set described above, the test set is input into the model for verification, to assess the fusion performance of the pan-sharpening model.

S400: Input the multispectral image to be sharpened and the panchromatic image into the pan-sharpening model for pan-sharpening, to obtain a high-resolution multispectral image.

In practical applications, a low-resolution image to be pan-sharpened has no corresponding high-resolution image. The original multispectral images collected by satellite above are high-resolution multispectral images; together with the panchromatic images, they were used to train and verify the pan-sharpening model built here, so the model's pan-sharpening accuracy is guaranteed and the model can be applied in real life. Note that when the pan-sharpening model is used in a real application to process a multispectral image to be sharpened, there is no corresponding high-resolution multispectral image, so there is no preprocessing step that further lowers its resolution and no discrimination step in the discriminator; only the feature-extraction iterations in the generator are performed. These steps are the same as in the model-construction process above and are not repeated here. Because a large training set was used to train the pan-sharpening model, the fused images it produces are of very high quality, as shown in Figures 6 to 9: Figure 6 is the initial low-resolution multispectral image to be sharpened, Figure 8 is the original multispectral image, and Figure 9 is the fused image obtained after processing by the pan-sharpening model of the invention. The fused image obtained after the pan-sharpening of the invention is extremely close to the initial high-resolution multispectral image, so the invention reconstructs low-resolution multispectral images well, and the resulting fused image has high spatial resolution.
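As the paragraph above notes, deployment uses only the trained generator, with no degradation step and no discriminator. A minimal sketch, assuming the Generator sketch above and an upsampling of the LR MS to the PAN grid:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def pansharpen(G, lr_ms: torch.Tensor, pan: torch.Tensor, r: int = 4):
    """Inference path: upsample the MS to the PAN grid and run the generator once.
    lr_ms: (B, b, w, h) multispectral image to sharpen; pan: (B, 1, r*w, r*h)."""
    G.eval()
    lr_ms_up = F.interpolate(lr_ms, scale_factor=r, mode="bicubic", align_corners=False)
    i_fake, _, _ = G(lr_ms_up, pan)
    return i_fake  # (B, b, r*w, r*h) fused high-resolution multispectral image
```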

A device for pan-sharpening of remote sensing images based on contrastive learning, characterized by comprising:

a data acquisition module, which acquires the panchromatic image set I_pan and the multispectral image set I_MS;

a training module, which establishes a supervised pan-sharpening model: it preprocesses the original multispectral images in the multispectral image set I_MS, randomly pairs the panchromatic images in the panchromatic image set I_pan with the preprocessed multispectral images, and groups them into a training set and a test set; the training set is input into a generative adversarial network, and the contrastive loss function L_cl is introduced into the GAN for contrastive learning, to obtain the pan-sharpening model;

a test-set module, which inputs the test set into the pan-sharpening model for verification;

a pan-sharpening module, which inputs the multispectral image to be sharpened and the panchromatic image into the pan-sharpening model for pan-sharpening, to obtain a fused image.

A computer-readable storage medium, characterized in that the computer-readable storage medium stores computer instructions which, when executed by a processor, implement the contrastive-learning-based pan-sharpening method for remote sensing images described in any of the above embodiments.

An electronic device, characterized by comprising at least one processor and a memory communicatively connected to the processor, wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to cause the processor to perform the contrastive-learning-based pan-sharpening method for remote sensing images described in any of the above embodiments.

In addition, a person skilled in the art should understand that although many problems exist in the prior art, each embodiment or technical solution of the invention may improve on only one or a few aspects and need not simultaneously solve all the technical problems listed in the prior art or in the background. A person skilled in the art should understand that anything not mentioned in a claim should not be taken as a limitation on that claim.

Although terms such as pan-sharpening model, multispectral image set, panchromatic image set, and contrastive loss function are used extensively herein, the possibility of using other terms is not excluded. These terms are used only to describe and explain the essence of the invention more conveniently; interpreting them as any kind of additional limitation would be contrary to the spirit of the invention. The terms "first", "second", and the like (if present) in the specification, claims, and drawings of the embodiments are used to distinguish similar objects and need not describe a particular order or sequence.

Finally, it should be noted that the above embodiments are intended only to illustrate, not to limit, the technical solutions of the invention. Although the invention has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that the technical solutions recorded in the foregoing embodiments may still be modified, or some or all of their technical features may be equivalently replaced, and such modifications or replacements do not take the essence of the corresponding technical solutions outside the scope of the technical solutions of the embodiments of the invention.

Claims (10)

CN202310420311.2A · 2023-04-19 · Remote sensing image full-color sharpening method and device based on contrast learning · Pending · CN116757938A (en)

Priority Applications (1)

Application Number · Priority Date · Filing Date · Title
CN202310420311.2A · 2023-04-19 · 2023-04-19 · Remote sensing image full-color sharpening method and device based on contrast learning (CN116757938A, en)

Applications Claiming Priority (1)

Application Number · Priority Date · Filing Date · Title
CN202310420311.2A · 2023-04-19 · 2023-04-19 · Remote sensing image full-color sharpening method and device based on contrast learning (CN116757938A, en)

Publications (1)

Publication Number · Publication Date
CN116757938A · 2023-09-15

Family

Family ID: 87957809

Family Applications (1)

Application Number · Title · Priority Date · Filing Date
CN202310420311.2A (Pending, CN116757938A, en) · Remote sensing image full-color sharpening method and device based on contrast learning · 2023-04-19 · 2023-04-19

Country Status (1)

Country · Link
CN (1) · CN116757938A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number · Priority date · Publication date · Assignee · Title
CN118396887A (en)* · 2024-02-27 · 2024-07-26 · 西南林业大学 (Southwest Forestry University) · A panchromatic sharpening method based on pix2pix shading concept
CN119919313A (en)* · 2025-04-01 · 2025-05-02 · 南京邮电大学 (Nanjing University of Posts and Telecommunications) · Panchromatic sharpening method and system based on high-frequency differential spatial attention mechanism


Similar Documents

Publication · Publication Date · Title

CN113901900B (en) · Unsupervised change detection method and system for remote sensing images of the same or different sources
CN111950453B (en) · Random shape text recognition method based on selective attention mechanism
CN114187450B (en) · Remote sensing image semantic segmentation method based on deep learning
CN116912595B (en) · A cross-domain multimodal remote sensing image classification method based on contrastive learning
CN108537742A (en) · A kind of panchromatic sharpening method of remote sensing images based on generation confrontation network
CN116757938A (en) · Remote sensing image full-color sharpening method and device based on contrast learning
CN117576567B (en) · Remote sensing image change detection method using multi-level difference characteristic self-adaptive fusion
CN110245683B (en) · A Residual Relational Network Construction Method and Application for Few-Sample Target Recognition
CN114463340B (en) · Agile remote sensing image semantic segmentation method guided by edge information
CN114742707B (en) · Multi-source remote sensing image splicing method and device, electronic equipment and readable medium
CN115240072B (en) · Hyperspectral multi-class change detection method based on multidirectional multi-scale spectrum-space residual convolution neural network
CN117710711B (en) · Optical and SAR image matching method based on lightweight depth convolution network
CN119600604B (en) · Remote sensing image ship segmentation method based on U-KAN network model
CN113112531B (en) · Image matching method and device
CN113159158B (en) · License plate correction and reconstruction method and system based on generation countermeasure network
CN116050498A (en) · Network training method, device, electronic equipment and storage medium
Hughes et al. · A semi-supervised approach to SAR-optical image matching
CN118447355A (en) · A method for fusion of panchromatic and multispectral image features
CN108229273B (en) · Method and device for training multilayer neural network model and recognizing road characteristics
CN114926827A (en) · Cross visual angle geographical positioning method based on optimal transmission theory
CN114494827A (en) · A small target detection method for detecting aerial pictures
CN119169555A (en) · A complex scene structured road detection method based on deep learning
CN117764988A (en) · Road crack detection method and system based on heteronuclear convolution multi-receptive field network
CN118334322A (en) · Camouflage target detection method, camouflage target detection device, computer equipment and storage medium
CN117953526A (en) · A method for detecting subgraphs of academic papers based on multi-scale features

Legal Events

Date · Code · Title · Description
PB01 · Publication
SE01 · Entry into force of request for substantive examination
