CN117114984A - Remote sensing image super-resolution reconstruction method based on generative adversarial network - Google Patents

Remote sensing image super-resolution reconstruction method based on generative adversarial network

Info

Publication number
CN117114984A
Authority
CN
China
Prior art keywords
remote sensing
module
feature
sensing image
resolution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310787153.4A
Other languages
Chinese (zh)
Inventor
杨蕾
宋晓炜
鹿德源
蔡文静
李梦龙
王浩震
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhongyuan University of Technology
Original Assignee
Zhongyuan University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhongyuan University of Technology
Priority to CN202310787153.4A
Publication of CN117114984A
Legal status: Pending


Abstract

The invention provides a remote sensing image super-resolution reconstruction method based on a generative adversarial network, comprising the following steps: first, constructing a high-order degradation model for remote sensing images that simulates the actual degradation process, to obtain pairs of high-resolution and low-resolution remote sensing images; second, inputting the low-resolution remote sensing image into a remote sensing image generator containing super-dense residual modules to generate a reconstructed super-resolution image; then, classifying and discriminating the high-resolution remote sensing image and the reconstructed super-resolution image with a U-shaped remote sensing image discriminator that fuses recursive residuals and an attention mechanism; next, optimizing the network parameters by computing the L1 loss, perceptual loss, and GAN loss to obtain a remote sensing image super-resolution reconstruction model; and finally, generating a super-resolution reconstructed remote sensing image with the trained model. The invention can effectively eliminate blur and artifacts in the reconstructed image and achieve high-quality super-resolution reconstruction of remote sensing images.

Description

Translated from Chinese
Remote sensing image super-resolution reconstruction method based on generative adversarial network

Technical field

The invention relates to the technical field of remote sensing image reconstruction, and in particular to a remote sensing image super-resolution reconstruction method based on a generative adversarial network.

Background

With the continuous development of remote sensing technology, remote sensing images have been widely used in disaster early warning, environmental change monitoring, military reconnaissance, and other fields, owing to their wide coverage and low sensitivity to ground conditions. For a remote sensing image, the resolution determines how much information the image carries, and high-resolution images are required to obtain accurate results in target detection, classification, and scene change detection. China's remote sensing technology is developing rapidly, and satellite series such as Fengyun, Gaofen, and Beidou have gradually been integrated into all walks of life. However, limited by hardware conditions such as imaging sensors and the manufacturing processes of optical components, as well as by signal transmission bandwidth, it is difficult to acquire high-resolution remote sensing images directly, and unknown losses such as blur or noise arise during imaging and transmission. Although measures such as improving the imaging equipment or using optical components and image sensors with better manufacturing processes and higher precision can raise image quality, they are difficult and costly to implement and hard to promote in practical applications. Therefore, accomplishing remote sensing image super-resolution reconstruction through algorithms has gradually become a mainstream solution.

Before the rise of deep learning, commonly used remote sensing image super-resolution methods included interpolation-based reconstruction and reconstruction based on image prior information. The former directly upsamples a low-resolution image into a high-resolution one by pixel interpolation, but this leads to missing high-frequency information, blur, and continuous jagged edges in the reconstructed image. The latter requires explicit image priors, which limits its reconstruction performance. With the development of deep learning, image super-resolution reconstruction methods based on convolutional neural networks perform well in recovering high-frequency information; however, because the number of network layers is limited, the feature information cannot be fully exploited, so the reconstructed image still suffers from local blur, especially continuous jagged edges and artifacts in the detailed textures where ground objects intersect.

Summary of the invention

In view of the image blur, artifacts, and other problems of existing remote sensing image super-resolution reconstruction methods, the present invention proposes a remote sensing image super-resolution reconstruction method based on a generative adversarial network. The method combines a high-order image degradation model with a generative adversarial network structure, which can effectively eliminate blur and artifacts in reconstructed images and achieve high-quality super-resolution reconstruction of remote sensing images.

The technical solution of the present invention is implemented as follows:

A remote sensing image super-resolution reconstruction method based on a generative adversarial network, comprising the following steps:

Step 1: Construct a high-order degradation model for remote sensing images that simulates the actual degradation process, input the high-resolution remote sensing image IHR into the model, and obtain the corresponding low-resolution remote sensing image ILR;

Step 2: Construct a remote sensing image super-resolution reconstruction network, comprising a remote sensing image generator containing super-dense residual modules and a U-shaped remote sensing image discriminator that fuses recursive residuals and an attention mechanism; use the paired high-resolution remote sensing images IHR and low-resolution remote sensing images ILR as the training set of the network;

Step 3: Input the low-resolution remote sensing image ILR into the remote sensing image generator containing super-dense residual modules to generate a reconstructed super-resolution image ISR;

Step 4: Use the U-shaped remote sensing image discriminator fusing recursive residuals and an attention mechanism to classify and discriminate between the high-resolution remote sensing image IHR and the reconstructed super-resolution image ISR;

Step 5: Optimize the parameters of the remote sensing image super-resolution reconstruction network by computing the L1 loss, perceptual loss, and GAN loss until the network converges, completing the training and obtaining the remote sensing image super-resolution reconstruction model;

Step 6: Input the low-resolution remote sensing image to be reconstructed into the remote sensing image super-resolution reconstruction model to generate a super-resolution reconstructed remote sensing image.
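
The six steps above can be sketched as a single training loop. The sketch below is illustrative only: `degrade`, `G`, `D`, `total_loss`, and `update` are hypothetical stand-in callables for the degradation model, generator, discriminator, loss, and optimizer step described in the patent, not implementations of them.

```python
import numpy as np

def train(hr_images, degrade, G, D, total_loss, update):
    """One pass over the paired training data, following steps 1-5."""
    for hr in hr_images:
        lr = degrade(hr)             # step 1: high-order degradation -> I_LR
        sr = G(lr)                   # step 3: generator reconstructs I_SR
        d_hr, d_sr = D(hr), D(sr)    # step 4: U-shaped discriminator scores
        loss = total_loss(sr, hr, d_hr, d_sr)  # step 5: L1 + perceptual + GAN
        update(loss)                 # step 5: optimize network parameters
    return G  # step 6: the trained model reconstructs new LR images
```

In a real setting `update` would backpropagate through both networks; here it only records the loss so the data flow of the six steps stays visible.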

Preferably, the high-order degradation model for remote sensing images comprises a first stage, a second stage, and downsampling. The input of the first stage is the high-resolution remote sensing image IHR; the output of the first stage is connected to the input of the second stage; the output of the second stage is connected to the downsampling, whose scale factor is randomly selected. After downsampling, multiple low-resolution remote sensing images ILR with different degrees of degradation are output; together with the high-resolution remote sensing image IHR, they form multiple paired HR-LR remote sensing images used as the training set. Both the first stage and the second stage include a blurring process and a noise-adding process: the blur kernel types include Gaussian and sinc kernels, and the noise types include Gaussian and Poisson noise. Each blur kernel type and noise type occurs with a set probability, the set probabilities differ between the first and second stages, the blur kernel size is selected at random, and multiple iterations are performed.
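
A minimal NumPy sketch of the two-stage degradation described above. The box blur, fixed noise level, and integer-stride downsampling are simplifying assumptions standing in for the randomly parameterized Gaussian/sinc kernels, Gaussian/Poisson noise, and random scale factor of the patent.

```python
import numpy as np

rng = np.random.default_rng(0)

def blur(img, k=3):
    # box blur as a stand-in for a randomly chosen Gaussian or sinc kernel
    p = np.pad(img, k // 2, mode="edge")
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def add_noise(img, sigma=0.01):
    # Gaussian noise; the patent also draws Poisson noise with a set probability
    return img + rng.normal(0.0, sigma, img.shape)

def degrade(hr, scale=4):
    x = hr
    for _ in range(2):             # the first and second degradation stages
        x = add_noise(blur(x))
    return x[::scale, ::scale]     # downsampling (scale is random in the patent)
```

Running an HR patch through `degrade` yields the paired LR patch, so a single HR image can produce multiple LR versions by re-drawing the random parameters.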

Preferably, the remote sensing image generator containing super-dense residual modules comprises a first convolutional layer, a group of super-dense residual modules, a second convolutional layer, an upsampling module, a third convolutional layer, and a fourth convolutional layer. The low-resolution remote sensing image ILR is input into the first convolutional layer; the output features of the first convolutional layer are input into the super-dense residual module group; the output features of the group are added to the output features of the first convolutional layer and then input into the second convolutional layer; the output features of the second convolutional layer are input into the upsampling module, whose output features are input into the third convolutional layer, whose output features are input into the fourth convolutional layer, which outputs the reconstructed super-resolution image ISR. The super-dense residual module group comprises 23 sequentially connected super-dense residual modules (RRSDB); the output features of the last RRSDB are added to the output features of the first convolutional layer to form the input features of the second convolutional layer.
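
The generator's data flow can be sketched as below. `conv`, `rrsdb`, and `upsample` are hypothetical stand-in callables for the layers described above; the sketch only fixes the wiring: 23 chained RRSDBs, the long skip from the first convolution, then upsampling and two final convolutions.

```python
import numpy as np

def generator(lr, conv, rrsdb, upsample, n_blocks=23):
    f0 = conv(lr)                 # first convolutional layer
    x = f0
    for _ in range(n_blocks):     # 23 sequentially connected RRSDBs
        x = rrsdb(x)
    x = conv(x + f0)              # long skip connection, then second conv
    x = upsample(x)               # upsampling module
    x = conv(x)                   # third convolutional layer
    return conv(x)                # fourth convolutional layer -> I_SR
```

With identity stand-ins the long skip simply doubles the feature values, which makes the residual path easy to trace.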

Preferably, the RRSDB comprises convolutional layer I, LReLU activation function I, convolutional layer II, LReLU activation function II, convolutional layer III, LReLU activation function III, convolutional layer IV, LReLU activation function IV, and convolutional layer V. The input feature Fin is input into convolutional layer I, whose output is passed through LReLU activation function I; the output of LReLU activation function I is added to the input feature Fin to obtain the first feature F1. The first feature F1 is input into convolutional layer II, whose output is passed through LReLU activation function II; the output of LReLU activation function II is added to the input feature Fin and the first feature F1 to obtain the second feature F2. The second feature F2 is input into convolutional layer III, whose output is passed through LReLU activation function III; the output of LReLU activation function III is added to the input feature Fin, the first feature F1, and the second feature F2 to obtain the third feature F3. The third feature F3 is input into convolutional layer IV, whose output is passed through LReLU activation function IV; the output of LReLU activation function IV is added to the input feature Fin, the first feature F1, and the second feature F2 to obtain the fourth feature F4. The fourth feature F4 is input into convolutional layer V to obtain the output feature Fout. The expressions for the first feature F1, the second feature F2, the third feature F3, the fourth feature F4, and the output feature Fout are:

F1 = C(Fin) + Fin;

F2 = C(F1) + 2Fin + F1;

F3 = C(F2) + Fin + F1 + F2;

F4 = C(F3) + Fin + F1 + 2F2;

Fout = C(F4);

where C(·) denotes the convolution operation.
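The five expressions can be checked numerically. In the sketch below, `C` is an arbitrary callable standing in for the conv + LReLU operator of each layer (each layer has its own weights in the patent; sharing one `C` here is purely for illustration):

```python
def rrsdb(F_in, C):
    """Compute the RRSDB features exactly per the expressions above."""
    F1 = C(F_in) + F_in
    F2 = C(F1) + 2 * F_in + F1
    F3 = C(F2) + F_in + F1 + F2
    F4 = C(F3) + F_in + F1 + 2 * F2
    F_out = C(F4)
    return F1, F2, F3, F4, F_out
```

With the identity as `C`, an input of 1 yields F1 = 2, F2 = 6, F3 = 15, F4 = 30, and Fout = 30, which makes the dense reuse of earlier features easy to see.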

Preferably, the U-shaped remote sensing image discriminator fusing recursive residuals and an attention mechanism comprises a first recursive residual convolution module, a first max-pooling module, a second recursive residual convolution module, a second max-pooling module, a third recursive residual convolution module, a third max-pooling module, a fourth recursive residual convolution module, a fourth max-pooling module, a fifth recursive residual convolution module, a first upsampling module, a sixth recursive residual convolution module, a second upsampling module, a seventh recursive residual convolution module, a third upsampling module, an eighth recursive residual convolution module, a fourth upsampling module, a ninth recursive residual convolution module, and a convolutional layer;

The high-resolution remote sensing image IHR and the reconstructed super-resolution image ISR pass through the first recursive residual convolution module to extract the first feature; the first feature passes through an attention gate module to obtain the first output feature;

the first feature passes through the first max-pooling module and the second recursive residual convolution module in sequence to obtain the second feature; the second feature passes through an attention gate module to obtain the second output feature;

the second feature passes through the second max-pooling module and the third recursive residual convolution module to obtain the third feature; the third feature passes through an attention gate module to obtain the third output feature;

the third feature passes through the third max-pooling module and the fourth recursive residual convolution module to obtain the fourth feature; the fourth feature passes through an attention gate module to obtain the fourth output feature;

the fourth feature passes through the fourth max-pooling module and the fifth recursive residual convolution module to obtain the fifth feature; the fifth feature passes through the first upsampling module to obtain the first upsampled feature;

the first upsampled feature is concatenated with the fourth output feature and then passes through the sixth recursive residual convolution module and the second upsampling module in sequence to obtain the second upsampled feature;

the second upsampled feature is concatenated with the third output feature and then passes through the seventh recursive residual convolution module and the third upsampling module in sequence to obtain the third upsampled feature;

the third upsampled feature is concatenated with the second output feature and then passes through the eighth recursive residual convolution module and the fourth upsampling module in sequence to obtain the fourth upsampled feature;

the fourth upsampled feature is concatenated with the first output feature and then passes through the ninth recursive residual convolution module and the convolutional layer in sequence to obtain the image discrimination result.
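
The encoder-decoder wiring described above can be sketched with stand-in callables: `rrcb` for a recursive residual convolution module, `att` for an attention gate, and `final_conv` for the last convolutional layer. Features are assumed to be (H, W, C) arrays and the "merge and connect" operation is assumed to be channel-wise concatenation.

```python
import numpy as np

def maxpool2(x):
    h, w, c = x.shape
    return x.reshape(h // 2, 2, w // 2, 2, c).max(axis=(1, 3))

def upsample2(x):
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

def u_discriminator(img, rrcb, att, final_conv):
    skips = []
    x = img
    for _ in range(4):               # four encoder levels (modules 1-4)
        x = rrcb(x)
        skips.append(att(x))         # attention-gated skip feature
        x = maxpool2(x)
    x = rrcb(x)                      # fifth recursive residual module
    for skip in reversed(skips):     # four decoder levels (modules 6-9)
        x = upsample2(x)
        x = rrcb(np.concatenate([x, skip], axis=2))
    return final_conv(x)             # per-pixel real/fake discrimination map
```

Because the output keeps the input's spatial size, the discriminator can grade each pixel rather than the whole image, which is the basis of the finer feedback claimed for the generator.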

Preferably, the recursive residual convolution module comprises convolutional layer VI, a residual convolution layer, and an adder. The input feature passes through convolutional layer VI and then through the residual convolution layer applied with two recursions; the resulting feature is added to the input feature to obtain the output feature.

Preferably, the residual convolution layer comprises convolutional layer VII, ReLU activation function I, convolutional layer VIII, ReLU activation function II, and an adder. The input feature passes through convolutional layer VII, ReLU activation function I, convolutional layer VIII, and ReLU activation function II in sequence; the resulting feature is added to the input feature to obtain the output feature.
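
Combining the two preferred clauses, the recursive residual computation can be sketched as follows, with `C` a stand-in callable for each convolution (the patent gives each layer its own weights; one shared `C` is an illustrative simplification):

```python
import numpy as np

def residual_conv(x, C):
    # conv VII -> ReLU I -> conv VIII -> ReLU II, then add the layer input
    h = np.maximum(C(x), 0.0)
    h = np.maximum(C(h), 0.0)
    return h + x

def recursive_residual_module(x, C, recursions=2):
    h = C(x)                      # convolutional layer VI
    for _ in range(recursions):   # two recursions of the residual conv layer
        h = residual_conv(h, C)
    return h + x                  # adder: skip connection from the module input
```

The nested skips mean every recursion re-adds its own input, so gradients reach the early layers along short paths.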

Preferably, the expressions for the L1 loss, perceptual loss, and GAN loss are:

L1_loss = ||ISR - IHR||1;

Perceptual_loss = ||F(ISR) - F(IHR)||1;

GAN_loss = ln[D(IHR)] - ln[1 - D(ISR)];

Total_loss = λ1·L1_loss + λ2·Perceptual_loss + λ3·GAN_loss;

where L1_loss is the L1 loss, Perceptual_loss is the perceptual loss, GAN_loss is the GAN loss, and Total_loss is the total network loss; ||·||1 denotes the L1 norm; F denotes the feature extraction network, F(ISR) denotes the features the feature extraction network extracts from the reconstructed super-resolution image ISR, and F(IHR) denotes the features it extracts from the high-resolution remote sensing image IHR; D(IHR) denotes the probability that the discriminator network judges IHR to be real, and D(ISR) denotes the probability that the discriminator network judges ISR to be real; λ1, λ2, and λ3 denote the weight coefficients of the L1 loss, perceptual loss, and GAN loss, respectively.
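
A numerical sketch transcribing the loss expressions above. `feat` stands in for the feature extraction network F, the L1 terms use the mean as a common normalization of the L1 norm, and the weights λ1, λ2, λ3 are free hyperparameters whose values the patent does not fix here.

```python
import numpy as np

def l1_loss(sr, hr):
    return np.abs(sr - hr).mean()               # ||I_SR - I_HR||_1 (mean form)

def perceptual_loss(sr, hr, feat):
    return np.abs(feat(sr) - feat(hr)).mean()   # ||F(I_SR) - F(I_HR)||_1

def gan_loss(d_hr, d_sr, eps=1e-12):
    # ln[D(I_HR)] - ln[1 - D(I_SR)], transcribed from the patent's expression
    return np.log(d_hr + eps) - np.log(1.0 - d_sr + eps)

def total_loss(sr, hr, feat, d_hr, d_sr, lambdas=(1.0, 1.0, 0.1)):
    l1, l2, l3 = lambdas   # assumed example weights, not the patent's values
    return (l1 * l1_loss(sr, hr)
            + l2 * perceptual_loss(sr, hr, feat)
            + l3 * gan_loss(d_hr, d_sr))
```

The small `eps` guards the logarithms when the discriminator outputs exactly 0 or 1.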

Compared with the prior art, the present invention provides the following beneficial effects:

1) The high-order degradation model for remote sensing images of the present invention can simulate the degradation process of remote sensing images in real scenes. It not only significantly improves the quality of reconstructed high-resolution remote sensing images but also effectively improves the generalization ability of the network model, solving the problem that most existing algorithms use a single bicubic downsampling step, which cannot simulate the degradation losses of remote sensing images in real scenes and therefore yields unsatisfactory reconstructed image quality.

2) The remote sensing image generator of the present invention, containing the super-dense residual module RRSDB, can effectively improve the quality of reconstructed remote sensing images without increasing the network depth. It solves the problem that the generator networks of existing GAN-based image super-resolution reconstruction algorithms, constrained by network depth, cannot fully exploit the remote sensing image feature information in the network, resulting in blurred generated images.

3) The U-shaped remote sensing image discriminator of the present invention, which fuses recursive residuals and an attention mechanism, can make finer-grained classification decisions on real and generated remote sensing images and provide more accurate and detailed feedback to the generator, so that the reconstructed super-resolution images have finer texture details and better subjective perception. It solves the problem that the discriminator networks of existing GAN-based image super-resolution reconstruction algorithms can only discriminate images globally, causing blur and artifacts in the detailed parts of generated images.

Brief description of the drawings

To explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.

Figure 1 is the network training flowchart of the present invention;

Figure 2 is the network structure diagram of the high-order degradation model for remote sensing images of the present invention;

Figure 3 is the network structure diagram of the remote sensing image generator containing RRSDBs of the present invention;

Figure 4 is the network structure diagram of the U-shaped remote sensing image discriminator fusing recursive residuals and an attention mechanism of the present invention;

Figure 5 shows the recursive residual structures of the present invention, where (a) is the structure diagram of the recursive residual convolution module and (b) is the structure diagram of the residual convolution layer;

Figure 6 shows visualizations of the U-shaped remote sensing image discriminator network fusing recursive residuals and an attention mechanism of the present invention at different numbers of iterations;

Figure 7 compares the remote sensing image super-resolution reconstruction network of the present invention with some existing super-resolution algorithms on different remote sensing image datasets, in terms of no-reference objective evaluation metrics and subjective visual quality in local regions.

Detailed description of the embodiments

The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the scope of protection of the present invention.

As shown in Figure 1, an embodiment of the present invention provides a remote sensing image super-resolution reconstruction method based on a generative adversarial network, with the following specific steps:

Step 1: Construct a high-order degradation model for remote sensing images that simulates the actual degradation process, input the high-resolution remote sensing image IHR into the model, and obtain the corresponding low-resolution remote sensing image ILR.

As shown in Figure 2, the high-order degradation model for remote sensing images comprises a first stage, a second stage, and downsampling. The input of the first stage is the high-resolution remote sensing image IHR; the output of the first stage is connected to the input of the second stage; the output of the second stage is connected to the downsampling, whose scale factor is randomly selected. After downsampling, multiple low-resolution remote sensing images ILR with different degrees of degradation are output; together with the high-resolution remote sensing image IHR, they form multiple paired HR-LR remote sensing images used as the training set. Both the first stage and the second stage include a blurring process and a noise-adding process: the blur kernel types include Gaussian and sinc kernels, and the noise types include Gaussian and Poisson noise. Each blur kernel type and noise type occurs with a set probability, the set probabilities differ between the first and second stages, the blur kernel size is selected at random, and multiple iterations are performed.

Step 2: Construct a remote sensing image super-resolution reconstruction network, comprising a remote sensing image generator containing super-dense residual modules and a U-shaped remote sensing image discriminator that fuses recursive residuals and an attention mechanism; use the paired high-resolution remote sensing images IHR and low-resolution remote sensing images ILR as the training set of the network.

Step 3: Input the low-resolution remote sensing image ILR into the remote sensing image generator containing super-dense residual modules to generate a reconstructed super-resolution image ISR.

As shown in Figure 3, the remote sensing image generator containing super-dense residual modules comprises a first convolutional layer, a group of super-dense residual modules, a second convolutional layer, an upsampling module, a third convolutional layer, and a fourth convolutional layer; the first, second, third, and fourth convolutional layers are all 3×3 convolutional layers. The low-resolution remote sensing image ILR is input into the first convolutional layer; the output features of the first convolutional layer are input into the super-dense residual module group; the output features of the group are added to the output features of the first convolutional layer and then input into the second convolutional layer; the output features of the second convolutional layer are input into the upsampling module, whose output features are input into the third convolutional layer, whose output features are input into the fourth convolutional layer, which outputs the reconstructed super-resolution image ISR. The super-dense residual module group comprises 23 sequentially connected super-dense residual modules (RRSDB); the output features of the last RRSDB are added to the output features of the first convolutional layer to form the input features of the second convolutional layer.

The RRSDB comprises convolutional layer I, LReLU activation function I, convolutional layer II, LReLU activation function II, convolutional layer III, LReLU activation function III, convolutional layer IV, LReLU activation function IV, and convolutional layer V; convolutional layers I, II, III, IV, and V are all 3×3 convolutional layers. The input feature Fin is input into convolutional layer I, whose output is passed through LReLU activation function I; the output of LReLU activation function I is added to the input feature Fin to obtain the first feature F1. The first feature F1 is input into convolutional layer II, whose output is passed through LReLU activation function II; the output of LReLU activation function II is added to the input feature Fin and the first feature F1 to obtain the second feature F2. The second feature F2 is input into convolutional layer III, whose output is passed through LReLU activation function III; the output of LReLU activation function III is added to the input feature Fin, the first feature F1, and the second feature F2 to obtain the third feature F3. The third feature F3 is input into convolutional layer IV, whose output is passed through LReLU activation function IV; the output of LReLU activation function IV is added to the input feature Fin, the first feature F1, and the second feature F2 to obtain the fourth feature F4. The fourth feature F4 is input into convolutional layer V to obtain the output feature Fout. The expressions for the first feature F1, the second feature F2, the third feature F3, the fourth feature F4, and the output feature Fout are:

F_1 = C(F_in) + F_in;

F_2 = C(F_1) + 2F_in + F_1;

F_3 = C(F_2) + F_in + F_1 + F_2;

F_4 = C(F_3) + F_in + F_1 + 2F_2;

F_out = C(F_4);

where C(·) denotes the convolution operation.
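The equations above translate directly into a PyTorch module; this is a sketch that transcribes the formulas as written, with the 0.2 LReLU slope and the channel count as assumptions not stated in the patent.

```python
import torch
import torch.nn as nn

class RRSDB(nn.Module):
    """One super-dense residual block, following the feature equations above."""
    def __init__(self, feat=64):
        super().__init__()
        # five 3x3 conv layers (layers I-V); only the first four are
        # followed by the LReLU activation
        self.convs = nn.ModuleList(nn.Conv2d(feat, feat, 3, padding=1) for _ in range(5))
        self.act = nn.LeakyReLU(0.2)

    def forward(self, f_in):
        c = lambda i, x: self.act(self.convs[i](x))   # conv layer i + LReLU
        f1 = c(0, f_in) + f_in                        # F1 = C(Fin) + Fin
        f2 = c(1, f1) + 2 * f_in + f1                 # F2 = C(F1) + 2Fin + F1
        f3 = c(2, f2) + f_in + f1 + f2                # F3 = C(F2) + Fin + F1 + F2
        f4 = c(3, f3) + f_in + f1 + 2 * f2            # F4 = C(F3) + Fin + F1 + 2F2
        return self.convs[4](f4)                      # Fout = C(F4)

block = RRSDB(feat=8)
print(block(torch.randn(2, 8, 16, 16)).shape)  # torch.Size([2, 8, 16, 16])
```

Because every stage re-adds earlier features, the block keeps the input resolution and channel count, so 23 such blocks can be chained directly as in the generator.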

Step 4: Use the U-shaped remote sensing image discriminator fusing recursive residuals and an attention mechanism to classify the high-resolution remote sensing image I_HR and the reconstructed super-resolution image I_SR.

As shown in Figure 4, the U-shaped remote sensing image discriminator fusing recursive residuals and an attention mechanism comprises a first recursive residual convolution module, a first max-pooling module, a second recursive residual convolution module, a second max-pooling module, a third recursive residual convolution module, a third max-pooling module, a fourth recursive residual convolution module, a fourth max-pooling module, a fifth recursive residual convolution module, a first upsampling module, a sixth recursive residual convolution module, a second upsampling module, a seventh recursive residual convolution module, a third upsampling module, an eighth recursive residual convolution module, a fourth upsampling module, a ninth recursive residual convolution module, and a convolutional layer. The high-resolution remote sensing image I_HR and the reconstructed super-resolution image I_SR pass through the first recursive residual convolution module to extract the first feature; the first feature passes through an attention gate module to obtain the first output feature, and also passes through the first max-pooling module and the second recursive residual convolution module to obtain the second feature; the second feature passes through an attention gate module to obtain the second output feature, and also passes through the second max-pooling module and the third recursive residual convolution module to obtain the third feature; the third feature passes through an attention gate module to obtain the third output feature, and also passes through the third max-pooling module and the fourth recursive residual convolution module to obtain the fourth feature; the fourth feature passes through an attention gate module to obtain the fourth output feature, and also passes through the fourth max-pooling module and the fifth recursive residual convolution module to obtain the fifth feature. The fifth feature passes through the first upsampling module to obtain the first upsampled feature; the first upsampled feature is concatenated with the fourth output feature and passes through the sixth recursive residual convolution module and the second upsampling module to obtain the second upsampled feature; the second upsampled feature is concatenated with the third output feature and passes through the seventh recursive residual convolution module and the third upsampling module to obtain the third upsampled feature; the third upsampled feature is concatenated with the second output feature and passes through the eighth recursive residual convolution module and the fourth upsampling module to obtain the fourth upsampled feature; the fourth upsampled feature is concatenated with the first output feature and passes through the ninth recursive residual convolution module and the convolutional layer to obtain the image discrimination result.

As shown in Figure 5, the recursive residual convolution module comprises convolutional layer VI, a residual convolution layer, and an adder: the input features pass through convolutional layer VI and then through the residual convolution layer, which is applied with two recursions, and the resulting features are added to the input features to obtain the output features. The residual convolution layer comprises convolutional layer VII, ReLU activation function I, convolutional layer VIII, ReLU activation function II, and an adder: the input features pass through convolutional layer VII, ReLU activation I, convolutional layer VIII, and ReLU activation II in sequence, and the resulting features are added to the input features to obtain the output features.
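A minimal sketch of this module follows; the 3×3 kernel size and the sharing of one set of residual-layer weights across both recursions are assumptions consistent with, but not explicit in, the description.

```python
import torch
import torch.nn as nn

class ResidualConvLayer(nn.Module):
    """conv VII -> ReLU I -> conv VIII -> ReLU II, plus an identity skip (the adder)."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        return self.body(x) + x

class RecursiveResidualConv(nn.Module):
    """conv VI, then the residual conv layer applied twice (the two recursions,
    reusing the same weights), then the outer adder."""
    def __init__(self, in_ch, out_ch, recursions=2):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, 3, padding=1)  # conv layer VI
        self.res = ResidualConvLayer(out_ch)
        self.recursions = recursions

    def forward(self, x):
        y = self.conv(x)
        z = y
        for _ in range(self.recursions):
            z = self.res(z)
        return z + y  # outer residual connection

m = RecursiveResidualConv(3, 8)
print(m(torch.randn(1, 3, 16, 16)).shape)  # torch.Size([1, 8, 16, 16])
```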

Figure 6 shows exemplary visualizations of the U-shaped discriminator network's attention at different iteration counts when using the remote sensing image super-resolution reconstruction method of the present invention. Starting from iteration 5,000, one visualization is shown every 70,000 iterations, up to iteration 355,000. At the initial stage of training (5,000 iterations), the discriminator's attention is distributed almost uniformly over all regions of the image; as the number of iterations increases (75,000; 145,000; 215,000; 285,000), attention on object edges and on boundaries between land-cover types gradually strengthens while attention on other regions gradually weakens; finally, at 355,000 iterations, attention concentrates on object edges and land-cover boundaries, and attention on the remaining regions is reduced to a minimum.

Step 5: Use the reconstructed super-resolution image I_SR and the high-resolution remote sensing image I_HR to calculate the L1 loss and the perceptual loss, and use the binary cross-entropy of the discriminator's classification probabilities to calculate the GAN loss; optimize the parameters of the remote sensing image super-resolution reconstruction network according to the L1 loss, perceptual loss, and GAN loss until the network converges, completing training and yielding the remote sensing image super-resolution reconstruction model.

The L1 loss L1_loss, perceptual loss Perceptual_loss, GAN loss GAN_loss, and total network loss Total_loss are calculated as follows:

L1_loss = ||I_SR − I_HR||_1;

Perceptual_loss = ||F(I_SR) − F(I_HR)||_1;

GAN_loss = ln[D(I_HR)] − ln[1 − D(I_SR)];

Total_loss = λ1·L1_loss + λ2·Perceptual_loss + λ3·GAN_loss;

where ||·||_1 denotes the L1 norm; F denotes the feature extraction network, F(I_SR) the features it extracts from the reconstructed super-resolution image I_SR, and F(I_HR) the features it extracts from the high-resolution remote sensing image I_HR; D(I_HR) denotes the probability that the discriminator network judges I_HR to be real, and D(I_SR) the probability that it judges I_SR to be real; λ1, λ2, and λ3 are the weight coefficients of the L1 loss, perceptual loss, and GAN loss, set to 1, 1, and 0.1 respectively.

L1_loss measures the pixel-level difference between the reconstructed super-resolution image I_SR and the original high-resolution image I_HR, pushing the network parameters toward making I_SR closer to I_HR at the pixel level; Perceptual_loss measures the feature-level difference between I_SR and I_HR, pushing the parameters toward making I_SR perceptually closer to I_HR; GAN_loss measures the realism of the generated super-resolution image I_SR in the adversarial game between generator and discriminator, and through adversarial training the two networks continuously compete and improve, bringing the distribution of I_SR closer to that of I_HR. Total_loss is minimized; when its downward trend levels off, the network has converged and training is complete.
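The loss formulas above can be transcribed directly; this sketch keeps the patent's sign convention for GAN_loss verbatim, and assumes disc_fn outputs probabilities in (0, 1) and feat_fn is the feature-extraction network (a small eps guards the logarithms).

```python
import torch

def total_loss(i_sr, i_hr, feat_fn, disc_fn, lam1=1.0, lam2=1.0, lam3=0.1, eps=1e-8):
    """Total_loss = lam1*L1_loss + lam2*Perceptual_loss + lam3*GAN_loss."""
    l1 = (i_sr - i_hr).abs().mean()                              # L1_loss
    perc = (feat_fn(i_sr) - feat_fn(i_hr)).abs().mean()          # Perceptual_loss
    gan = (torch.log(disc_fn(i_hr) + eps).mean()
           - torch.log(1 - disc_fn(i_sr) + eps).mean())          # GAN_loss
    return lam1 * l1 + lam2 * perc + lam3 * gan

# sanity check: with an identity feature net and a constant 0.5 discriminator,
# the L1 and perceptual terms are each 1 and the GAN term cancels to 0
t = total_loss(torch.zeros(1, 3, 4, 4), torch.ones(1, 3, 4, 4),
               lambda x: x, lambda x: torch.full_like(x, 0.5))
print(round(t.item(), 4))  # 2.0
```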

Step 6: Input the low-resolution remote sensing image to be reconstructed into the remote sensing image super-resolution reconstruction model to generate a super-resolution reconstructed remote sensing image that achieves better results in both objective evaluation metrics and subjective perception.

Four widely used remote sensing image datasets, Kaggle, WHDLD, AID, and SYSU-CD, were used for testing. The Kaggle dataset covers 324 different scenes and contains 1,720 aerial images of 3099×2329 pixels with a spatial resolution of 0.25 m per pixel; the WHDLD dataset contains 4,960 images of 256×256 pixels across six scene categories (bare land, buildings, sidewalks, roads, vegetation, and water) with a spatial resolution of 2 m per pixel; the AID dataset contains 10,000 images of 600×600 pixels across 30 scene categories (airports, beaches, churches, commercial areas, forests, irrigation works, icebergs, windmills, etc.), with about 300 images per category and spatial resolutions of 0.5-8 m per pixel; the SYSU-CD dataset contains 20,000 images of ports and urban buildings at 256×256 pixels with a spatial resolution of 0.5 m per pixel.

During testing, the 1,370 Kaggle images not used for training, together with all images of the WHDLD, AID, and SYSU-CD datasets, were selected as the test set and fed into the trained remote sensing image super-resolution reconstruction network model with the super-resolution factor set to 4; after reconstruction, objective and subjective image quality evaluations were performed on the super-resolved images. To better verify the advancement of the present invention, the traditional interpolation-based super-resolution algorithm Bicubic, the convolutional-neural-network-based algorithms SRCNN and FSRCNN, and the generative-adversarial-network-based algorithms SRGAN and ESRGAN were selected as comparison algorithms. Four no-reference objective image quality metrics, NIQE (Naturalness Image Quality Evaluator), PIQE (Perceptual Image Quality Evaluator), HIQA (High-level Image Quality Assessment), and CEIQ (Color Image Enhancement Quality Assessment), were selected to evaluate the reconstructed super-resolution remote sensing images; smaller NIQE and PIQE values indicate better image quality, while larger HIQA and CEIQ values indicate better image quality. The experimental results are shown in Table 1, which compares Bicubic, SRCNN, FSRCNN, SRGAN, ESRGAN, and the method of the present invention on the Kaggle, WHDLD, AID, and SYSU-CD datasets in terms of the no-reference metrics NIQE, PIQE, HIQA, and CEIQ. Bold values indicate the best performance on a given metric for a given dataset; as Table 1 shows, the method of the present invention performs best on every dataset and every evaluation metric.

Table 1: Comparison of objective evaluation metrics of different super-resolution reconstruction algorithms on each dataset

Figure 7 compares the subjective visual quality of local regions in the reconstruction results of different super-resolution algorithms on the Kaggle, WHDLD, AID, and SYSU-CD datasets. For the Kaggle example, the method of the present invention better reconstructs vehicle edge details in densely parked areas of a parking lot; for the WHDLD example, it better restores road details when reconstructing roads in densely vegetated areas, yielding clearer road edges; for the AID example, it makes the reconstructed bridge lane markings clearer; and for the SYSU-CD example, it makes the road edges in the reconstructed image clearer. The method of the present invention therefore yields the best subjective visual quality. In Figure 7, the values to the right of each image give its no-reference quality metrics NIQE, PIQE, HIQA, and CEIQ; the images reconstructed by the present method also achieve the best no-reference scores. In summary, the method of the present invention reconstructs images better, producing super-resolution images with finer texture details that better match the subjective perception of the human eye.

The above are merely preferred embodiments of the present invention and are not intended to limit it; any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall fall within its scope of protection.

Claims (8)

2. The remote sensing image super-resolution reconstruction method based on a generative adversarial network of claim 1, wherein the remote sensing image high-order degradation model comprises a first stage, a second stage, and downsampling; the input of the first stage is a high-resolution remote sensing image I_HR, the output of the first stage is connected to the input of the second stage, the output of the second stage is connected to the downsampling, and the downsampling factor is randomly selected; the downsampling outputs multiple low-resolution remote sensing images I_LR with different degrees of degradation, which together with the high-resolution remote sensing image I_HR form multiple pairs of HR-LR remote sensing images as the training set; the first stage and the second stage each comprise a blurring process and a noise-adding process, wherein the blur kernel types include a Gaussian blur kernel and a sinc blur kernel, and the noise types include Gaussian noise and Poisson noise; the blur kernel type and noise type occur according to set probabilities, the set probability of the first stage differs from that of the second stage, the blur kernel size is randomly selected, and multiple iterations are performed.
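The two-stage degradation pipeline of claim 2 can be sketched as below. This is a simplified illustration: only Gaussian blur and Gaussian noise are shown (the sinc kernel, Poisson noise, and the per-stage occurrence probabilities are omitted), and the kernel size, sigma, and noise ranges are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def gaussian_kernel(size, sigma):
    """Normalized 2-D Gaussian kernel of shape (1, 1, size, size)."""
    ax = torch.arange(size, dtype=torch.float32) - (size - 1) / 2
    g = torch.exp(-ax ** 2 / (2 * sigma ** 2))
    k = torch.outer(g, g)
    return (k / k.sum()).view(1, 1, size, size)

def degrade_stage(hr, sigma_range=(0.2, 3.0), noise_range=(0.0, 0.05)):
    """One blur + noise stage with a randomly drawn blur strength and noise level."""
    sigma = torch.empty(1).uniform_(*sigma_range).item()
    k = gaussian_kernel(9, sigma)
    ch = hr.shape[1]
    blurred = F.conv2d(hr, k.repeat(ch, 1, 1, 1), padding=4, groups=ch)
    return blurred + torch.randn_like(blurred) * torch.empty(1).uniform_(*noise_range).item()

def degrade(hr, scales=(2, 3, 4)):
    """First stage, second stage, then downsampling with a randomly selected factor."""
    x = degrade_stage(degrade_stage(hr))
    s = scales[torch.randint(len(scales), (1,)).item()]
    return F.interpolate(x, scale_factor=1 / s, mode='bicubic', align_corners=False)

lr = degrade(torch.rand(1, 3, 64, 64), scales=(4,))
print(lr.shape)  # torch.Size([1, 3, 16, 16])
```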
3. The remote sensing image super-resolution reconstruction method based on a generative adversarial network of claim 1, wherein the remote sensing image generator containing super-dense residual modules comprises a first convolutional layer, a super-dense residual module group, a second convolutional layer, an upsampling module, a third convolutional layer, and a fourth convolutional layer; the low-resolution remote sensing image I_LR is fed into the first convolutional layer, the output features of the first convolutional layer are fed into the super-dense residual module group, the output features of the super-dense residual module group are added to the output features of the first convolutional layer and then fed into the second convolutional layer, the output features of the second convolutional layer are fed into the upsampling module, the output features of the upsampling module are fed into the third convolutional layer, the output features of the third convolutional layer are fed into the fourth convolutional layer, and the fourth convolutional layer outputs the reconstructed super-resolution image I_SR; the super-dense residual module group comprises 23 sequentially connected super-dense residual blocks (RRSDB), and the output features of the last RRSDB are added to the output features of the first convolutional layer to serve as the input features of the second convolutional layer.
4. The remote sensing image super-resolution reconstruction method based on a generative adversarial network of claim 3, wherein the RRSDB comprises convolutional layer I, LReLU activation function I, convolutional layer II, LReLU activation function II, convolutional layer III, LReLU activation function III, convolutional layer IV, LReLU activation function IV, and convolutional layer V; the input feature F_in is fed into convolutional layer I, the output features of convolutional layer I are fed into LReLU activation function I, and the output features of LReLU activation function I are added to F_in to obtain the first feature F_1; F_1 is fed into convolutional layer II, the output features of convolutional layer II are fed into LReLU activation function II, and the output features of LReLU activation function II are added to F_in and F_1 to obtain the second feature F_2; F_2 is fed into convolutional layer III, the output features of convolutional layer III are fed into LReLU activation function III, and the output features of LReLU activation function III are added to F_in, F_1, and F_2 to obtain the third feature F_3; F_3 is fed into convolutional layer IV, the output features of convolutional layer IV are fed into LReLU activation function IV, and the output features of LReLU activation function IV are added to F_in, F_1, and F_2 to obtain the fourth feature F_4; F_4 is fed into convolutional layer V, which yields the output feature F_out; the expressions for F_1, F_2, F_3, F_4, and F_out are respectively:
F_1 = C(F_in) + F_in; F_2 = C(F_1) + 2F_in + F_1; F_3 = C(F_2) + F_in + F_1 + F_2; F_4 = C(F_3) + F_in + F_1 + 2F_2; F_out = C(F_4), where C(·) denotes the convolution operation.
5. The remote sensing image super-resolution reconstruction method based on a generative adversarial network of claim 1, wherein the U-shaped remote sensing image discriminator fusing recursive residuals and an attention mechanism comprises a first recursive residual convolution module, a first max-pooling module, a second recursive residual convolution module, a second max-pooling module, a third recursive residual convolution module, a third max-pooling module, a fourth recursive residual convolution module, a fourth max-pooling module, a fifth recursive residual convolution module, a first upsampling module, a sixth recursive residual convolution module, a second upsampling module, a seventh recursive residual convolution module, a third upsampling module, an eighth recursive residual convolution module, a fourth upsampling module, a ninth recursive residual convolution module, and a convolutional layer;
CN202310787153.4A | 2023-08-28 | 2023-08-28 | Remote sensing image super-resolution reconstruction method based on generative adversarial network | Pending | CN117114984A (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202310787153.4A | 2023-08-28 | 2023-08-28 | Remote sensing image super-resolution reconstruction method based on generative adversarial network

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202310787153.4A | 2023-08-28 | 2023-08-28 | Remote sensing image super-resolution reconstruction method based on generative adversarial network

Publications (1)

Publication Number | Publication Date
CN117114984A | 2023-11-24

Family

ID=88802827

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202310787153.4A (Pending) | CN117114984A (en) Remote sensing image super-resolution reconstruction method based on generative adversarial network | 2023-08-28 | 2023-08-28

Country Status (1)

Country | Link
CN (1) | CN117114984A (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN118229962A (en)* | 2024-05-23 | 2024-06-21 | Anhui University | Remote sensing image target detection method, system, electronic equipment and storage medium
CN118469842A (en)* | 2024-05-07 | 2024-08-09 | Guangdong University of Technology | Remote sensing image defogging method based on generation countermeasure network
CN118570457A (en)* | 2024-08-05 | 2024-08-30 | Shandong Institute of Aerospace Electronic Technology | An image super-resolution method driven by remote sensing target recognition task
CN118628714A (en)* | 2024-06-04 | 2024-09-10 | Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences | A small target detection method for low and medium resolution optical remote sensing images
CN119048352A (en)* | 2024-10-31 | 2024-11-29 | Zhejiang Lab | Image super-resolution method and device based on two-stage generation countermeasure network
CN119494781A (en)* | 2025-01-20 | 2025-02-21 | Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences | Super-resolution reconstruction method of remote sensing images based on degradation mechanism
CN119991527A (en)* | 2025-04-15 | 2025-05-13 | Yantai University | A super-resolution remote sensing image generation method and system


Similar Documents

Publication | Publication Date | Title
CN117114984A (en) | Remote sensing image super-resolution reconstruction method based on generative adversarial network
CN111598778B (en) | Super-resolution reconstruction method for insulator image
Engin et al. | Cycle-dehaze: Enhanced cyclegan for single image dehazing
CN113673590B (en) | Rain removal method, system and medium based on multi-scale hourglass densely connected network
CN113962878B (en) | Low-visibility image defogging model method
Xiao et al. | Physics-based GAN with iterative refinement unit for hyperspectral and multispectral image fusion
CN111667407B (en) | Image super-resolution method guided by depth information
CN117474764B (en) | A high-resolution reconstruction method for remote sensing images under complex degradation models
CN111179196A (en) | Multi-resolution depth network image highlight removing method based on divide-and-conquer
CN114841856A (en) | Image super-pixel reconstruction method of dense connection network based on depth residual channel space attention
Shao et al. | Multi-focus image fusion based on transformer and depth information learning
CN114118199A (en) | Image classification method and system for fault diagnosis of intelligent pump cavity endoscope
Babu et al. | An efficient image dahazing using Googlenet based convolution neural networks
Zhang et al. | Proxy and cross-stripes integration transformer for remote sensing image dehazing
CN116091492B (en) | A pixel-level detection method and system for image changes
CN112200752B (en) | A multi-frame image deblurring system based on ER network and its method
CN116596782A (en) | Image restoration method and system
Chen et al. | Attentive generative adversarial network for removing thin cloud from a single remote sensing image
CN117422619A (en) | Training method of image reconstruction model, image reconstruction method, device and equipment
Xu et al. | Swin transformer and ResNet based deep networks for low-light image enhancement
Liu et al. | Dual UNet low-light image enhancement network based on attention mechanism
Wang et al. | A CBAM-GAN-based method for super-resolution reconstruction of remote sensing image
Song et al. | DRGAN: A Detail Recovery-Based Model for Optical Remote Sensing Images Super-Resolution
Gupta et al. | A robust and efficient image de-fencing approach using conditional generative adversarial networks
Liu et al. | An Asymptotic Multi-Scale Symmetric Fusion Network for Hyperspectral and Multispectral Image Fusion

Legal Events

Date | Code | Title | Description
PB01 | Publication
SE01 | Entry into force of request for substantive examination
