Technical Field
The present invention relates to the fields of remote sensing image processing, image fusion, and deep learning, and in particular to a method for fusing panchromatic and multispectral images in remote sensing; it belongs to the technical field of remote sensing image fusion.
Background Art
Remote sensing images with high spatial and spectral resolution have clear practical value for object detection, geographic mapping, environmental monitoring, and similar applications. However, owing to limits on signal transmission bands and on-board sensor storage, most remote sensing satellites provide only multispectral (MSI) images of low spatial but high spectral resolution and panchromatic (PAN) images of high spatial but low spectral resolution; the two image types are strongly complementary. Exploiting their differences and complementarity to fuse them into a single image with sharp spatial detail and rich spectral information is known as pan-sharpening, a key preprocessing step in many remote sensing applications.
In the present invention, multispectral (MSI) and hyperspectral (HSI) images are collectively referred to as spectral images. Typically, a spectral image comprises multiple narrow spectral bands acquired by the sensor, i.e., it contains multiple components.
In pan-sharpening, the key requirements of a panchromatic-spectral fusion method are: spectral fidelity, i.e., the fused image should be consistent with the spectral information of the spectral image; spatial fidelity, i.e., the fused image should be consistent with the spatial information of the panchromatic image; and low time and computational redundancy, i.e., fusion of large-scale remote sensing images of multiple sizes should complete quickly.
At present, mainstream pan-sharpening methods fall into several categories: component substitution, multiresolution analysis, and relative spectral contribution. Component substitution methods transform the multispectral image into another color space, mainly via principal component analysis (PCA), Schmidt orthogonalization, or the intensity-hue-saturation (IHS) transform, replace the spatial-information channel of the multispectral image with the panchromatic image, and obtain the fused image by the inverse transform. Multiresolution analysis methods, mostly based on tools such as the wavelet transform and the Laplacian pyramid, convert the image from the spatial domain to a transform domain, formulate fusion rules suited to the characteristics of the transform coefficients, and finally invert the transform to obtain the fused image. Relative spectral contribution methods, including the Brovey transform and the panchromatic-plus-multispectral method, operate on linear combinations of spectral bands instead of image components.
In recent years, with the emergence of large-scale data and the development of deep neural networks, deep learning has become an important research direction in machine learning. Convolutional neural networks (CNNs) have strong feature learning ability: the features learned by a deep network model represent the original data more intrinsically, and a deep architecture trained on large-scale data can extract the rich latent information in the data, which benefits visualization and classification tasks.
A generative adversarial network (GAN) is a newer type of network among deep learning algorithms, in which a generator network and a discriminator network, both built from convolutional neural networks, are trained adversarially. Using the principle of a two-player zero-sum game, the generative model is trained until a Nash equilibrium is reached. In scenarios with missing data, the generative model can synthesize relevant data and increase the data volume, so that semi-supervised learning improves learning efficiency. The discriminative model judges how realistic a sample is, while the generative model is continuously strengthened; through repeated iteration, the generated samples come ever closer to real samples.
Summary of the Invention
The purpose of the present invention is to provide a pan-sharpening method for remote sensing images that improves spatial and spectral resolution while reducing computational redundancy. Based on deep learning and the generative adversarial network model, the invention trains an end-to-end fusion model for panchromatic and spectral images on existing remote sensing image data. The technical scheme adopted by the present invention is as follows:
A pan-sharpening method for remote sensing images based on a generative adversarial network, comprising the following steps:
1) Construct the dataset: classify and annotate the targets in the remote sensing image data, build an original image dataset containing panchromatic images and spectral images, and divide it into a training set and a test set.
2) Design the generator and discriminator models: the GAN generator model G learns, from a random noise vector and the images in the dataset, a mapping to a generated sample image y, i.e., G: {x, z} → y; after training, the samples generated by G cannot be judged fake by the discriminator model. The discriminator model D uses a convolutional neural network and, after training, discriminates whether generated data are real or fake, solving this binary classification problem as well as possible. In generating the final output sample image, the input panchromatic and spectral images share the same underlying structure and the locations of salient edges; the generator model therefore adds skip connections and adopts the overall U-Net structure, divided into an encoding part and a decoding part, where each encoding layer halves the height and width of the feature map and increases the number of feature channels by half, and each decoding layer doubles the height and width of the feature map and doubles the number of feature channels, is channel-concatenated with the corresponding encoding layer, and is then deconvolved. The discriminator model is designed on a convolutional neural network for classification, containing one concatenation layer and four convolutional layers; it classifies only the authenticity of each patch of the generated fused image, is run convolutionally across the image, and all responses are averaged to produce the final output of D.
3) Train the generative adversarial network model: based on the designed generator and discriminator models, the generator network G produces samples from random noise or latent variables, and the discriminator network D provides the loss function applied to the panchromatic and fused images. The two are trained simultaneously until a Nash equilibrium is reached, at which point the discriminator can no longer correctly distinguish generated data from real data. The network structure is trained on the dataset until fully optimized, i.e., until the weight parameters converge to the global optimum.
Compared with the prior art, the improvements and advantages of the present invention are as follows:
1. Unlike the approaches of all existing pan-sharpening methods, the present invention innovatively proposes an image fusion technique based on deep learning.
Compared with traditional methods, a GAN can use deep convolutional neural networks to extract the high-dimensional deep features hidden in large-scale data, and its structure minimizes information loss during convolution. A GAN requires no forward and inverse transforms of the original image into a color space or other transform domain, preserving the stability and coherence of the data; meanwhile, the trained model fuses images directly, which is faster and more efficient.
2. Compared with traditional generative models, the generative adversarial process is more efficient and its output more realistic. A GAN does not need to generate data sequentially by sampling, so its time redundancy in producing samples is lower than that of fully visible belief networks such as NADE, PixelRNN, and WaveNet. Whereas a variational autoencoder (VAE) introduces a deterministic bias to optimize a lower bound on the log-likelihood, a GAN optimizes the likelihood itself, so the generated instances are more realistic. Unlike nonlinear independent component methods such as NICE and Real NVP, a GAN does not require the latent variables input to the generative model to have any specific dimensionality or to be invertible. Compared with Boltzmann machines and generative stochastic networks, a GAN generates an instance in a single model pass rather than through many iterations of a Markov chain.
Brief Description of the Drawings
Figure 1 is a flowchart of the experiments required by the present invention.
Figure 2 is a schematic diagram of the generative adversarial network structure used in the present invention.
Figure 3: (a) panchromatic remote sensing image data; (b) spectral remote sensing image data; (c) remote sensing image data fused by a traditional method; (d) remote sensing image data fused by the present invention.
Figure 4 shows the detection application results of the present invention.
Detailed Description of the Embodiments
To make the technical solution of the present invention clearer, specific embodiments of the present invention are further described below. As shown in Figure 1, the present invention is implemented in the following steps:
1. Construction of a large-scale remote sensing image dataset
The present invention mainly builds its datasets from publicly available remote sensing image collections such as SpaceNet on AWS. Taking the RIO dataset in SpaceNet as an example, it contains 50 cm image data with coordinates, sourced from DigitalGlobe's WorldView-2 satellite, and is usable in many applications such as image segmentation and detection.
SpaceNet is a large-scale remote sensing image dataset hosted on Amazon's AWS cloud platform, produced jointly by DigitalGlobe, CosmiQ Works, and NVIDIA. It comprises an online repository of satellite imagery together with labeled training data, and is a publicly released high-resolution satellite imagery platform dedicated to training machine learning algorithms. In addition, the present invention combines relevant remote sensing data from NWPU VHR-10, the Geospatial Data Cloud of the Chinese Academy of Sciences, the United States Geological Survey (USGS), and Google to assemble the datasets needed for training and testing.
Taking the 3band_AOI_1_RIO data in the SpaceNet dataset as an example, its spectral images contain red, green, and blue channels. The spatial resolution ratio between the panchromatic and spectral images is 4:1, and the input panchromatic image resolution is 256×256; the network sharpens the panchromatic image against the spectral image as reference. Each panchromatic image has a unique corresponding target spectral image. After classifying and annotating the targets, we built an original image dataset containing the panchromatic and spectral images and divided it into a training set and a test set at a ratio of 4:1.
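As a minimal sketch of the pairing and 4:1 split described above (the directory layout, file extension, and random seed here are assumptions for illustration, not part of the SpaceNet release):

```python
import glob
import random

def build_dataset(pan_dir, ms_dir, train_ratio=0.8, seed=0):
    """Pair each panchromatic tile with its unique spectral counterpart,
    then split the pairs 4:1 into a training set and a test set."""
    pan_paths = sorted(glob.glob(pan_dir + "/*.tif"))
    ms_paths = sorted(glob.glob(ms_dir + "/*.tif"))
    assert len(pan_paths) == len(ms_paths), "each PAN image needs one MS image"
    pairs = list(zip(pan_paths, ms_paths))
    random.Random(seed).shuffle(pairs)       # reproducible shuffle before the split
    n_train = int(len(pairs) * train_ratio)  # 4:1 ratio -> 80% for training
    return pairs[:n_train], pairs[n_train:]

train_pairs, test_pairs = build_dataset("3band_AOI_1_RIO/PAN", "3band_AOI_1_RIO/MS")
```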
2. Design of the pan-sharpening model based on a generative adversarial network
As shown in Figure 2, the goal of the generator model is to minimize the probability that the discriminator model judges the generated data to be fake. During training, the generator network tries its best to "deceive" the discriminator's judgment, while the discriminator network tries its best to classify images correctly.
The GAN generator G learns a mapping from a random noise vector z and an image x from the dataset described in Section 1 to a generated sample image y, i.e., G: {x, z} → y; after training, the sample images generated by G cannot be judged fake by the discriminator model. The discriminator model D uses a convolutional neural network and, once trained, discriminates whether generated data are real or fake, solving this classification problem as well as possible.
The training objective of the GAN can be expressed by the following formula:
L_cGAN(G, D) = E_{x,y}[log D(x, y)] + E_{x,z}[log(1 − D(x, G(x, z)))]
The above formula is the objective function to be optimized in the game between the generative model and the discriminative model. Here E_{x,y}[log D(x, y)] denotes the expectation of the discriminative loss log D(x, y); x and z follow prior distributions, with z a vector sampled from the uniform distribution on [−1, 1]; and E_{x,z} denotes the expectation over these random vectors. The functions G and D denote, respectively, the output image of the generator network and the output of the discriminator network.
Both the generative and discriminative models adopt a convolution — batch normalization — rectified linear unit (ReLU) structure. During training, the generative model minimizes this objective while the discriminative model maximizes it, i.e.:
G* = arg min_G max_D L_cGAN(G, D)
In generating the final output sample image, the input panchromatic and spectral images share the same underlying structure and the locations of salient edges. So that the generative model can capture this information, it adds skip connections and adopts the overall U-Net structure. The network is divided into an encoding part (eight layers) and a decoding part (eight layers): each encoding layer halves the height and width of the feature map and increases the number of feature channels by half, while each decoding layer doubles the height and width of the feature map and doubles the number of feature channels, is channel-concatenated with the corresponding encoding layer, and is then deconvolved.
The input image is mirror-padded on all sides; the network is designed with 20 convolutional layers, 4 downsampling steps, and 4 upsampling steps. Specifically, for an n-layer network we add a skip connection between each layer i and layer n − i, connecting all channels of layer i with those of layer n − i. Every convolutional layer except the last is followed by a ReLU activation layer, adding nonlinear units to the network and reducing the risk of overfitting; the output of the last convolutional layer is normalized.
The downsampling process reduces the redundancy of the computation; finally, two deconvolution layers upsample the image, and the features output by earlier convolutional layers are superimposed on those output by later ones, enabling feature reuse.
The discriminator model is designed on a convolutional neural network (CNN) for classification. The network contains one concatenation layer and four convolutional layers. This parameter-reduced CNN classifies only the authenticity of each patch of the generated fused image; the network is run convolutionally across the image, and all responses are averaged to produce the final output of D.
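The following TensorFlow 1.x sketch illustrates one way to realize the discriminator as described (one concatenation layer plus four convolutional layers, patch scores averaged to a scalar); the filter counts, kernel sizes, and strides are assumptions, and batch normalization is omitted for brevity:

```python
import tensorflow as tf

def discriminator(pan, fused, reuse=False):
    """Patch discriminator sketch: pan and fused are assumed to share the
    same spatial size; each patch of the fused image is scored real/fake."""
    with tf.variable_scope("discriminator", reuse=reuse):
        x = tf.concat([pan, fused], axis=3)  # the concatenation layer
        for i, filters in enumerate([64, 128, 256]):  # three feature conv layers
            x = tf.layers.conv2d(x, filters, kernel_size=4, strides=2,
                                 padding="same", activation=tf.nn.relu,
                                 name="conv_%d" % (i + 1))
        # fourth convolutional layer: a one-channel map of per-patch scores
        patch_logits = tf.layers.conv2d(x, 1, kernel_size=4, strides=1,
                                        padding="same", name="conv_4")
        # run convolutionally over the image, average all patch responses
        return tf.reduce_mean(tf.sigmoid(patch_logits))
```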
3. Construction of the pan-sharpening model based on a generative adversarial network
For paired images in the dataset, the training direction can be defined as panchromatic image to spectral image. The input-output image pairs have a preset size, and image data of other aspect ratios can be used by setting the relevant parameters. The generator network G produces realistic samples from random noise or latent variables, and the discriminator network D provides the loss function applied to the panchromatic and fused images; the two are trained simultaneously until a Nash equilibrium is reached, at which point the discriminator can no longer correctly distinguish generated data from real data. The network structure is trained on the large-scale image dataset until fully optimized, with the weight parameters converging to the global optimum. The specific implementation steps of the generator and discriminator training processes are detailed below.
(1) Generator model
Based on the programmable deep learning framework TensorFlow, first define a store, layers, that holds each layer of the U-Net, and build the first encoding layer, encoder_1. A batch of panchromatic remote sensing images is fed into this encoding layer; after processing by the convolution function, the output is a feature map of reduced resolution and increased channel count, which is appended to layers. The arguments of the conv function are a batch of feature maps, the number of output channels, and the convolution stride.
Second, define the output channel counts of the second through eighth encoding layers and generate those layers in a loop.
Third, define the output channel counts and dropout ratios of the next seven decoding layers, then enter the skip-connection loop. The first iteration deconvolves the output of the eighth encoding layer, layers[-1]. The second iteration channel-concatenates that deconvolved output with the output of the encoder's seventh layer and then deconvolves the result.
Fourth, continuing in the same manner, the output of the current seventh decoding layer is channel-concatenated with the output of the encoder's first layer and then deconvolved. These steps complete the U-Net generative model.
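A condensed sketch of steps one through four, assuming pix2pix-style channel counts (the 4×4 kernels and filter numbers are illustrative assumptions; dropout and the mirror padding described earlier are omitted for brevity):

```python
import tensorflow as tf

def generator(pan):
    """U-Net generator sketch: eight encoding layers are stored in `layers`,
    and each decoding layer is channel-concatenated with its mirrored
    encoder output (the skip connection) before the next deconvolution."""
    layers = []
    x = pan
    with tf.variable_scope("generator"):
        for i, filters in enumerate([64, 128, 256, 512, 512, 512, 512, 512]):
            x = tf.layers.conv2d(x, filters, kernel_size=4, strides=2,
                                 padding="same", activation=tf.nn.relu,
                                 name="encoder_%d" % (i + 1))
            layers.append(x)  # keep every encoder output for the skips
        for i, filters in enumerate([512, 512, 512, 512, 256, 128, 64]):
            x = tf.layers.conv2d_transpose(x, filters, kernel_size=4, strides=2,
                                           padding="same", activation=tf.nn.relu,
                                           name="decoder_%d" % (8 - i))
            x = tf.concat([x, layers[-(i + 2)]], axis=3)  # skip from layer n-i
        # final deconvolution back to a 3-channel fused image
        return tf.layers.conv2d_transpose(x, 3, kernel_size=4, strides=2,
                                          padding="same", activation=tf.tanh,
                                          name="decoder_1")
```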
(2) Discriminator model
The training goal of the discriminative model D is to maximize its own discrimination accuracy. The discriminator model is designed with one concatenation layer and four convolutional layers. Again define a store, layers = []; the concatenation layer channel-concatenates each pair of panchromatic and spectral images before output. Each convolutional layer applies a convolution to the previous layer's result and outputs it.
We also need to construct the real-sample discriminator, the fake-sample discriminator, the discriminator objective function, and the generator objective function. The discriminator objective (discriminator_loss) should drive the real-judgment rate (predict_real) toward 1 and the fake-judgment rate (predict_fake) toward 0. The generator objective (generator_loss) should drive the rate at which generated samples are judged fake toward 0, and the value of targets − outputs should also approach 0.
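A minimal sketch of the two objective functions as described; the L1 weight is an assumption, and the −log(predict_fake) term reflects the reading that the generator drives the fake-judgment rate to 0, i.e., D's realness score on generated pairs toward 1:

```python
import tensorflow as tf

EPS = 1e-12  # guards against log(0)

def gan_losses(predict_real, predict_fake, targets, outputs, l1_weight=100.0):
    """predict_real / predict_fake are D's outputs on real and generated
    pairs; targets / outputs are the reference and fused images."""
    # discriminator_loss: drive predict_real toward 1 and predict_fake toward 0
    discriminator_loss = tf.reduce_mean(
        -(tf.log(predict_real + EPS) + tf.log(1.0 - predict_fake + EPS)))
    # generator_loss: drive the fake-judgment rate toward 0 (D scores the
    # generated pair as real) and drive targets - outputs toward 0
    generator_loss = tf.reduce_mean(-tf.log(predict_fake + EPS)) \
        + l1_weight * tf.reduce_mean(tf.abs(targets - outputs))
    return discriminator_loss, generator_loss
```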
With the discriminator training function (discrim_train) and generator training function (gen_train) built, the model can be iterated and trained repeatedly on the database constructed in Section 1; batch gradient descent is chosen as the network's training strategy.
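A sketch of building discrim_train and gen_train with batch gradient descent, as the text specifies; the learning rate and the variable-scope names (matching the earlier sketches) are assumptions:

```python
import tensorflow as tf

def build_train_ops(discriminator_loss, generator_loss, learning_rate=0.0002):
    """Each training op updates only the variables of its own network."""
    d_vars = [v for v in tf.trainable_variables()
              if v.name.startswith("discriminator")]
    g_vars = [v for v in tf.trainable_variables()
              if v.name.startswith("generator")]
    optimizer = tf.train.GradientDescentOptimizer(learning_rate)
    discrim_train = optimizer.minimize(discriminator_loss, var_list=d_vars)
    gen_train = optimizer.minimize(generator_loss, var_list=g_vars)
    return discrim_train, gen_train
```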
4. Verification of the application effect of the pan-sharpening model
Based on the dataset described in Section 1, the present invention is first compared against typical methods such as PCI sharpening and the ENVI-GS transform. As shown in Figure 3, to make the subjective comparison clearer we use three-channel images for the fusion experiment and analyze the results of the compared algorithms; in terms of subjective visual quality, the method of the present invention preserves good spectral information and spatial detail on all of the test data.
To verify the application effect of the above network model, existing deep learning object detection algorithms such as Faster R-CNN and YOLO9000 are applied on the test dataset constructed in Section 1. Datasets in the PASCAL VOC competition format are built from the spectral images before and after fusion, respectively; five classes of military and civilian targets in the images (aircraft, ships, oil storage tanks, bridges, and harbors) are annotated, and object detection models are trained and tested.
Before training, a built-in script is called to convert the image format, writing the classes and other information in the label text into LMDB-format files. The mean file of the training set images must also be computed, removing the many useless background pixels in the images and suppressing redundant information unrelated to the targets. The classifier is cascaded with the detector, with the classifier's output as the detector's input, forming the final recognition model. Finally, the test data are analyzed, metrics such as accuracy and speed are computed, and the recognition performance of the system is evaluated.