CN109086668B - A method of extracting road information from UAV remote sensing images based on multi-scale generative adversarial network - Google Patents

A method of extracting road information from UAV remote sensing images based on multi-scale generative adversarial network

Info

Publication number
CN109086668B
Authority
CN
China
Prior art keywords
network
image
remote sensing
size
generation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810707890.8A
Other languages
Chinese (zh)
Other versions
CN109086668A (en)
Inventor
李玉霞
彭博
童玲
杨超
范琨龙
程渊
李凡
袁浪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China
Priority to CN201810707890.8A
Publication of CN109086668A
Application granted
Publication of CN109086668B
Status: Active
Anticipated expiration


Abstract

Translated from Chinese


The invention discloses a method for extracting road information from UAV images based on a multi-scale generative adversarial network. A remote sensing image is passed through a convolution operation and a deconvolution operation to obtain copies whose length and width are halved and doubled, respectively. The images at the three scales are then passed through an end-to-end trained image segmentation network to obtain output feature maps at the three corresponding scales, which are brought back to the original scale by convolution and deconvolution operations and fused by pixel-wise addition. The fused map is input into the discrimination network and compared with the label image to obtain an error, and the parameters of the generation network and the discrimination network are updated. After training on a sufficient amount of training data, the output of the generation network within the adversarial network is used as the segmentation result in application, that is, the extracted road area image. When the road area occupies an excessively large or small proportion of a single UAV image, the invention can still extract the road area well, and at the same time it improves the segmentation accuracy of road areas in UAV remote sensing images.


Description

Method for extracting road information from unmanned aerial vehicle remote sensing images based on a multi-scale generative adversarial network
Technical Field
The invention relates to the technical field of automatic processing of unmanned aerial vehicle (UAV) remote sensing images, and in particular to a method for extracting road information from high-resolution UAV remote sensing images based on a generative adversarial network fused with multi-scale image processing.
Background
UAV remote sensing is one of the development trends of remote sensing. It offers strong timeliness, pertinence and high flexibility in data acquisition, and is an important way of acquiring remote sensing data. Roads are among the most common ground-feature information in remote sensing images, and extracting road information is of great significance in fields that concern the national economy and people's livelihood, such as military strategy, space mapping, urban construction, traffic management and traffic navigation.
In recent years, with the rapid development of deep learning, broad areas of machine learning, including computer vision, have been rapidly taken over by deep learning, including image classification, object detection and image semantic segmentation. Compared with traditional algorithms, deep learning improves accuracy by roughly 20%-30%, mainly owing to the strong ability of convolutional neural networks to learn image features, which traditional algorithms based on pixel and boundary identification cannot match.
Although many existing convolutional neural network models have performed well in image semantic segmentation, some features in the training data are sometimes not learned well. In the semantic segmentation task in particular, segmentation models typically predict the class of each pixel. This can be highly accurate at the pixel level, but correlations between pixels are easily ignored, so objects in the segmentation result are not complete enough, or their sizes and shapes differ from those in the labels. Moreover, the complexity and variability of real scenes mean convolutional neural networks often lack generality; the main factors behind this lack of generalization ability are large variations of the target object, occlusion and overlap of objects in different scenes, loss of high-resolution features, illumination changes, and the like.
The generative adversarial network is a method proposed to solve the above problems: a generative model in the adversarial network learns, through a convolutional neural network, to turn random noise into images resembling the input data distribution, while a discrimination network measures the difference between the generated false image and the original input image, pushing the generated false image as close as possible to the original until the two can no longer be distinguished.
In the image semantic segmentation task, the generative model in a generative adversarial network learns the features of input RGB images to produce a pixel-level probability map of label-class predictions, and a discrimination network judges the difference between this probability map and the real sample label. Compared with a traditional convolutional neural network, the adversarial model not only improves the completeness of individual objects in the segmentation result but also keeps the objects mutually independent, improving segmentation accuracy.
Owing to the characteristics of the UAV remote sensing platform, the flying heights of different UAV sorties often differ, so the same ground objects are imaged at different sizes. For a road area, when the UAV flies low, the road may occupy more than 90% of a single image, or even 100%; when the UAV flies high, the road area may account for only 10% of a single image or even less. In a conventional convolutional neural network, once the network model is designed, extracting features with a large convolution kernel tends to ignore small targets, while extracting features with a small kernel easily produces discontinuities in the segmentation of large targets, affecting image segmentation accuracy.
Disclosure of Invention
The invention aims to solve the problem that road-region extraction accuracy suffers when, owing to inconsistent UAV flight heights, the road region occupies an excessively large or small proportion of a single remote sensing image at imaging time, and provides a UAV image road information extraction method based on a multi-scale generative adversarial network.
To achieve this purpose, the invention provides a UAV image road information extraction method based on a multi-scale generative adversarial network, characterized by comprising the following steps:
(1) obtaining training data
Cut an original UAV remote sensing image into a series of remote sensing images of size n×n, then make label images marking the road area, and take each remote sensing image and its corresponding label image as training data;
(2) building a generation network
2.1) in the generation network, pass the RGB three-channel image of the n×n remote sensing image through a convolution operation and a deconvolution operation to obtain RGB three-channel images of sizes 0.5n×0.5n and 2n×2n, respectively;
2.2) in the generation network, pass the 2n×2n RGB three-channel image obtained in step 2.1) through an end-to-end trained image segmentation network to obtain a 2n×2n classification probability feature map, and reduce it to an n×n probability feature map by a convolution operation;
2.3) in the generation network, pass the 0.5n×0.5n RGB three-channel image obtained in step 2.1) through an end-to-end trained image segmentation network of the same structure as in step 2.2) to obtain a 0.5n×0.5n classification probability feature map, and enlarge it to an n×n probability feature map by a deconvolution operation;
2.4) in the generation network, pass the RGB image of the n×n remote sensing image through an image segmentation network of the same structure as in step 2.2) to obtain an n×n classification probability feature map;
2.5) in the generation network, finally fuse the three n×n probability feature maps obtained in steps 2.2), 2.3) and 2.4) by pixel-wise addition to obtain the output feature map of the generation network;
(3) input the n×n remote sensing image in the training data into the generation network constructed in step (2) to obtain an output feature map; pass the output feature map and the n×n remote sensing image each through one convolution operation and connect the resulting feature maps as the input of a discrimination network, which produces an output between 0 and 1; the discrimination network treats this input as a false image, so its expected output is 0, and the discrimination network's output is subtracted from this expected output to obtain an error;
(4) pass the n×n remote sensing image in the training data and its corresponding label image each through one convolution operation, then connect the resulting feature maps as the input of the discrimination network, which produces an output between 0 and 1; the discrimination network treats this input as a real image, so its expected output is 1, and the discrimination network's output is subtracted from this expected output to obtain an error;
(5) back-propagate the errors obtained in steps (3) and (4), updating the generation network and discrimination network parameters, where the end-to-end trained image segmentation networks in steps 2.2), 2.3) and 2.4) share weight parameters;
(6) train the generation network through steps (3), (4) and (5) with all remote sensing images and their corresponding label images in the training data obtained in step (1), so that the generation network and the discrimination network in the generative adversarial network reach a balanced state: the difference between the output feature map (the false map) generated by the generation network and the label image becomes very small, and the discrimination network cannot tell whether its input image comes from the label image or from the output feature map (false map) generated by the generation network;
(7) take the generation network out of the balanced generative adversarial network and apply it alone: cut the remote sensing image shot by an actual UAV into a series of n×n remote sensing images, take them as input, and take the output feature map of the generation network as the segmentation result, that is, the extracted road area image.
The object of the invention is thus achieved.
The UAV image road information extraction method based on a multi-scale generative adversarial network proceeds as follows. First, the remote sensing image is passed through convolution and deconvolution operations to obtain an image whose length and width are halved and an image whose length and width are doubled. Second, the images at the three scales are passed through an end-to-end trained image segmentation network to obtain pixel-level prediction probability maps, i.e. output feature maps, at the three corresponding scales. Third, the pixel-level output feature maps of the three scales are unified to the original training image size by convolution and deconvolution operations, and the features of the three scales are fused by pixel-wise addition. Finally, the output feature map fusing the three scales is input into the discrimination network and compared with the real sample label to obtain an error; the error is back-propagated, and the generation network and discrimination network parameters are updated. After training on a sufficient amount of training data, the generation network and the discrimination network reach a balance: the false image generated by the generation network differs little from the real label image, so the discrimination network cannot tell whether its input comes from the label image or from the generated false image. Finally, the output of the generation network in the adversarial network is taken as the segmentation result in application, that is, the extracted road area image.
According to the invention, the features of the UAV remote sensing image are learned by a convolutional neural network, the advantages of the generative adversarial network are exploited, and a multi-scale image processing method is fused in, so that the road area can be extracted well even when it occupies an excessively large or small proportion of a single image, improving road-area segmentation accuracy in UAV remote sensing images.
Drawings
FIG. 1 is a diagram of the overall structure of the generative adversarial network;
FIG. 2 is a flowchart of an embodiment of the UAV image road information extraction method based on a multi-scale generative adversarial network according to the present invention;
FIG. 3 is a diagram of the fused multi-scale generation network architecture of the present invention;
FIG. 4 is a diagram of the discrimination network architecture;
FIG. 5 is a set of comparison maps between road region images output by the generation network of the present invention and by a generation network without fused multi-scale features;
FIG. 6 is another set of comparison maps between road region images output by the generation network of the present invention and by a generation network without fused multi-scale features.
Detailed Description
The following description of embodiments of the present invention is provided with reference to the accompanying drawings so that those skilled in the art can better understand the invention. It should be expressly noted that in the following description, detailed descriptions of known functions and designs are omitted where they might obscure the subject matter of the invention.
Fig. 1 is a general configuration diagram of the generative adversarial network.
As shown in fig. 1, in the generative adversarial network, the remote sensing image is input to the generation network to obtain a false map, i.e. the output feature map. The label image or the false map, together with the remote sensing image, is input to the discrimination network to obtain a true/false probability, which is subtracted from the expected output 1/0 to obtain an error. The error is back-propagated, and the generation network and discrimination network parameters are updated. The training remote sensing images and their corresponding label images are fed in continuously, so that the generation network and the discrimination network reach a balanced state: the difference between the output feature map (false map) generated by the generation network and the label image is small, and the discrimination network cannot judge whether its input comes from the label image or from the generation network's output. The generation network obtained in this way can then be applied to the segmentation of remote sensing images shot by an actual UAV.
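These per-sample errors can be written compactly. A minimal sketch, assuming the least-squares form that the subtraction above suggests (the patent fixes only the expected outputs 1/0, not the exact loss function):

```latex
% x: remote sensing image, y: label image, G(x): false map from the generation network
% D(x, m) in (0, 1): discrimination network score for the pair (image, map)
\mathcal{L}_D = \bigl(D(x,\,y) - 1\bigr)^2 + \bigl(D(x,\,G(x)) - 0\bigr)^2,
\qquad
\mathcal{L}_G = \bigl(D(x,\,G(x)) - 1\bigr)^2
```

Minimizing these two objectives alternately drives the two networks toward the balanced state described above.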
Fig. 2 is a flowchart of an embodiment of the UAV image road information extraction method based on the multi-scale generative adversarial network.
In this embodiment, as shown in fig. 2, the method for extracting UAV image road information based on the multi-scale generative adversarial network includes the following steps:
step S1: obtaining training data
Cut an original UAV remote sensing image into a series of remote sensing images of size n×n, then make label images marking the road area, and take each remote sensing image and its corresponding label image as training data.
In this embodiment, the original UAV remote sensing image is cut into a series of 500×500 remote sensing images, and the label images marking the road area are made manually. In order to verify the object segmentation capability of the invention, 90% of the remote sensing images and their corresponding label images are used as training data, and the remaining 10% as test data.
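A minimal sketch of this data preparation step, assuming non-overlapping tiles and discarding partial tiles at the image border (the patent does not specify the cropping stride or edge handling; `tile_image` and `split_pairs` are hypothetical helper names):

```python
import numpy as np

def tile_image(image: np.ndarray, n: int = 500) -> list:
    """Cut an H x W x 3 remote sensing image into non-overlapping n x n tiles."""
    h, w = image.shape[:2]
    return [image[r:r + n, c:c + n]
            for r in range(0, h - n + 1, n)
            for c in range(0, w - n + 1, n)]

def split_pairs(pairs: list, train_frac: float = 0.9, seed: int = 0):
    """Shuffle (image tile, label tile) pairs and split 90% train / 10% test."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(pairs))
    cut = int(train_frac * len(pairs))
    return [pairs[i] for i in idx[:cut]], [pairs[i] for i in idx[cut:]]
```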
Step S2: building a generative network
As shown in fig. 3, in the generation network, the RGB three-channel image I of the n×n remote sensing image is passed through a deconvolution operation and a convolution operation to obtain RGB three-channel images I1 of size 2n×2n and I2 of size 0.5n×0.5n. In this embodiment, I1 and I2 are 1000×1000 and 250×250, respectively.
The 2n×2n RGB three-channel image I1 is passed through an end-to-end trained image segmentation network to obtain a 2n×2n classification probability feature map I3, which is reduced to an n×n probability feature map I4 by a convolution operation.
The 0.5n×0.5n RGB three-channel image I2 is passed through an end-to-end trained image segmentation network of the same structure to obtain a 0.5n×0.5n classification probability feature map I5, which is enlarged to an n×n probability feature map I6 by a deconvolution operation.
The RGB image I of the n×n remote sensing image is passed through an image segmentation network of the same structure to obtain an n×n classification probability feature map I7.
Finally, the three n×n probability feature maps I4, I6 and I7 are fused by pixel-wise addition, merging the image features of the three scales to obtain the output feature map I8 of the generation network.
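A minimal PyTorch sketch of this generator, assuming a toy two-layer stack in place of the unspecified end-to-end segmentation network and standard strided (de)convolutions for the rescaling; layer widths and kernel sizes are illustrative assumptions:

```python
import torch.nn as nn

class MultiScaleGenerator(nn.Module):
    """The n x n input is rescaled to 0.5n and 2n by a strided conv / deconv,
    all three scales pass through ONE shared segmentation network, and the
    outputs are brought back to n x n and added pixel-wise (I4 + I6 + I7 = I8)."""

    def __init__(self, ch: int = 3, classes: int = 2):
        super().__init__()
        self.down = nn.Conv2d(ch, ch, 3, stride=2, padding=1)            # n -> 0.5n
        self.up = nn.ConvTranspose2d(ch, ch, 4, stride=2, padding=1)     # n -> 2n
        self.seg = nn.Sequential(                                        # shared weights
            nn.Conv2d(ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, classes, 3, padding=1))
        self.shrink = nn.Conv2d(classes, classes, 4, stride=2, padding=1)         # 2n -> n
        self.grow = nn.ConvTranspose2d(classes, classes, 4, stride=2, padding=1)  # 0.5n -> n

    def forward(self, x):                             # x: (B, 3, n, n)
        p_mid = self.seg(x)                           # n x n map (I7)
        p_big = self.shrink(self.seg(self.up(x)))     # 2n branch back to n (I4)
        p_small = self.grow(self.seg(self.down(x)))   # 0.5n branch back to n (I6)
        return p_mid + p_big + p_small                # pixel-wise fusion (I8)
```

Because a single `seg` module processes all three scales, its weights are shared across branches exactly as step S5 requires.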
Step S3: the n×n remote sensing image in the training data is input to the generation network constructed in step S2 to obtain an output feature map. As shown in fig. 4, the output feature map and the n×n remote sensing image are each passed through one convolution operation, and the resulting feature maps are connected as the input of the discrimination network, which produces an output between 0 and 1. The discrimination network treats this input as a false image, so its expected output is 0; the discrimination network's output is subtracted from this expected output to obtain an error.
Step S4: the n×n remote sensing image in the training data and its corresponding label image are each passed through one convolution operation, and the resulting feature maps are connected as the input of the discrimination network, which produces an output between 0 and 1. The discrimination network treats this input as a real image, so its expected output is 1; the discrimination network's output is subtracted from this expected output to obtain an error.
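A matching sketch of the discrimination network, assuming the label or generated map is encoded with `mask_ch` channels and that a small strided stack with global pooling produces the scalar score; fig. 4's exact layer layout is not reproduced here, so the body is illustrative:

```python
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    """Image and mask (label or generated map) each pass through one convolution;
    the feature maps are concatenated and pooled down to a sigmoid score in (0, 1)."""

    def __init__(self, img_ch: int = 3, mask_ch: int = 2):
        super().__init__()
        self.img_conv = nn.Conv2d(img_ch, 16, 3, padding=1)
        self.mask_conv = nn.Conv2d(mask_ch, 16, 3, padding=1)
        self.body = nn.Sequential(
            nn.Conv2d(32, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, 1), nn.Sigmoid())

    def forward(self, image, mask):
        feats = torch.cat([self.img_conv(image), self.mask_conv(mask)], dim=1)
        return self.body(feats)      # (B, 1); 1 = "real pair", 0 = "fake pair"
```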
Step S5: the errors obtained in steps S3 and S4 are back-propagated, and the generation network and discrimination network parameters are updated; the three end-to-end trained image segmentation networks in step S2 share weight parameters.
Step S6: all remote sensing images and their corresponding label images in the training data obtained in step S1 are used to train the generation network through steps S3, S4 and S5, so that the generation network and the discrimination network in the generative adversarial network reach a balanced state: the output feature map (false map) generated by the generation network differs little from the label image, and the discrimination network cannot tell whether its input comes from the label image or from the output feature map (false map) generated by the generation network.
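One adversarial update, sketched under the least-squares reading of the errors given earlier (`G` and `D` are the modules sketched above; the optimizers and the exact loss form are assumptions, since the patent describes only the expected outputs and back-propagation):

```python
import torch
import torch.nn.functional as F

def train_step(G, D, opt_g, opt_d, image, label):
    # label: road mask encoded with the same channel count as G's output (assumed).
    fake = G(image)

    # Discriminator: real pair -> 1 (step S4), fake pair -> 0 (step S3).
    d_real = D(image, label)
    d_fake = D(image, fake.detach())
    d_loss = (F.mse_loss(d_real, torch.ones_like(d_real))
              + F.mse_loss(d_fake, torch.zeros_like(d_fake)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: push D to score the fake pair as real; the shared segmentation
    # weights inside G receive gradients from all three scales at once (step S5).
    g_score = D(image, fake)
    g_loss = F.mse_loss(g_score, torch.ones_like(g_score))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```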
Step S7: the generation network in the balanced generative adversarial network is taken out and applied alone: the remote sensing images shot by an actual UAV are cut into a series of n×n remote sensing images and used as input, and the output feature map of the generation network is taken as the segmentation result, that is, the extracted road area image.
When the test data are input into the generation network and the obtained road area images are compared with the label images, the extraction effect is good.
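A minimal sketch of this application step, assuming the generator outputs two class channels, that channel 1 is the road class, and that a 0.5 threshold on the softmax probability yields a binary mask (all assumptions; the patent simply takes the output feature map as the result):

```python
import torch

@torch.no_grad()
def extract_roads(G, tiles):
    """Apply the trained generator alone to n x n tiles cut from a new UAV image."""
    G.eval()
    masks = []
    for t in tiles:                                   # t: (3, n, n) float tensor
        prob = G(t.unsqueeze(0)).softmax(dim=1)
        masks.append((prob[0, 1] > 0.5).to(torch.uint8))  # channel 1 = road (assumed)
    return masks
```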
FIG. 5 is a set of comparison maps between road region images output by the generation network of the present invention and by a generation network without fused multi-scale features.
In fig. 5, the first column is the input original UAV remote sensing image, the second column is the corresponding manually labeled tag image, the third column is the road area image output by the generation network with fused multi-scale features, and the fourth column is the road area image output by the generation network without fused multi-scale features.
The comparison shows that when the image background is relatively simple and the outline of the road target area is clear, both the network with the fused multi-scale feature structure and the network without it extract the road area information well.
FIG. 6 is another set of comparison maps between road region images output by the generation network of the present invention and by a generation network without fused multi-scale features.
In fig. 6, the first column is the input original UAV remote sensing image, the second column is the corresponding manually labeled tag image, the third column is the road area image output by the generation network with fused multi-scale features, and the fourth column is the road area image output by the generation network without fused multi-scale features.
The comparison shows that when the road area in the image is shaded or other objects with features similar to the road are present, the road information extracted by the network with the fused multi-scale feature structure is more accurate than that extracted by the network without it.
Although illustrative embodiments of the present invention have been described above to facilitate understanding by those skilled in the art, it should be understood that the invention is not limited to the scope of those embodiments. Various changes will be apparent to those skilled in the art as long as they remain within the spirit and scope of the invention as defined by the appended claims, and all matter utilizing the inventive concepts is protected.

Claims (2)

Translated from Chinese

1. A method for extracting road information from UAV images based on a multi-scale generative adversarial network, characterized by comprising the following steps:
(1) obtaining training data: crop the original UAV remote sensing image into a series of remote sensing images of size n×n, then make label images marking the road area, and use each remote sensing image and its corresponding label image as training data;
(2) building a generation network:
2.1) in the generation network, for the RGB three-channel image of the n×n remote sensing image, obtain RGB three-channel images of sizes 0.5n×0.5n and 2n×2n through a convolution operation and a deconvolution operation, respectively;
2.2) in the generation network, pass the 2n×2n RGB three-channel image obtained in step 2.1) through an end-to-end trained image segmentation network to obtain a 2n×2n classification probability feature map, and reduce it to an n×n probability feature map by a convolution operation;
2.3) in the generation network, pass the 0.5n×0.5n RGB three-channel image obtained in step 2.1) through an end-to-end trained image segmentation network of the same structure as in step 2.2) to obtain a 0.5n×0.5n classification probability feature map, and enlarge it to an n×n probability feature map by a deconvolution operation;
2.4) in the generation network, pass the RGB image of the n×n remote sensing image through an image segmentation network of the same structure as in step 2.2) to obtain an n×n classification probability feature map;
2.5) in the generation network, finally fuse the three n×n probability feature maps obtained in steps 2.2), 2.3) and 2.4) by pixel-wise addition, merging the image features of the three scales to obtain the output feature map of the generation network;
(3) input the n×n remote sensing image in the training data into the generation network constructed in step (2) to obtain an output feature map; pass the output feature map and the n×n remote sensing image each through one convolution operation and connect the resulting feature maps as the input of a discrimination network, which produces an output between 0 and 1; the discrimination network treats this input as a false image, so its expected output is 0, and the discrimination network's output is subtracted from this expected output to obtain an error;
(4) pass the n×n remote sensing image in the training data and its corresponding label image each through one convolution operation, then connect the resulting feature maps as the input of the discrimination network, which produces an output between 0 and 1; the discrimination network treats this input as a real image, so its expected output is 1, and the discrimination network's output is subtracted from this expected output to obtain an error;
(5) back-propagate the errors obtained in steps (3) and (4) and update the generation network and discrimination network parameters, where the end-to-end trained image segmentation networks in steps 2.2), 2.3) and 2.4) share weight parameters;
(6) train the generation network through steps (3), (4) and (5) with all remote sensing images and their corresponding label images in the training data obtained in step (1), so that the generation network and the discrimination network in the generative adversarial network reach a balanced state and the discrimination network cannot distinguish whether its input image comes from the label image or from the output feature map produced by the generation network;
(7) take the generation network out of the balanced generative adversarial network and apply it alone: cut the remote sensing images captured by an actual UAV into a series of n×n remote sensing images, use them as input, and take the output feature map of the generation network as the segmentation result, that is, the extracted road area image.

2. The method for extracting road information from UAV images based on a multi-scale generative adversarial network according to claim 1, characterized in that n=500.
CN201810707890.8A · 2018-07-02 · 2018-07-02 · A method of extracting road information from UAV remote sensing images based on multi-scale generative adversarial network · Active · CN109086668B (en)

Priority Applications (1)

Application Number · Priority Date · Filing Date · Title
CN201810707890.8A · 2018-07-02 · 2018-07-02 · A method of extracting road information from UAV remote sensing images based on multi-scale generative adversarial network

Applications Claiming Priority (1)

Application Number · Priority Date · Filing Date · Title
CN201810707890.8A · 2018-07-02 · 2018-07-02 · A method of extracting road information from UAV remote sensing images based on multi-scale generative adversarial network

Publications (2)

Publication Number · Publication Date
CN109086668A (en) · 2018-12-25
CN109086668B · true · 2021-05-14

Family

ID=64836907

Family Applications (1)

Application Number · Title · Priority Date · Filing Date
CN201810707890.8A · Active · CN109086668B (en) · A method of extracting road information from UAV remote sensing images based on multi-scale generative adversarial network

Country Status (1)

Country · Link
CN (1) · CN109086668B (en)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number · Priority date · Publication date · Assignee · Title
CN111376910B (en)* · 2018-12-29 · 2022-04-15 · 北京嘀嘀无限科技发展有限公司 · User behavior identification method and system and computer equipment
CN109993820B (en)* · 2019-03-29 · 2022-09-13 · 合肥工业大学 · Automatic animation video generation method and device
CN109978897B (en)* · 2019-04-09 · 2020-05-08 · 中国矿业大学 · Heterogeneous remote sensing image registration method and device based on multi-scale generative adversarial network
CN110598673A (en)* · 2019-09-24 · 2019-12-20 · 电子科技大学 · Remote sensing image road extraction method based on residual error network
CN111339950B (en)* · 2020-02-27 · 2024-01-23 · 北京交通大学 · Remote sensing image target detection method
CN111428678B (en)* · 2020-04-02 · 2023-06-23 · 山东卓智软件股份有限公司 · Method for generating remote sensing image sample expansion of countermeasure network under space constraint condition
CN111582104B (en)* · 2020-04-28 · 2021-08-06 · 中国科学院空天信息创新研究院 · Remote sensing image semantic segmentation method and device based on self-attention feature aggregation network
CN111582175B (en)* · 2020-05-09 · 2023-07-21 · 中南大学 · A Semantic Segmentation Method for High-Resolution Remote Sensing Imagery Using Shared Multi-Scale Adversarial Features
CN111985464B (en)* · 2020-08-13 · 2023-08-22 · 山东大学 · Court judgment document-oriented multi-scale learning text recognition method and system
CN113033608B (en)* · 2021-02-08 · 2024-08-09 · 北京工业大学 · Remote sensing image road extraction method and device
CN113538615B (en)* · 2021-06-29 · 2024-01-09 · 中国海洋大学 · Remote sensing image coloring method based on double-flow generator depth convolution countermeasure generation network
WO2023277793A2 (en)* · 2021-06-30 · 2023-01-05 · Grabtaxi Holdings Pte. Ltd · Segmenting method for extracting a road network for use in vehicle routing, method of training the map segmenter, and method of controlling a vehicle
CN113688873B (en)* · 2021-07-28 · 2023-08-22 · 华东师范大学 · A Vector Road Network Generation Method with Intuitive Interactive Capability
CN113361508B (en)* · 2021-08-11 · 2021-10-22 · 四川省人工智能研究院(宜宾) · Cross-view-angle geographic positioning method based on unmanned aerial vehicle-satellite
CN115641512B (en)* · 2022-12-26 · 2023-04-07 · 成都国星宇航科技股份有限公司 · Satellite remote sensing image road identification method, device, equipment and medium
CN116935043B (en)* · 2023-06-14 · 2025-09-30 · 电子科技大学 · A typical ground object remote sensing image generation method based on multi-task generative adversarial network

Citations (2)

* Cited by examiner, † Cited by third party
Publication number · Priority date · Publication date · Assignee · Title
CN108090902A (en)* · 2017-12-30 · 2018-05-29 · 中国传媒大学 · A no-reference image quality assessment method based on multi-scale generative adversarial network
CN108230264A (en)* · 2017-12-11 · 2018-06-29 · 华南农业大学 · A Single Image Dehazing Method Based on ResNet Neural Network

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number · Priority date · Publication date · Assignee · Title
US10430685B2* · 2016-11-16 · 2019-10-01 · Facebook, Inc. · Deep multi-scale video prediction

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number · Priority date · Publication date · Assignee · Title
CN108230264A (en)* · 2017-12-11 · 2018-06-29 · 华南农业大学 · A Single Image Dehazing Method Based on ResNet Neural Network
CN108090902A (en)* · 2017-12-30 · 2018-05-29 · 中国传媒大学 · A no-reference image quality assessment method based on multi-scale generative adversarial network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Improved image automatic segmentation method based on; Peizhi Wen et al.; Application Research of Computers; 2017-09-30; pp. 1-7 *

Also Published As

Publication number · Publication date
CN109086668A (en) · 2018-12-25

Similar Documents

Publication · Publication Date · Title
CN109086668B (en) · A method of extracting road information from UAV remote sensing images based on multi-scale generative adversarial network
CN111862126B (en) · Non-cooperative target relative pose estimation method combining deep learning and geometric algorithm
CN110119728B (en) · Remote sensing image cloud detection method based on multi-scale fusion semantic segmentation network
CN112749594B (en) · Information completion method, lane line recognition method, intelligent driving method and related products
CN109087510B (en) · Traffic monitoring method and device
CN111222395A (en) · Target detection method and device and electronic equipment
CN111738110A (en) · Vehicle target detection method in remote sensing images based on multi-scale attention mechanism
US20240029303A1 (en) · Three-dimensional target detection method and apparatus
CN112801158A (en) · Deep learning small target detection method and device based on cascade fusion and attention mechanism
Wan et al. · A novel neural network model for traffic sign detection and recognition under extreme conditions
CN115240168A (en) · Perception result obtaining method and device, computer equipment and storage medium
CN113763447B (en) · Method for completing depth map, electronic device and storage medium
CN113052108A (en) · Multi-scale cascade aerial photography target detection method and system based on deep neural network
CN115223146A (en) · Obstacle detection method, obstacle detection device, computer device, and storage medium
WO2020259416A1 (en) · Image collection control method and apparatus, electronic device, and storage medium
CN116863354A (en) · An aerial target saliency detection method based on double fovea imitation of eagle eye vision
Ren et al. · Environment influences on uncertainty of object detection for automated driving systems
CN116128718A (en) · Method and device for target detection in high-resolution remote sensing images based on attention mechanism
CN117994748A (en) · Road side aerial view target detection method and device, computing equipment and storage medium
CN117911900A (en) · A method and system for detecting obstacles and targets of substation inspection drones
CN110909656B (en) · Pedestrian detection method and system integrating radar and camera
CN108229273B (en) · Method and device for training multilayer neural network model and recognizing road characteristics
CN115424150A (en) · Target identification positioning and presenting method, device, equipment and storage medium
CN118366065B (en) · A method and system for detecting vehicles using drone images based on altitude information
Ahmad et al. · Resource efficient mountainous skyline extraction using shallow learning

Legal Events

Date · Code · Title · Description
PB01 · Publication
SE01 · Entry into force of request for substantive examination
GR01 · Patent grant
