Integrating Aerial and Street View Images for Urban Land Use Classification




Abstract
1. Introduction
2. Related Work
2.1. Land Use and Land Cover Classification
2.2. DNN-based Semantic Segmentation
3. Methodology
3.1. Ground Feature Map Construction
3.1.1. Semantic Feature Extraction
3.1.2. Spatial Interpolation
3.2. DNN-Based Data Fusion
3.2.1. Semantic Segmentation Network
3.2.2. Data Fusion
4. Experiments
4.1. Dataset
4.2. Evaluation Metrics
- (1) Pixel accuracy: $\mathrm{PA} = \sum_{i=1}^{k} n_{ii} / N$, where $n_{ij}$ is the number of pixels of class $i$ predicted as class $j$, $k$ is the number of classes, and $N$ is the total number of labeled pixels.
- (2) Kappa coefficient: $\kappa = (p_o - p_e) / (1 - p_e)$, where $p_o$ is the observed agreement (i.e., the pixel accuracy) and $p_e = \sum_{i=1}^{k} t_i \hat{t}_i / N^2$ is the agreement expected by chance, with $t_i = \sum_j n_{ij}$ the number of ground-truth pixels of class $i$ and $\hat{t}_i = \sum_j n_{ji}$ the number of pixels predicted as class $i$.
- (3) Mean IoU: $\mathrm{mIoU} = \frac{1}{k} \sum_{i=1}^{k} \frac{n_{ii}}{t_i + \hat{t}_i - n_{ii}}$.
- (4) F1 score: $F_1 = \frac{2 \cdot \mathrm{precision} \cdot \mathrm{recall}}{\mathrm{precision} + \mathrm{recall}}$, computed per class and averaged to obtain the average F1.
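All four metrics can be derived from a single confusion matrix. As a minimal sketch (the paper does not publish its evaluation code, so the function and variable names below are illustrative), assuming rows index ground truth and columns index predictions:

```python
import numpy as np

def segmentation_metrics(cm):
    """Derive pixel accuracy, Cohen's kappa, mean IoU, and average F1
    from a k x k confusion matrix (rows: ground truth, cols: prediction)."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()                     # total number of labeled pixels
    tp = np.diag(cm)                 # correctly classified pixels per class
    gt = cm.sum(axis=1)              # ground-truth pixel count per class
    pred = cm.sum(axis=0)            # predicted pixel count per class

    pixel_acc = tp.sum() / n
    p_e = (gt * pred).sum() / n**2   # chance agreement for the kappa coefficient
    kappa = (pixel_acc - p_e) / (1.0 - p_e)

    with np.errstate(divide="ignore", invalid="ignore"):
        iou = tp / (gt + pred - tp)              # per-class intersection over union
        precision, recall = tp / pred, tp / gt
        f1 = 2 * precision * recall / (precision + recall)
    # nanmean skips classes that never occur in ground truth or prediction
    return pixel_acc, kappa, np.nanmean(iou), np.nanmean(f1)
```

The `errstate` guard keeps absent classes (zero ground-truth or predicted pixels) from raising warnings; they simply drop out of the class averages.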
4.3. Study on Integrating Aerial and Street View Images
- (1) Aerial images only. In this group of experiments, the input data include only aerial images, and the original SegNet is used to perform the segmentation task.
- (2) Street view images only. In this experiment, we first extract semantic features from GSVs and then interpolate them in the spatial domain to acquire ground feature maps. Next, we use the spatially densified ground feature maps as inputs to SegNet, modifying the shape of the input filters to match the dimensions of the ground feature maps, and finally make the dense prediction.
- (3) Integrating aerial and street view images. In this study, we fuse aerial images with the ground feature maps constructed from GSVs. We use the proposed method (described in Section 3) to fuse the two inputs and then acquire the final segmentation results.
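The second and third setups both depend on the spatial interpolation step of Section 3.1.2. The sketch below shows one plausible implementation under stated assumptions: street view feature vectors and their grid coordinates are given, inverse-distance weighting stands in for the paper's distance-decay weights, and all names are illustrative rather than the authors' actual code:

```python
import numpy as np

def ground_feature_map(points, feats, grid_h, grid_w, cutoff, power=1.0):
    """Interpolate per-image semantic feature vectors onto a regular grid
    using inverse-distance weighting, restricted to a cutoff radius so that
    street view images far from a cell contribute nothing to it."""
    d = feats.shape[1]
    out = np.zeros((grid_h, grid_w, d))
    for y in range(grid_h):
        for x in range(grid_w):
            dist = np.hypot(points[:, 0] - y, points[:, 1] - x)
            near = dist < cutoff
            if not near.any():
                continue  # cell lies outside the visual range of every image
            w = 1.0 / (dist[near] ** power + 1e-6)  # distance-decay weights
            out[y, x] = (w[:, None] * feats[near]).sum(axis=0) / w.sum()
    return out
```

Cells with no street view image within the cutoff stay zero, matching the sparse coverage discussed in Section 5.1; the decay exponent `power` controls how quickly a feature's influence fades with distance.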
4.3.1. Implementation Details
4.3.2. Results
4.4. Study on the Impact of Aerial Image Resolution
4.4.1. Implementation Details
4.4.2. Results
5. Discussion
5.1. Discussion on Classification Results
- (1) The coverage of ground-level information is limited: street view images are sparsely distributed, and only a limited set of scenes near streets is captured by the available images. Moreover, our study uses spatial interpolation to project the semantic information of street view images, which suffers a certain loss of information. Although a cutoff distance threshold restricts the interpolation to the local visual areas of the available street view images, and the weights satisfy a distance-decay assumption that limits the noise introduced by the interpolation, the operation may still introduce some noise and thus affect the final classification accuracy. In the future, better strategies for processing street view images will be explored.
- (2) The base neural network we used may limit the performance of the semantic segmentation results. As the focus of the present study is to investigate methods for integrating different sources of information, specifically street view images and aerial imagery, for land use classification, we chose SegNet for its simple and elegant architecture and its efficiency and effectiveness in both aerial and natural image segmentation, as shown in [11,42]. However, since the introduction of FCN, new DNN-based semantic segmentation networks have emerged frequently, and many alternative architectures besides SegNet could be used in the context of this work. Segmentation networks with state-of-the-art performance may well improve the accuracy of the final results in our case. It would be interesting to compare the performance of different state-of-the-art CNN architectures for fusing the two sources of data in future work.
- (3) The two sources of data contain duplicated information, and the aerial images may already include much of what the street view images provide. The classification results using aerial images alone achieve relatively high accuracy, which suggests that aerial images contain most of the information needed for urban land use classification; the addition of street view images improves the results, but not dramatically. In addition, street view images add more value when the resolution of the aerial images is lower, which also implies that the contribution of street view images to the classification results depends on the information provided by the aerial images.
5.2. Case Study on Segmentation Refinement
6. Conclusions
Author Contributions
Funding
Conflicts of Interest
References
- Tu, W.; Hu, Z.; Li, L.; Cao, J.; Jiang, J.; Li, Q.; Li, Q. Portraying Urban Functional Zones by Coupling Remote Sensing Imagery and Human Sensing Data. Remote Sens. 2018, 10, 141.
- Pacifici, F.; Chini, M.; Emery, W.J. A neural network approach using multi-scale textural metrics from very high-resolution panchromatic imagery for urban land-use classification. Remote Sens. Environ. 2009, 113, 1276–1292.
- Jia, Y.; Ge, Y.; Ling, F.; Guo, X.; Wang, J.; Wang, L.; Chen, Y.; Li, X. Urban Land Use Mapping by Combining Remote Sensing Imagery and Mobile Phone Positioning Data. Remote Sens. 2018, 10, 446.
- LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444.
- Garcia-Garcia, A.; Orts-Escolano, S.; Oprea, S.; Villena-Martinez, V.; Garcia-Rodriguez, J. A Review on Deep Learning Techniques Applied to Semantic Segmentation. arXiv 2017, arXiv:1704.06857.
- Zhu, X.X.; Tuia, D.; Mou, L.; Xia, G.S.; Zhang, L.; Xu, F.; Fraundorfer, F. Deep Learning in Remote Sensing: A Comprehensive Review and List of Resources. IEEE Geosci. Remote Sens. Mag. 2017, 5, 8–36.
- Zhu, Y.; Newsam, S. Land Use Classification Using Convolutional Neural Networks Applied to Ground-level Images. In Proceedings of the 23rd SIGSPATIAL International Conference on Advances in Geographic Information Systems, Seattle, WA, USA, 3–6 November 2015; ACM: New York, NY, USA, 2015; pp. 61:1–61:4.
- Lefèvre, S.; Tuia, D.; Wegner, J.D.; Produit, T.; Nassaar, A.S. Toward Seamless Multiview Scene Analysis From Satellite to Street Level. Proc. IEEE 2017, 105, 1884–1899.
- Anguelov, D.; Dulong, C.; Filip, D.; Frueh, C.; Lafon, S.; Lyon, R.; Ogale, A.; Vincent, L.; Weaver, J. Google Street View: Capturing the World at Street Level. Computer 2010, 43, 32–38.
- Zhang, W.; Li, W.; Zhang, C.; Hanink, D.M.; Li, X.; Wang, W. Parcel-based urban land use classification in megacity using airborne LiDAR, high resolution orthoimagery, and Google Street View. Comput. Environ. Urb. Syst. 2017, 64, 215–228.
- Audebert, N.; Saux, B.L.; Lefèvre, S. Beyond RGB: Very high resolution urban remote sensing with multimodal deep networks. ISPRS J. Photogramm. Remote Sens. 2018, 140, 20–32.
- Audebert, N.; Le Saux, B.; Lefèvre, S. Semantic Segmentation of Earth Observation Data Using Multimodal and Multi-scale Deep Networks. In Computer Vision—ACCV 2016; Lai, S.H., Lepetit, V., Nishino, K., Sato, Y., Eds.; Springer International Publishing: Cham, Switzerland, 2017; pp. 180–196.
- Kampffmeyer, M.; Salberg, A.B.; Jenssen, R. Semantic Segmentation of Small Objects and Modeling of Uncertainty in Urban Remote Sensing Images Using Deep Convolutional Neural Networks. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition Workshops, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 680–688.
- Albert, A.; Kaur, J.; Gonzalez, M.C. Using Convolutional Networks and Satellite Imagery to Identify Patterns in Urban Environments at a Large Scale. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Halifax, NS, Canada, 13–17 August 2017; pp. 1357–1366.
- Hu, S.; Wang, L. Automated urban land-use classification with remote sensing. Int. J. Remote Sens. 2013, 34, 790–803.
- Hernandez, I.E.R.; Shi, W. A Random Forests classification method for urban land-use mapping integrating spatial metrics and texture analysis. Int. J. Remote Sens. 2018, 39, 1175–1198.
- Lv, Q.; Dou, Y.; Niu, X.; Xu, J.; Xu, J.; Xia, F. Urban Land Use and Land Cover Classification Using Remotely Sensed SAR Data through Deep Belief Networks. J. Sens. 2015, 2015, 538063.
- Pei, T.; Sobolevsky, S.; Ratti, C.; Shaw, S.L.; Li, T.; Zhou, C. A new insight into land use classification based on aggregated mobile phone data. Int. J. Geogr. Inf. Sci. 2014, 28, 1988–2007.
- Antoniou, V.; Fonte, C.C.; See, L.; Estima, J.; Arsanjani, J.J.; Lupia, F.; Minghini, M.; Foody, G.M.; Fritz, S. Investigating the Feasibility of Geo-Tagged Photographs as Sources of Land Cover Input Data. ISPRS Int. J. Geo-Inf. 2016, 5, 64.
- Torres, M.; Qiu, G. Habitat image annotation with low-level features, medium-level knowledge and location information. Multimed. Syst. 2016, 22, 767–782.
- Kang, J.; Körner, M.; Wang, Y.; Taubenböck, H.; Zhu, X.X. Building instance classification using street view images. ISPRS J. Photogramm. Remote Sens. 2018.
- Tu, W.; Cao, J.; Yue, Y.; Shaw, S.L.; Zhou, M.; Wang, Z.; Chang, X.; Xu, Y.; Li, Q. Coupling mobile phone and social media data: A new approach to understanding urban functions and diurnal patterns. Int. J. Geogr. Inf. Sci. 2017, 31, 2331–2358.
- Cao, J.; Tu, W.; Li, Q.; Zhou, M.; Cao, R. Exploring the distribution and dynamics of functional regions using mobile phone data and social media data. In Proceedings of the 14th International Conference on Computers in Urban Planning and Urban Management, Boston, MA, USA, 7–10 July 2015; pp. 264:1–264:16.
- Akhmad Nuzir, F.; Julien Dewancker, B. Dynamic Land-Use Map Based on Twitter Data. Sustainability 2017, 9, 2158.
- Tu, W.; Cao, R.; Yue, Y.; Zhou, B.; Li, Q.; Li, Q. Spatial variations in urban public ridership derived from GPS trajectories and smart card data. J. Trans. Geogr. 2018, 69, 45–57.
- Liu, Y.; Wang, F.; Xiao, Y.; Gao, S. Urban land uses and traffic ‘source-sink areas’: Evidence from GPS-enabled taxi data in Shanghai. Landsc. Urb. Plan. 2012, 106, 73–87.
- Liu, X.; He, J.; Yao, Y.; Zhang, J.; Liang, H.; Wang, H.; Hong, Y. Classifying urban land use by integrating remote sensing and social media data. Int. J. Geogr. Inf. Sci. 2017, 31, 1675–1696.
- Hu, T.; Yang, J.; Li, X.; Gong, P. Mapping Urban Land Use by Using Landsat Images and Open Social Data. Remote Sens. 2016, 8, 151.
- Jendryke, M.; Balz, T.; McClure, S.C.; Liao, M. Putting people in the picture: Combining big location-based social media data and remote sensing imagery for enhanced contextual urban information in Shanghai. Comput. Environ. Urb. Syst. 2017, 62, 99–112.
- Workman, S.; Zhai, M.; Crandall, D.J.; Jacobs, N. A Unified Model for Near and Remote Sensing. In Proceedings of the 2017 IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2707–2716.
- Cao, R.; Qiu, G. Urban land use classification based on aerial and ground images. In Proceedings of the 2018 International Conference on Content-Based Multimedia Indexing, La Rochelle, France, 4–6 September 2018.
- Sakurada, K.; Okatani, T.; Kitani, K.M. Hybrid macro–micro visual analysis for city-scale state estimation. Comput. Vis. Image Underst. 2016, 146, 86–98.
- Sakurada, K.; Okatani, T.; Kitani, K.M. Massive City-Scale Surface Condition Analysis Using Ground and Aerial Imagery. In Asian Conference on Computer Vision; Springer: Cham, Switzerland, 2014; pp. 49–64.
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Networks. In Proceedings of the Advances in Neural Information Processing Systems 25: 26th Annual Conference on Neural Information Processing Systems, Lake Tahoe, NV, USA, 3–6 December 2012; pp. 1106–1114.
- Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv 2014, arXiv:1409.1556.
- Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.E.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2015, Boston, MA, USA, 7–12 June 2015; pp. 1–9.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. arXiv 2015, arXiv:1512.03385.
- Ren, S.; He, K.; Girshick, R.B.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 1137–1149.
- Redmon, J.; Farhadi, A. YOLO9000: Better, Faster, Stronger. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 6517–6525.
- Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.E.; Fu, C.Y.; Berg, A.C. SSD: Single Shot MultiBox Detector. In Proceedings of the 14th European Conference on Computer Vision, Amsterdam, The Netherlands, 11–14 October 2016; pp. 21–37.
- Shelhamer, E.; Long, J.; Darrell, T. Fully Convolutional Networks for Semantic Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 640–651.
- Badrinarayanan, V.; Kendall, A.; Cipolla, R. SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 2481–2495.
- Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Proceedings of the 18th International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 5–9 October 2015; pp. 234–241.
- Jégou, S.; Drozdzal, M.; Vazquez, D.; Romero, A.; Bengio, Y. The One Hundred Layers Tiramisu: Fully Convolutional DenseNets for Semantic Segmentation. arXiv 2016, arXiv:1611.09326.
- Chen, L.C.; Papandreou, G.; Kokkinos, I.; Murphy, K.; Yuille, A.L. DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs. arXiv 2016, arXiv:1606.00915.
- Chen, L.C.; Papandreou, G.; Schroff, F.; Adam, H. Rethinking Atrous Convolution for Semantic Image Segmentation. arXiv 2017, arXiv:1706.05587.
- Chen, L.C.; Zhu, Y.; Papandreou, G.; Schroff, F.; Adam, H. Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation. arXiv 2018, arXiv:1802.02611.
- Chen, L.C.; Papandreou, G.; Kokkinos, I.; Murphy, K.; Yuille, A.L. Semantic Image Segmentation with Deep Convolutional Nets and Fully Connected CRFs. arXiv 2014, arXiv:1412.7062.
- Liu, Y.; Fan, B.; Wang, L.; Bai, J.; Xiang, S.; Pan, C. Semantic labeling in very high resolution images via a self-cascaded convolutional neural network. ISPRS J. Photogramm. Remote Sens. 2017.
- Liu, Y.; Minh Nguyen, D.; Deligiannis, N.; Ding, W.; Munteanu, A. Hourglass-Shape Network Based Semantic Segmentation for High Resolution Aerial Imagery. Remote Sens. 2017, 9, 522.
- Wang, H.; Wang, Y.; Zhang, Q.; Xiang, S.; Pan, C. Gated Convolutional Neural Network for Semantic Segmentation in High-Resolution Images. Remote Sens. 2017, 9, 446.
- Zhang, M.; Hu, X.; Zhao, L.; Lv, Y.; Luo, M.; Pang, S. Learning Dual Multi-Scale Manifold Ranking for Semantic Segmentation of High-Resolution Images. Remote Sens. 2017, 9, 500.
- ISPRS Working Group II/4. ISPRS 2D Semantic Labeling Contest. 2018. Available online: http://www2.isprs.org/commissions/comm3/wg4/semantic-labeling.html (accessed on 24 July 2018).
- Zhang, W.; Huang, H.; Schmitz, M.; Sun, X.; Wang, H.; Mayer, H. Effective Fusion of Multi-Modal Remote Sensing Data in a Fully Convolutional Network for Semantic Labeling. Remote Sens. 2017, 10, 52.
- Zhang, W.; Witharana, C.; Li, W.; Zhang, C.; Li, X.; Parent, J.; et al. Using Deep Learning to Identify Utility Poles with Crossarms and Estimate Their Locations from Google Street View Images. Sensors 2018, 18, 2484.
- Zhou, B.; Lapedriza, A.; Khosla, A.; Oliva, A.; Torralba, A. Places: A 10 Million Image Database for Scene Recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 40, 1452–1464.
- Anjyo, K.; Lewis, J.P.; Pighin, F. Scattered Data Interpolation for Computer Graphics. In ACM SIGGRAPH 2014 Courses; ACM: New York, NY, USA, 2014; pp. 27:1–27:69.
- Ioffe, S.; Szegedy, C. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. In Proceedings of the 32nd International Conference on Machine Learning, Lille, France, 6–11 July 2015; pp. 448–456.
- Microsoft. Bing Maps. 2018. Available online: https://www.bing.com/maps/aerial (accessed on 24 July 2018).
- Google Developers. Developer Guide of Street View API. 2018. Available online: https://developers.google.com/maps/documentation/streetview/intro (accessed on 24 July 2018).
- Department of City Planning of New York City. BYTES of the BIG APPLE. 2018. Available online: https://www1.nyc.gov/site/planning/data-maps/open-data.page (accessed on 15 July 2018).
- PyTorch Core Team. PyTorch. 2018. Available online: https://pytorch.org (accessed on 15 July 2018).
- Deng, J.; Dong, W.; Socher, R.; Li, L.J.; Li, K.; Fei-Fei, L. ImageNet: A large-scale hierarchical image database. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 248–255.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification. In Proceedings of the 2015 IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 1026–1034.
- Lakshminarayanan, B.; Pritzel, A.; Blundell, C. Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles. In Proceedings of the Advances in Neural Information Processing Systems 30: 31st Annual Conference on Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; pp. 6405–6416.
| ID | OID | Code | Land Use Type | Descriptions |
|---|---|---|---|---|
| 1 | - | BG | Background | Roads, water areas near the boundaries of the study areas |
| 2 | 1 | FB-1&2 | One and two family buildings | Single-family detached home, two-unit dwelling group, and duplex |
| 3 | 2 | FB-WU | Multi-family walk-up buildings | Two-flat, three-flat, four-flat, and townhouse |
| 4 | 3 | FB-E | Multi-family elevator buildings | Apartment building and apartment community |
| 5 | 4 | Mix. | Mixed residential and commercial buildings | Mixed use building for both commercial and residential use |
| 6 | 5 | Com. | Commercial and office buildings | Retail and general merchandise, shopping mall, restaurant, and entertainment |
| 7 | 6 | Ind. | Industrial and manufacturing | Manufacturing, warehousing, equipment sales and service |
| 8 | 7 | Trans. | Transportation and utility | Automobile service and multi-story car park |
| 9 | 8 | Public | Public facilities and institutions | Government services, hospital, and educational facilities |
| 10 | 9 | Open | Open space and outdoor recreation | Public parks, urban parks, recreational facilities, golf courses, and reservoir |
| 11 | 10 | Parking | Parking facilities | Outdoor parking facilities |
| 12 | 11 | Vacant | Vacant land | Areas with vacant space |
| 13 | - | Unknown | Unknown | Areas without land use labels |
| | | Ground | Aerial | Fused |
|---|---|---|---|---|
| Brooklyn | Accuracy | | | |
| | Kappa | | | |
| | Avg. F1 | | | |
| | mIoU | | | |
| Queens | Accuracy | | | |
| | Kappa | | | |
| | Avg. F1 | | | |
| | mIoU | | | |
| | BG | FB-1&2 | FB-WU | FB-E | Mix. | Com. | Ind. | Trans. | Public | Open | Parking | Vacant | Avg. F1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Ground | 82.28 | 57.85 | 29.29 | 25.97 | 18.62 | 6.55 | 41.87 | 0.07 | 9.80 | 9.32 | 0 | 0 | 23.47 |
| Aerial | 94.89 | 84.19 | 63.54 | 77.69 | 49.58 | 52.73 | 65.95 | 64.33 | 61.87 | 62.79 | 29.22 | 36.76 | 61.96 |
| Fused | 95.09 | 84.40 | 64.36 | 78.43 | 51.18 | 54.53 | 67.50 | 64.17 | 62.11 | 63.00 | 30.33 | 37.68 | 62.73 |
| | BG | FB-1&2 | FB-WU | FB-E | Mix. | Com. | Ind. | Trans. | Public | Open | Parking | Vacant | Avg. F1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Ground | 62.16 | 28.29 | 12.14 | 0.81 | 2.96 | 2.13 | 7.92 | 0.03 | 1.97 | 1.48 | 0 | 0 | 9.99 |
| Aerial | 87.81 | 82.98 | 45.59 | 76.05 | 31.74 | 50.06 | 40.50 | 52.55 | 43.85 | 77.00 | 18.62 | 15.52 | 51.86 |
| Fused | 88.60 | 83.35 | 47.72 | 74.89 | 31.06 | 48.63 | 40.57 | 56.23 | 44.16 | 79.84 | 22.06 | 15.19 | 52.69 |
© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Share and Cite
Cao, R.; Zhu, J.; Tu, W.; Li, Q.; Cao, J.; Liu, B.; Zhang, Q.; Qiu, G. Integrating Aerial and Street View Images for Urban Land Use Classification. Remote Sens. 2018, 10, 1553. https://doi.org/10.3390/rs10101553