Authors: Masaki Umemura 1; Kazuhiro Hotta 1; Hideki Nonaka 2 and Kazuo Oda 2
Affiliations: 1 Meijo University, Japan; 2 Asia Air Survey Co., Ltd., Japan
Keyword(s): Semantic Segmentation, Convolutional Neural Network, LiDAR Intensity, Road Map, Weighted Fusion, Appropriate Size, U-Net.
Related Ontology Subjects/Areas/Topics: Applications; Computer Vision, Visualization and Computer Graphics; Image Understanding; Pattern Recognition
Abstract: We propose a semantic segmentation method for LiDAR intensity images obtained by a Mobile Mapping System (MMS). Conventional segmentation methods achieve high pixel-wise accuracy, but their accuracy on small objects is quite low. We address this issue with a weighted fusion of multi-scale inputs, because each class has its own most effective scale: small-object classes achieve higher accuracy at small input sizes than at large ones. In experiments, we use 36 LiDAR intensity images with ground truth labels, divided into 28 training images and 8 test images. Our proposed method achieves 87.41% class average accuracy, which is 5% higher than the conventional method. We demonstrate that the weighted fusion of multi-scale inputs is effective for improving the segmentation accuracy of small objects.
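To illustrate the fusion idea the abstract describes, below is a minimal sketch (not the authors' published code) of per-class weighted fusion of segmentation outputs computed at multiple input scales, assuming PyTorch. The placeholder network, the scale list, and the fusion weights are hypothetical stand-ins for the paper's U-Net and its learned weights.

```python
# Minimal sketch of weighted fusion of multi-scale segmentation outputs,
# assuming PyTorch. All names here (net, SCALES, weights) are illustrative
# stand-ins; the paper's actual U-Net and fusion weights are not shown.
import torch
import torch.nn.functional as F

NUM_CLASSES = 5                 # assumed number of classes
SCALES = [1.0, 0.5, 0.25]       # assumed input scales; the paper's may differ

# Placeholder for the segmentation network (a U-Net in the paper):
# maps a 1-channel intensity image to per-class logits.
net = torch.nn.Conv2d(1, NUM_CLASSES, kernel_size=3, padding=1)

def weighted_multiscale_fusion(image: torch.Tensor,
                               weights: torch.Tensor) -> torch.Tensor:
    """Fuse class probability maps produced at several input scales.

    weights: (num_scales, num_classes) non-negative fusion weights;
    small-object classes would get larger weights at small input sizes.
    """
    h, w = image.shape[-2:]
    fused = torch.zeros(image.shape[0], NUM_CLASSES, h, w)
    for s, scale in enumerate(SCALES):
        # Resize the input to the current scale and segment it.
        resized = F.interpolate(image, scale_factor=scale,
                                mode='bilinear', align_corners=False)
        probs = net(resized).softmax(dim=1)
        # Upsample the class probabilities back to the original resolution.
        probs = F.interpolate(probs, size=(h, w),
                              mode='bilinear', align_corners=False)
        # Accumulate with per-class, per-scale weights.
        fused += weights[s].view(1, -1, 1, 1) * probs
    return fused.argmax(dim=1)  # per-pixel class labels

if __name__ == "__main__":
    img = torch.rand(1, 1, 128, 128)          # e.g. a LiDAR intensity patch
    w = torch.ones(len(SCALES), NUM_CLASSES)  # uniform weights for the demo
    print(weighted_multiscale_fusion(img, w).shape)  # torch.Size([1, 128, 128])
```

Fusing softmax probabilities rather than hard labels lets the per-class, per-scale weights shift each pixel's decision toward the scale at which that class is segmented most reliably, which is the mechanism the abstract credits for the improvement on small objects.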