
Natural Disaster Classification Using Aerial Photography Explainable for Typhoon Damaged Feature

  • Conference paper
  • First Online:

Part of the book series: Lecture Notes in Computer Science (LNIP, volume 12666)

Included in the following conference series: International Conference on Pattern Recognition (ICPR)

Abstract

In recent years, typhoon damage has become a serious social problem owing to climate change. On 9 September 2019, Typhoon Faxai passed over Chiba in Japan; its damage included interruption of the electricity supply caused by strong winds with a recorded maximum of 45 m/s. A large number of trees fell, and neighbouring electric poles fell with them. Owing to these disaster features, recovery took 18 days, longer than for past typhoons. Immediate responses are important for faster recovery. As far as possible, an aerial survey providing a global screening of the devastated region is required to support decisions on where to recover first. This paper proposes a practical method to visualize the damaged areas, focused on typhoon disaster features, using aerial photography. The method classifies eight classes, covering both undamaged land covers and damaged areas. Using the probabilities of the target feature classes, we can visualize a disaster feature map on a scaled colour range. Furthermore, we can produce an explainable map for each unit grid image by computing the convolutional activation map using Grad-CAM. We demonstrate case studies applied to aerial photographs recorded over the Chiba region after the typhoon.
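
As a rough illustration of the visualization step described in the abstract, the sketch below tiles an aerial photograph into unit grid images, scores each tile with a trained classifier, and colour-scales the probability of a chosen damage class into a feature map. The function name, tile size, class index, and NumPy-based interface are illustrative assumptions, not the authors' implementation.

    # Illustrative sketch (not the authors' code): build a colour-scalable
    # disaster feature map from per-tile class probabilities.
    import numpy as np

    def disaster_feature_map(image, model, tile=224, target_class=3):
        """image: HxWx3 aerial photo; model(batch) -> softmax probabilities (N, 8)."""
        h, w = image.shape[:2]
        rows, cols = h // tile, w // tile
        tiles = [image[r * tile:(r + 1) * tile, c * tile:(c + 1) * tile]
                 for r in range(rows) for c in range(cols)]
        probs = model(np.stack(tiles))                # (rows*cols, 8) class probabilities
        heat = probs[:, target_class].reshape(rows, cols)
        return heat                                   # plot on a fixed colour range, e.g. vmin=0, vmax=1

Each cell of the returned map corresponds to one unit grid image; scaling it over a fixed colour range gives a disaster feature map of the kind described above, and high-probability cells are natural candidates for the Grad-CAM explanation shown in the Appendix.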


References

  1. Chou, T.-Y., Yeh, M.-L., et al.: Disaster monitoring and management by the unmanned aerial vehicle technology. In: Wagner, W., Szekely, B. (eds.) ISPRS TC VII Symposium, Austria, vol. XXXVIII, Part 7B (2010)

  2. Kentsche, S., Karatsiolis, S., Kamilaris, A., et al.: Identification of tree species in Japanese forests based on aerial photography and deep learning. arXiv:2007.08907 (2020)

  3. JICA Survey Team: Aerial Survey Report on Inundation Damages and Sediment Disasters, 15 June 2016

  4. Altan, M.O., Kemper, G.: Innovative airborne sensors for disaster management. In: The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. XLI-B8, XXIII ISPRS Congress, Czech Republic, July 2016

  5. Japan Bosai Platform. https://www.bosai-jp.org/en. Accessed 10 Oct 2020

  6. He, M., et al.: A 3D shape descriptor based on contour clusters for damaged roof detection using airborne LiDAR point clouds. Remote Sens. 8, 189 (2016)

  7. Nex, F., et al.: Towards real-time building damage mapping with low-cost UAV solutions. Remote Sens. 11, 287 (2019)

  8. Liu, C.-C., Nakamura, R., et al.: Near real-time browsable Landsat-8 imagery. Remote Sens. 9, 79 (2017)

  9. Gupta, A., Watson, S., Yin, H.: Deep learning-based aerial image segmentation with open data for disaster impact assessment. arXiv:2006.05575v1 (2020)

  10. Rahnemoonfar, M., Murphy, R.: Comprehensive semantic segmentation on high resolution UAV imagery for natural disaster damage assessment. arXiv:2009.01193v2 (2020)

  11. Sheykhmousa, M., et al.: Post-disaster recovery assessment with machine learning-derived land cover and land use information. Remote Sens. 11, 1174 (2019)

  12. Krizhevsky, A., Sutskever, I., Hinton, G.E.: ImageNet classification with deep convolutional neural networks. In: Advances in Neural Information Processing Systems (2012)

  13. Szegedy, C., Liu, W., Jia, Y., et al.: Going deeper with convolutions. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–9 (2015)

  14. Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: ICLR (2015). VGG model, Visual Geometry Group, University of Oxford

  15. Szegedy, C., Vanhoucke, V., Ioffe, S., et al.: Rethinking the inception architecture for computer vision. In: CVPR, pp. 2818–2826 (2016). Inception-v3 model

  16. He, K., Zhang, X., Ren, S., et al.: Deep residual learning for image recognition. arXiv:1512.03385v1 (2015). ResNet model

  17. Szegedy, C., Ioffe, S., Vanhoucke, V., et al.: Inception-v4, Inception-ResNet and the impact of residual connections on learning (2016). Inception-ResNet-v2 model

  18. Huang, G., Liu, Z., van der Maaten, L., et al.: Densely connected convolutional networks. In: CVPR (2017). DenseNet model

  19. Sandler, M., Howard, A., et al.: MobileNetV2: inverted residuals and linear bottlenecks. arXiv:1801.04381v4 (2019)

  20. Zhang, X., Zhou, X., et al.: ShuffleNet: an extremely efficient convolutional neural network for mobile devices. arXiv:1707.01083v2 (2017)

  21. Ma, N., Zhang, X., et al.: ShuffleNet V2: practical guidelines for efficient CNN architecture design. arXiv:1807.11164v1 (2018)

  22. Selvaraju, R., Cogswell, M., et al.: Grad-CAM: visual explanations from deep networks via gradient-based localization. arXiv:1610.02391v3 (2017)

  23. Gonzalez, R., Woods, R., Eddins, S.: Digital Image Processing Using MATLAB, 2nd edn. McGraw-Hill Education, New York (2015)


Acknowledgements

We gratefully acknowledge the constructive comments of the anonymous referees. We thank Jun Miura of Aero Asahi Co. for providing the aerial photographs recorded over the Chiba region after Typhoon Faxai, and Takuji Fukumoto and Shinichi Kuramoto for providing MATLAB resources.

Author information

Authors and Affiliations

  1. Research Institute for Infrastructure Paradigm Shift, Yachiyo Engineering Co., Ltd., Asakusabashi 5-20-8, Taito-ku, Tokyo, Japan

    Takato Yasuno, Masazumi Amakata & Masahiro Okano

Authors
  1. Takato Yasuno

  2. Masazumi Amakata

  3. Masahiro Okano

Corresponding author

Correspondence to Takato Yasuno.

Editor information

Editors and Affiliations

  1. Dipartimento di Ingegneria dell’Informazione, University of Firenze, Firenze, Italy

    Alberto Del Bimbo

  2. Dipartimento di Ingegneria “Enzo Ferrari”, Università di Modena e Reggio Emilia, Modena, Italy

    Rita Cucchiara

  3. Department of Computer Science, Boston University, Boston, MA, USA

    Stan Sclaroff

  4. Dipartimento di Matematica e Informatica, University of Catania, Catania, Italy

    Giovanni Maria Farinella

  5. Cloud & AI, JD.COM, Beijing, China

    Tao Mei

  6. Dipartimento di Ingegneria dell’Informazione, University of Firenze, Firenze, Italy

    Marco Bertini

  7. Computational Sciences Department, National Institute of Astrophysics, Optics and Electronics (INAOE), Tonantzintla, Puebla, Mexico

    Hugo Jair Escalante

  8. Dipartimento di Ingegneria “Enzo Ferrari”, Università di Modena e Reggio Emilia, Modena, Italy

    Roberto Vezzani

Appendix

Fig. 5.
(Left) Feature map of house roof breakage. (Right) Visual explanation of roof breakage (red-blue range) using Grad-CAM, shown as pairs of the original clip and its activation map; the broken roofs are covered with vinyl sheets. Red indicates areas positively identified as roof breakage caused by strong wind (Color figure online)
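
The following minimal Grad-CAM sketch (after Selvaraju et al., ref. 22) shows how red-blue activation maps like those in Fig. 5 can be computed for a single grid clip. The ResNet-50 backbone, target layer, class index, and input size are placeholder assumptions, not the authors' trained eight-class network.

    # Minimal Grad-CAM sketch for one grid clip (assumed PyTorch backbone, not the
    # authors' model): weight the target layer's activations by the pooled gradients
    # of the target class score, then upsample and normalise to [0, 1].
    import torch
    import torch.nn.functional as F
    from torchvision import models

    def grad_cam(model, x, target_class, layer):
        feats, grads = [], []
        h1 = layer.register_forward_hook(lambda m, i, o: feats.append(o))
        h2 = layer.register_full_backward_hook(lambda m, gi, go: grads.append(go[0]))
        model.zero_grad()
        model(x)[0, target_class].backward()                 # gradient of the class score
        h1.remove(); h2.remove()
        weights = grads[0].mean(dim=(2, 3), keepdim=True)    # global-average-pooled gradients
        cam = F.relu((weights * feats[0]).sum(dim=1, keepdim=True))
        cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
        return ((cam - cam.min()) / (cam.max() - cam.min() + 1e-8))[0, 0].detach().numpy()

    model = models.resnet50(weights=None).eval()             # placeholder backbone
    cam = grad_cam(model, torch.randn(1, 3, 224, 224), target_class=3, layer=model.layer4[-1])
    # Overlay 'cam' on the clip with a red-blue colormap to reproduce a Fig. 5-style panel.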

Rights and permissions

Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Yasuno, T., Amakata, M., Okano, M. (2021). Natural Disaster Classification Using Aerial Photography Explainable for Typhoon Damaged Feature. In: Del Bimbo, A., et al. (eds.) Pattern Recognition. ICPR International Workshops and Challenges. ICPR 2021. Lecture Notes in Computer Science, vol. 12666. Springer, Cham. https://doi.org/10.1007/978-3-030-68780-9_2
