ContextNet: Learning Context Information for Texture-Less Light Field Depth Estimation

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14430)


Abstract

Depth estimation in texture-less regions of light fields is an important research direction, but few existing methods are dedicated to this issue. We find that context information is crucial for depth estimation in texture-less regions. In this paper, we propose a simple yet effective method called ContextNet for texture-less light field depth estimation by learning context information. Specifically, we enlarge the receptive field of feature extraction by using dilated convolutions and by increasing the training patch size. Moreover, we design the Augment SPP (AugSPP) module to aggregate features across multiple scales and levels. Extensive experiments demonstrate the effectiveness of our method, which significantly improves depth estimation in texture-less regions. Our method outperforms current state-of-the-art methods (e.g., LFattNet, DistgDisp, OACC-Net, and SubFocal) on the UrbanLF-Syn dataset in terms of MSE ×100, BadPix 0.07, BadPix 0.03, and BadPix 0.01. It also ranks third overall in the LFNAT Light Field Depth Estimation Challenge at the CVPR 2023 Workshop without any post-processing steps (the code and model are available at https://github.com/chaowentao/ContextNet).
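The two ingredients named in the abstract, dilated convolutions that enlarge the receptive field and an SPP-style module that aggregates features at multiple scales, can be illustrated with a minimal PyTorch sketch. The module names, channel counts, dilation rates, and pooling sizes below are illustrative assumptions only, not the authors' actual ContextNet/AugSPP configuration; refer to the linked repository for the released implementation.

# Minimal, hypothetical sketch of the two ideas from the abstract:
# (1) dilated convolutions to enlarge the receptive field, and
# (2) SPP-style multi-scale feature aggregation.
# All hyperparameters here are illustrative assumptions, not the paper's.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DilatedFeatureExtractor(nn.Module):
    """Stacked 3x3 convolutions with increasing dilation to grow the receptive field."""

    def __init__(self, in_ch: int = 3, ch: int = 32, dilations=(1, 2, 4, 8)):
        super().__init__()
        layers, prev = [], in_ch
        for d in dilations:
            # padding = dilation keeps the spatial size for a 3x3 kernel
            layers += [nn.Conv2d(prev, ch, 3, padding=d, dilation=d), nn.ReLU(inplace=True)]
            prev = ch
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        return self.body(x)


class SPPAggregation(nn.Module):
    """Pool features to several grid sizes, upsample back, and fuse with the input features."""

    def __init__(self, ch: int = 32, pool_sizes=(2, 4, 8, 16)):
        super().__init__()
        self.pool_sizes = pool_sizes
        self.branches = nn.ModuleList(nn.Conv2d(ch, ch, 1) for _ in pool_sizes)
        self.fuse = nn.Conv2d(ch * (len(pool_sizes) + 1), ch, 1)

    def forward(self, x):
        h, w = x.shape[-2:]
        feats = [x]  # keep the full-resolution features alongside the pooled branches
        for size, conv in zip(self.pool_sizes, self.branches):
            pooled = F.adaptive_avg_pool2d(x, output_size=size)
            feats.append(F.interpolate(conv(pooled), size=(h, w),
                                       mode="bilinear", align_corners=False))
        return self.fuse(torch.cat(feats, dim=1))


if __name__ == "__main__":
    x = torch.randn(1, 3, 64, 64)  # e.g. one 64x64 training patch from a sub-aperture view
    out = SPPAggregation()(DilatedFeatureExtractor()(x))
    print(out.shape)  # torch.Size([1, 32, 64, 64])

Increasing the training patch size works in the same direction as the dilation rates: with a larger input crop, the enlarged receptive field is actually filled with surrounding context during training rather than padding.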


Notes

  1. http://www.lfchallenge.com/dp_lambertian_plane_result/. On this benchmark, our method is listed under the name SF-Net.

References

  1. Chao, W., Duan, F., Wang, X., Wang, Y., Wang, G.: OccCasNet: occlusion-aware cascade cost volume for light field depth estimation. arXiv preprint arXiv:2305.17710 (2023)

  2. Chao, W., Wang, X., Wang, Y., Wang, G., Duan, F.: Learning sub-pixel disparity distribution for light field depth estimation. TCI Early Access, 1–12 (2023)

  3. Chen, J., Zhang, S., Lin, Y.: Attention-based multi-level fusion network for light field depth estimation. In: AAAI, pp. 1009–1017 (2021)

  4. Chen, J., Chau, L.: Light field compressed sensing over a disparity-aware dictionary. TCSVT 27(4), 855–865 (2017)

  5. Chen, Y., Zhang, S., Chang, S., Lin, Y.: Light field reconstruction using efficient pseudo 4D epipolar-aware structure. TCI 8, 397–410 (2022)

  6. Cheng, Z., Liu, Y., Xiong, Z.: Spatial-angular versatile convolution for light field reconstruction. TCI 8, 1131–1144 (2022)

  7. Cheng, Z., Xiong, Z., Chen, C., Liu, D., Zha, Z.J.: Light field super-resolution with zero-shot learning. In: CVPR, pp. 10010–10019 (2021)

  8. Guo, C., Jin, J., Hou, J., Chen, J.: Accurate light field depth estimation via an occlusion-aware network. In: ICME, pp. 1–6 (2020)

  9. Han, K., Xiang, W., Wang, E., Huang, T.: A novel occlusion-aware vote cost for light field depth estimation. TPAMI 44(11), 8022–8035 (2022)

  10. He, L., Wang, G., Hu, Z.: Learning depth from single images with deep neural network embedding focal length. TIP 27(9), 4676–4689 (2018)

  11. Heber, S., Pock, T.: Convolutional networks for shape from light field. In: CVPR, pp. 3746–3754 (2016)

  12. Honauer, K., Johannsen, O., Kondermann, D., Goldluecke, B.: A dataset and evaluation methodology for depth estimation on 4D light fields. In: Lai, S.-H., Lepetit, V., Nishino, K., Sato, Y. (eds.) ACCV 2016. LNCS, vol. 10113, pp. 19–34. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-54187-7_2

  13. Jeon, H.G., et al.: Accurate depth map estimation from a lenslet light field camera. In: CVPR, pp. 1547–1555 (2015)

  14. Jin, J., Hou, J., Chen, J., Kwong, S.: Light field spatial super-resolution via deep combinatorial geometry embedding and structural consistency regularization. In: CVPR, pp. 2260–2269 (2020)

  15. Jin, J., Hou, J., Chen, J., Zeng, H., Kwong, S., Yu, J.: Deep coarse-to-fine dense light field reconstruction with flexible sampling and geometry-aware fusion. TPAMI 44, 1819–1836 (2020)

  16. He, K., Zhang, X., Ren, S., Sun, J.: Spatial pyramid pooling in deep convolutional networks for visual recognition. TPAMI 37(9), 1904–1916 (2015)

  17. Kim, C., Zimmer, H., Pritch, Y., Sorkine-Hornung, A., Gross, M.H.: Scene reconstruction from high spatio-angular resolution light fields. TOG 32(4), 73–1 (2013)

  18. Kingma, D.P., Ba, J.: Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)

  19. Meng, N., So, H.K.H., Sun, X., Lam, E.Y.: High-dimensional dense residual convolutional neural network for light field reconstruction. TPAMI 43(3), 873–886 (2019)

  20. Ng, R., Levoy, M., Brédif, M., Duval, G., Horowitz, M., Hanrahan, P.: Light field photography with a hand-held plenoptic camera. Ph.D. thesis, Stanford University (2005)

  21. Peng, J., Xiong, Z., Wang, Y., Zhang, Y., Liu, D.: Zero-shot depth estimation from light field using a convolutional neural network. TCI 6, 682–696 (2020)

  22. Sheng, H., Cong, R., Yang, D., Chen, R., Wang, S., Cui, Z.: UrbanLF: a comprehensive light field dataset for semantic segmentation of urban scenes. TCSVT 32(11), 7880–7893 (2022)

  23. Shin, C., Jeon, H.G., Yoon, Y., Kweon, I.S., Kim, S.J.: EPINET: a fully-convolutional neural network using epipolar geometry for depth from light field images. In: CVPR, pp. 4748–4757 (2018)

  24. Tao, M.W., Hadap, S., Malik, J., Ramamoorthi, R.: Depth from combining defocus and correspondence using light-field cameras. In: ICCV, pp. 673–680 (2013)

  25. Tsai, Y.J., Liu, Y.L., Ouhyoung, M., Chuang, Y.Y.: Attention-based view selection networks for light-field disparity estimation. In: AAAI, pp. 12095–12103 (2020)

  26. Van Duong, V., Huu, T.N., Yim, J., Jeon, B.: Light field image super-resolution network via joint spatial-angular and epipolar information. TCI 9, 350–366 (2023)

  27. Wang, Y., Wang, L., Liang, Z., Yang, J., An, W., Guo, Y.: Occlusion-aware cost constructor for light field depth estimation. In: CVPR, pp. 19809–19818 (2022)

  28. Wang, Y., Wang, L., Liang, Z., Yang, J., Timofte, R., Guo, Y.: NTIRE 2023 challenge on light field image super-resolution: dataset, methods and results. arXiv preprint arXiv:2304.10415 (2023)

  29. Wang, Y., et al.: Disentangling light fields for super-resolution and disparity estimation. TPAMI 45, 425–443 (2022)

  30. Wang, Y., Yang, J., Guo, Y., Xiao, C., An, W.: Selective light field refocusing for camera arrays using bokeh rendering and superresolution. SPL 26(1), 204–208 (2018)

  31. Wanner, S., Goldluecke, B.: Variational light field analysis for disparity estimation and super-resolution. TPAMI 36(3), 606–619 (2014)

  32. Williem, W., Park, I.K.: Robust light field depth estimation for noisy scene with occlusion. In: CVPR, pp. 4396–4404 (2016)

  33. Wu, G., Liu, Y., Fang, L., Dai, Q., Chai, T.: Light field reconstruction using convolutional network on EPI and extended applications. TPAMI 41(7), 1681–1694 (2018)

  34. Yu, F., Koltun, V.: Multi-scale context aggregation by dilated convolutions. arXiv preprint arXiv:1511.07122 (2015)

  35. Yu, J.: A light-field journey to virtual reality. TMM 24(2), 104–112 (2017)

  36. Zhang, S., Lin, Y., Sheng, H.: Residual networks for light field image super-resolution. In: CVPR, pp. 11046–11055 (2019)

  37. Zhang, S., Sheng, H., Li, C., Zhang, J., Xiong, Z.: Robust depth estimation for light field via spinning parallelogram operator. CVIU 145, 148–159 (2016)

  38. Zhang, Y., et al.: Light-field depth estimation via epipolar plane image analysis and locally linear embedding. TCSVT 27(4), 739–747 (2016)

  39. Zhang, Y., Dai, W., Xu, M., Zou, J., Zhang, X., Xiong, H.: Depth estimation from light field using graph-based structure-aware analysis. TCSVT 30(11), 4269–4283 (2019)

  40. Zhu, H., Wang, Q., Yu, J.: Occlusion-model guided antiocclusion depth estimation in light field. J-STSP 11(7), 965–978 (2017)


Acknowledgement

This work is supported by the National Key Research and Development Project (Grant No. 2018AAA0100802).

Author information

Authors and Affiliations

  1. School of Artificial Intelligence, Beijing Normal University, Beijing, China

    Wentao Chao, Xuechun Wang, Yiming Kan & Fuqing Duan

Authors
  1. Wentao Chao

  2. Xuechun Wang

  3. Yiming Kan

  4. Fuqing Duan

Corresponding author

Correspondence to Fuqing Duan.

Editor information

Editors and Affiliations

  1. Nanjing University of Information Science and Technology, Nanjing, China

    Qingshan Liu

  2. Xiamen University, Xiamen, China

    Hanzi Wang

  3. Beijing University of Posts and Telecommunications, Beijing, China

    Zhanyu Ma

  4. Sun Yat-sen University, Guangzhou, China

    Weishi Zheng

  5. Peking University, Beijing, China

    Hongbin Zha

  6. Chinese Academy of Sciences, Beijing, China

    Xilin Chen

  7. Chinese Academy of Sciences, Beijing, China

    Liang Wang

  8. Xiamen University, Xiamen, China

    Rongrong Ji


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Chao, W., Wang, X., Kan, Y., Duan, F. (2024). ContextNet: Learning Context Information for Texture-Less Light Field Depth Estimation. In: Liu, Q., et al. Pattern Recognition and Computer Vision. PRCV 2023. Lecture Notes in Computer Science, vol 14430. Springer, Singapore. https://doi.org/10.1007/978-981-99-8537-1_2
