
Strip-FFT Transformer for Single Image Deblurring

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14356)


Abstract

The purpose of image deblurring is to restore a sharp image from its blurred counterpart. With the development of deep learning, CNN-based methods have achieved strong deblurring performance, but their limited ability to model global relationships makes them relatively weak at capturing correlations between pixels at the original resolution. Transformer-based approaches model the global context well, but their drawback is high computational complexity. In addition, using only spatial features for image deblurring may lead to poor recovery of frequency-domain information in the deblurred images, and frequency-domain information is also a key cue for deblurring. We therefore propose SFT (Strip-FFT Transformer), which uses a hybrid CNN-Transformer architecture to reduce computational complexity, together with a Strip-FFT Attention Block that integrates strip attention with a Res-FFT mechanism to process spatial and frequency-domain information simultaneously. Experiments show that SFT achieves state-of-the-art results on dynamic scene deblurring with relatively low memory consumption and computational complexity.
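The frequency-domain branch mentioned in the abstract can be illustrated with a minimal, hypothetical sketch: features are mapped to the frequency domain with a real FFT, transformed pointwise, mapped back, and added to the spatial input as a residual. This is not the authors' implementation; the function name `res_fft_block` and the scalar `weight` (standing in for learned frequency-domain convolutions) are placeholders, and NumPy is used instead of a deep-learning framework.

```python
import numpy as np

def res_fft_block(x, weight=1.0):
    """Hypothetical sketch of a residual FFT branch: FFT over the
    spatial dimensions, a pointwise operation in the frequency
    domain, inverse FFT, then a residual connection."""
    # 2-D real FFT over the spatial dimensions (H, W)
    freq = np.fft.rfft2(x, axes=(-2, -1))
    # Placeholder pointwise operation; a real model would apply
    # learned 1x1 convolutions to the real/imaginary parts here
    freq = freq * weight
    # Back to the spatial domain, matching the input spatial size
    spatial = np.fft.irfft2(freq, s=x.shape[-2:], axes=(-2, -1))
    # Residual connection keeps the original spatial features
    return x + spatial

feat = np.random.rand(4, 8, 8)  # (channels, H, W) toy feature map
out = res_fft_block(feat)
print(out.shape)  # (4, 8, 8)
```

With `weight=1.0` the frequency branch reconstructs the input exactly, so the block reduces to doubling the features; the point of the structure is that any learned operation inserted at the placeholder acts on global frequency content rather than a local spatial neighborhood.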



Acknowledgments

This study was supported by the Guangdong Provincial Department of Education Characteristic Innovation Project and the Hanshan Normal University Doctoral Initiation Program.

Author information

Authors and Affiliations

  1. Medical College, Shantou University, Shantou, 515041, China

    Lei Liu

  2. College of Engineering, Shantou University, Shantou, 515063, China

    Yulong Zhu, Haoyu Zhang & Weifeng Zhang

  3. College of Computer and Information Engineering, Hanshan Normal University, Chaozhou, 521041, China

    Hong Peng


Corresponding author

Correspondence to Hong Peng.

Editor information

Editors and Affiliations

  1. Dalian University of Technology, Dalian, China

    Huchuan Lu

  2. University of Sydney, Sydney, NSW, Australia

    Wanli Ouyang

  3. Shenzhen University, Shenzhen, China

    Hui Huang

  4. Tsinghua University, Beijing, China

    Jiwen Lu

  5. Dalian University of Technology, Dalian, China

    Risheng Liu

  6. Institute of Automation, CAS, Beijing, China

    Jing Dong

  7. University of Technology Sydney, Sydney, NSW, Australia

    Min Xu


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Liu, L., Zhu, Y., Zhang, H., Zhang, W., Peng, H. (2023). Strip-FFT Transformer for Single Image Deblurring. In: Lu, H., et al. (eds.) Image and Graphics. ICIG 2023. Lecture Notes in Computer Science, vol 14356. Springer, Cham. https://doi.org/10.1007/978-3-031-46308-2_14

