
EV-Perturb: event-stream perturbation for privacy-preserving classification with dynamic vision sensors

Published in: Multimedia Tools and Applications

Abstract

Dynamic vision sensors, or event cameras, are bio-inspired vision platforms with independent, asynchronous pixels. Their unique design offers several advantages over traditional RGB cameras, including high temporal resolution, which captures high-speed motion without blur, and high dynamic range, which enables sensing under challenging lighting conditions. Because the output of an event camera is a stream of discrete, asynchronous events (an event-stream) rather than high-quality video frames, event cameras have been regarded as minimally privacy-intrusive. However, research on event-based reconstruction has shown that sophisticated algorithms can convert event-streams into high-quality video frames, so the privacy-preserving claim no longer holds. In this paper, we focus on the privacy of event-streams used in event-based classification tasks and propose EV-Perturb, an event-stream perturbation mechanism that protects event-streams from reconstruction attacks. EV-Perturb flips the polarities of events at random, and our theoretical analysis shows that it provides a differential-privacy guarantee on the perturbed event-streams. We also evaluate the utility (classification accuracy) and privacy (video reconstruction error) of EV-Perturb on event-based classification tasks over multiple publicly available datasets using deep learning models. In summary, this work makes several technical contributions. First, with EV-Perturb we address the privacy of event-streams under reconstruction attack; to our knowledge, this is the first work targeting this specific privacy issue. Second, the approach is based on randomized response, which our evaluation shows to be both efficient and effective. Third, we prove that EV-Perturb is differentially private and derive a strict privacy guarantee as a function of the flip probability. Lastly, the results of extensive evaluations show that EV-Perturb can effectively protect event-streams from reconstruction attacks while preserving comparable classification accuracy.
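The abstract describes the core mechanism as randomized response over event polarities: each polarity is kept with some probability and flipped otherwise, and the privacy budget follows from the ratio of those two probabilities. The sketch below illustrates that idea only; the function names (`flip_polarities`, `epsilon`), the keep probability of 0.75, and the array representation of events are our own illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def flip_polarities(polarities, keep_prob, rng=None):
    """Randomized-response perturbation of a +1/-1 polarity array.

    Each polarity is kept with probability `keep_prob` and flipped
    (sign-negated) with probability 1 - keep_prob, independently.
    """
    rng = np.random.default_rng(rng)
    keep = rng.random(polarities.shape) < keep_prob
    return np.where(keep, polarities, -polarities)

def epsilon(keep_prob):
    """Privacy budget of binary randomized response.

    For any observed output, the likelihood ratio between the two
    possible true polarities is at most keep_prob / (1 - keep_prob),
    so the mechanism is ln(keep_prob / (1 - keep_prob))-differentially
    private per event.
    """
    return float(np.log(keep_prob / (1.0 - keep_prob)))

# Illustrative usage: perturb only the polarity channel of an
# event-stream, leaving timestamps and coordinates untouched.
polarities = np.array([1, -1, -1, 1, 1])
perturbed = flip_polarities(polarities, keep_prob=0.75, rng=0)
print(epsilon(0.75))  # ln(3) ≈ 1.0986
```

Note the trade-off the paper evaluates: a keep probability near 0.5 maximizes privacy (ε → 0) but destroys more polarity information, while a value near 1 preserves classification utility at a weaker guarantee.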




Data Availability Statement

The datasets used in this manuscript are all publicly available, and the corresponding repositories are properly cited in the manuscript.


Acknowledgements

This work is partially supported by the Shandong Provincial Natural Science Foundation, China, under Grant No. 2022HWYQ-040, and by the Youth Fund Project of Humanities and Social Sciences Research of the Ministry of Education of China under Grant No. 20YJCZH172.

Author information

Authors and Affiliations

  1. College of Computer Science and Technology, Harbin Engineering University, 145 Nantong Street, Harbin, 150001, Heilongjiang, China

    Xian Zhang & Yong Wang

  2. School of Management and Engineering, Capital University of Economics and Business, 121 Shoujingmao S Rd, Beijing, 100070, China

    Qing Yang

  3. School of Software, Shandong University, 27 Shanda S Rd, Jinan, 250100, Shandong, China

    Yiran Shen

  4. Department of Computer Science, The University of Warwick, Coventry, CV4 7AL, UK

    Hongkai Wen

Authors
  1. Xian Zhang
  2. Yong Wang
  3. Qing Yang
  4. Yiran Shen
  5. Hongkai Wen

Corresponding author

Correspondence to Yong Wang.

Ethics declarations

Conflict of Interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Zhang, X., Wang, Y., Yang, Q. et al. EV-Perturb: event-stream perturbation for privacy-preserving classification with dynamic vision sensors. Multimed Tools Appl 83, 16823–16847 (2024). https://doi.org/10.1007/s11042-023-15743-w

