Attribute inference privacy protection for pre-trained models

  • Regular Contribution
  • Published in: International Journal of Information Security

Abstract

With the increasing popularity of machine learning (ML) in image processing, privacy concerns have emerged as a significant issue in deploying and using ML services. However, current privacy protection approaches often require computationally expensive training from scratch or extensive fine-tuning of models, posing significant barriers to the development of privacy-conscious models, particularly for smaller organizations seeking to comply with data privacy laws. In this paper, we address the privacy challenges in computer vision by investigating the effectiveness of two recent fine-tuning methods, Model Reprogramming and Low-Rank Adaptation. We adapt these techniques to provide attribute protection for pre-trained models, minimizing computational overhead and training time. Specifically, we modify the models to produce privacy-preserving latent representations of images that cannot be used to identify unintended attributes. We integrate these methods into an adversarial min–max framework that conceals sensitive information in the feature outputs without extensive modification of the pre-trained model, training instead only a small set of new parameters. We demonstrate the effectiveness of our methods by conducting experiments on the CelebA dataset, achieving state-of-the-art performance while significantly reducing computational complexity and cost. Our research contributes practical solutions for enhancing the privacy of machine learning services without compromising efficiency.
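The paper's full training procedure is behind the paywall, so the sketch below is a purely illustrative, assumption-based rendering of the adversarial min–max idea the abstract describes, combined with LoRA-style parameter-efficient tuning. The frozen encoder, the low-rank update, the utility and adversary heads, the dimensions, and the trade-off weight `lam` are all hypothetical and are not the authors' implementation.

```python
# Minimal, assumption-based sketch of an adversarial min-max training step
# with LoRA-style adapters on a frozen model. Not the authors' code; all
# names, sizes, and hyperparameters are illustrative.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen linear layer plus a trainable low-rank update: x (W + B A)^T."""
    def __init__(self, base: nn.Linear, rank: int = 4):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # pre-trained weights stay fixed
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))  # zero init: no change at start

    def forward(self, x):
        return self.base(x) + x @ self.A.t() @ self.B.t()

# Hypothetical components: a toy "encoder" and two classification heads.
feat_dim = 512
encoder = nn.Sequential(nn.Flatten(), LoRALinear(nn.Linear(3 * 64 * 64, feat_dim)))
utility_head = nn.Linear(feat_dim, 2)    # intended task, e.g. an attribute to keep
adversary_head = nn.Linear(feat_dim, 2)  # sensitive attribute to conceal

ce = nn.CrossEntropyLoss()
adapter_params = [p for p in encoder.parameters() if p.requires_grad]
opt_enc = torch.optim.Adam(adapter_params + list(utility_head.parameters()), lr=1e-3)
opt_adv = torch.optim.Adam(adversary_head.parameters(), lr=1e-3)

def training_step(x, y_util, y_priv, lam=1.0):
    # (1) "max" step: the adversary learns to infer the sensitive attribute
    #     from the current (detached) features.
    with torch.no_grad():
        z = encoder(x)
    opt_adv.zero_grad()
    ce(adversary_head(z), y_priv).backward()
    opt_adv.step()

    # (2) "min" step: only the small set of new parameters is updated, keeping
    #     utility high while pushing the adversary toward chance performance.
    opt_enc.zero_grad()
    z = encoder(x)
    loss = ce(utility_head(z), y_util) - lam * ce(adversary_head(z), y_priv)
    loss.backward()
    opt_enc.step()
    return loss.item()

# Smoke test on random data.
x = torch.randn(8, 3, 64, 64)
print(training_step(x, torch.randint(0, 2, (8,)), torch.randint(0, 2, (8,))))
```

In the setting the abstract describes, the encoder would be a pre-trained vision backbone, and only the reprogramming or low-rank parameters would be trained; that restriction is what keeps the computational overhead and training time low.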


Data Availability

The CelebA [29] dataset used in this study is publicly available at http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html.
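For readers reproducing the setup, torchvision also ships a loader for this dataset. This is a convenience note, not a detail taken from the paper; the transform and the use of attribute targets below are assumptions.

```python
# Hypothetical loading sketch using torchvision's built-in CelebA class.
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Resize((64, 64)),  # illustrative input size
    transforms.ToTensor(),
])
# target_type="attr" returns the 40 binary face attributes per image, from
# which a utility label and a sensitive label can be selected by index.
celeba = datasets.CelebA(root="data", split="train", target_type="attr",
                         transform=transform, download=True)
image, attrs = celeba[0]          # attrs: 40-dim 0/1 tensor
```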

References

  1. Abadi, M., Chu, A., Goodfellow, I., McMahan, H.B., Mironov, I., Talwar, K., Zhang, L.: Deep learning with differential privacy. In: ACM SIGSAC Conference on Computer and Communications Security (CCS) (2016). https://doi.org/10.1145/2976749.2978318

  2. Bahng, H., Jahanian, A., Sankaranarayanan, S., Isola, P.: Exploring visual prompts for adapting large-scale models. arXiv:2203.17274 (2022)

  3. Chang, J., Sha, J.: An efficient implementation of 2D convolution in CNN. IEICE Electron. Express (2017). https://doi.org/10.1587/elex.13.20161134

  4. Chen, P.-Y.: Model reprogramming: resource-efficient cross-domain machine learning. arXiv:2202.10629 (2022)

  5. Dave, I.R., Chen, C., Shah, M.: SPAct: self-supervised privacy preservation for action recognition. In: CVPR (2022). https://doi.org/10.1109/CVPR52688.2022.01953

  6. Ding, X., Fang, H., Zhang, Z., Choo, K.-K.R., Jin, H.: Privacy-preserving feature extraction via adversarial training. IEEE Trans. Knowl. Data Eng. (2022). https://doi.org/10.1109/TKDE.2020.2997604

  7. Dosovitskiy, A., Brox, T.: Inverting visual representations with convolutional networks. In: CVPR, pp. 4829–4837 (2016)

  8. Dwork, C., Roth, A.: The algorithmic foundations of differential privacy. Found. Trends Theor. Comput. Sci. 9(3–4) (2014). https://doi.org/10.1561/0400000042

  9. Elsayed, G.F., Goodfellow, I.J., Sohl-Dickstein, J.N.: Adversarial reprogramming of neural networks. In: ICLR (2019)

  10. Gildenblat, J., et al.: PyTorch library for CAM methods. https://github.com/jacobgil/pytorch-grad-cam (2021)

  11. Guo, X., Li, B., Yu, H.: Improving the sample efficiency of prompt tuning with domain adaptation. In: Findings of the Association for Computational Linguistics: EMNLP 2022 (2022). https://doi.org/10.18653/v1/2022.findings-emnlp.258

  12. Guo, Y., Shi, H., Kumar, A., Grauman, K., Rosing, T., Feris, R.: SpotTune: transfer learning through adaptive fine-tuning. In: CVPR (2019)

  13. Hambardzumyan, K., Khachatrian, H., May, J.: WARP: word-level adversarial reprogramming. In: ACL-IJCNLP (2021). https://doi.org/10.18653/v1/2021.acl-long.381

  14. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: CVPR (2016)

  15. Hinton, G., Vinyals, O., Dean, J.: Distilling the knowledge in a neural network. In: NIPS Deep Learning and Representation Learning Workshop (2015)

  16. Hu, E.J., Shen, Y., Wallis, P., Allen-Zhu, Z., Li, Y., Wang, S., Wang, L., Chen, W.: LoRA: low-rank adaptation of large language models. In: ICLR (2022)

  17. Hu, H., Salcic, Z., Sun, L., Dobbie, G., Yu, P.S., Zhang, X.: Membership inference attacks on machine learning: a survey. ACM Comput. Surv. (2022). https://doi.org/10.1145/3523273

  18. Jaderberg, M., Vedaldi, A., Zisserman, A.: Speeding up convolutional neural networks with low rank expansions. arXiv:1405.3866 (2014)

  19. Jia, M., Tang, L., Chen, B.-C., Cardie, C., Belongie, S., Hariharan, B., Lim, S.-N.: Visual prompt tuning. In: ECCV. Springer (2022). https://doi.org/10.1007/978-3-031-19827-4_41

  20. Kim, M., Kim, H., Ro, Y.M.: Speaker-adaptive lip reading with user-dependent padding. In: ECCV. Springer (2022)

  21. Kingma, D., Ba, J.: Adam: a method for stochastic optimization. In: ICLR (2015)

  22. Koczkodaj, W.W., Mazurek, M., Strzałka, D., Wolny-Dominiak, A., Woodbury-Smith, M.: Electronic health record breaches as social indicators. Soc. Indic. Res. 141(2), 1 (2019). https://doi.org/10.1007/s11205-018-1837-z

  23. Krizhevsky, A., Sutskever, I., Hinton, G.E.: ImageNet classification with deep convolutional neural networks. In: NeurIPS, vol. 25. Curran Associates, Inc. (2012)

  24. Li, A., Duan, Y., Yang, H., Chen, Y., Yang, J.: TIPRDC: task-independent privacy-respecting data crowdsourcing framework for deep learning with anonymized intermediate representations. In: SIGKDD (2020). https://doi.org/10.1145/3394486.3403125

  25. Li, A., Guo, J., Yang, H., Salim, F.D., Chen, Y.: DeepObfuscator: obfuscating intermediate representations with privacy-preserving adversarial learning on smartphones. In: IoTDI (2021). https://doi.org/10.1145/3450268.3453519

  26. Liu, H., Tam, D., Mohammed, M., Mohta, J., Huang, T., Bansal, M., Raffel, C.: Few-shot parameter-efficient fine-tuning is better and cheaper than in-context learning. In: NeurIPS (2022)

  27. Liu, X., Zhao, H., Tian, M., Sheng, L., Shao, J., Yi, S., Yan, J., Wang, X.: HydraPlus-Net: attentive deep features for pedestrian analysis. In: ICCV (2017). https://doi.org/10.1109/ICCV.2017.46

  28. Liu, Y., Wen, R., He, X., Salem, A., Zhang, Z., Backes, M., De Cristofaro, E., Fritz, M., Zhang, Y.: ML-Doctor: holistic risk assessment of inference attacks against machine learning models. In: USENIX Security (2022)

  29. Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: ICCV (2015)

  30. Melis, L., Song, C., De Cristofaro, E., Shmatikov, V.: Exploiting unintended feature leakage in collaborative learning. In: S&P (2019). https://doi.org/10.1109/SP.2019.00029

  31. Meredith, S.: Facebook-Cambridge Analytica: a timeline of the data hijacking scandal (2018). https://www.cnbc.com/2018/04/10/facebook-cambridge-analytica-a-timeline-of-the-data-hijacking-scandal.html

  32. Mireshghallah, F., et al.: Not all features are equal: discovering essential features for preserving prediction privacy. In: WWW (2021). https://doi.org/10.1145/3442381.3449965

  33. Paszke, A., Gross, S., Chintala, S., Chanan, G., Yang, E., DeVito, Z., Lin, Z., Desmaison, A., Antiga, L., Lerer, A.: Automatic differentiation in PyTorch. In: NIPS-W (2017)

  34. Pfeiffer, J., Ruder, S., Vulić, I., Ponti, E.: Modular deep learning. Trans. Mach. Learn. Res. (2023)

  35. Ronneberger, O., Fischer, P., Brox, T.: U-Net: convolutional networks for biomedical image segmentation. In: MICCAI (2015). https://doi.org/10.1007/978-3-319-24574-4_28

  36. Schroff, F., Kalenichenko, D., Philbin, J.: FaceNet: a unified embedding for face recognition and clustering. In: CVPR (2015). https://doi.org/10.1109/CVPR.2015.7298682

  37. Selvaraju, R.R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., Batra, D.: Grad-CAM: visual explanations from deep networks via gradient-based localization. In: ICCV (2017)

  38. Smith, L.N., Topin, N.: Super-convergence: very fast training of neural networks using large learning rates. In: Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications. SPIE (2019). https://doi.org/10.1117/12.2520589

  39. Song, C., Shmatikov, V.: Overlearning reveals sensitive attributes. In: ICLR (2020)

  40. Treviso, M., et al.: Efficient methods for natural language processing: a survey. Trans. Assoc. Comput. Linguist. 11, 826–860 (2023). https://doi.org/10.1162/tacl_a_00577

  41. Tsai, Y.-Y., Chen, P.-Y., Ho, T.-Y.: Transfer learning without knowing: reprogramming black-box machine learning models with scarce data and limited resources. In: ICML (2020)

  42. Wang, Z., Panda, R., Karlinsky, L., Feris, R., Sun, H., Kim, Y.: Multitask prompt tuning enables parameter-efficient transfer learning. In: ICLR (2023)

  43. Williams, C.: 620 million accounts stolen from 16 hacked websites now for sale on dark web, seller boasts (2019). https://www.theregister.com/2019/02/11/620_million_hacked_accounts_dark_web/

  44. Wu, Z., Wang, Z., Wang, Z., Jin, H.: Towards privacy-preserving visual recognition via adversarial training: a pilot study. In: ECCV (2018)

  45. Wu, Z., Wang, H., Wang, Z., Jin, H., Wang, Z.: Privacy-preserving deep action recognition: an adversarial learning framework and a new dataset. IEEE Trans. Pattern Anal. Mach. Intell. 44(4), 2126–2139 (2022). https://doi.org/10.1109/TPAMI.2020.3026709

  46. Xiao, T., Tsai, Y.-H., Sohn, K., Chandraker, M., Yang, M.-H.: Adversarial learning of privacy-preserving and task-oriented representations. In: AAAI, vol. 34, no. 7 (2020). https://doi.org/10.1609/aaai.v34i07.6930

  47. Yousefpour, A., Shilov, I., Sablayrolles, A., Testuggine, D., Prasad, K., Malek, M., Nguyen, J., Ghosh, S., Bharadwaj, A., Zhao, J., Cormode, G., Mironov, I.: Opacus: user-friendly differential privacy library in PyTorch. arXiv:2109.12298 (2021)

  48. Yu, C., Wang, J., Peng, C., Gao, C., Yu, G., Sang, N.: BiSeNet: bilateral segmentation network for real-time semantic segmentation. In: ECCV (2018)

  49. Yu, D., Naik, S., Backurs, A., Gopi, S., Inan, H.A., Kamath, G., Kulkarni, J., Lee, Y.T., Manoel, A., Wutschitz, L., Yekhanin, S., Zhang, H.: Differentially private fine-tuning of language models. In: ICLR (2022)

  50. Zhang, G., Zhang, Y., Zhang, Y., Fan, W., Li, Q., Liu, S., Chang, S.: Fairness reprogramming. In: NeurIPS (2022)

  51. Zhang, J.O., Sax, A., Zamir, A.R., Guibas, L.J., Malik, J.: Side-tuning: a baseline for network adaptation via additive side networks. In: ECCV. Springer (2020)

  52. Zheng, Y., Feng, X., Xia, Z., Jiang, X., Demontis, A., Pintor, M., Biggio, B., Roli, F.: Why adversarial reprogramming works, when it fails, and how to tell the difference. Inf. Sci. (2023). https://doi.org/10.1016/j.ins.2023.02.086

  53. Zhou, Z., Shin, J., Zhang, L., Gurudu, S., Gotway, M., Liang, J.: Fine-tuning convolutional neural networks for biomedical image analysis: actively and incrementally. In: CVPR (2017)

  54. Zhu, Y., Yang, X., Wu, Y., Zhang, W.: Parameter-efficient fine-tuning with layer pruning on free-text sequence-to-sequence modeling. arXiv:2305.08285 (2023)


Author information

Authors and Affiliations

  1. Computer Science, University Of Manitoba, Winnipeg, Canada

    Hossein Abedi Khorasgani & Noman Mohammed

  2. Computer Science and Software Engineering, Concordia University, Quebec, Canada

    Yang Wang


Contributions

All authors approved the final manuscript. H.A. designed the method, implemented the algorithm, conducted experiments, analyzed data, and wrote the manuscript. N.M. and Y.W. assisted in solution development and data analysis, reviewed the manuscript, and provided insightful discussions, edits, and critical feedback.

Corresponding author

Correspondence to Hossein Abedi Khorasgani.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Ethical approval

This article does not contain any studies with human participants or animals performed by any of the authors.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

