Abstract
With the increasing popularity of machine learning (ML) in image processing, privacy concerns have emerged as a significant issue in deploying and using ML services. However, current privacy protection approaches often require computationally expensive training from scratch or extensive fine-tuning of models, posing significant barriers to the development of privacy-conscious models, particularly for smaller organizations seeking to comply with data privacy laws. In this paper, we address the privacy challenges in computer vision by investigating the effectiveness of two recent fine-tuning methods, Model Reprogramming and Low-Rank Adaptation. We adapt these techniques to provide attribute protection for pre-trained models, minimizing computational overhead and training time. Specifically, we modify the models to produce privacy-preserving latent representations of images that cannot be used to identify unintended attributes. We integrate these methods into an adversarial min–max framework, allowing us to conceal sensitive information from feature outputs without extensive modifications to the pre-trained model, but rather focusing on a small set of new parameters. We demonstrate the effectiveness of our methods by conducting experiments on the CelebA dataset, achieving state-of-the-art performance while significantly reducing computational complexity and cost. Our research provides a valuable contribution to the field of computer vision and privacy, offering practical solutions to enhance the privacy of machine learning services without compromising efficiency.
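The abstract's claim that only "a small set of new parameters" needs training can be illustrated with a pure-Python toy of the low-rank adaptation (LoRA) idea. This is a hedged sketch, not the authors' implementation: the function names, dimensions, and initialization are illustrative assumptions. The pre-trained weight `W0` stays frozen, and only the low-rank factors `A` and `B` would be updated during adversarial training:

```python
def matvec(M, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def lora_forward(W0, A, B, x, alpha, r):
    """h = W0 x + (alpha / r) * B (A x); W0 is frozen, only A and B train."""
    base = matvec(W0, x)
    update = matvec(B, matvec(A, x))
    return [b + (alpha / r) * u for b, u in zip(base, update)]

d, r = 4, 2
W0 = [[1.0 if i == j else 0.0 for j in range(d)] for i in range(d)]  # frozen pre-trained weight (toy: identity)
A = [[0.1] * d for _ in range(r)]   # r x d factor (random in practice)
B = [[0.0] * r for _ in range(d)]   # d x r factor, zero-initialised

x = [1.0, 2.0, 3.0, 4.0]
h = lora_forward(W0, A, B, x, alpha=2.0, r=r)
print(h)  # [1.0, 2.0, 3.0, 4.0] -- with B = 0 the adapted model reproduces the pre-trained one

# Trainable-parameter comparison at a more realistic width (d = 512, rank = 4):
full_params = 512 * 512      # full fine-tuning of one d x d layer
lora_params = 2 * 512 * 4    # LoRA factors A and B only
print(full_params, lora_params)  # 262144 4096
```

In the paper's min–max setting, these few trainable parameters are what the encoder adjusts to suppress sensitive-attribute information from its latent representations, while the pre-trained backbone remains untouched.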
Data Availability
The CelebA [29] dataset used in this study is publicly available at http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html.
References
Abadi, M., Chu, A., Goodfellow, I., McMahan, H.B., Mironov, I., Talwar, K., Zhang, L.: Deep learning with differential privacy. In: ACM SIGSAC Conference on Computer and Communications Security (CCS) (2016). https://doi.org/10.1145/2976749.2978318
Bahng, H., Jahanian, A., Sankaranarayanan, S., Isola, P.: Exploring visual prompts for adapting large-scale models. arXiv:2203.17274 (2022)
Chang, J., Sha, J.: An efficient implementation of 2D convolution in CNN. IEICE Electron. Express (2017). https://doi.org/10.1587/elex.13.20161134
Chen, P.-Y.: Model reprogramming: resource-efficient cross-domain machine learning. arXiv:2202.10629 (2022)
Dave, I.R., Chen, C., Shah, M.: SPAct: self-supervised privacy preservation for action recognition. In: CVPR (2022). https://doi.org/10.1109/CVPR52688.2022.01953
Ding, X., Fang, H., Zhang, Z., Choo, K.-K.R., Jin, H.: Privacy-preserving feature extraction via adversarial training. IEEE Trans. Knowl. Data Eng. (2022). https://doi.org/10.1109/TKDE.2020.2997604
Dosovitskiy, A., Brox, T.: Inverting visual representations with convolutional networks. In: CVPR, pp. 4829–4837 (2016)
Dwork, C., Roth, A.: The algorithmic foundations of differential privacy. Found. Trends Theor. Comput. Sci. 9(3–4) (2014). https://doi.org/10.1561/0400000042
Elsayed, G.F., Goodfellow, I.J., Sohl-Dickstein, J.N.: Adversarial reprogramming of neural networks. In: ICLR (2019)
Gildenblat, J., and contributors: PyTorch library for CAM methods. https://github.com/jacobgil/pytorch-grad-cam (2021)
Guo, X., Li, B., Yu, H.: Improving the sample efficiency of prompt tuning with domain adaptation. In: Findings of the Association for Computational Linguistics: EMNLP 2022. Association for Computational Linguistics (2022). https://doi.org/10.18653/v1/2022.findings-emnlp.258
Guo, Y., Shi, H., Kumar, A., Grauman, K., Rosing, T., Feris, R.: SpotTune: transfer learning through adaptive fine-tuning. In: CVPR (2019)
Hambardzumyan, K., Khachatrian, H., May, J.: WARP: word-level adversarial reprogramming. In: ACL-IJCNLP (2021). https://doi.org/10.18653/v1/2021.acl-long.381
He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: CVPR (2016)
Hinton, G., Vinyals, O., Dean, J.: Distilling the knowledge in a neural network. In: NIPS Deep Learning and Representation Learning Workshop (2015)
Hu, E.J., Shen, Y., Wallis, P., Allen-Zhu, Z., Li, Y., Wang, S., Wang, L., Chen, W.: LoRA: Low-rank adaptation of large language models. In: ICLR (2022a)
Hu, H., Salcic, Z., Sun, L., Dobbie, G., Yu, P.S., Zhang, X.: Membership inference attacks on machine learning: a survey. ACM Comput. Surv. (2022). https://doi.org/10.1145/3523273
Jaderberg, M., Vedaldi, A., Zisserman, A.: Speeding up convolutional neural networks with low rank expansions. arXiv:1405.3866 (2014)
Jia, M., Tang, L., Chen, B.-C., Cardie, C., Belongie, S., Hariharan, B., Lim, S.-N.: Visual prompt tuning. In: ECCV. Springer, Berlin (2022). https://doi.org/10.1007/978-3-031-19827-4_41
Kim, M., Kim, H., Ro, Y.M.: Speaker-adaptive lip reading with user-dependent padding. In: Computer Vision—ECCV 2022. Springer Nature Switzerland (2022)
Kingma, D., Ba, J.: Adam: a method for stochastic optimization. In: ICLR (2015)
Koczkodaj, W.W., Mazurek, M., Strzałka, D., Wolny-Dominiak, A., Woodbury-Smith, M.: Electronic health record breaches as social indicators. Soc. Indic. Res. 141(2), 1 (2019). https://doi.org/10.1007/s11205-018-1837-z
Krizhevsky, A., Sutskever, I., Hinton, G.E.: ImageNet classification with deep convolutional neural networks. In: NeurIPS, vol. 25. Curran Associates, Inc. (2012)
Li, A., Duan, Y., Yang, H., Chen, Y., Yang, J.: TIPRDC: task-independent privacy-respecting data crowdsourcing framework for deep learning with anonymized intermediate representations. In: SIGKDD (2020). https://doi.org/10.1145/3394486.3403125
Li, A., Guo, J., Yang, H., Salim, F.D., Chen, Y.: DeepObfuscator: obfuscating intermediate representations with privacy-preserving adversarial learning on smartphones. In: IoTDI (2021). https://doi.org/10.1145/3450268.3453519
Liu, H., Tam, D., Mohammed, M., Mohta, J., Huang, T., Bansal, M., Raffel, C.: Few-shot parameter-efficient fine-tuning is better and cheaper than in-context learning. In: NeurIPS (2022a)
Liu, X., Zhao, H., Tian, M., Sheng, L., Shao, J., Shuai, Y., Junjie, Y., Wang, X.: HydraPlus-Net: attentive deep features for pedestrian analysis. In: ICCV (2017). https://doi.org/10.1109/ICCV.2017.46
Liu, Y., Wen, R., He, X., Salem, A., Zhang, Z., Backes, M., De Cristofaro, E., Fritz, M., Zhang, Y.: ML-Doctor: holistic risk assessment of inference attacks against machine learning models. In: USENIX Security (2022b)
Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: ICCV (2015)
Melis, L., Song, C., De Cristofaro, E., Shmatikov, V.: Exploiting unintended feature leakage in collaborative learning. In: S&P (2019). https://doi.org/10.1109/SP.2019.00029
Meredith, S.: Facebook-Cambridge Analytica: a timeline of the data hijacking scandal (2018). https://www.cnbc.com/2018/04/10/facebook-cambridge-analytica-a-timeline-of-the-data-hijacking-scandal.html
Mireshghallah, F., et al.: Not all features are equal: discovering essential features for preserving prediction privacy. In: WWW (2021). https://doi.org/10.1145/3442381.3449965
Paszke, A., Gross, S., Chintala, S., Chanan, G., Yang, E., DeVito, Z., Lin, Z., Desmaison, A., Antiga, L., Lerer, A.: Automatic differentiation in pytorch. In: NIPS-W (2017)
Pfeiffer, J., Ruder, S., Vulić, I., Ponti, E.: Modular deep learning. In: TMLR. ISSN 2835-8856 (2023)
Ronneberger, O., Fischer, P., Brox, T.: U-Net: convolutional networks for biomedical image segmentation. In: MICCAI (2015). https://doi.org/10.1007/978-3-319-24574-4_28
Schroff, F., Kalenichenko, D., Philbin, J.: FaceNet: a unified embedding for face recognition and clustering. In: CVPR (2015). https://doi.org/10.1109/CVPR.2015.7298682
Selvaraju, R.R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., Batra, D.: Grad-CAM: visual explanations from deep networks via gradient-based localization. In: ICCV (2017)
Smith, L.N., Topin, N.: Super-convergence: very fast training of neural networks using large learning rates. In: Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications. SPIE (2019). https://doi.org/10.1117/12.2520589
Song, C., Shmatikov, V.: Overlearning reveals sensitive attributes. In: ICLR (2020)
Treviso, M., et al.: Efficient methods for natural language processing: a survey. Trans. Assoc. Comput. Linguist. 11, 826–860 (2023). https://doi.org/10.1162/tacl_a_00577
Tsai, Y.-Y., Chen, P.-Y., Ho, T.-Y.: Transfer Learning without knowing: reprogramming black-box machine learning models with scarce data and limited resources. In: PMLR (2020)
Wang, Z., Panda, R., Karlinsky, L., Feris, R., Sun, H., Kim, Y.: Multitask prompt tuning enables parameter-efficient transfer learning. In: ICLR, (2023)
Williams, C.: 620 million accounts stolen from 16 hacked websites now for sale on dark web, seller boasts (2019).https://www.theregister.com/2019/02/11/620_million_hacked_accounts_dark_web/
Wu, Z., Wang, Z., Wang, Z., Jin, H.: Towards privacy-preserving visual recognition via adversarial training: a pilot study. In: ECCV (2018)
Wu, Z., Wang, H., Wang, Z., Jin, H., Wang, Z.: Privacy-preserving deep action recognition: an adversarial learning framework and a new dataset. IEEE Trans. Pattern Anal. Mach. Intell. 44(4), 2126–2139 (2022). https://doi.org/10.1109/TPAMI.2020.3026709
Xiao, T., Tsai, Y.-H., Sohn, K., Chandraker, M., Yang, M.-H.: Adversarial learning of privacy-preserving and task-oriented representations. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, no. 7 (2020). https://doi.org/10.1609/aaai.v34i07.6930
Yousefpour, A., Shilov, I., Sablayrolles, A., Testuggine, D., Prasad, K., Malek, M., Nguyen, J., Ghosh, S., Bharadwaj, A., Zhao, J., Cormode, G., Mironov, I.: Opacus: user-friendly differential privacy library in PyTorch. arXiv:2109.12298 (2021)
Yu, C., Wang, J., Peng, C., Gao, C., Yu, G., Sang, N.: BiSeNet: bilateral segmentation network for real-time semantic segmentation. In: ECCV (2018)
Yu, D., Naik, S., Backurs, A., Gopi, S., Inan, H.A., Kamath, G., Kulkarni, J., Lee, Y.T., Manoel, A., Wutschitz, L., Yekhanin, S., Zhang, H.: Differentially private fine-tuning of language models. In: ICLR (2022)
Zhang, G., Zhang, Y., Zhang, Y., Fan, W., Li, Q., Liu, S., Chang, S.: Fairness reprogramming. In: NeurIPS (2022)
Zhang, J.O., Sax, A., Zamir, A.R., Guibas, L.J., Malik, J.: Side-tuning: a baseline for network adaptation via additive side networks. In: Computer Vision—ECCV 2020. Springer (2020). ISBN 978-3-030-58580-8
Zheng, Y., Feng, X., Xia, Z., Jiang, X., Demontis, A., Pintor, M., Biggio, B., Roli, F.: Why adversarial reprogramming works, when it fails, and how to tell the difference. Inf. Sci. (2023). https://doi.org/10.1016/j.ins.2023.02.086
Zhou, Z., Shin, J., Zhang, L., Gurudu, S., Gotway, M., Liang, J.: Fine-tuning convolutional neural networks for biomedical image analysis: actively and incrementally. In: CVPR (2017)
Zhu, Y., Yang, X., Wu, Y., Zhang, W.: Parameter-efficient fine-tuning with layer pruning on free-text sequence-to-sequence modeling. arXiv:2305.08285 (2023)
Author information
Authors and Affiliations
Computer Science, University of Manitoba, Winnipeg, Canada
Hossein Abedi Khorasgani & Noman Mohammed
Computer Science and Software Engineering, Concordia University, Quebec, Canada
Yang Wang
Contributions
All authors approved the final manuscript. H.A. designed the method, implemented the algorithm, conducted experiments, analyzed data, and wrote the manuscript. N.M. and Y.W. assisted in solution development and data analysis, reviewed the manuscript, and provided insightful discussions, edits, and critical feedback.
Corresponding author
Correspondence to Hossein Abedi Khorasgani.
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Ethical approval
This article does not contain any studies with human participants or animals performed by any of the authors.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Abedi Khorasgani, H., Mohammed, N. & Wang, Y. Attribute inference privacy protection for pre-trained models. Int. J. Inf. Secur. 23, 2269–2285 (2024). https://doi.org/10.1007/s10207-024-00839-7