- Xiuli Bi (ORCID: orcid.org/0000-0003-3134-217X)
- Haowei Liu (ORCID: orcid.org/0009-0003-6300-5247)
- Weisheng Li (ORCID: orcid.org/0000-0002-9033-8245)
- Bo Liu (ORCID: orcid.org/0000-0002-3164-6299)
- Bin Xiao (ORCID: orcid.org/0000-0001-8469-5302)
Part of the book series: Lecture Notes in Computer Science (LNCS, volume 15144)
Included in the following conference series: European Conference on Computer Vision (ECCV)
Abstract
Artistic images typically embody the unique creative styles of their artists. However, style transfer techniques make it easy to transfer an artist's style to arbitrary target images. To protect these styles, some researchers have applied adversarial attacks to safeguard artists' artistic style images. Prior methods defend against all style transfer models indiscriminately, yet artists may wish to allow specific models to transfer their styles properly. To meet this requirement, we propose an Artistic Style Protection Scheme (ASPS). The scheme uses adversarial perturbations to bias the mean and variance of the content and style features extracted by unauthorized models, while aligning those features for authorized models. It additionally employs pixel-level and feature-level losses to enhance the output quality of authorized models and degrade that of unauthorized ones. ASPS requires training only once; at deployment time it needs no further access to any style transfer model, and it ensures that the visual quality of the authorized model's output is unaffected by the perturbations. Experimental results demonstrate that our method effectively defends against unauthorized models' indiscriminate use of artistic styles while allowing authorized models to operate normally, thus resolving the issue of controlled authorization of artists' artistic styles. The code is available at https://github.com/CherishL-J/ASPS.
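The biasing/alignment objective described above operates on per-channel feature statistics (mean and variance), in the spirit of AdaIN-style style representations. The following is a minimal sketch of such a statistics-distance loss, not the authors' actual implementation; the function names (`feat_stats`, `stat_align_loss`) and the plain MSE distance are assumptions for illustration. In a scheme like ASPS, a loss of this form would be minimized between features of the perturbed and clean images under an authorized model, and maximized (biased) under unauthorized ones.

```python
import torch
import torch.nn.functional as F

def feat_stats(feat, eps=1e-5):
    """Per-channel mean and std of a (N, C, H, W) feature map over spatial dims."""
    mean = feat.mean(dim=(2, 3), keepdim=True)
    std = (feat.var(dim=(2, 3), keepdim=True) + eps).sqrt()
    return mean, std

def stat_align_loss(feat_a, feat_b):
    """Distance between the feature statistics of two feature maps.

    Minimizing this aligns statistics (authorized model); negating it and
    minimizing introduces a bias in the statistics (unauthorized model).
    """
    mean_a, std_a = feat_stats(feat_a)
    mean_b, std_b = feat_stats(feat_b)
    return F.mse_loss(mean_a, mean_b) + F.mse_loss(std_a, std_b)
```

For identical feature maps the loss is zero, and it grows as the channel-wise statistics drift apart, which is what makes it usable both as an alignment term and, with a sign flip, as a biasing term.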
References
Fu, C., Hu, Y., Wu, X., Wang, G., Zhang, Q., He, R.: High-fidelity face manipulation with extreme poses and expressions. IEEE Trans. Inf. Forensics Secur.16, 2218–2231 (2021)
Goodfellow, I.J., Shlens, J., Szegedy, C.: Explaining and harnessing adversarial examples. arXiv preprintarXiv:1412.6572 (2014)
Gu, J., Zhao, H., Tresp, V., Torr, P.H.S.: SegPGD: an effective and efficient adversarial attack for evaluating and boosting segmentation robustness. In: Avidan, S., Brostow, G., Cissé, M., Farinella, G.M., Hassner, T. (eds.) Computer Vision, ECCV 2022. LNCS, vol. 13689, pp. 308–325. Springer, Cham (2022).https://doi.org/10.1007/978-3-031-19818-2_18
Guo, C., Gardner, J., You, Y., Wilson, A.G., Weinberger, K.: Simple black-box adversarial attacks. In: International Conference on Machine Learning, pp. 2484–2493. PMLR (2019)
Huang, H., Chen, Z., Chen, H., Wang, Y., Zhang, K.: T-SEA: transfer-based self-ensemble attack on object detection. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 20514–20523 (2023)
Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510 (2017)
Jandial, S., Mangla, P., Varshney, S., Balasubramanian, V.: AdvGAN++: harnessing latent layers for adversary generation. In: Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops (2019)
Jia, S., Ma, C., Yao, T., Yin, B., Ding, S., Yang, X.: Exploring frequency adversarial attacks for face forgery detection. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 4103–4112 (2022)
Lee, C.H., Liu, Z., Wu, L., Luo, P.: MaskGAN: towards diverse and interactive facial image manipulation. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 5549–5558 (2020)
Li, Y., Ren, J., Xu, H., Liu, H.: Neural style protection: counteracting unauthorized neural style transfer. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), January 2024, pp. 3966–3975 (2024)
Li, Z., et al.: Sibling-attack: rethinking transferable adversarial attacks against face recognition. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 24626–24637 (2023)
Liu, S., et al.: AdaAttN: revisit attention mechanism in arbitrary neural style transfer. In: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 6649–6658 (2021)
Luo, C., Lin, Q., Xie, W., Wu, B., Xie, J., Shen, L.: Frequency-driven imperceptible adversarial attack on semantic similarity. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 15315–15324 (2022)
Madry, A., Makelov, A., Schmidt, L., Tsipras, D., Vladu, A.: Towards deep learning models resistant to adversarial attacks. In: International Conference on Learning Representations (2018)
Mi, J.X., Wang, X.D., Zhou, L.F., Cheng, K.: Adversarial examples based on object detection tasks: a survey. Neurocomputing519, 114–126 (2023)
Park, D.Y., Lee, K.H.: Arbitrary style transfer with style-attentional networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 5880–5888 (2019)
Qian, S., et al.: Make a face: towards arbitrary high fidelity face manipulation. In: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 10033–10042 (2019)
Ronneberger, O., Fischer, P., Brox, T.: U-Net: convolutional networks for biomedical image segmentation. In: Navab, N., Hornegger, J., Wells, W.M., Frangi, A.F. (eds.) MICCAI 2015. LNCS, vol. 9351, pp. 234–241. Springer, Cham (2015).https://doi.org/10.1007/978-3-319-24574-4_28
Rony, J., Pesquet, J.C., Ben Ayed, I.: Proximal splitting adversarial attack for semantic segmentation. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 20524–20533 (2023)
Ruiz, N., Bargal, S.A., Sclaroff, S.: Disrupting deepfakes: adversarial attacks against conditional image translation networks and facial manipulation systems. In: Bartoli, A., Fusiello, A. (eds.) ECCV 2020. LNCS, vol. 12538, pp. 236–251. Springer, Cham (2020).https://doi.org/10.1007/978-3-030-66823-5_14
Ruiz, N., Bargal, S.A., Xie, C., Sclaroff, S.: Practical disruption of image translation deepfake networks. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 37, pp. 14478–14486 (2023)
Segalis, E., Galili, E.: OGAN: disrupting deepfakes with an adversarial attack that survives training. arXiv preprintarXiv:2006.12247 (2020)
Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprintarXiv:1409.1556 (2014)
Wang, R., Huang, Z., Chen, Z., Liu, L., Chen, J., Wang, L.: Anti-forgery: towards a stealthy and robust deepfake disruption attack via adversarial perceptual-aware perturbations. arXiv preprintarXiv:2206.00477 (2022)
Wang, X., He, K.: Enhancing the transferability of adversarial attacks through variance tuning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1924–1933 (2021)
Wang, X., Huang, J., Ma, S., Nepal, S., Xu, C.: DeepFake Disrupter: the detector of deepfake is my friend. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 14920–14929 (2022)
Xiao, C., Li, B., Zhu, J.Y., He, W., Liu, M., Song, D.: Generating adversarial examples with adversarial networks. In: Proceedings of the 27th International Joint Conference on Artificial Intelligence, pp. 3905–3911 (2018)
Xie, C., Wang, J., Zhang, Z., Zhou, Y., Xie, L., Yuille, A.: Adversarial examples for semantic segmentation and object detection. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1369–1378 (2017)
Yang, M., Wang, Z., Chi, Z., Feng, W.: WaveGAN: frequency-aware GAN for high-fidelity few-shot image generation. In: Avidan, S., Brostow, G., Cissé, M., Farinella, G.M., Hassner, T. (eds.) Computer Vision – ECCV 2022. ECCV 2022. LNCS, vol. 13675, pp. 1–17. Springer, Cham (2022).https://doi.org/10.1007/978-3-031-19784-0_1
Yeh, C.Y., Chen, H.W., Tsai, S.L., Wang, S.D.: Disrupting image-translation-based deepfake algorithms with adversarial attacks. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision Workshops, pp. 53–62 (2020)
Yin, F., et al.: Generalizable black-box adversarial attack with meta learning. IEEE Trans. Pattern Anal. Mach. Intell.46, 1804–1818 (2023)
Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on computer Vision and Pattern Recognition, pp. 586–595 (2018)
Zhang, Y., et al.: Inversion-based style transfer with diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10146–10156 (2023)
Zhang, Y., et al.: Domain enhanced arbitrary image style transfer via contrastive learning. In: ACM SIGGRAPH 2022 Conference Proceedings, pp. 1–8 (2022)
Zhu, M., He, X., Wang, N., Wang, X., Gao, X.: All-to-key attention for arbitrary style transfer. In: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 23109–23119 (2023)
Acknowledgements
This work was supported in part by the National Natural Science Foundation of China under Grant 62172067 and Grant 62376046, the Natural Science Foundation of Chongqing for Distinguished Young Scholars under Grant CSTB2022NSCQ-JQX0001, and in part by the Natural Science Foundation of Chongqing under Grant CSTB2023NSCQ-MSX0341.
Author information
Authors and Affiliations
Chongqing University of Posts and Telecommunications, Chongqing, China
Xiuli Bi, Haowei Liu, Weisheng Li, Bo Liu & Bin Xiao
Corresponding author
Correspondence to Bo Liu.
Editor information
Editors and Affiliations
University of Birmingham, Birmingham, UK
Aleš Leonardis
University of Trento, Trento, Italy
Elisa Ricci
Technical University of Darmstadt, Darmstadt, Hessen, Germany
Stefan Roth
Princeton University, Princeton, NJ, USA
Olga Russakovsky
Czech Technical University in Prague, Prague, Czech Republic
Torsten Sattler
École des Ponts ParisTech, Marne-la-Vallée, France
Gül Varol
Copyright information
© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Bi, X., Liu, H., Li, W., Liu, B., Xiao, B. (2025). Using My Artistic Style? You Must Obtain My Authorization. In: Leonardis, A., Ricci, E., Roth, S., Russakovsky, O., Sattler, T., Varol, G. (eds) Computer Vision – ECCV 2024. ECCV 2024. Lecture Notes in Computer Science, vol 15144. Springer, Cham. https://doi.org/10.1007/978-3-031-73016-0_18
Download citation
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-73015-3
Online ISBN: 978-3-031-73016-0
eBook Packages: Computer Science, Computer Science (R0)