
Using My Artistic Style? You Must Obtain My Authorization

  • Conference paper
  • First Online:

Abstract

Artistic images typically carry an artist's unique creative style, yet style transfer techniques make it easy to transfer that style to arbitrary target images. To protect styles, some researchers have used adversarial attacks to safeguard artists' style images. Prior methods, however, defend against all style transfer models indiscriminately, whereas artists may wish to allow specific models to use their styles legitimately. To meet this requirement, we propose an Artistic Style Protection Scheme (ASPS). The scheme uses adversarial perturbations to bias the mean and variance of the content and style features extracted by unauthorized models, while aligning the content and style features of authorized models. In addition, it employs pixel-level and feature-level losses to enhance the output quality of authorized models and degrade that of unauthorized ones. ASPS is trained only once; at inference time it needs no further access to any style transfer model, and it leaves the visual quality of the authorized model unaffected by the perturbations. Experimental results demonstrate that our method effectively prevents unauthorized models from indiscriminately using artistic styles while allowing authorized models to operate normally, thereby resolving the issue of controlled authorization of artists' artistic styles. The code is available at https://github.com/CherishL-J/ASPS.
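The "bias the mean and variance" idea in the abstract refers to the channel-wise feature statistics commonly used in arbitrary style transfer (AdaIN-style statistics). The following is a minimal illustrative sketch, not the authors' implementation: it assumes hypothetical pre-extracted encoder features and shows a statistic-distance term that a scheme like ASPS could maximize for unauthorized encoders (to bias their mean/variance) and minimize for the authorized encoder (to keep its features aligned).

```python
import numpy as np

def channel_stats(feat):
    """Channel-wise mean and std over spatial dimensions, as used in
    AdaIN-style style transfer.  feat: array of shape (C, H, W)."""
    mu = feat.mean(axis=(1, 2))
    sigma = feat.std(axis=(1, 2))
    return mu, sigma

def stat_bias_loss(feat_clean, feat_adv):
    """Squared distance between the feature statistics of the clean image
    and the perturbed image.  A protection scheme in the spirit of ASPS
    would maximize this for unauthorized models and minimize it for the
    authorized one (names and loss form here are illustrative assumptions)."""
    mu_c, sig_c = channel_stats(feat_clean)
    mu_a, sig_a = channel_stats(feat_adv)
    return float(np.sum((mu_c - mu_a) ** 2) + np.sum((sig_c - sig_a) ** 2))
```

For identical features the loss is zero; any shift in the per-channel mean or variance of the perturbed features increases it, which is the quantity the perturbation training would drive up or down depending on model authorization.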



Acknowledgements

This work was supported in part by the National Natural Science Foundation of China under Grants 62172067 and 62376046, by the Natural Science Foundation of Chongqing for Distinguished Young Scholars under Grant CSTB2022NSCQ-JQX0001, and by the Natural Science Foundation of Chongqing under Grant CSTB2023NSCQ-MSX0341.

Author information

Authors and Affiliations

  1. Chongqing University of Posts and Telecommunications, Chongqing, China

    Xiuli Bi, Haowei Liu, Weisheng Li, Bo Liu & Bin Xiao


Corresponding author

Correspondence to Bo Liu.

Editor information

Editors and Affiliations

  1. University of Birmingham, Birmingham, UK

    Aleš Leonardis

  2. University of Trento, Trento, Italy

    Elisa Ricci

  3. Technical University of Darmstadt, Darmstadt, Hessen, Germany

    Stefan Roth

  4. Princeton University, Princeton, NJ, USA

    Olga Russakovsky

  5. Czech Technical University in Prague, Prague, Czech Republic

    Torsten Sattler

  6. École des Ponts ParisTech, Marne-la-Vallée, France

    Gül Varol

Rights and permissions

Copyright information

© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Bi, X., Liu, H., Li, W., Liu, B., Xiao, B. (2025). Using My Artistic Style? You Must Obtain My Authorization. In: Leonardis, A., Ricci, E., Roth, S., Russakovsky, O., Sattler, T., Varol, G. (eds) Computer Vision – ECCV 2024. ECCV 2024. Lecture Notes in Computer Science, vol 15144. Springer, Cham. https://doi.org/10.1007/978-3-031-73016-0_18

