
Self-pro: A Self-prompt and Tuning Framework for Graph Neural Networks

  • Conference paper

Abstract

Graphs have become an important modeling tool for web applications, and Graph Neural Networks (GNNs) have achieved great success in graph representation learning. However, the performance of traditional GNNs relies heavily on a large amount of supervision. Recently, "pre-train, fine-tune" has become the paradigm for addressing label dependency and poor generalization. However, pre-training strategies differ for graphs with homophily and heterophily, and the objectives of various downstream tasks also differ. This creates a gap between pretexts and downstream tasks, resulting in "negative transfer" and poor performance. Inspired by prompt learning in Natural Language Processing (NLP), many studies attempt to bridge this gap and fully leverage the pre-trained model. However, existing methods for graph prompting are tailored to homophily and neglect the inherent heterophily of graphs. Meanwhile, most of them rely on randomly initialized prompts, which negatively impacts stability. Therefore, we propose Self-Prompt, a prompting framework for graphs based on the model and data itself. We first introduce asymmetric graph contrastive learning as the pretext to address heterophily and align the objectives of pretext and downstream tasks. Then we reuse the component from the pre-training phase as the self-adapter and introduce self-prompts based on the graph itself for task adaptation. Finally, we conduct extensive experiments on 11 benchmark datasets to demonstrate its superiority. We provide our code at https://github.com/gongchenghua/Self-Pro.
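The abstract mentions an asymmetric graph contrastive pretext but this page does not include the method details. As a rough, hypothetical illustration of the kind of objective involved (not the authors' implementation; all names and the exact loss form are assumptions), here is a minimal NumPy sketch of an InfoNCE-style loss between two node-embedding views, where asymmetry would come from treating one branch as a fixed target:

```python
import numpy as np

def l2_normalize(x, eps=1e-12):
    """Row-normalize embeddings so dot products become cosine similarities."""
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + eps)

def asymmetric_contrastive_loss(h_online, h_target, tau=0.5):
    """InfoNCE-style loss between an online view and a target view.

    Each node's positive is its own embedding in the other view; all
    other nodes act as negatives. In an asymmetric setup the target
    branch is held fixed (e.g. stop-gradient), so only the online
    branch would be updated during training.
    """
    z1 = l2_normalize(h_online)
    z2 = l2_normalize(h_target)
    sim = z1 @ z2.T / tau                        # pairwise similarities
    sim = sim - sim.max(axis=1, keepdims=True)   # numerical stability
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))           # positives on the diagonal
```

Intuitively, identical views place each node's positive pair at maximum similarity, so the loss is lower than for unrelated views; a real pretext would produce the two views with (possibly augmentation-free) encoders over the graph.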



Acknowledgments

This work is supported by the National Natural Science Foundation of China (No. 62202172) and the Shanghai Science and Technology Committee General Program (No. 22ZR1419900).

Author information

Authors and Affiliations

  1. School of Data Science and Engineering, East China Normal University, Shanghai, China

    Chenghua Gong, Xiang Li, Jianxiang Yu, Yao Cheng & Jiaqi Tan

  2. School of Computer and Information Engineering, Shanghai Polytechnic University, Shanghai, China

    Chengcheng Yu

Authors
  1. Chenghua Gong
  2. Xiang Li
  3. Jianxiang Yu
  4. Yao Cheng
  5. Jiaqi Tan
  6. Chengcheng Yu

Corresponding author

Correspondence to Xiang Li.

Editor information

Editors and Affiliations

  1. LTCI, Télécom Paris, Palaiseau Cedex, France

    Albert Bifet

  2. KU Leuven, Leuven, Belgium

    Jesse Davis

  3. Faculty of Informatics, Vytautas Magnus University, Akademija, Lithuania

    Tomas Krilavičius

  4. Institute of Computer Science, University of Tartu, Tartu, Estonia

    Meelis Kull

  5. Department of Computer Science, Bundeswehr University Munich, Munich, Germany

    Eirini Ntoutsi

  6. Department of Computer Science, University of Helsinki, Helsinki, Finland

    Indrė Žliobaitė


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Gong, C., Li, X., Yu, J., Cheng, Y., Tan, J., Yu, C. (2024). Self-pro: A Self-prompt and Tuning Framework for Graph Neural Networks. In: Bifet, A., Davis, J., Krilavičius, T., Kull, M., Ntoutsi, E., Žliobaitė, I. (eds.) Machine Learning and Knowledge Discovery in Databases. Research Track. ECML PKDD 2024. Lecture Notes in Computer Science, vol. 14942. Springer, Cham. https://doi.org/10.1007/978-3-031-70344-7_12
