Part of the book series: Lecture Notes in Computer Science (LNAI, volume 14942)
Included in the following conference series: Joint European Conference on Machine Learning and Knowledge Discovery in Databases (ECML PKDD)
Abstract
Graphs have become an important modeling tool for web applications, and Graph Neural Networks (GNNs) have achieved great success in graph representation learning. However, the performance of traditional GNNs relies heavily on a large amount of supervision. Recently, “pre-train, fine-tune” has become the paradigm for addressing label dependency and poor generalization. However, pre-training strategies differ between graphs with homophily and heterophily, and the objectives of various downstream tasks also differ. This creates a gap between pretexts and downstream tasks, resulting in “negative transfer” and poor performance. Inspired by prompt learning in Natural Language Processing (NLP), many studies aim to bridge this gap and fully leverage the pre-trained model. However, existing graph prompting methods are tailored to homophily and neglect the inherent heterophily of graphs. Moreover, most of them rely on randomly initialized prompts, which harms stability. Therefore, we propose Self-Prompt, a prompting framework for graphs based on the model and data itself. We first introduce asymmetric graph contrastive learning as the pretext to address heterophily and align the objectives of the pretext and downstream tasks. We then reuse a component from the pre-training phase as the self-adapter and introduce self-prompts based on the graph itself for task adaptation. Finally, we conduct extensive experiments on 11 benchmark datasets to demonstrate its superiority. Our code is available at https://github.com/gongchenghua/Self-Pro.
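As a rough illustration of the “pre-train with asymmetric graph contrastive learning, then self-prompt” pipeline outlined in the abstract, the sketch below shows one plausible shape such a workflow could take in PyTorch. It is a minimal sketch, not the authors' implementation: the encoder, projector, loss, and prompt initialization (SimpleGCN, asymmetric_contrastive_loss, the feature-mean prompt) are hypothetical stand-ins; the actual method is in the repository linked above.

```python
# Illustrative sketch only (hypothetical names); see https://github.com/gongchenghua/Self-Pro
# for the authors' actual implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleGCN(nn.Module):
    """One-layer graph convolution: H = relu(A_hat X W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, adj_norm, x):
        return F.relu(self.lin(adj_norm @ x))

def asymmetric_contrastive_loss(online, target, tau=0.5):
    """Align an online (projected) view with a stop-gradient target view."""
    online = F.normalize(online, dim=-1)
    target = F.normalize(target.detach(), dim=-1)   # asymmetry: no gradient flows to the target view
    logits = online @ target.t() / tau              # N x N node similarity
    labels = torch.arange(online.size(0))           # node i is its own positive
    return F.cross_entropy(logits, labels)

# Toy data: 4 nodes, 8 features, symmetrically normalized adjacency.
x = torch.randn(4, 8)
adj = torch.eye(4) + torch.rand(4, 4).round()
deg_inv_sqrt = adj.sum(1).clamp(min=1).pow(-0.5)
adj_norm = deg_inv_sqrt[:, None] * adj * deg_inv_sqrt[None, :]

encoder = SimpleGCN(8, 16)
projector = nn.Linear(16, 16)                       # later reused as the "self-adapter"
opt = torch.optim.Adam(list(encoder.parameters()) + list(projector.parameters()), lr=1e-3)

# Pre-training step: contrast projected embeddings against the raw embeddings.
h = encoder(adj_norm, x)
loss = asymmetric_contrastive_loss(projector(h), h)
loss.backward(); opt.step(); opt.zero_grad()

# Prompt-tuning step: a learnable prompt initialized from the graph's own features,
# applied while the pre-trained encoder and reused projector stay frozen.
prompt = nn.Parameter(x.mean(0, keepdim=True))
for p in list(encoder.parameters()) + list(projector.parameters()):
    p.requires_grad_(False)
z = projector(encoder(adj_norm, x + prompt))        # embeddings for the downstream task
```

In this toy version the asymmetry comes only from the stop-gradient on the target view, and the prompt is derived from the graph's own feature statistics rather than initialized at random, loosely mirroring the two ideas stated in the abstract.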
Acknowledgments
This work was supported by the National Natural Science Foundation of China (No. 62202172) and the Shanghai Science and Technology Committee General Program (No. 22ZR1419900).
Author information
Authors and Affiliations
School of Data Science and Engineering, East China Normal University, Shanghai, China
Chenghua Gong, Xiang Li, Jianxiang Yu, Yao Cheng & Jiaqi Tan
School of Computer and Information Engineering, Shanghai Polytechnic University, Shanghai, China
Chengcheng Yu
Corresponding author
Correspondence to Xiang Li.
Editor information
Editors and Affiliations
LTCI, Télécom Paris, Palaiseau Cedex, France
Albert Bifet
KU Leuven, Leuven, Belgium
Jesse Davis
Faculty of Informatics, Vytautas Magnus University, Akademija, Lithuania
Tomas Krilavičius
Institute of Computer Science, University of Tartu, Tartu, Estonia
Meelis Kull
Department of Computer Science, Bundeswehr University Munich, Munich, Germany
Eirini Ntoutsi
Department of Computer Science, University of Helsinki, Helsinki, Finland
Indrė Žliobaitė
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Gong, C., Li, X., Yu, J., Cheng, Y., Tan, J., Yu, C. (2024). Self-pro: A Self-prompt and Tuning Framework for Graph Neural Networks. In: Bifet, A., Davis, J., Krilavičius, T., Kull, M., Ntoutsi, E., Žliobaitė, I. (eds) Machine Learning and Knowledge Discovery in Databases. Research Track. ECML PKDD 2024. Lecture Notes in Computer Science, vol. 14942. Springer, Cham. https://doi.org/10.1007/978-3-031-70344-7_12
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-70343-0
Online ISBN: 978-3-031-70344-7
eBook Packages: Computer Science, Computer Science (R0)