Part of the book series: Studies in Computational Intelligence (SCI, volume 1141)
Abstract
Graph Neural Networks (GNNs) extend basic Neural Networks (NNs) by additionally making use of graph structure through a relational inductive bias (edge bias), rather than treating the nodes as a collection of independent and identically distributed (i.i.d.) samples. Though GNNs are believed to outperform basic NNs on real-world tasks, it has been found that in some cases GNNs gain little from the graph structure or even underperform graph-agnostic NNs. To identify these cases, we propose two measures, grounded in graph signal processing and statistical hypothesis testing, that analyze when the edge bias in features and labels provides no advantage. Based on these measures, a threshold value can be given to predict the potential performance advantage of graph-aware models over graph-agnostic models.
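As a rough illustration of the underlying idea (not the paper's exact measures), the sketch below computes a Laplacian-based smoothness statistic for node signals: low values indicate that a signal changes little across edges, which is when the edge bias is more likely to help a graph-aware model. The function name, toy graph, and normalization are illustrative assumptions only.

```python
# Illustrative sketch only: a Laplacian (Rayleigh-quotient-style) smoothness
# statistic for node signals over a graph. This is NOT the paper's exact
# definition; names and the toy example are assumptions for illustration.
import numpy as np

def normalized_smoothness(A: np.ndarray, X: np.ndarray) -> float:
    """Smoothness of signals X (n x k) over a graph with adjacency A (n x n).

    Smaller values mean X varies little across edges, i.e. the signal
    respects the graph structure.
    """
    d = A.sum(axis=1)
    L = np.diag(d) - A                   # combinatorial graph Laplacian
    num = np.trace(X.T @ L @ X)          # sum of squared differences across edges
    den = np.trace(X.T @ X) + 1e-12      # normalization by signal energy
    return float(num / den)

# Toy example: 4-node path graph with 2-class labels aligned with the structure.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
Y = np.eye(2)[[0, 0, 1, 1]]              # one-hot labels
print(normalized_smoothness(A, Y))       # low value: neighbors mostly share labels
```

In the spirit of the abstract, such a statistic would be computed for both the feature matrix and the (one-hot) label matrix and compared against a calibrated threshold to decide whether a graph-aware model is likely to pay off.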
Author information
Authors and Affiliations
McGill University, Montreal, Canada
Sitao Luan, Chenqing Hua, Qincheng Lu, Jiaqi Zhu, Xiao-Wen Chang & Doina Precup
Mila, Montreal, Canada
Sitao Luan, Chenqing Hua & Doina Precup
DeepMind, London, UK
Doina Precup
Corresponding author
Correspondence to Sitao Luan.
Editor information
Editors and Affiliations
University of Burgundy, Dijon Cedex, France
Hocine Cherifi
Thomas J. Watson College of Engineering and Applied Sciences, Binghamton University, Binghamton, NY, USA
Luis M. Rocha
IUT Lumière - Université Lyon 2, University of Lyon, Bron, France
Chantal Cherifi
Department of Economics, Yildiz Technical University, Istanbul, Türkiye
Murat Donduran
A Details of NSV and Sample Covariance Matrix
The sample covariance matrix S is computed as follows:
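For n observations x_1, ..., x_n in R^d with mean x̄, a standard form of the sample covariance matrix (the paper's exact notation and normalization may differ) is:

```latex
% Standard sample covariance of n observations x_1,\dots,x_n \in \mathbb{R}^d
% (the paper may instead use a 1/n normalization).
S = \frac{1}{n-1} \sum_{i=1}^{n} (x_i - \bar{x})(x_i - \bar{x})^{\top},
\qquad \bar{x} = \frac{1}{n} \sum_{i=1}^{n} x_i .
```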
It is easy to verify that
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Luan, S., Hua, C., Lu, Q., Zhu, J., Chang, X.-W., Precup, D. (2024). When Do We Need Graph Neural Networks for Node Classification? In: Cherifi, H., Rocha, L.M., Cherifi, C., Donduran, M. (eds.) Complex Networks & Their Applications XII. COMPLEX NETWORKS 2023. Studies in Computational Intelligence, vol. 1141. Springer, Cham. https://doi.org/10.1007/978-3-031-53468-3_4
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-53467-6
Online ISBN: 978-3-031-53468-3
eBook Packages: Engineering, Engineering (R0)
