Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14938)
Included in the following conference series: Computer Algebra in Scientific Computing (CASC)
Abstract
Convolutional neural networks (CNNs) are a popular choice of model for tasks in computer vision. When CNNs are built with many layers, resulting in a deep neural network, skip connections may be added to create an easier gradient optimization problem while retaining model expressiveness. In this paper, we show that arbitrarily complex, trained, linear CNNs with skip connections can be simplified into a single-layer model, greatly reducing computational requirements at prediction time. We also present a method for training nonlinear models with skip connections that are gradually removed throughout training, giving the benefits of skip connections without any computational overhead at prediction time. These results are demonstrated with computational examples on the Residual Network (ResNet) architecture.
Supported by the National Science Foundation under grant DMS 1854513.
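The collapse result rests on linearity: with circular padding, every convolutional layer is a circular convolution by some kernel, a skip connection adds the identity kernel, and composing layers convolves their kernels into one. The following minimal NumPy sketch illustrates the idea for 1-D, single-channel signals with circular padding; the paper's construction covers the general case, and the variable names here are ours, not the authors'.

```python
import numpy as np

def cconv(a, b):
    """Circular convolution of two length-n vectors via the FFT."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def pad(kernel, n):
    """Zero-pad a short kernel to length n."""
    out = np.zeros(n)
    out[:len(kernel)] = kernel
    return out

rng = np.random.default_rng(0)
n = 16
x = rng.standard_normal(n)           # input signal
k1 = pad(rng.standard_normal(3), n)  # kernel of the first conv layer
k2 = pad(rng.standard_normal(3), n)  # kernel of the second conv layer

# delta is the identity for circular convolution, so a residual
# block y = x + k * x is a single convolution by (delta + k).
delta = np.zeros(n)
delta[0] = 1.0

# Deep model: apply the two residual blocks layer by layer.
h = x + cconv(k1, x)
y_deep = h + cconv(k2, h)

# Collapsed model: compose both blocks into one kernel up front.
k_flat = cconv(delta + k2, delta + k1)
y_flat = cconv(k_flat, x)

assert np.allclose(y_deep, y_flat)   # one layer matches the deep net
```

Because every quantity above is linear in the input, the agreement holds (up to floating-point error) for any input, which is what makes the single-layer replacement exact rather than an approximation.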
Notes
- 1.
- 2. Though other methods could be implemented to make this easier, such as using gradient-free simulated annealing on \(t_0\), or, with a sufficient framework, gradient descent (an illustrative sketch follows these notes).
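As a rough illustration of the gradual skip-connection removal described in the abstract (and of the kind of schedule a parameter such as \(t_0\) presumably controls), the sketch below fades the identity branch of a residual block to zero over training. The linear schedule, class name, and hyperparameters are illustrative assumptions, not the paper's exact method.

```python
import torch
import torch.nn as nn

class FadingResidualBlock(nn.Module):
    """Residual block whose skip connection is annealed away.

    alpha scales the identity branch: it starts at 1 (a standard
    residual block) and is driven to 0 during training, leaving a
    plain convolutional block with no skip-connection cost at
    prediction time. The linear schedule below is an illustrative
    assumption, not the paper's exact schedule.
    """

    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.alpha = 1.0  # skip-connection strength

    def forward(self, x):
        return self.alpha * x + torch.relu(self.conv(x))

block = FadingResidualBlock(channels=8)
x = torch.randn(1, 8, 32, 32)
total_steps = 100
for step in range(total_steps):
    # Anneal the skip connection to zero over the run; a parameter
    # like t_0 would govern when it reaches zero.
    block.alpha = max(0.0, 1.0 - step / (total_steps - 1))
    y = block(x)
    # ... loss, backward pass, and optimizer step would go here ...

assert block.alpha == 0.0  # the trained block is now skip-free
```

By the end of training the block computes only the convolutional branch, so prediction incurs no overhead from the skip connection.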
Author information
Authors and Affiliations
Department of Mathematics, Statistics, and Computer Science, University of Illinois at Chicago, 851 S Morgan St (m/c 249), Chicago, IL, 60607, USA
Johnny Joyce & Jan Verschelde
Corresponding author
Correspondence to Jan Verschelde.
Editor information
Editors and Affiliations
Université de Lille, Villeneuve d’Ascq, France
François Boulier
School of Mathematical Sciences, Beihang University, Beijing, China
Chenqi Mou
Plekhanov Russian University of Economics, Moscow, Russia
Timur M. Sadykov
Khristianovich Institute of Theoretical and Applied Mechanics, Novosibirsk, Russia
Evgenii V. Vorozhtsov
Ethics declarations
Disclosure of Interests
The authors have no competing interests to declare that are relevant to the content of this article.
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Joyce, J., Verschelde, J. (2024). Algebraic Representations for Faster Predictions in Convolutional Neural Networks. In: Boulier, F., Mou, C., Sadykov, T.M., Vorozhtsov, E.V. (eds) Computer Algebra in Scientific Computing. CASC 2024. Lecture Notes in Computer Science, vol 14938. Springer, Cham. https://doi.org/10.1007/978-3-031-69070-9_10
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-69069-3
Online ISBN: 978-3-031-69070-9
eBook Packages: Computer Science, Computer Science (R0)