Algebraic Representations for Faster Predictions in Convolutional Neural Networks

  • Conference paper
  • First Online:

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14938)

Included in the following conference series: Computer Algebra in Scientific Computing (CASC)

  • 270 Accesses

Abstract

Convolutional neural networks (CNNs) are a popular choice of model for tasks in computer vision. When CNNs are made with many layers, resulting in a deep neural network, skip connections may be added to create an easier gradient optimization problem while retaining model expressiveness. In this paper, we show that arbitrarily complex, trained, linear CNNs with skip connections can be simplified into a single-layer model, resulting in greatly reduced computational requirements during prediction time. We also present a method for training nonlinear models with skip connections that are gradually removed throughout training, giving the benefits of skip connections without requiring computational overhead during prediction time. These results are demonstrated with computational examples on the Residual Network (ResNet) architecture.
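The following is a minimal NumPy sketch, not the authors' construction, of the first claim: every layer of a linear CNN, including a skip connection, is a linear map, so an L-block residual network \(x \mapsto (I + W_L)\cdots(I + W_1)x\) can be collapsed into a single matrix and applied in one step at prediction time. The 1-D signals, circular padding, kernel size, and depth below are assumptions made purely for illustration.

```python
# Minimal sketch (illustrative assumptions, not the paper's code): a deep
# *linear* CNN with skip connections collapses to a single linear map.
# Each layer is a 1-D circular convolution written as a circulant matrix W_i,
# and each residual block computes x -> x + W_i x.
import numpy as np

rng = np.random.default_rng(0)
n, depth = 16, 5                      # signal length and number of residual blocks

def circulant_from_kernel(kernel, n):
    """Matrix of the 1-D circular convolution with the given kernel."""
    col = np.zeros(n)
    col[:len(kernel)] = kernel
    return np.stack([np.roll(col, s) for s in range(n)], axis=1)

kernels = [rng.normal(size=3) * 0.1 for _ in range(depth)]
layers = [circulant_from_kernel(k, n) for k in kernels]

def deep_forward(x):
    """Deep forward pass: x -> x + W_i x at every block (all steps are linear)."""
    for W in layers:
        x = x + W @ x
    return x

# Collapse the whole network into one matrix: (I + W_L) ... (I + W_1).
M = np.eye(n)
for W in layers:
    M = (np.eye(n) + W) @ M

x = rng.normal(size=n)
assert np.allclose(deep_forward(x), M @ x)   # the single-layer model matches the deep one
```

Because a product of circulant matrices is again circulant, the collapsed operator is itself a (larger-kernel) convolution; the sketch uses explicit matrices only to make the algebra visible.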

Supported by the National Science Foundation under grant DMS 1854513.


Notes

  1.
  2. Though other methods could be implemented to make this easier, such as using gradient-free simulated annealing on \(t_0\) or, with a sufficient framework, gradient descent (a schematic sketch follows these notes).
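As a rough, hypothetical illustration of the gradual skip-connection removal and the role of \(t_0\) (the schedule and function names below are assumptions, not the paper's implementation), the skip branch can be scaled by a factor that decays to zero by epoch \(t_0\), so the trained model needs no skip connection at prediction time:

```python
# Hypothetical sketch: phase out a skip connection during training by scaling
# the skip branch with alpha(t), which decays linearly from 1 to 0 by epoch t_0.
# The note above refers to tuning t_0, e.g. by gradient-free simulated annealing
# or, with a suitable framework, by gradient descent.
def skip_scale(epoch: int, t0: int) -> float:
    """Weight applied to the skip branch at the given training epoch."""
    return max(0.0, 1.0 - epoch / t0)

def residual_block_forward(x, conv, activation, epoch, t0):
    """y = activation(conv(x)) + alpha(t) * x; once alpha reaches 0, the block
    is an ordinary layer and the skip connection can be dropped at prediction time."""
    return activation(conv(x)) + skip_scale(epoch, t0) * x
```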


Author information

Authors and Affiliations

  1. Department of Mathematics, Statistics, and Computer Science, University of Illinois at Chicago, 851 S Morgan St (m/c 249), Chicago, IL, 60607, USA

    Johnny Joyce & Jan Verschelde

Authors
  1. Johnny Joyce


  2. Jan Verschelde


Corresponding author

Correspondence to Jan Verschelde.

Editor information

Editors and Affiliations

  1. Université de Lille, Villeneuve d’Ascq, France

    François Boulier

  2. School of Mathematical Sciences, Beihang University, Beijing, China

    Chenqi Mou

  3. Plekhanov Russian University of Economics, Moscow, Russia

    Timur M. Sadykov

  4. Khristianovich Institute of Theoretical and Applied Mechanics, Novosibirsk, Russia

    Evgenii V. Vorozhtsov

Ethics declarations

Disclosure of Interests

The authors have no competing interests to declare that are relevant to the content of this article.

Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Joyce, J., Verschelde, J. (2024). Algebraic Representations for Faster Predictions in Convolutional Neural Networks. In: Boulier, F., Mou, C., Sadykov, T.M., Vorozhtsov, E.V. (eds) Computer Algebra in Scientific Computing. CASC 2024. Lecture Notes in Computer Science, vol 14938. Springer, Cham. https://doi.org/10.1007/978-3-031-69070-9_10



