
Nature Machine Intelligence
  • Perspective

A collective AI via lifelong learning and sharing at the edge

Nature Machine Intelligence volume 6, pages 251–264 (2024)


Abstract

One vision of a future artificial intelligence (AI) is where many separate units can learn independently over a lifetime and share their knowledge with each other. The synergy between lifelong learning and sharing has the potential to create a society of AI systems, as each individual unit can contribute to and benefit from the collective knowledge. Essential to this vision are the abilities to learn multiple skills incrementally during a lifetime, to exchange knowledge among units via a common language, to use both local data and communication to learn, and to rely on edge devices to host the necessary decentralized computation and data. The result is a network of agents that can quickly respond to and learn new tasks, that collectively hold more knowledge than a single agent and that can extend current knowledge in more diverse ways than a single agent. Open research questions include when and what knowledge should be shared to maximize both the rate of learning and the long-term learning performance. Here we review recent machine learning advances converging towards creating a collective machine-learned intelligence. We propose that the convergence of such scientific and technological advances will lead to the emergence of new types of scalable, resilient and sustainable AI systems.
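The architecture sketched in the abstract can be made concrete with a toy simulation. The code below is a minimal, hypothetical sketch, not the authors' method: each agent "learns" a per-task parameter (a stand-in for a trained task-specific module) from local data, and a sharing round lets peers adopt knowledge for tasks they have never seen, without overwriting what they learned locally. The class and method names (`ShellAgent`, `learn`, `share`, `receive`) are illustrative assumptions.

```python
class ShellAgent:
    """Toy lifelong learner: holds one parameter per task and can adopt
    parameters shared by peers for tasks it has never encountered."""

    def __init__(self, name):
        self.name = name
        self.task_knowledge = {}  # task id -> learned parameter

    def learn(self, task_id, samples):
        # "Training" here is just averaging the local data: a placeholder
        # for fitting a per-task module on an edge device.
        self.task_knowledge[task_id] = sum(samples) / len(samples)

    def share(self, task_id):
        # Broadcast only the compact task-specific knowledge, not raw data.
        return self.task_knowledge.get(task_id)

    def receive(self, task_id, params):
        # Adopt peer knowledge only for unknown tasks, so local learning
        # is never overwritten (no forgetting from communication).
        if params is not None:
            self.task_knowledge.setdefault(task_id, params)

    def can_solve(self, task_id):
        return task_id in self.task_knowledge


# Two agents learn disjoint tasks from their own local data.
a, b = ShellAgent("a"), ShellAgent("b")
a.learn("task-1", [1.0, 2.0, 3.0])
b.learn("task-2", [10.0, 20.0])

# One sharing round: each agent gains the task it lacked, without
# retraining, and collectively the pair covers both tasks.
b.receive("task-1", a.share("task-1"))
a.receive("task-2", b.share("task-2"))
assert a.can_solve("task-2") and b.can_solve("task-1")
```

The open questions raised above (when and what to share) correspond here to the trivial policy inside `receive`; richer policies would weigh, merge or reject incoming knowledge rather than accept it unconditionally.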


Fig. 1: Research fields contributing to ShELL.
Fig. 2: Desired learning behaviour for an on-knowledge-demand ShELL system.
Fig. 3: Types of information sharing and timing.
Fig. 4: List of ShELL operations and their likely hardware allocation on a typical computer architecture on commercially available edge devices.
Fig. 5: List of application areas, task categories and learning tasks that are suitable to ShELL systems.


References

  1. Fagan, M. Collective scientific knowledge.Philos. Compass7, 821–831 (2012).

    Article  Google Scholar 

  2. Csibra, G. & Gergely, G. Natural pedagogy as evolutionary adaptation.Phil. Trans. R. Soc. B366, 1149–1157 (2011).

    Article PubMed PubMed Central  Google Scholar 

  3. Wooldridge, M. & Jennings, N. R. Intelligent agents: theory and practice.Knowl. Eng. Rev.10, 115–152 (1995).

    Article  Google Scholar 

  4. Ferber, J. & Weiss, G.Multi-Agent Systems: An Introduction to Distributed Artificial Intelligence Vol. 1 (Addison-Wesley, 1999).

  5. Stone, P. & Veloso, M. Multiagent systems: a survey from a machine learning perspective.Auton. Rob.8, 345–383 (2000).

    Article  Google Scholar 

  6. Conitzer, V. & Oesterheld, C. Foundations of cooperative AI. InProc. AAAI Conference on Artificial Intelligence Vol. 37, 15359–15367 (AAAI, 2022).

  7. Semsar-Kazerooni, E. & Khorasani, K. Multi-agent team cooperation: a game theory approach.Automatica45, 2205–2213 (2009).

    Article MathSciNet  Google Scholar 

  8. Thrun, S. Is learning then-th thing any easier than learning the first? InAdvances in Neural Information Processing Systems Vol. 8 (1995).

  9. Thrun, S. Lifelong learning algorithms.Learning to Learn8, 181–209 (1998).

    Article  Google Scholar 

  10. Chen, Z. & Liu, B.Lifelong Machine Learning Vol. 1 (Springer, 2018).

  11. Kudithipudi, D. et al. Biological underpinnings for lifelong learning machines.Nat. Mach. Intell.4, 196–210 (2022).

    Article  Google Scholar 

  12. Mundt, M., Hong, Y., Pliushch, I. & Ramesh, V. A wholistic view of continual learning with deep neural networks: forgotten lessons and the bridge to active and open world learning.Neural Networks160, 306–336 (2023).

    Article PubMed  Google Scholar 

  13. Khetarpal, K., Riemer, M., Rish, I. & Precup, D. Towards continual reinforcement learning: a review and perspectives.J. Artif. Intell. Res.75, 1401–1476 (2022).

    Article MathSciNet  Google Scholar 

  14. Mendez, J. A., van Seijen, H. & Eaton, E. Modular lifelong reinforcement learning via neural composition. InInternational Conference on Learning Representations (2022).

  15. Li, T., Sahu, A. K., Talwalkar, A. & Smith, V. Federated learning: challenges, methods, and future directions.IEEE Signal Process. Mag.37, 50–60 (2020).

    CAS  Google Scholar 

  16. Dorri, A., Kanhere, S. S. & Jurdak, R. Multi-agent systems: a survey.IEEE Access6, 28573–28593 (2018).

    Article  Google Scholar 

  17. Shi, W., Cao, J., Zhang, Q., Li, Y. & Xu, L. Edge computing: vision and challenges.IEEE Internet Things J.3, 637–646 (2016).

    Article  Google Scholar 

  18. Cai, H. et al. Enable deep learning on mobile devices: methods, systems, and applications.ACM Trans. Des. Autom. Electron. Syst.27, 20 (2022).

    Article  Google Scholar 

  19. Shared-Experience Lifelong Learning (ShELL). Opportunity DARPA-PA-20-02-11.SAM.govhttps://sam.gov/opp/1afbf600f2e04b26941fad352c08d1f1/view (accessed 10 October 2023).

  20. Smith, P. et al. Network resilience: a systematic approach.IEEE Commun. Mag.49, 88–97 (2011).

    Article  Google Scholar 

  21. Zhang, J., Cheung, B., Finn, C., Levine, S. & Jayaraman, D. Cautious adaptation for reinforcement learning in safety-critical settings. InInternational Conference on Machine Learning 11055–11065 (PMLR, 2020).

  22. McMahan, B., Moore, E., Ramage, D., Hampson, S. & Arcas, B. A. Y. Communication-efficient learning of deep networks from decentralized data. InArtificial Intelligence and Statistics 1273–1282, (PMLR, 2017).

  23. Liu, J. et al. From distributed machine learning to federated learning: a survey.Knowl. Inf. Syst.64, 885–917 (2022).

    Article  Google Scholar 

  24. Verbraeken, J. et al. A survey on distributed machine learning.ACM Comput. Surv.53, 30 (2020).

    Google Scholar 

  25. Henderson, P. et al. Towards the systematic reporting of the energy and carbon footprints of machine learning.J. Mach. Learn. Res.21, 10039–10081 (2020).

    MathSciNet  Google Scholar 

  26. de Vries, A. The growing energy footprint of artificial intelligence.Joule7, 2191–2194 (2023).

  27. Silver, D. L., Yang, Q. & Li, L. Lifelong machine learning systems: beyond learning algorithms. In2013 AAAI Spring Symposium Series (AAAI, 2013).

  28. Hadsell, R., Rao, D., Rusu, A. A. & Pascanu, R. Embracing change: continual learning in deep neural networks.Trends Cognit. Sci.24, 1028–1040 (2020).

    Article  Google Scholar 

  29. French, R. M. Catastrophic forgetting in connectionist networks.Trends Cognit. Sci.3, 128–135 (1999).

    Article CAS  Google Scholar 

  30. Kirkpatrick, J. et al. Overcoming catastrophic forgetting in neural networks.Proc. Natl Acad. Sci. USA114, 3521–3526 (2017).

    Article ADS MathSciNet CAS PubMed PubMed Central  Google Scholar 

  31. Parisi, G. I., Kemker, R., Part, J. L., Kanan, C. & Wermter, S. Continual lifelong learning with neural networks: a review.Neural Networks113, 54–71 (2018).

  32. Lange, M. D. et al. A continual learning survey: defying forgetting in classification tasks.IEEE Trans. Pattern Anal. Mach. Intell.44, 3366–3385 (2022).

    PubMed  Google Scholar 

  33. van de Ven, G. M., Tuytelaars, T. & Tolias, A. S. Three types of incremental learning.Nat. Mach. Intell.4, 1185–1197 (2022).

  34. Soltoggio, A., Stanley, K. O. & Risi, S. Born to learn: the inspiration, progress, and future of evolved plastic artificial neural networks.Neural Networks108, 48–67 (2018).

    Article PubMed  Google Scholar 

  35. Lifelong learning machines (L2M).DARPAhttps://www.darpa.mil/news-events/2017-03-16 (accessed 10 October 2023).

  36. New, A., Baker, M., Nguyen, E. & Vallabha, G. Lifelong learning metrics. Preprint athttps://doi.org/10.48550/arXiv.2201.08278 (2022).

  37. Baker, M. M. et al. A domain-agnostic approach for characterization of lifelong learning systems.Neural Networks160, 274–296 (2023).

    Article PubMed  Google Scholar 

  38. Mendez, J. A. & Eaton, E. Lifelong learning of compositional structures. InInternational Conference on Learning Representations (2021).

  39. Xie, A. & Finn, C. Lifelong robotic reinforcement learning by retaining experiences. InConference on Lifelong Learning Agents 838–855 (PMLR, 2022).

  40. Ben-Iwhiwhu, E., Nath, S., Pilly, P. K., Kolouri, S. & Soltoggio, A. Lifelong reinforcement learning with modulating masks. InTransactions on Machine Learning Research (2023).

  41. Tasse, G. N., James, S. & Rosman, B. Generalisation in lifelong reinforcement learning through logical composition. InInternational Conference on Learning Representations (2022).

  42. Merenda, M., Porcaro, C. & Iero, D. Edge machine learning for AI-enabled IoT devices: a review.Sensors20, 2533 (2020).

    Article ADS PubMed PubMed Central  Google Scholar 

  43. Sipola, T., Alatalo, J., Kokkonen, T. & Rantonen, M. Artificial intelligence in the IoT era: a review of edge AI hardware and software. In2022 31st Conference of Open Innovations Association (FRUCT) 320–331 (IEEE, 2022).

  44. Prabhu, A. et al. Computationally budgeted continual learning: What does matter? InProc. IEEE/CVF Conference on Computer Vision and Pattern Recognition 3698–3707 (2023).

  45. Díaz-Rodríguez, N., Lomonaco, V., Filliat, D. & Maltoni, D. Don’t forget, there is more than forgetting: new metrics for continual learning. Preprint athttps://doi.org/10.48550/arXiv.1810.13166 (2018).

  46. De Lange, M., van de Ven, G. & Tuytelaars, T. Continual evaluation for lifelong learning: identifying the stability gap. In11th International Conference on Learning Representationshttps://openreview.net/forum?id=Zy350cRstc6 (ICLR, 2023).

  47. Ghunaim, Y. et al. Real-time evaluation in online continual learning: a new hope. InProc. IEEE/CVF Conference on Computer Vision and Pattern Recognition 11888–11897 (2023).

  48. Sarker, I. H. Machine learning: algorithms, real-world applications and research directions.SN Comput. Sci.2, 160 (2021).

    Article PubMed PubMed Central  Google Scholar 

  49. Tsuda, B., Tye, K. M., Siegelmann, H. T. & Sejnowski, T. J. A modeling framework for adaptive lifelong learning with transfer and savings through gating in the prefrontal cortex.Proc. Natl Acad. Sci. USA117, 29872–29882 (2020).

    Article ADS CAS PubMed PubMed Central  Google Scholar 

  50. Kairouz, P. et al. Advances and open problems in federated learning.Found. Trends Mach. Learn.14, 1–210 (2021).

    Article  Google Scholar 

  51. Zhu, H., Xu, J., Liu, S. & Jin, Y. Federated learning on non-IID data: a survey.Neuropcomputing465, 371–390 (2021).

  52. Nguyen, D. C. et al. Federated learning for internet of things: a comprehensive survey.IEEE Commun. Surv. Tutorials23, 1622–1658 (2021).

    Article  Google Scholar 

  53. Abreha, H. G., Hayajneh, M. & Serhani, M. A. Federated learning in edge computing: a systematic survey.Sensors22, 450 (2022).

    Article ADS PubMed PubMed Central  Google Scholar 

  54. Guo, Y., Lin, T. & Tang, X. Towards federated learning on time-evolving heterogeneous data. Preprint athttps://doi.org/10.48550/arXiv.2112.13246 (2021).

  55. Criado, M. F., Casado, F. E., Iglesias, R., Regueiro, C. V. & Barro, S. Non-IID data and continual learning processes in federated learning: a long road ahead.Inf. Fusion88, 263–280 (2022).

    Article  Google Scholar 

  56. Gao, L. et al. FedDC: federated learning with non-IID data via local drift decoupling and correction. InProc. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 10112–10121 (2022).

  57. Yoon, J., Jeong, W., Lee, G., Yang, E. & Hwang, S. J. Federated continual learning with weighted inter-client transfer. InInternational Conference on Machine Learning 12073–12086 (PMLR, 2021).

  58. Pellegrini, L., Lomonaco, V., Graffieti, G. & Maltoni, D. Continual learning at the edge: real-time training on smartphone devices. InProc. European Symposium on Artificial Neural Networkshttps://doi.org/10.14428/esann/2021.ES2021-136 (2021).

  59. Gao, D. et al. Rethinking pruning for accelerating deep inference at the edge. InProc. 26th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining 155–164 (2020).

  60. Huang, W., Ye, M. & Du, B. Learn from others and be yourself in heterogeneous federated learning. InProc. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 10143–10153 (2022).

  61. Sun, T., Li, D. & Wang, B. Decentralized federated averaging.IEEE Trans. Pattern Anal. Mach. Intell.45, 4289–4301 (2023).

    Article PubMed  Google Scholar 

  62. Taylor, M. E. & Stone, P. Transfer learning for reinforcement learning domains: a survey.J. Mach. Learn. Res.10, 1633–1685 (2009).

    MathSciNet  Google Scholar 

  63. Zamir, A. R. et al. Taskonomy: disentangling task transfer learning. InProc. IEEE Conference on Computer Vision and Pattern Recognition (2018).

  64. Zhuang, F. et al. A comprehensive survey on transfer learning.Proc. IEEE109, 43–76 (2020).

    Article  Google Scholar 

  65. Ding, N. et al. Parameter-efficient fine-tuning of large-scale pre-trained language models.Nat. Mach. Intell.5, 220–235 (2023).

    Article  Google Scholar 

  66. Koohpayegani, S. A., Navaneet, K., Nooralinejad, P., Kolouri, S. & Pirsiavash, H. NOLA: networks as linear combination of low rank random basis. InInternational Conference on Learning Representations (ICLR, 2024).

  67. Wang, M. & Deng, W. Deep visual domain adaptation: a survey.Neurocomputing312, 135–153 (2018).

    Article  Google Scholar 

  68. Wilson, G. & Cook, D. J. A survey of unsupervised deep domain adaptation.ACM Trans. Intell. Syst. Technol.11, 51 (2020).

    Article  Google Scholar 

  69. Farahani, A., Voghoei, S., Rasheed, K. & Arabnia, H. R. A brief review of domain adaptation. InAdvances in Data Science and Information Engineering: Proceedings from ICDATA 2020 and IKE 2020 877–894 (2021).

  70. Kim, Y., Cho, D., Han, K., Panda, P. & Hong, S. Domain adaptation without source data.IEEE Trans. Artif. Intell.2, 508–518 (2021).

    Article  Google Scholar 

  71. Caruana, R. Multitask learning.Mach. Learn.28, 41–75 (1997).

    Article  Google Scholar 

  72. Luong, M.-T., Le, Q. V., Sutskever, I., Vinyals, O. & Kaiser, L. Multi-task sequence to sequence learning. Preprint athttps://doi.org/10.48550/arXiv.1511.06114 (2015).

  73. Ruder, S. An overview of multi-task learning in deep neural networks. Preprint athttps://doi.org/10.48550/arXiv.1706.05098 (2017).

  74. Liu, X., He, P., Chen, W. & Gao, J. Multi-task deep neural networks for natural language understanding. InProc. 57th Annual Meeting of the Association for Computational Linguistics (2019).

  75. Hospedales, T., Antoniou, A., Micaelli, P. & Storkey, A. Meta-learning in neural networks: a survey.IEEE Trans. Pattern Anal. Mach. Intell.44, 5149–5169 (2021).

    Google Scholar 

  76. Kayaalp, M., Vlaski, S. & Sayed, A. H. Dif-MAML: decentralized multi-agent meta-learning.IEEE Open J. Signal Process.3, 71–93 (2022).

    Article  Google Scholar 

  77. Riemer, M. et al. Learning to learn without forgetting by maximizing transfer and minimizing interference. In7th International Conference on Learning Representations, ICLR 2019 (OpenReview, 2019).

  78. Bengio, Y., Louradour, J., Collobert, R. & Weston, J. Curriculum learning. InProc. 26th Annual International Conference on Machine Learning 41–48 (2009).

  79. Narvekar, S. et al. Curriculum learning for reinforcement learning domains: a framework and survey.J. Mach. Learn. Res.21, 7382–7431 (2020).

    MathSciNet  Google Scholar 

  80. Wang, W., Zheng, V. W., Yu, H. & Miao, C. A survey of zero-shot learning: settings, methods, and applications.ACM Trans. Intell. Syst. Technol.10, 13 (2019).

    Article CAS  Google Scholar 

  81. Rostami, M., Isele, D. & Eaton, E. Using task descriptions in lifelong machine learning for improved performance and zero-shot transfer.J. Artif. Intell. Res.67, 673–704 (2020).

    Article MathSciNet  Google Scholar 

  82. Chen, J. et al. Knowledge-aware zero-shot learning: survey and perspective. InProc. 30th International Joint Conference on Artificial Intelligence (IJCAI-21) (2021).

  83. Xie, G.-S., Zhang, Z., Xiong, H., Shao, L. & Li, X. Towards zero-shot learning: a brief review and an attention-based embedding network.IEEE Trans. Circuits Syst. Video Technol.33, 1181–1197 (2022).

  84. Cao, W. et al. A review on multimodal zero-shot learning.Wiley Interdiscip. Rev. Data Min. Knowl. Discovery13, e1488 (2023).

    Article  Google Scholar 

  85. Jones, A. M. et al. USC-DCT: a collection of diverse classification tasks.Data8, 153 (2023).

    Article  Google Scholar 

  86. Liu, X., Bai, Y., Lu, Y., Soltoggio, A. & Kolouri, S. Wasserstein task embedding for measuring task similarities. Preprint athttps://doi.org/10.48550/arXiv.2208.11726 (2022).

  87. Yang, J., Zhou, K., Li, Y. & Liu, Z. Generalized out-of-distribution detection: a survey. Preprint athttps://doi.org/10.48550/arXiv.2110.11334 (2021).

  88. Abdar, M. et al. A review of uncertainty quantification in deep learning: techniques, applications and challenges.Inf. Fusion76, 243–297 (2021).

    Article  Google Scholar 

  89. Musliner, D. J. et al. OpenMIND: planning and adapting in domains with novelty. InProc. 9th Conference on Advances in Cognitive Systems (2021).

  90. Rios, A. & Itti, L. Lifelong learning without a task oracle. In2020 IEEE 32nd International Conference on Tools with Artificial Intelligence 255–263 (IEEE, 2020).

  91. Carvalho, D. V., Pereira, E. M. & Cardoso, J. S. Machine learning interpretability: a survey on methods and metrics.Electronics8, 832 (2019).

    Article  Google Scholar 

  92. Masana, M. et al. Class-incremental learning: survey and performance evaluation on image classification.IEEE Trans. Pattern Anal. Mach. Intell.45, 5513–5533 (2022).

    Google Scholar 

  93. Isele, D. & Cosgun, A. Selective experience replay for lifelong learning. InProc. AAAI Conference on Artificial Intelligence Vol. 32 (2018).

  94. Nath, S. et al. Sharing lifelong reinforcement learning knowledge via modulating masks. InProc. of Machine Learning Research Vol. 232 (2023).

  95. Pimentel, M. A., Clifton, D. A., Clifton, L. & Tarassenko, L. A review of novelty detection.Signal Process.99, 215–249 (2014).

    Article  Google Scholar 

  96. Da Silva, B. C., Basso, E. W., Bazzan, A. L. & Engel, P. M. Dealing with non-stationary environments using context detection. InProc. 23rd International Conference on Machine Learning 217–224 (2006).

  97. Niv, Y. Learning task-state representations.Nat. Neurosci.22, 1544–1553 (2019).

    Article CAS PubMed PubMed Central  Google Scholar 

  98. Mendez, J. & Eaton, E. How to reuse and compose knowledge for a lifetime of tasks: a survey on continual learning and functional composition. InTransactions on Machine Learning Research (2023).

  99. Hu, E. J. et al. LoRA: low-rank adaptation of large language models.International Conference on Learning Representations (ICLR) (2021).

  100. Nooralinejad, P. et al. PRANC: pseudo random networks for compacting deep models. InProc. IEEE/CVF International Conference on Computer Vision 17021–17031 (2023).

  101. Lester, B., Al-Rfou, R. & Constant, N. The power of scale for parameter-efficient prompt tuning. InProc. 2021 Conference on Empirical Methods in Natural Language Processing (2021).

  102. Ge, Y. et al. Lightweight learner for shared knowledge lifelong learning. InTransactions on Machine Learning Research (2023).

  103. Ge, Y. et al. CLR: Channel-wise lightweight reprogramming for continual learning. InProc. IEEE/CVF International Conference on Computer Vision 18798–18808 (2023).

  104. Sarker, M. K., Zhou, L., Eberhart, A. & Hitzler, P. Neuro-symbolic artificial intelligence.AI Commun.34, 197–209 (2021).

    Article MathSciNet  Google Scholar 

  105. Zoph, B. & Le, Q. Neural architecture search with reinforcement learning. InInternational Conference on Learning Representations (2017).

  106. Ren, P. et al. A comprehensive survey of neural architecture search: challenges and solutions.ACM Comput. Surv.54, 76 (2021).

    Google Scholar 

  107. Zhang, C., Patras, P. & Haddadi, H. Deep learning in mobile and wireless networking: a survey.IEEE Commun. Surv. Tutorials21, 2224–2287 (2019).

    Article  Google Scholar 

  108. Deng, S. et al. Edge intelligence: the confluence of edge computing and artificial intelligence.IEEE Internet Things J.7, 7457–7469 (2020).

    Article  Google Scholar 

  109. Murshed, M. S. et al. Machine learning at the network edge: a survey.ACM Comput. Surv.54, 170 (2021).

    Google Scholar 

  110. Ajani, T. S., Imoize, A. L. & Atayero, A. A. An overview of machine learning within embedded and mobile devices–optimizations and applications.Sensors21, 4412 (2021).

    Article ADS PubMed PubMed Central  Google Scholar 

  111. Dhar, S. et al. A survey of on-device machine learning: an algorithms and learning theory perspective.ACM Trans. Internet Things2, 15 (2021).

    Article  Google Scholar 

  112. Singh, R. & Gill, S. S. Edge AI: a survey.Internet Things Cyber-Phys. Syst.3, 71–92 (2023).

  113. Mao, Y., You, C., Zhang, J., Huang, K. & Letaief, K. B. A survey on mobile edge computing: the communication perspective.IEEE Commun. Surv. Tutorials19, 2322–2358 (2017).

    Article  Google Scholar 

  114. Xu, D. et al. Edge intelligence: architectures, challenges, and applications. Preprint athttps://doi.org/10.48550/arXiv.2003.12172 (2020).

  115. Li, E., Zeng, L., Zhou, Z. & Chen, X. Edge AI: on-demand accelerating deep neural network inference via edge computing.IEEE Trans. Wireless Commun.19, 447–457 (2019).

    Article  Google Scholar 

  116. Mehlin, V., Schacht, S. & Lanquillon, C. Towards energy-efficient deep learning: an overview of energy-efficient approaches along the deep learning lifecycle. Preprint athttps://doi.org/10.48550/arXiv.2303.01980 (2023).

  117. Lin, J. et al. On-device training under 256kb memory. 36th Conference on Neural Information Processing Systems (NeurIPS)(2022).

  118. Yang, Y., Li, G. & Marculescu, R. Efficient on-device training via gradient filtering. InProc. IEEE/CVF Conference on Computer Vision and Pattern Recognition 3811–3820 (2023).

  119. Hayes, T. L. & Kanan, C. Online continual learning for embedded devices. InProc. First Conference on Lifelong Learning Agents (eds Chandar, S. et al.) 744–766 (PMLR, 2022).

  120. Wang, Z. et al. SparCL: sparse continual learning on the edge. In36th Conference on Neural Information Processing Systems (2022).

  121. Harun, M. Y., Gallardo, J., Hayes, T. L. & Kanan, C. How efficient are today’s continual learning algorithms? InProc. IEEE/CVF Conference on Computer Vision and Pattern Recognition 2430–2435 (2023).

  122. Yang, J. et al. Quantization networks. InProc. IEEE/CVF Conference on Computer Vision and Pattern Recognition 7308–7316 (2019).

  123. Cai, Z., He, X., Sun, J. & Vasconcelos, N. Deep learning with low precision by half-wave gaussian quantization. InProc. IEEE Conference on Computer Vision and Pattern Recognition 5918–5926 (2017).

  124. Jain, A., Bhattacharya, S., Masuda, M., Sharma, V. & Wang, Y. Efficient execution of quantized deep learning models: a compiler approach. Preprint athttps://doi.org/10.48550/arXiv.2006.10226 (2020).

  125. Goel, A., Tung, C., Lu, Y.-H. & Thiruvathukal, G. K. A survey of methods for low-power deep learning and computer vision. In2020 IEEE 6th World Forum on Internet of Things (IEEE, 2020).

  126. Ma, X. et al. Cost-effective on-device continual learning over memory hierarchy with Miro. InProc. 29th Annual International Conference on Mobile Computing and Networking 83, 1–15 (ACM, 2023).

  127. Kudithipudi, D. et al. Design principles for lifelong learning AI accelerators.Nat. Electron.6, 807–822 (2023).

    Article  Google Scholar 

  128. Machupalli, R., Hossain, M. & Mandal, M. Review of ASIC accelerators for deep neural network.Microprocess. Microsyst.89, 104441 (2022).

    Article  Google Scholar 

  129. Jouppi, P. N. et al. In-datacenter performance analysis of a tensor processing unit. InProc. 44th Annual International Symposium on Computer Architecture (2017).

  130. Sebastian, A., Le Gallo, M., Khaddam-Aljameh, R. & Eleftheriou, E. Memory devices and applications for in-memory computing.Nat. Nanotechnol.15, 529–544 (2020).

    Article ADS CAS PubMed  Google Scholar 

  131. Tang, K.-T. et al. Considerations of integrating computing-in-memory and processing-in-sensor into convolutional neural network accelerators for low-power edge devices. In2019 Symposium on VLSI Circuits T166–T167 (IEEE, 2019).

  132. Roy, K., Jaiswal, A. & Panda, P. Towards spike-based machine intelligence with neuromorphic computing.Nature575, 607–617 (2019).

    Article ADS CAS PubMed  Google Scholar 

  133. Chakraborty, I., Jaiswal, A., Saha, A., Gupta, S. & Roy, K. Pathways to efficient neuromorphic computing with non-volatile memory technologies.Appl. Phys. Rev.7, 021308 (2020).

    Article ADS CAS  Google Scholar 

  134. Christensen, D. V. et al. 2022 roadmap on neuromorphic computing and engineering.Neuromorph. Comput. Eng.2, 022501 (2022).

    Article  Google Scholar 

  135. Rathi, N. et al. Exploring neuromorphic computing based on spiking neural networks: algorithms to hardware.ACM Comput. Surv.55, 243 (2023).

    Article  Google Scholar 

  136. Zhang, W. et al. Neuro-inspired computing chips.Nat. Electron.3, 371–382 (2020).

    Article ADS  Google Scholar 

  137. Feldmann, J. et al. Parallel convolutional processing using an integrated photonic tensor core.Nature589, 52–58 (2021).

    Article ADS CAS PubMed  Google Scholar 

  138. Peserico, N., Shastri, B. J. & Sorger, V. J. Integrated photonic tensor processing unit for a matrix multiply: a review.J. Lightwave Technol.41, 3704–3716 (2023).

  139. Shastri, B. J. et al. Photonics for artificial intelligence and neuromorphic computing.Nat. Photonics15, 102–114 (2021).

    Article ADS CAS  Google Scholar 

  140. Toczé, K. & Nadjm-Tehrani, S. A taxonomy for management and optimization of multiple resources in edge computing.Wireless Commun. Mobile Comput.2018, 7476201 (2018).

  141. Bhattacharjee, A., Venkatesha, Y., Moitra, A. & Panda, P. MIME: adapting a single neural network for multi-task inference with memory-efficient dynamic pruning. InProc. 59th ACM/IEEE Design Automation Conference 499–504 (2022).

  142. Extreme Computing BAA.DARPAhttps://sam.gov/opp/211b1819bd5f46eba20d4a466358d8bb/view (accessed 10 October 2023).

  143. Rostami, M., Kolouri, S., Kim, K. & Eaton, E. Multi-agent distributed lifelong learning for collective knowledge acquisition. InProc. 17th International Conference on Autonomous Agents and Multiagent Systems 2018 (2017).

  144. Boyd, S. et al. Distributed optimization and statistical learning via the alternating direction method of multipliers.Found. Trends Mach. Learn.3, 1–122 (2011).

    Article  Google Scholar 

  145. Mohammadi, J. & Kolouri, S. Collaborative learning through shared collective knowledge and local expertise. In2019 IEEE 29th International Workshop on Machine Learning for Signal Processing (2019).

  146. Wortsman, M. et al. Supermasks in superposition.Adv. Neural Inf. Process. Syst.33, 15173–15184 (2020).

    Google Scholar 

  147. Koster, N., Grothe, O. & Rettinger, A. Signing the supermask: keep, hide, invert. InInternational Conference on Learning Representationshttps://openreview.net/forum?id=e0jtGTfPihs (2022).

  148. Wen, S., Rios, A., Ge, Y. & Itti, L. Beneficial perturbation network for designing general adaptive artificial intelligence systems.IEEE Trans. Neural Networks Learn. Syst.33, 3778–3791 (2021).

    Article MathSciNet  Google Scholar 

  149. Saha, G., Garg, I. & Roy, K. Gradient projection memory for continual learning. InInternational Conference on Learning Representations (2021).

  150. Choudhary, S., Aketi, S. A., Saha, G. & Roy, K. CoDeC: communication-efficient decentralized continual learning. Preprint athttps://doi.org/10.48550/arXiv.2303.15378 (2023).

  151. Singh, P., Verma, V. K., Mazumder, P., Carin, L. & Rai, P. Calibrating CNNs for lifelong learning.Adv. Neural Inf. Process. Syst.33, 15579–15590 (2020).

    Google Scholar 

  152. Verma, V. K., Liang, K. J., Mehta, N., Rai, P. & Carin, L. Efficient feature transformations for discriminative and generative continual learning. InProc. IEEE/CVF Conference on Computer Vision and Pattern Recognition 13865–13875 (2021).

  153. Ma, Z., Lu, Y., Li, W. & Cui, S. EFL: elastic federated learning on non-IID data. InConference on Lifelong Learning Agents 92–115 (PMLR, 2022).

  154. Shenaj, D., Toldo, M., Rigon, A. & Zanuttigh, P. Asynchronous federated continual learning. IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=10208460 (2023).

  155. Venkatesha, Y., Kim, Y., Park, H. & Panda, P. Divide-and-conquer the NAS puzzle in resource constrained federated learning systems.Neural Networks168, 569–579 (2023).

  156. Usmanova, A., Portet, F., Lalanda, P. & Vega, G. A distillation-based approach integrating continual learning and federated learning for pervasive services. 3rd Workshop on Continual and Multimodal Learning for Internet of Things – Co-located with IJCAI 2021, Aug 2021, Montreal, Canadahttps://doi.org/10.48550/arXiv.2109.04197 (2021).

  157. Wang, T., Zhu, J.-Y., Torralba, A. & Efros, A. A. Dataset distillation. Preprint athttps://doi.org/10.48550/arXiv.1811.10959 (2018).

  158. Cazenavette, G., Wang, T., Torralba, A., Efros, A. A. & Zhu, J.-Y. Dataset distillation by matching training trajectories. InProc. IEEE/CVF Conference on Computer Vision and Pattern Recognition 4750–4759 (2022).

  159. Baradad Jurjo, M., Wulff, J., Wang, T., Isola, P. & Torralba, A. Learning to see by looking at noise.Adv. Neural Inf. Process. Syst.34, 2556–2569 (2021).

    Google Scholar 

  160. Carta, A., Cossu, A., Lomonaco, V., Bacciu, D. & van de Weijer, J. Projected latent distillation for data-agnostic consolidation in distributed continual learning. Preprint athttps://doi.org/10.48550/arXiv.2303.15888 (2023).

  161. Teh, Y. et al. Distral: robust multitask reinforcement learning. In Advances in Neural Information Processing Systems Vol. 30 (2017).

  162. Zheng, G., Jacobs, M. A., Braverman, V. & Parekh, V. S. Asynchronous decentralized federated lifelong learning for landmark localization in medical imaging. In International Workshop on Federated Learning for Distributed Data Mining (2023).

  163. Zheng, G., Lai, S., Braverman, V., Jacobs, M. A. & Parekh, V. S. A framework for dynamically training and adapting deep reinforcement learning models to different, low-compute, and continuously changing radiology deployment environments. Preprint at https://doi.org/10.48550/arXiv.2306.05310 (2023).

  164. Zheng, G., Lai, S., Braverman, V., Jacobs, M. A. & Parekh, V. S. Multi-environment lifelong deep reinforcement learning for medical imaging. Preprint at https://doi.org/10.48550/arXiv.2306.00188 (2023).

  165. Zheng, G., Zhou, S., Braverman, V., Jacobs, M. A. & Parekh, V. S. Selective experience replay compression using coresets for lifelong deep reinforcement learning in medical imaging. In Proc. Machine Learning Research 227, 1751–1764 (2024).

  166. Shperberg, S. S., Liu, B. & Stone, P. Learning a shield from catastrophic action effects: never repeat the same mistake. Preprint at https://doi.org/10.48550/arXiv.2202.09516 (2022).

  167. Shperberg, S. S., Liu, B., Allievi, A. & Stone, P. A rule-based shield: accumulating safety rules from catastrophic action effects. In Conference on Lifelong Learning Agents 231–242 (PMLR, 2022).

  168. Alshiekh, M. et al. Safe reinforcement learning via shielding. In Proc. 32nd AAAI Conference on Artificial Intelligence and Thirtieth Innovative Applications of Artificial Intelligence Conference and Eighth AAAI Symposium on Educational Advances in Artificial Intelligence (AAAI, 2018).

  169. García, J. & Fernández, F. A comprehensive survey on safe reinforcement learning. J. Mach. Learn. Res. 16, 1437–1480 (2015).

  170. Jang, D., Yoo, J., Son, C. Y., Kim, D. & Kim, H. J. Multi-robot active sensing and environmental model learning with distributed Gaussian process. IEEE Robot. Autom. Lett. 5, 5905–5912 (2020).

  171. Igoe, C., Ghods, R. & Schneider, J. Multi-agent active search: a reinforcement learning approach. IEEE Robot. Autom. Lett. 7, 754–761 (2021).

  172. Raja, G., Baskar, Y., Dhanasekaran, P., Nawaz, R. & Yu, K. An efficient formation control mechanism for multi-UAV navigation in remote surveillance. In 2021 IEEE Globecom Workshops (IEEE, 2021).

  173. Sitzmann, V., Martel, J., Bergman, A., Lindell, D. & Wetzstein, G. Implicit neural representations with periodic activation functions. Adv. Neural Inf. Process. Syst. 33, 7462–7473 (2020).

  174. Yu, A., Ye, V., Tancik, M. & Kanazawa, A. pixelNeRF: neural radiance fields from one or few images. In Proc. IEEE/CVF Conference on Computer Vision and Pattern Recognition 4578–4587 (2021).

  175. Zhang, K., Riegler, G., Snavely, N. & Koltun, V. NeRF++: analyzing and improving neural radiance fields. Preprint at https://doi.org/10.48550/arXiv.2010.07492 (2020).

  176. Bylow, E., Sturm, J., Kerl, C., Kahl, F. & Cremers, D. Real-time camera tracking and 3D reconstruction using signed distance functions. Rob. Sci. Syst. 2, 2 (2013).

  177. Park, J. J., Florence, P., Straub, J., Newcombe, R. & Lovegrove, S. DeepSDF: learning continuous signed distance functions for shape representation. In Proc. IEEE/CVF Conference on Computer Vision and Pattern Recognition 165–174 (2019).

  178. Kolouri, S., Abbasi, A., Koohpayegani, S. A., Nooralinejad, P. & Pirsiavash, H. Multi-agent lifelong implicit neural learning. IEEE Signal Process. Lett. 30, 1812–1816 (2023).

  179. Bortnik, J. & Camporeale, E. Ten ways to apply machine learning in the Earth and space sciences. In AGU Fall Meeting Abstracts IN12A-06 (2021).

  180. Zhang, Y., Bai, Y., Wang, M. & Hu, J. Cooperative adaptive cruise control with robustness against communication delay: an approach in the space domain. IEEE Trans. Intell. Transport. Syst. 22, 5496–5507 (2020).

  181. Gao, Y. & Chien, S. Review on space robotics: toward top-level science through space exploration. Sci. Rob. 2, eaan5074 (2017).

  182. Bornstein, B. J. et al. Autonomous exploration for gathering increased science. NASA Tech Briefs 34(9), 10 (2010).

  183. Swan, R. M. et al. AI4MARS: a dataset for terrain-aware autonomous driving on Mars. In Proc. IEEE/CVF Conference on Computer Vision and Pattern Recognition 1982–1991 (2021).

  184. Bayer, T. Planning for the un-plannable: redundancy, fault protection, contingency planning and anomaly response for the Mars Reconnaissance Orbiter mission. In AIAA SPACE 2007 Conference and Exposition 6109 (2007).

  185. Rieke, N. et al. The future of digital health with federated learning. NPJ Dig. Med. 3, 119 (2020).

  186. Sheller, M. J. et al. Federated learning in medicine: facilitating multi-institutional collaborations without sharing patient data. Sci. Rep. 10, 12598 (2020).

  187. Moor, M. et al. Foundation models for generalist medical artificial intelligence. Nature 616, 259–265 (2023).

  188. Bécue, A., Praça, I. & Gama, J. Artificial intelligence, cyber-threats and industry 4.0: challenges and opportunities. Artif. Intell. Rev. 54, 3849–3886 (2021).

  189. Buczak, A. L. & Guven, E. A survey of data mining and machine learning methods for cyber security intrusion detection. IEEE Commun. Surv. Tutorials 18, 1153–1176 (2015).

  190. Shaukat, K., Luo, S., Varadharajan, V., Hameed, I. A. & Xu, M. A survey on machine learning techniques for cyber security in the last decade. IEEE Access 8, 222310–222354 (2020).

  191. Berman, D. S., Buczak, A. L., Chavis, J. S. & Corbett, C. L. A survey of deep learning methods for cyber security. Information 10, 122 (2019).

  192. Kozik, R., Choras, M. & Keller, J. Balanced efficient lifelong learning (B-ELLA) for cyber attack detection. J. Univers. Comput. Sci. 25, 2–15 (2019).

  193. Bernstein, D. S., Givan, R., Immerman, N. & Zilberstein, S. The complexity of decentralized control of Markov decision processes. Math. Oper. Res. 27, 819–840 (2002).

  194. Goldman, C. V. & Zilberstein, S. Decentralized control of cooperative systems: categorization and complexity analysis. J. Artif. Intell. Res. 22, 143–174 (2004).

  195. Melo, F. S., Spaan, M. T. J. & Witwicki, S. J. In Multi-Agent Systems (eds Cossentino, M. et al.) 189–204 (Springer, 2012).

  196. Vaswani, A. et al. Attention is all you need. In Advances in Neural Information Processing Systems Vol. 30 (2017).

  197. Khan, S. et al. Transformers in vision: a survey. ACM Comput. Surv. 54, 200 (2022).

  198. Bommasani, R. et al. On the opportunities and risks of foundation models. Preprint at https://doi.org/10.48550/arXiv.2108.07258 (2021).

  199. Yang, S. et al. Foundation models for decision making: problems, methods, and opportunities. Preprint at https://doi.org/10.48550/arXiv.2303.04129 (2023).

  200. Knight, W. OpenAI’s CEO says the age of giant AI models is already over. Wired https://www.wired.com/story/openai-ceo-sam-altman-the-age-of-giant-ai-models-is-already-over/ (17 April 2023).

  201. Rahwan, I. et al. Machine behaviour. Nature 568, 477–486 (2019).

  202. Cath, C. Governing artificial intelligence: ethical, legal and technical opportunities and challenges. Phil. Trans. R. Soc. A 376, 20180080 (2018).

  203. Cao, Y. & Yang, J. Towards making systems forget with machine unlearning. In 2015 IEEE Symposium on Security and Privacy 463–480 (IEEE, 2015).

  204. Bostrom, N. Superintelligence: Paths, Dangers, Strategies (Oxford Univ. Press, 2014).

  205. Marr, B. The 15 biggest risks of artificial intelligence. Forbes https://www.forbes.com/sites/bernardmarr/2023/06/02/the-15-biggest-risks-of-artificial-intelligence/?sh=309f29002706 (2 June 2023).

  206. Bengio, Y. et al. Managing AI risks in an era of rapid progress. Preprint at https://doi.org/10.48550/arXiv.2310.17688 (2023).

  207. Wu, C.-J. et al. Sustainable AI: environmental implications, challenges and opportunities. Proc. Mach. Learn. Syst. 4, 795–813 (2022).

Acknowledgements

This material is based on work supported by DARPA under contracts HR00112190132, HR00112190133, HR00112190134, HR00112190135, HR00112190130 and HR00112190136. Any opinions, findings, conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of DARPA. The authors would like to thank B. Bertoldson, A. Carta, B. Clipp, N. Jennings, K. Stanley, C. Ekanadham, N. Ketz, M. Paravia, M. Petrescu, T. Senator and J. Steil for constructive discussions and comments on early versions of the manuscript.

Author information

Authors and Affiliations

  1. Computer Science Department, Loughborough University, Loughborough, UK

    Andrea Soltoggio, Eseoghene Ben-Iwhiwhu, Saptarshi Nath & Christos Peridis

  2. Computer Science Department, Rice University, Houston, TX, USA

    Vladimir Braverman, Michael A. Jacobs & Guangyao Zheng

  3. University of Pennsylvania, Philadelphia, PA, USA

    Eric Eaton, Long Le & Kyle Vedder

  4. ECS Federal, Arlington, VA, USA

    Benjamin Epstein

  5. Thomas Lord Department of Computer Science, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, USA

    Yunhao Ge & Laurent Itti

  6. Department of Aeronautics and Astronautics, Massachusetts Institute of Technology, Cambridge, MA, USA

    Lucy Halperin & Jonathan How

  7. Department of Diagnostic and Interventional Imaging, The University of Texas McGovern Medical School at Houston, Houston, TX, USA

    Michael A. Jacobs

  8. The Department of Radiology and Oncology, The Johns Hopkins University School of Medicine, Baltimore, MD, USA

    Michael A. Jacobs

  9. Smart Information Flow Technologies, Minneapolis, MN, USA

    Pavan Kantharaju & David Musliner

  10. Aurora Flight Sciences, Cambridge, MA, USA

    Steven Lee & Sildomar T. Monteiro

  11. Department of Computer Science, Vanderbilt University, Nashville, TN, USA

    Xinran Liu & Soheil Kolouri

  12. Massachusetts Institute of Technology, Cambridge, MA, USA

    Sildomar T. Monteiro

  13. Department of Electrical Engineering, Yale University, New Haven, CT, USA

    Priyadarshini Panda

  14. Department of Computer Science, University of California, Davis, Davis, CA, USA

    Hamed Pirsiavash

  15. Department of Diagnostic Radiology and Nuclear Medicine, University of Maryland School of Medicine, Baltimore, MD, USA

    Vishwa Parekh

  16. Department of Electrical and Computer Engineering, Purdue University, West Lafayette, IN, USA

    Kaushik Roy

  17. Department of Software and Information System Engineering, Ben-Gurion University, Beer Sheva, Israel

    Shahaf Shperberg

  18. University of Massachusetts, Amherst, Amherst, MA, USA

    Hava T. Siegelmann

  19. Department of Computer Science, The University of Texas at Austin, Austin, TX, USA

    Peter Stone

  20. Sony AI America, Sony AI, Austin, TX, USA

    Peter Stone

  21. Simons Institute, University of California, Berkeley, Berkeley, CA, USA

    Jingfeng Wu

  22. Department of Electrical and Computer Engineering, University of California, Los Angeles, Los Angeles, CA, USA

    Lin Yang

Contributions

All authors contributed with insights during brainstorming, ideas and writing the paper. A.S. conceived the main idea and led the integration of all contributions.

Corresponding author

Correspondence to Andrea Soltoggio.

Ethics declarations

Competing interests

P.S. serves as the executive director of Sony AI America and receives financial compensation for this work. The terms of this arrangement have been reviewed and approved by the University of Texas at Austin in accordance with its policy on objectivity in research. All other authors declare no competing interests.

Peer review

Peer review information

Nature Machine Intelligence thanks Senen Barro, Vincenzo Lomonaco, Xiaoying Tang, Gido van de Ven and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Supplementary Information

Supplementary Section 1: ShELL algorithms and their implementations. Supplementary Section 2: additional technical details on application scenarios and performance metrics.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Soltoggio, A., Ben-Iwhiwhu, E., Braverman, V. et al. A collective AI via lifelong learning and sharing at the edge. Nat. Mach. Intell. 6, 251–264 (2024). https://doi.org/10.1038/s42256-024-00800-2
