Gary Marcus argued, "We cannot construct rich cognitive models in an adequate, automated way without the triumvirate of hybrid architecture, rich prior knowledge, and sophisticated techniques for reasoning."[6] Further, "To build a robust, knowledge-driven approach to AI we must have the machinery of symbol manipulation in our toolkit. Too much of useful knowledge is abstract to make do without tools that represent and manipulate abstraction, and to date, the only known machinery that can manipulate such abstract knowledge reliably is the apparatus of symbol manipulation."[7]
Angelo Dalli,[8] Henry Kautz,[9] Francesca Rossi,[10] and Bart Selman[11] also argued for such a synthesis. Their arguments attempt to address the two kinds of thinking discussed in Daniel Kahneman's book Thinking, Fast and Slow, which describes cognition as encompassing two components: System 1 is fast, reflexive, intuitive, and unconscious; System 2 is slower, step-by-step, and explicit. System 1 is used for pattern recognition, while System 2 handles planning, deduction, and deliberative thinking. In this view, deep learning best handles the first kind of cognition and symbolic reasoning best handles the second. Both are necessary for a robust, reliable AI system that can learn, reason, and interact with humans to accept advice and answer questions. Since the 1990s, dual-process models with explicit references to the two contrasting systems have been the focus of research by numerous researchers in both AI and cognitive science.[12]
In 2025, the adoption of neurosymbolic AI, an approach that integrates neural networks with symbolic reasoning, increased in response to the need to address hallucination issues in large language models.[13][14] For example, Amazon implemented neurosymbolic AI in its Vulcan warehouse robots and Rufus shopping assistant to enhance accuracy and decision-making.[15]
Approaches for integration are diverse.[16] Henry Kautz's taxonomy of neuro-symbolic architectures[17] follows, along with some examples:
Symbolic Neural symbolic is the current approach of many neural models in natural language processing, where words or subword tokens are the ultimate input and output of large language models. Examples include BERT, RoBERTa, and GPT-3.
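The symbolic-in, symbolic-out shape of this pattern can be sketched with a toy vocabulary and round-trip encoding; the lookup "model" here is a minimal stand-in for a real tokenizer and network such as those used by BERT or GPT-3, and all names in the sketch are illustrative.

```python
# Toy sketch of the "Symbolic Neural symbolic" pattern: symbols (tokens) go in,
# a neural model operates on their integer ids, and symbols come back out.
# No actual network is involved; this only shows the symbolic boundary.

def build_vocab(corpus):
    """Assign an integer id to every distinct token (the symbolic layer)."""
    return {tok: i for i, tok in enumerate(sorted(set(corpus.split())))}

def encode(text, vocab):
    """Symbolic tokens -> numeric ids (what the network actually consumes)."""
    return [vocab[tok] for tok in text.split()]

def decode(ids, vocab):
    """Numeric ids -> symbolic tokens (what the user actually reads)."""
    inv = {i: tok for tok, i in vocab.items()}
    return " ".join(inv[i] for i in ids)

corpus = "the cat sat on the mat"
vocab = build_vocab(corpus)
ids = encode("the cat", vocab)    # symbolic -> numeric
text = decode(ids, vocab)         # numeric -> symbolic, round-trips the input
```

In a real system the step between `encode` and `decode` is a learned transformer operating on continuous embeddings of these ids.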
Symbolic[Neural] is exemplified by AlphaGo, where symbolic techniques are used to invoke neural techniques. In this case, the symbolic approach is Monte Carlo tree search and the neural techniques learn how to evaluate game positions.
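A minimal sketch of this control flow, assuming a toy game: a symbolic search procedure (a tiny depth-limited negamax here, standing in for Monte Carlo tree search) calls a "neural" evaluator at the leaves. The evaluator below is a deterministic stub; in AlphaGo it would be a trained value network.

```python
import random

def neural_value(position):
    """Stub value network: deterministic pseudo-score in [-1, 1].
    A real Symbolic[Neural] system would call a trained network here."""
    rng = random.Random(position)   # seeded per position for reproducibility
    return rng.uniform(-1, 1)

def legal_moves(position):
    """Stub move generator: three symbolic successor positions."""
    return [position + m for m in ("a", "b", "c")]

def search(position, depth):
    """Symbolic control: explore the game tree, scoring leaves neurally."""
    if depth == 0:
        return neural_value(position)
    # Negamax: the opponent's best reply is our worst outcome.
    return max(-search(p, depth - 1) for p in legal_moves(position))

# Pick the first move whose subtree scores best for us.
best = max(legal_moves(""), key=lambda p: -search(p, 1))
```

The essential point is the call structure: the symbolic layer decides *where* to look, and the neural layer decides *how good* what it finds is.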
Neural | Symbolic uses a neural architecture to interpret perceptual data as symbols and relationships that are then reasoned about symbolically. The Neuro-Symbolic Concept Learner[18] is an example.
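The division of labor can be sketched as follows, loosely modeled on the Neuro-Symbolic Concept Learner's scene-question setting; the perception step is stubbed with a lookup rather than a trained network, and the inputs, symbols, and query are all illustrative.

```python
# Sketch of the Neural | Symbolic pattern: a perception module maps raw
# inputs to discrete symbols, and a symbolic program then reasons over
# those symbols to answer a query.

def perceive(pixels):
    """Stub perception: map a raw input to (color, shape) symbols.
    In a real system this would be a trained neural classifier."""
    palette = {0: "red", 1: "blue"}
    shapes = {0: "cube", 1: "sphere"}
    return palette[pixels[0]], shapes[pixels[1]]

def count(scene, color, shape):
    """Symbolic reasoning: evaluate a counting query over extracted symbols."""
    symbols = [perceive(obj) for obj in scene]
    return sum(1 for s in symbols if s == (color, shape))

scene = [(0, 0), (0, 1), (1, 0)]        # raw "images" of three objects
answer = count(scene, "red", "cube")    # -> 1
```

Because reasoning happens over symbols, the query program can be inspected and changed without retraining the perception module.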
Neural: Symbolic → Neural relies on symbolic reasoning to generate or label training data that is subsequently learned by a deep learning model, e.g., to train a neural model for symbolic computation by using a Macsyma-like symbolic mathematics system to create or label examples.
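A sketch of the data-generation side, assuming a deliberately tiny symbolic system: rule-based differentiation of polynomials stands in for a Macsyma-like engine, and the neural training step it would feed is omitted.

```python
import random

def differentiate(coeffs):
    """Symbolic rule: d/dx of sum(c_i * x**i) is sum(i * c_i * x**(i-1)).
    Polynomials are coefficient lists, lowest degree first."""
    return [i * c for i, c in enumerate(coeffs)][1:] or [0]

def make_dataset(n, seed=0):
    """Label random polynomials with their symbolically computed derivative,
    producing (input, target) pairs a neural model could be trained on."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        poly = [rng.randint(-5, 5) for _ in range(rng.randint(1, 4))]
        data.append((poly, differentiate(poly)))
    return data

dataset = make_dataset(100)   # supervision generated entirely symbolically
```

The symbolic engine plays the role of an infinite, perfectly accurate labeler, which is exactly why this pattern is attractive for training neural models on formal tasks.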
Neural_{Symbolic} uses a neural network that is generated from symbolic rules. An example is the Neural Theorem Prover,[19] which constructs a neural network from an AND-OR proof tree generated from knowledge-base rules and terms. Logic Tensor Networks[20] also fall into this category.
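The core idea behind Logic Tensor Networks can be sketched with fuzzy connectives: logical operators become real-valued, differentiable operations (product t-norm and its dual below), so a formula compiles into a function whose truth degree could be optimized by gradient descent. The predicate names and grounding values are illustrative constants, not learned embeddings.

```python
# Sketch of compiling a logical formula into differentiable arithmetic,
# in the spirit of Logic Tensor Networks. Truth values live in [0, 1].

def t_and(a, b):
    return a * b                 # product t-norm for conjunction

def t_or(a, b):
    return a + b - a * b         # probabilistic sum for disjunction

def t_implies(a, b):
    return t_or(1.0 - a, b)      # material implication via NOT/OR

# Degree of truth of (smoker(x) -> cancer(x)) AND friend(x, y),
# with illustrative groundings for the atomic predicates.
smoker, cancer, friend = 0.9, 0.3, 0.8
truth = t_and(t_implies(smoker, cancer), friend)
```

Because every operation is smooth, a learner can adjust the atomic truth values (or, in a real LTN, the embeddings that produce them) to make a knowledge base maximally satisfied.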
Neural[Symbolic], according to Kautz, embeds true symbolic reasoning inside a neural network. These are tightly coupled neural-symbolic systems in which the logical inference rules are internal to the neural network: the network internally computes the inference from the premises and learns to reason according to a logical inference system. Early work on connectionist modal and temporal logics by Garcez, Lamb, and Gabbay[21] is aligned with this approach.
These categories are not exhaustive, as they do not consider multi-agent systems. In 2005, Bader and Hitzler presented a more fine-grained categorization that took into account, e.g., whether the use of symbols included logic and, if so, whether the logic was propositional or first-order.[22] The 2005 categorization and Kautz's taxonomy above are compared and contrasted in a 2021 article.[17] Sepp Hochreiter argued that graph neural networks "...are the predominant models of neural-symbolic computing"[23] since "[t]hey describe the properties of molecules, simulate social networks, or predict future states in physical and engineering applications with particle-particle interactions."[24]
Gary Marcus argues that "...hybrid architectures that combine learning and symbol manipulation are necessary for robust intelligence, but not sufficient",[25] and that there are
...four cognitive prerequisites for building robust artificial intelligence:
hybrid architectures that combine large-scale learning with the representational and computational powers of symbol manipulation,
large-scale knowledge bases—likely leveraging innate frameworks—that incorporate symbolic knowledge along with other forms of knowledge,
reasoning mechanisms capable of leveraging those knowledge bases in tractable ways, and
A series of workshops on neuro-symbolic AI, Neuro-Symbolic Artificial Intelligence, has been held annually since 2005.[31] An initial set of workshops on the topic was organized in the early 1990s.[27]
Implementations of neuro-symbolic approaches include:
AllegroGraph: an integrated knowledge-graph platform for developing neuro-symbolic applications.[33][34][35]
Scallop: a language based on Datalog that supports differentiable logical and relational reasoning. Scallop can be integrated in Python and with a PyTorch learning module.[36]
Logic Tensor Networks: encode logical formulas as neural networks and simultaneously learn term encodings, term weights, and formula weights.
DeepProbLog: combines neural networks with the probabilistic reasoning of ProbLog.
Abductive Learning: integrates machine learning and logical reasoning in a balanced loop via abductive reasoning, enabling the two to work together in a mutually beneficial way.[37]
SymbolicAI: a compositional differentiable programming library.
Explainable Neural Networks (XNNs): combine neural networks with symbolic hypergraphs and are trained using a mixture of backpropagation and a symbolic learning method called induction.[38]
^ Garcez, Artur (30 May 2025). "Neurosymbolic AI is the answer to large language models' inability to stop hallucinating". The Conversation. doi:10.64628/AB.5gpku36ct.
^ Jones, Nicola (2025). "How good old-fashioned AI could spark the field's next revolution". Nature (News feature). 647: 842–844. doi:10.1038/d41586-025-03856-1.
^ Serafini, Luciano; Garcez, Artur d'Avila (2016). "Logic Tensor Networks: Deep Learning and Logical Reasoning from Data and Knowledge". arXiv:1606.04422 [cs.AI].
^ D'Avila Garcez, Artur S.; Lamb, Luis C.; Gabbay, Dov M. (2009). Neural-Symbolic Cognitive Reasoning. Cognitive Technologies. Springer. ISBN 978-3-540-73245-7.
^ Li, Ziyang; Huang, Jiani; Naik, Mayur (2023). "Scallop: A Language for Neurosymbolic Programming". arXiv:2304.04812 [cs.PL].
^ Zhou, Zhi-Hua; Huang, Yu-Xuan (2022). "Abductive Learning" (PDF). In P. Hitzler and M. K. Sarker, eds., Neuro-Symbolic Artificial Intelligence: The State of the Art. IOS Press. pp. 353–379.
Bader, Sebastian; Hitzler, Pascal (2005). "Dimensions of Neural-symbolic Integration – A Structured Survey". arXiv:cs/0511042.
Garcez, Artur S. d'Avila; Broda, Krysia; Gabbay, Dov M. (2002). Neural-Symbolic Learning Systems: Foundations and Applications. Springer Science & Business Media. ISBN 978-1-85233-512-0.
Garcez, Artur d'Avila; Gori, Marco; Lamb, Luis C.; Serafini, Luciano; Spranger, Michael; Tran, Son N. (2019). "Neural-Symbolic Computing: An Effective Methodology for Principled Integration of Machine Learning and Reasoning". arXiv:1905.06088 [cs.AI].
Garcez, Artur d'Avila; Lamb, Luis C. (2020). "Neurosymbolic AI: The 3rd Wave". arXiv:2012.05876 [cs.AI].
Hochreiter, Sepp (2022). "Toward a Broad AI". Communications of the ACM. 65 (4): 56–57.
Honavar, Vasant (1995). "Symbolic Artificial Intelligence and Numeric Artificial Neural Networks: Towards a Resolution of the Dichotomy". The Springer International Series in Engineering and Computer Science. Springer US. pp. 351–388. doi:10.1007/978-0-585-29599-2_11.
Serafini, Luciano; Garcez, Artur d'Avila (2016). "Logic Tensor Networks: Deep Learning and Logical Reasoning from Data and Knowledge". arXiv:1606.04422 [cs.AI].
Sun, Ron (1995). "Robust reasoning: Integrating rule-based and similarity-based reasoning". Artificial Intelligence. 75 (2): 241–296. doi:10.1016/0004-3702(94)00028-Y.
Sun, Ron; Bookman, Lawrence (1994). Computational Architectures Integrating Neural and Symbolic Processes. Kluwer.
Sun, Ron; Alexandre, Frederic (1997). Connectionist-Symbolic Integration. Lawrence Erlbaum Associates.
Sun, Ron (2001). "Hybrid systems and connectionist implementationalism". Encyclopedia of Cognitive Science. Macmillan.