Building machines that learn and think like people. Brenden M. Lake, Tomer D. Ullman, Joshua B. Tenenbaum & Samuel J. Gershman - 2017 - Behavioral and Brain Sciences 40. Recent progress in artificial intelligence has renewed interest in building systems that learn and think like people. Many advances have come from using deep neural networks trained end-to-end on tasks such as object recognition, video games, and board games, achieving performance that equals or even beats that of humans in some respects. Despite their biological inspiration and performance achievements, these systems differ from human intelligence in crucial ways. We review progress in cognitive science suggesting that truly human-like learning and thinking machines will have to reach beyond current engineering trends in both what they learn and how they learn it. Specifically, we argue that these machines should build causal models of the world that support explanation and understanding, rather than merely solving pattern recognition problems; ground learning in intuitive theories of physics and psychology to support and enrich the knowledge that is learned; and harness compositionality and learning-to-learn to rapidly acquire and generalize knowledge to new tasks and situations. We suggest concrete challenges and promising routes toward these goals that can combine the strengths of recent neural network advances with more structured cognitive models.
Generalization, similarity, and Bayesian inference. Joshua B. Tenenbaum & Thomas L. Griffiths - 2001 - Behavioral and Brain Sciences 24 (4):629-640. Shepard has argued that a universal law should govern generalization across different domains of perception and cognition, as well as across organisms from different species or even different planets. Starting with some basic assumptions about natural kinds, he derived an exponential decay function as the form of the universal generalization gradient, which accords strikingly well with a wide range of empirical data. However, his original formulation applied only to the ideal case of generalization from a single encountered stimulus to a single novel stimulus, and for stimuli that can be represented as points in a continuous metric psychological space. Here we recast Shepard's theory in a more general Bayesian framework and show how this naturally extends his approach to the more realistic situation of generalizing from multiple consequential stimuli with arbitrary representational structure. Our framework also subsumes a version of Tversky's set-theoretic model of similarity, which is conventionally thought of as the primary alternative to Shepard's continuous metric space model of similarity and generalization. This unification allows us not only to draw deep parallels between the set-theoretic and spatial approaches, but also to significantly advance the explanatory power of set-theoretic models. Key Words: additive clustering; Bayesian inference; categorization; concept learning; contrast model; features; generalization; psychological space; similarity.
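To make the core idea concrete, here is a minimal Python sketch of Bayesian generalization with the "size principle" the framework builds on. It is my own toy illustration, not code from the paper: hypotheses are integer intervals on a one-dimensional space (the paper handles arbitrary representational structures), and the likelihood penalizes larger consistent hypotheses.

```python
# Toy sketch of the Bayesian size principle: smaller hypotheses consistent
# with the observed examples get higher likelihood, and generalization to a
# novel probe averages over the posterior. Interval hypotheses and the
# 0..10 range are illustrative assumptions, not the paper's setup.
import itertools

def generalization(examples, probe, lo=0, hi=10):
    # Hypothesis space: all intervals [a, b] with integer endpoints.
    hyps = [(a, b) for a, b in itertools.product(range(lo, hi + 1), repeat=2)
            if a <= b]
    post = []
    for a, b in hyps:
        if all(a <= x <= b for x in examples):
            size = b - a + 1
            # Size principle: likelihood of n examples is (1/size)^n.
            post.append(((a, b), (1.0 / size) ** len(examples)))
        # Hypotheses inconsistent with any example get zero posterior.
    z = sum(p for _, p in post)
    # P(probe in concept | examples): posterior mass on hypotheses covering it.
    return sum(p for (a, b), p in post if a <= probe <= b) / z

# More tightly clustered examples yield a sharper, faster-decaying gradient.
print(generalization([5], 7))        # one example: broad generalization
print(generalization([4, 5, 6], 7))  # three examples: tighter gradient
```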
Inferring causal networks from observations and interventions. Mark Steyvers, Joshua B. Tenenbaum, Eric-Jan Wagenmakers & Ben Blum - 2003 - Cognitive Science 27 (3):453-489. Information about the structure of a causal system can come in the form of observational data—random samples of the system's autonomous behavior—or interventional data—samples conditioned on the particular values of one or more variables that have been experimentally manipulated. Here we study people's ability to infer causal structure from both observation and intervention, and to choose informative interventions on the basis of observational data. In three causal inference tasks, participants were to some degree capable of distinguishing between competing causal hypotheses on the basis of purely observational data. Performance improved substantially when participants were allowed to observe the effects of interventions that they performed on the systems. We develop computational models of how people infer causal structure from data and how they plan intervention experiments, based on the representational framework of causal graphical models and the inferential principles of optimal Bayesian decision‐making and maximizing expected information gain. These analyses suggest that people can make rational causal inferences, subject to psychologically reasonable representational assumptions and computationally reasonable processing constraints.
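A small sketch of why interventions are more informative than observations for structure learning, in the spirit of the paper's causal-graphical-model framework (my own illustration with made-up parameters): two binary-variable graphs, X->Y and Y->X, assign identical likelihoods to observational data when their parameters are matched, but the intervention do(X=1) severs incoming edges to X and breaks the tie.

```python
# Illustrative sketch (not the authors' code). Candidate graphs: "X->Y" and
# "Y->X", with P(root=1)=0.5 and P(child matches parent)=0.9 in both.

def lik_obs(data, graph):
    """Likelihood of observational (x, y) pairs under each graph."""
    p = 1.0
    for x, y in data:
        if graph == "X->Y":
            p *= 0.5 * (0.9 if y == x else 0.1)  # P(x) * P(y|x)
        else:  # "Y->X"
            p *= 0.5 * (0.9 if x == y else 0.1)  # P(y) * P(x|y)
    return p

def lik_do_x1(ys, graph):
    """Likelihood of y outcomes under the intervention do(X=1)."""
    p = 1.0
    for y in ys:
        if graph == "X->Y":
            p *= 0.9 if y == 1 else 0.1          # Y still listens to X
        else:
            p *= 0.5                             # edge into X is cut; Y is marginal
    return p

obs = [(1, 1), (0, 0), (1, 1), (0, 0)]
print(lik_obs(obs, "X->Y"), lik_obs(obs, "Y->X"))  # identical: no evidence
print(lik_do_x1([1, 1, 1, 1], "X->Y"),
      lik_do_x1([1, 1, 1, 1], "Y->X"))             # X->Y strongly favored
```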
One and Done? Optimal Decisions From Very Few Samples. Edward Vul, Noah Goodman, Thomas L. Griffiths & Joshua B. Tenenbaum - 2014 - Cognitive Science 38 (4):599-637. In many learning or inference tasks human behavior approximates that of a Bayesian ideal observer, suggesting that, at some level, cognition can be described as Bayesian inference. However, a number of findings have highlighted an intriguing mismatch between human behavior and standard assumptions about optimality: People often appear to make decisions based on just one or a few samples from the appropriate posterior probability distribution, rather than using the full distribution. Although sampling-based approximations are a common way to implement Bayesian inference, the very limited numbers of samples often used by humans seem insufficient to approximate the required probability distributions very accurately. Here, we consider this discrepancy in the broader framework of statistical decision theory, and ask: If people are making decisions based on samples—but as samples are costly—how many samples should people use to optimize their total expected or worst-case reward over a large number of decisions? We find that under reasonable assumptions about the time costs of sampling, making many quick but locally suboptimal decisions based on very few samples may be the globally optimal strategy over long periods. These results help to reconcile a large body of work showing sampling-based or probability matching behavior with the hypothesis that human cognition can be understood in Bayesian terms, and they suggest promising future directions for studies of resource-constrained cognition.
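The paper's central question is easy to simulate. Below is a hedged sketch (my own code and cost parameters, not the authors'): an agent decides between two options by drawing k posterior samples (the correct option is sampled with probability p) and taking a majority vote. More samples improve accuracy but cost time, so reward per unit time can peak at very small k.

```python
# Toy simulation of sample-cost tradeoffs. p, t_action, and t_sample are
# assumed values for illustration only.
import random

def reward_rate(k, p=0.7, t_action=1.0, t_sample=0.1, trials=20000):
    wins = 0
    for _ in range(trials):
        votes = sum(random.random() < p for _ in range(k))
        # Majority vote for the correct option; break ties at random.
        if votes * 2 > k or (votes * 2 == k and random.random() < 0.5):
            wins += 1
    accuracy = wins / trials
    return accuracy / (t_action + k * t_sample)  # expected reward per unit time

for k in (1, 3, 10, 100):
    print(k, round(reward_rate(k), 3))  # very small k often maximizes the rate
```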
The Large‐Scale Structure of Semantic Networks: Statistical Analyses and a Model of Semantic Growth. Mark Steyvers & Joshua B. Tenenbaum - 2005 - Cognitive Science 29 (1):41-78. We present statistical analyses of the large‐scale structure of 3 types of semantic networks: word associations, WordNet, and Roget's Thesaurus. We show that they have a small‐world structure, characterized by sparse connectivity, short average path lengths between words, and strong local clustering. In addition, the distributions of the number of connections follow power laws that indicate a scale‐free pattern of connectivity, with most nodes having relatively few connections joined together through a small number of hubs with many connections. These regularities have also been found in certain other complex natural networks, such as the World Wide Web, but they are not consistent with many conventional models of semantic organization, based on inheritance hierarchies, arbitrarily structured networks, or high‐dimensional vector spaces. We propose that these structures reflect the mechanisms by which semantic networks grow. We describe a simple model for semantic growth, in which each new word or concept is connected to an existing network by differentiating the connectivity pattern of an existing node. This model generates appropriate small‐world statistics and power‐law connectivity distributions, and it also suggests one possible mechanistic basis for the effects of learning history variables (age of acquisition, usage frequency) on behavioral performance in semantic processing tasks.
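A simplified sketch of the growth-by-differentiation idea follows (the seed size, choice rules, and parameter m here are my own simplifications, not the paper's exact model): each new node picks an existing node to "differentiate" and inherits connections to some of that node's neighbors. Because high-degree nodes are neighbors of many potential hosts, they keep attracting links, yielding the hub-dominated, heavy-tailed degree distributions reported for real semantic networks.

```python
# Simplified growth-by-differentiation model (illustrative assumptions).
import random
from collections import Counter

def grow(n_nodes=2000, m=3):
    # Start from a small fully connected seed network of m+1 nodes.
    nbrs = {i: {j for j in range(m + 1) if j != i} for i in range(m + 1)}
    for new in range(m + 1, n_nodes):
        host = random.choice(list(nbrs))  # existing node to differentiate
        targets = random.sample(sorted(nbrs[host]), min(m, len(nbrs[host])))
        nbrs[new] = set()
        for t in targets:                 # inherit links to host's neighbors
            nbrs[new].add(t)
            nbrs[t].add(new)
    return nbrs

net = grow()
degs = Counter(len(v) for v in net.values())
for d in sorted(degs)[:10]:
    print(d, degs[d])  # frequency falls off steeply with degree (heavy tail)
```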
A Rational Analysis of Rule‐Based Concept Learning. Noah D. Goodman, Joshua B. Tenenbaum, Jacob Feldman & Thomas L. Griffiths - 2008 - Cognitive Science 32 (1):108-154. This article proposes a new model of human concept learning that provides a rational analysis of learning feature‐based concepts. This model is built upon Bayesian inference for a grammatically structured hypothesis space—a concept language of logical rules. This article compares the model predictions to human generalization judgments in several well‐known category learning experiments, and finds good agreement for both average and individual participant generalizations. This article further investigates judgments for a broad set of 7‐feature concepts—a more natural setting in several ways—and again finds that the model explains human performance.
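A toy sketch of the general recipe (my own simplification; the paper's grammar, prior, and inference are much richer): hypotheses are conjunctions of binary feature tests, the prior favors shorter rules, and the likelihood allows a small outlier probability e for mislabeled examples.

```python
# Toy Bayesian rule learning over conjunctions of feature tests. The
# 3-feature space, exp(-length) prior, and e=0.1 are illustrative choices.
import itertools
import math

FEATURES = 3  # objects are tuples of 3 binary features

def rules():
    # A rule is a dict {feature_index: required_value}; {} accepts everything.
    for r in range(FEATURES + 1):
        for idxs in itertools.combinations(range(FEATURES), r):
            for vals in itertools.product([0, 1], repeat=r):
                yield dict(zip(idxs, vals))

def predict(data, probe, e=0.1):
    """Posterior probability that `probe` is a positive example."""
    num = den = 0.0
    for rule in rules():
        prior = math.exp(-len(rule))  # simplicity prior: shorter rules win
        lik = 1.0
        for x, label in data:
            inside = all(x[i] == v for i, v in rule.items())
            lik *= (1 - e) if inside == bool(label) else e
        post = prior * lik
        den += post
        if all(probe[i] == v for i, v in rule.items()):
            num += post
    return num / den

data = [((1, 1, 0), 1), ((1, 1, 1), 1), ((0, 0, 0), 0)]
print(predict(data, (1, 1, 1)))  # high: fits the rule "f0=1 and f1=1"
print(predict(data, (0, 1, 0)))  # substantially lower
```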
Resource-rational contractualism: A triple theory of moral cognition. Sydney Levine, Nick Chater, Joshua B. Tenenbaum & Fiery Cushman - forthcoming - Behavioral and Brain Sciences:1-38. It is widely agreed that morality guides people with conflicting interests toward agreements of mutual benefit. We might therefore expect numerous proposals for organizing human moral cognition around the logic of bargaining, negotiation, and agreement. Yet, while "contractualist" ideas play an important role in moral philosophy, they are starkly underrepresented in the field of moral psychology. From a contractualist perspective, ideal moral judgments are those that would be agreed to by rational bargaining agents—an idea with widespread support in philosophy, psychology, economics, biology, and cultural evolution. As a practical matter, however, investing time and effort in negotiating every interpersonal interaction is unfeasible. Instead, we propose, people use abstractions and heuristics to efficiently identify mutually beneficial arrangements. We argue that many well-studied elements of our moral minds, such as reasoning about others' utilities ("consequentialist" reasoning) or evaluating intrinsic ethical properties of certain actions ("deontological" reasoning), can be naturally understood as resource-rational approximations of a contractualist ideal. Moreover, this view explains the flexibility of our moral minds—how our moral rules and standards are created, updated, and overridden, and how we deal with novel cases we have never seen before. Thus, the apparently fragmentary nature of our moral psychology—commonly described in terms of systems in conflict—can be largely unified around the principle of finding mutually beneficial agreements under resource constraints. Our resulting "triple theory" of moral cognition naturally integrates contractualist, consequentialist, and deontological concerns.
Bayes and Blickets: Effects of Knowledge on Causal Induction in Children and Adults. Thomas L. Griffiths, David M. Sobel, Joshua B. Tenenbaum & Alison Gopnik - 2011 - Cognitive Science 35 (8):1407-1455. People are adept at inferring novel causal relations, even from only a few observations. Prior knowledge about the probability of encountering causal relations of various types and the nature of the mechanisms relating causes and effects plays a crucial role in these inferences. We test a formal account of how this knowledge can be used and acquired, based on analyzing causal induction as Bayesian inference. Five studies explored the predictions of this account with adults and 4-year-olds, using tasks in which participants learned about the causal properties of a set of objects. The studies varied the two factors that our Bayesian approach predicted should be relevant to causal induction: the prior probability with which causal relations exist, and the assumption of a deterministic or a probabilistic relation between cause and effect. Adults' judgments (Experiments 1, 2, and 4) were in close correspondence with the quantitative predictions of the model, and children's judgments (Experiments 3 and 5) agreed qualitatively with this account.
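A hedged sketch of the kind of computation this account involves (my own simplification of the "blicket detector" setup): each object is independently a blicket with some prior probability, the detector activates deterministically iff at least one blicket is on it, and we score the classic backward-blocking evidence, namely that A and B together activate the detector and then A alone activates it. Under this deterministic model, A fully explains the data, so the posterior for B returns to the prior, making the prior probability of causal relations directly visible in the judgment.

```python
# Posterior that object B is a blicket after backward-blocking evidence,
# assuming a deterministic "activates iff any blicket present" detector.
import itertools

def p_b_is_blicket(prior):
    num = den = 0.0
    for a, b in itertools.product([0, 1], repeat=2):  # blicket status of A, B
        p = (prior if a else 1 - prior) * (prior if b else 1 - prior)
        # Evidence: detector fires for {A, B} and for {A} alone.
        consistent = (a or b) and a
        if consistent:
            den += p
            if b:
                num += p
    return num / den

for prior in (0.1, 0.5, 0.9):
    print(prior, round(p_b_is_blicket(prior), 3))  # posterior tracks the prior
```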
The imaginary fundamentalists: The unshocking truth about Bayesian cognitive science. Nick Chater, Noah Goodman, Thomas L. Griffiths, Charles Kemp, Mike Oaksford & Joshua B. Tenenbaum - 2011 - Behavioral and Brain Sciences 34 (4):194-196. If Bayesian Fundamentalism existed, Jones & Love's (J&L's) arguments would provide a necessary corrective. But it does not. Bayesian cognitive science is deeply concerned with characterizing algorithms and representations, and, ultimately, implementations in neural circuits; it pays close attention to environmental structure and the constraints of behavioral data, when available; and it rigorously compares multiple models, both within and across papers. J&L's recommendation of Bayesian Enlightenment corresponds to past, present, and, we hope, future practice in Bayesian cognitive science.
Learning to Learn Causal Models. Charles Kemp, Noah D. Goodman & Joshua B. Tenenbaum - 2010 - Cognitive Science 34 (7):1185-1243. Learning to understand a single causal system can be an achievement, but humans must learn about multiple causal systems over the course of a lifetime. We present a hierarchical Bayesian framework that helps to explain how learning about several causal systems can accelerate learning about systems that are subsequently encountered. Given experience with a set of objects, our framework learns a causal model for each object and a causal schema that captures commonalities among these causal models. The schema organizes the objects into categories and specifies the causal powers and characteristic features of these categories and the characteristic causal interactions between categories. A schema of this kind allows causal models for subsequent objects to be rapidly learned, and we explore this accelerated learning in four experiments. Our results confirm that humans learn rapidly about the causal powers of novel objects, and we show that our framework accounts better for our data than alternative models of causal learning.
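A minimal hierarchical sketch of the acceleration effect (my own toy model, far simpler than the paper's schema framework): each object has a causal strength theta drawn near a category-level typical strength m. After fitting m on many previous objects, a single trial with a new object pins its strength down much faster than a flat prior would.

```python
# Hierarchical "learning to learn" toy: grid inference over a shared
# hyperparameter m and per-object strengths theta. All parameter values
# (grids, concentration, 9/10 successes) are illustrative assumptions.
import numpy as np

M = np.linspace(0.05, 0.95, 19)   # candidate typical strengths of the category
TH = np.linspace(0.05, 0.95, 19)  # candidate strengths for a single object

def p_theta_given_m(m, conc=20.0):
    w = np.exp(-conc * (TH - m) ** 2)  # theta clusters softly around m
    return w / w.sum()

def obj_lik(successes, trials, m):
    # P(object data | m) = sum_theta P(data | theta) P(theta | m)
    like = TH ** successes * (1 - TH) ** (trials - successes)
    return (like * p_theta_given_m(m)).sum()

# Ten previous objects, each causing the effect on 9 of 10 trials.
post_m = np.ones_like(M)
for _ in range(10):
    post_m *= np.array([obj_lik(9, 10, m) for m in M])
post_m /= post_m.sum()

# Predicted strength of a brand-new object after a single successful trial:
pred = 0.0
for m, pm in zip(M, post_m):
    pt = p_theta_given_m(m) * TH  # posterior over theta given one success
    pred += pm * (pt * TH).sum() / pt.sum()
print(round(pred, 3))             # pulled up toward the schema's ~0.9

# Compare with a flat (non-hierarchical) prior over theta:
flat = TH / TH.sum()              # posterior after one success is prop. to theta
print(round((flat * TH).sum(), 3))  # well below: far more uncertain
```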
Too Many Cooks: Bayesian Inference for Coordinating Multi‐Agent Collaboration. Sarah A. Wu, Rose E. Wang, James A. Evans, Joshua B. Tenenbaum, David C. Parkes & Max Kleiman-Weiner - 2021 - Topics in Cognitive Science 13 (2):414-432. Collaboration requires agents to coordinate their behavior on the fly, sometimes cooperating to solve a single task together and other times dividing it up into sub‐tasks to work on in parallel. Underlying the human ability to collaborate is theory‐of‐mind (ToM), the ability to infer the hidden mental states that drive others to act. Here, we develop Bayesian Delegation, a decentralized multi‐agent learning mechanism with these abilities. Bayesian Delegation enables agents to rapidly infer the hidden intentions of others by inverse planning. We test Bayesian Delegation in a suite of multi‐agent Markov decision processes inspired by cooking problems. On these tasks, agents with Bayesian Delegation coordinate both their high‐level plans (e.g., what sub‐task they should work on) and their low‐level actions (e.g., avoiding getting in each other's way). When matched with partners that act using the same algorithm, Bayesian Delegation outperforms alternatives. Bayesian Delegation is also a capable ad hoc collaborator and successfully coordinates with other agent types even in the absence of prior experience. Finally, in a behavioral experiment, we show that Bayesian Delegation makes inferences similar to human observers about the intent of others. Together, these results argue for the centrality of ToM for successful decentralized multi‐agent collaboration.
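A toy sketch of the inverse-planning step (my own simplification; the sub-task names, actions, and softmax progress values below are hypothetical): a partner is working on one of several sub-tasks, each sub-task makes certain observed actions more likely, and Bayes' rule over the partner's moves reveals which sub-task they hold, so we can pick a complementary one.

```python
# Inferring a partner's sub-task from observed actions, then delegating.
import math

SUBTASKS = ["chop_tomato", "cook_soup", "plate_dish"]
ACTIONS = ["goto_tomato", "goto_stove", "goto_counter"]

def action_lik(action, subtask, beta=2.0):
    # Hypothetical progress values: how much each action advances each sub-task.
    progress = {
        ("goto_tomato", "chop_tomato"): 1.0,
        ("goto_stove", "cook_soup"): 1.0,
        ("goto_counter", "plate_dish"): 1.0,
    }
    util = lambda a: progress.get((a, subtask), 0.0)
    z = sum(math.exp(beta * util(a)) for a in ACTIONS)
    return math.exp(beta * util(action)) / z  # softmax-rational partner

posterior = {s: 1 / len(SUBTASKS) for s in SUBTASKS}
for obs in ["goto_tomato", "goto_tomato"]:  # partner's observed moves
    posterior = {s: p * action_lik(obs, s) for s, p in posterior.items()}
    z = sum(posterior.values())
    posterior = {s: p / z for s, p in posterior.items()}

print(posterior)  # most mass on "chop_tomato"
partner_task = max(posterior, key=posterior.get)
my_task = next(s for s in SUBTASKS if s != partner_task)
print("I should work on:", my_task)  # take a complementary sub-task
```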
Dynamical Causal Learning. David Danks, Thomas L. Griffiths & Joshua B. Tenenbaum - unknown. Current psychological theories of human causal learning and judgment focus primarily on long-run predictions: two by estimating the parameters of a causal Bayes net, and a third through structural learning. This paper focuses on people's short-run behavior by examining dynamical versions of these three theories and comparing their predictions to a real-world dataset.
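A minimal sketch of what a trial-by-trial ("dynamical") causal learner looks like, in the spirit the abstract describes (my own illustration, not one of the paper's three models): a Rescorla-Wagner-style delta rule updates an estimated causal strength after every trial, so the model makes a short-run prediction at each point rather than only a long-run asymptotic judgment.

```python
# Delta-rule update of causal strength; the learning rate and trial
# sequence are illustrative assumptions.
def rescorla_wagner(trials, lr=0.2):
    w = 0.0
    history = []
    for cause_present, effect in trials:
        if cause_present:
            w += lr * (effect - w)  # prediction error drives the update
        history.append(w)
    return history

# The cause produces the effect on 8 of 10 trials when present:
trials = [(1, e) for e in [1, 1, 0, 1, 1, 1, 0, 1, 1, 1]]
for t, w in enumerate(rescorla_wagner(trials), 1):
    print(t, round(w, 3))  # estimate climbs trial by trial toward ~0.8
```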