Are Linguists Better Subjects? Jennifer Culbertson & Steven Gross (2009). British Journal for the Philosophy of Science 60(4): 721-736.
Who are the best subjects for judgment tasks intended to test grammatical hypotheses? Michael Devitt ([2006a], [2006b]) argues, on the basis of a hypothesis concerning the psychology of such judgments, that linguists themselves are. We present empirical evidence suggesting that the relevant divide is not between linguists and non-linguists, but between subjects with and without minimally sufficient task-specific knowledge. In particular, we show that subjects with at least some minimal exposure to or knowledge of such tasks tend to perform consistently with one another—greater knowledge of linguistics makes no further difference—while at the same time exhibiting markedly greater in-group consistency than those who have no previous exposure to or knowledge of such tasks and their goals.
Revisited Linguistic Intuitions. Jennifer Culbertson & Steven Gross (2011). British Journal for the Philosophy of Science 62(3): 639-656.
Michael Devitt ([2006a], [2006b]) argues that, insofar as linguists possess better theories about language than non-linguists, their linguistic intuitions are more reliable. Culbertson and Gross ([2009]) presented empirical evidence contrary to this claim. Devitt ([2010]) replies that, in part because we overemphasize the distinction between acceptability and grammaticality, we misunderstand linguists' claims, fall into inconsistency, and fail to see how our empirical results can be squared with his position. We reply in this note. Inter alia, we argue that Devitt's focus on grammaticality intuitions, rather than acceptability intuitions, distances his discussion from actual linguistic practice. We close by questioning a demand that drives his discussion—viz., that, for linguistic intuitions to supply evidence for linguistic theorizing, a better account of why they are evidence is required.
Cognitive Biases, Linguistic Universals, and Constraint-Based Grammar Learning. Jennifer Culbertson, Paul Smolensky & Colin Wilson (2013). Topics in Cognitive Science 5(3): 392-424.
According to classical arguments, language learning is both facilitated and constrained by cognitive biases. These biases are reflected in linguistic typology—the distribution of linguistic patterns across the world's languages—and can be probed with artificial grammar experiments on child and adult learners. Beginning with a widely successful approach to typology (Optimality Theory), and adapting techniques from computational approaches to statistical learning, we develop a Bayesian model of cognitive biases and show that it accounts for the detailed pattern of results of artificial grammar experiments on noun-phrase word order (Culbertson, Smolensky, & Legendre, 2012). Our proposal has several novel properties that distinguish it from prior work in the domains of linguistic theory, computational cognitive science, and machine learning. This study illustrates how ideas from these domains can be synthesized into a model of language learning in which biases range in strength from hard (absolute) to soft (statistical), and in which language-specific and domain-general biases combine to account for data from the macro-level scale of typological distribution to the micro-level scale of learning by individuals.
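A minimal sketch of the general modeling idea described above, assuming a toy maximum-entropy (log-linear) grammar over four noun-phrase orders with a Gaussian prior on constraint weights. The constraint names, violation profiles, weights, and counts are illustrative assumptions, not the paper's actual model or data.

```python
import math

ORDERS = ["Adj-N & Num-N", "N-Adj & N-Num", "N-Adj & Num-N", "Adj-N & N-Num"]

# Illustrative violation counts for two placeholder constraints:
# "NounLast" penalizes each modifier that follows the noun;
# "Harmony" penalizes placing the two modifiers on different sides of the noun.
VIOLATIONS = {
    "Adj-N & Num-N": {"NounLast": 0, "Harmony": 0},
    "N-Adj & N-Num": {"NounLast": 2, "Harmony": 0},
    "N-Adj & Num-N": {"NounLast": 1, "Harmony": 1},
    "Adj-N & N-Num": {"NounLast": 1, "Harmony": 1},
}

def maxent_probs(weights):
    """P(order) is proportional to exp(-sum of weighted constraint violations)."""
    scores = {o: math.exp(-sum(weights[c] * v for c, v in VIOLATIONS[o].items()))
              for o in ORDERS}
    z = sum(scores.values())
    return {o: s / z for o, s in scores.items()}

def log_posterior(weights, counts, prior_mu=0.0, prior_sigma=1.0):
    """Multinomial log-likelihood of the observed order counts plus a Gaussian
    log-prior on the constraint weights (up to an additive constant).

    A very small prior_sigma pins a weight near prior_mu, approximating a hard
    (absolute) bias; a large prior_sigma leaves the bias soft and data-driven.
    """
    probs = maxent_probs(weights)
    ll = sum(n * math.log(probs[o]) for o, n in counts.items())
    lp = sum(-((w - prior_mu) ** 2) / (2 * prior_sigma ** 2) for w in weights.values())
    return ll + lp

# Hypothetical learner input: a mixture of orders dominated by one pattern.
counts = {"Adj-N & Num-N": 12, "N-Adj & N-Num": 4, "N-Adj & Num-N": 1, "Adj-N & N-Num": 3}
print(log_posterior({"NounLast": 0.5, "Harmony": 2.0}, counts))
```

In a setup like this, varying prior_sigma per constraint is one simple way to let biases range from effectively hard to soft, which is the qualitative property the abstract highlights.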
A Bayesian Model of Biases in Artificial Language Learning: The Case of a Word-Order Universal. Jennifer Culbertson & Paul Smolensky (2012). Cognitive Science 36(8): 1468-1498.
In this article, we develop a hierarchical Bayesian model of learning in a general type of artificial language-learning experiment in which learners are exposed to a mixture of grammars representing the variation present in real learners' input, particularly at times of language change. The modeling goal is to formalize and quantify hypothesized learning biases. The test case is an experiment (Culbertson, Smolensky, & Legendre, 2012) targeting the learning of word-order patterns in the nominal domain. The model identifies internal biases of the experimental participants, providing evidence that learners impose (possibly arbitrary) properties on the grammars they learn, potentially resulting in the cross-linguistic regularities known as typological universals. Learners exposed to mixtures of artificial grammars tended to shift those mixtures in certain ways rather than others; the model reveals how learners' inferences are systematically affected by specific prior biases. These biases are in line with a typological generalization—Greenberg's Universal 18—which bans a particular word-order pattern relating nouns, adjectives, and numerals.
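A hedged, illustrative sketch of the qualitative behavior described above: a learner combines a Dirichlet prior over four nominal word-order patterns with observed counts, so an asymmetric prior shifts the inferred mixture away from a dispreferred pattern. The pattern labels, pseudo-counts, and input frequencies are assumptions for illustration, not the paper's hierarchical model or experimental data.

```python
PATTERNS = ["Adj-N & Num-N", "N-Adj & N-Num", "N-Adj & Num-N", "Adj-N & N-Num"]

# Prior pseudo-counts: the last pattern gets the smallest value, standing in for
# a dispreferred word-order combination (illustrative values only).
prior_alpha = [3.0, 3.0, 2.0, 0.5]

# Hypothetical training input: a noisy mixture of patterns.
observed = [12, 4, 3, 5]

# With a Dirichlet prior and multinomial data, the posterior over mixture
# proportions is Dirichlet(prior + counts); its mean shows how the learner's
# inferred grammar shifts the input proportions toward the prior.
posterior_alpha = [a + n for a, n in zip(prior_alpha, observed)]
total_post = sum(posterior_alpha)
total_obs = sum(observed)

for pattern, n, a in zip(PATTERNS, observed, posterior_alpha):
    print(f"{pattern:15s}  input {n / total_obs:.2f}  ->  posterior mean {a / total_post:.2f}")
```

The asymmetry in prior_alpha is the formal stand-in here for a learning bias: patterns with low prior pseudo-counts lose probability mass relative to their frequency in the input.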
Predictability and Variation in Language Are Differentially Affected by Learning and Production. Aislinn Keogh, Simon Kirby & Jennifer Culbertson (2024). Cognitive Science 48(4): e13435.
General principles of human cognition can help to explain why languages are more likely to have certain characteristics than others: structures that are difficult to process or produce will tend to be lost over time. One aspect of cognition that is implicated in language use is working memory—the component of short-term memory used for temporary storage and manipulation of information. In this study, we consider the relationship between working memory and regularization of linguistic variation. Regularization is a well-documented process whereby languages become less variable (on some dimension) over time. This process has been argued to be driven by the behavior of individual language users, but the specific mechanism is not agreed upon. Here, we use an artificial language learning experiment to investigate whether limitations in working memory during either language learning or language production drive regularization behavior. We find that taxing working memory during production results in the loss of all types of variation, but the process by which random variation becomes more predictable is better explained by learning biases. A computational model offers a potential explanation for the production effect using a simple self-priming mechanism.
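A toy simulation, under assumed parameters, of the kind of self-priming production mechanism the abstract mentions: each produced variant gets a boost to its future production probability, so individual runs drift toward one variant and within-run variation drops. The variant names, boost size, and trial counts are hypothetical, not the paper's model settings.

```python
import random

random.seed(1)

def produce(n_trials=100, boost=0.5):
    """Simulate n_trials productions from two variants with self-priming:
    each production adds `boost` to the produced variant's weight."""
    weights = {"variant_a": 1.0, "variant_b": 1.0}
    outputs = []
    for _ in range(n_trials):
        total = sum(weights.values())
        choice = ("variant_a"
                  if random.random() * total < weights["variant_a"]
                  else "variant_b")
        outputs.append(choice)
        weights[choice] += boost  # self-priming: producing a form makes it more likely
    return outputs

def majority_share(run, window=20):
    """Share of the more frequent variant in the last `window` productions."""
    tail = run[-window:]
    a = tail.count("variant_a")
    return max(a, window - a) / window

primed = [produce(boost=0.5) for _ in range(200)]
unprimed = [produce(boost=0.0) for _ in range(200)]
print("mean majority share with self-priming:   ",
      round(sum(majority_share(r) for r in primed) / len(primed), 2))
print("mean majority share without self-priming:",
      round(sum(majority_share(r) for r in unprimed) / len(unprimed), 2))
# With priming, individual runs drift toward one variant (less within-run variation),
# even though neither variant is favoured on average across runs.
```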
Evaluating the Relative Importance of Wordhood Cues Using Statistical Learning. Elizabeth Pankratz, Simon Kirby & Jennifer Culbertson (2024). Cognitive Science 48(3): e13429.
Identifying wordlike units in language is typically done by applying a battery of criteria, though how to weight these criteria with respect to one another is currently unknown. We address this question by investigating whether certain criteria are also used as cues for learning an artificial language—if they are, then perhaps they can be relied on more as trustworthy top-down diagnostics. The two criteria for grammatical wordhood that we consider are a unit's free mobility and its internal immutability. These criteria also map to two cognitive mechanisms that could underlie successful statistical learning: learners might orient themselves around the low transitional probabilities at unit boundaries, or they might seek chunks with high internal transitional probabilities. We find that each criterion has its own facilitatory effect, and learning is best where they both align. This supports the battery-of-criteria approach to diagnosing wordhood, and also suggests that the mechanism behind statistical learning may not be a question of either/or; perhaps the two mechanisms do not compete, but mutually reinforce one another.
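A small illustrative sketch of the two statistical-learning mechanisms contrasted above: segmenting a continuous syllable stream at low transitional probabilities (TPs) versus collecting chunks with high internal TPs. The mini-lexicon, stream length, and threshold are assumptions, not the experimental materials.

```python
from collections import Counter
import random

random.seed(0)

WORDS = ["tupiro", "golabu", "bidaku"]  # hypothetical trisyllabic 'words'

def syllabify(word):
    return [word[i:i + 2] for i in range(0, len(word), 2)]

# Build a continuous syllable stream from randomly ordered words (no pauses).
stream = [syl for _ in range(300) for syl in syllabify(random.choice(WORDS))]

# Forward transitional probability: P(next syllable | current syllable).
bigrams = Counter(zip(stream, stream[1:]))
unigrams = Counter(stream[:-1])
tp = {(a, b): count / unigrams[a] for (a, b), count in bigrams.items()}

THRESHOLD = 0.5

# Boundary-based strategy: posit a word boundary wherever the TP dips below threshold.
segments, current = [], [stream[0]]
for a, b in zip(stream, stream[1:]):
    if tp[(a, b)] < THRESHOLD:
        segments.append("".join(current))
        current = []
    current.append(b)
segments.append("".join(current))

# Chunk-based strategy: collect syllable pairs whose internal TP is high.
chunks = sorted({a + b for (a, b), p in tp.items() if p > THRESHOLD})

print("Most frequent recovered segments:", Counter(segments).most_common(3))
print("High-TP chunks:", chunks)
```

In this toy stream, within-word TPs are high and cross-boundary TPs low, so the two strategies converge; the interesting experimental question is how learners weight them when the cues are pulled apart.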
Autistic Traits, Communicative Efficiency, and Social Biases Shape Language Learning in Autistic and Allistic Learners. Lauren Fletcher, Hugh Rabagliati & Jennifer Culbertson (2024). Cognitive Science 48(11): e70007.
There is ample evidence that individual-level cognitive mechanisms active during language learning and use can contribute to the evolution of language. For example, experimental work suggests that learners will reduce case marking in a language where grammatical roles are reliably indicated by fixed word order, a correlation found robustly in the languages of the world. However, such research often assumes homogeneity among language learners and users, or at least does not dig into individual differences in behavior. Yet, it is increasingly clear that language users vary in a large number of ways: in culture, in demographics, and—critically for present purposes—in terms of cognitive diversity. Here, we explore how neurodiversity impacts behavior in an experimental task similar to the one summarized above, and how this behavior interacts with social pressures. We find both similarities and differences between autistic and nonautistic English-speaking individuals, suggesting that neurodiversity can impact language change in the lab. This, in turn, highlights the potential for future research on the role of neurodivergent populations in language evolution more generally.