Language Evolution and Computation Bibliography

Our site (www.isrl.uiuc.edu/amag/langev) has retired; please use https://langev.com instead.
Jennifer Culbertson
2014
PNAS 111(16):5842-7, 2014
Although it is widely agreed that learning the syntax of natural languages involves acquiring structure-dependent rules, recent work on acquisition has nevertheless attempted to characterize the outcome of learning primarily in terms of statistical generalizations about surface distributional information. In this paper we investigate whether surface statistical knowledge or structural knowledge of English is used to infer properties of a novel language under conditions of impoverished input. We expose learners to artificial-language patterns that are equally consistent with two possible underlying grammars--one more similar to English in terms of the linear ordering of words, the other more similar on abstract structural grounds. We show that learners' grammatical inferences overwhelmingly favor structural similarity over preservation of superficial order. Importantly, the relevant shared structure can be characterized in terms of a universal preference for isomorphism in the mapping from meanings to utterances. Whereas previous empirical support for this universal has been based entirely on data from cross-linguistic language samples, our results suggest it may reflect a deep property of the human cognitive system--a property that, together with other structure-sensitive principles, constrains the acquisition of linguistic knowledge.
2012
Cognitive Science 36(8):1468-1498, 2012
In this article, we develop a hierarchical Bayesian model of learning in a general type of artificial language-learning experiment in which learners are exposed to a mixture of grammars representing the variation present in real learners’ input, particularly at times of language change. The modeling goal is to formalize and quantify hypothesized learning biases. The test case is an experiment (Culbertson, Smolensky, & Legendre, 2012) targeting the learning of word-order patterns in the nominal domain. The model identifies internal biases of the experimental participants, providing evidence that learners impose (possibly arbitrary) properties on the grammars they learn, potentially resulting in the cross-linguistic regularities known as typological universals. Learners exposed to mixtures of artificial grammars tended to shift those mixtures in certain ways rather than others; the model reveals how learners’ inferences are systematically affected by specific prior biases. These biases are in line with a typological generalization—Greenberg's Universal 18—which bans a particular word-order pattern relating nouns, adjectives, and numerals.
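
As a reading aid, here is a minimal, hypothetical Python sketch of the general idea this abstract describes: a Bayesian learner exposed to a mixture of two grammars, whose prior bias shifts the inferred mixture away from the raw input frequencies. This is not the authors' actual model (theirs is hierarchical and covers four competing word-order patterns); the conjugate Beta-Binomial setup, the function name, and all parameter values below are invented for illustration.

def posterior_mean_theta(n_a, n_b, alpha, beta):
    # Posterior mean of theta = P(grammar A) under a Beta(alpha, beta)
    # prior and a Binomial likelihood (standard conjugate update).
    return (alpha + n_a) / (alpha + beta + n_a + n_b)

# Input mixture: 70 utterances generated by grammar A, 30 by grammar B.
n_a, n_b = 70, 30

# A learner with a flat prior roughly reproduces the input proportion.
print(posterior_mean_theta(n_a, n_b, alpha=1, beta=1))   # ~0.70

# A learner whose prior favors grammar A shifts the inferred mixture
# further toward A than the input warrants; systematic shifts of this
# kind are what the paper uses to diagnose learners' internal biases.
print(posterior_mean_theta(n_a, n_b, alpha=20, beta=2))  # ~0.74

In the full hierarchical version, such prior parameters are themselves inferred from experimental participants' productions, quantifying biases that can disfavor typologically unattested patterns such as those banned by Greenberg's Universal 18.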
2011
Cognition, 2011
How recurrent typological patterns, or universals, emerge from the extensive diversity found across the world's languages constitutes a central question for linguistics and cognitive science. Recent challenges to a fundamental assumption of generative linguistics—that ...