Language Evolution and Computation Bibliography

Our site (www.isrl.uiuc.edu/amag/langev) has been retired; please use https://langev.com instead.
Paul Smolensky
2012
Cognitive Science 36(8):1468-1498, 2012
In this article, we develop a hierarchical Bayesian model of learning in a general type of artificial language-learning experiment in which learners are exposed to a mixture of grammars representing the variation present in real learners’ input, particularly at times of language change. The modeling goal is to formalize and quantify hypothesized learning biases. The test case is an experiment (Culbertson, Smolensky, & Legendre, 2012) targeting the learning of word-order patterns in the nominal domain. The model identifies internal biases of the experimental participants, providing evidence that learners impose (possibly arbitrary) properties on the grammars they learn, potentially resulting in the cross-linguistic regularities known as typological universals. Learners exposed to mixtures of artificial grammars tended to shift those mixtures in certain ways rather than others; the model reveals how learners’ inferences are systematically affected by specific prior biases. These biases are in line with a typological generalization—Greenberg's Universal 18—which bans a particular word-order pattern relating nouns, adjectives, and numerals.
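The following is a minimal, hypothetical sketch (not the authors' actual hierarchical model) of the core idea the abstract describes: an asymmetric prior over mixture proportions can pull a learner's posterior away from the input mixture, mimicking a bias against a dispreferred word-order pattern. All names and numbers below are illustrative assumptions.

```python
# Minimal sketch, assuming a Beta-Binomial learner whose asymmetric prior
# encodes a hypothetical bias against one word-order pattern.
# Parameter names and values are illustrative, not taken from the article.

def posterior_mean(dispreferred: int, preferred: int,
                   prior_a: float, prior_b: float) -> float:
    """Posterior mean of a Beta(prior_a, prior_b) prior after Binomial data.

    `dispreferred` counts uses of the banned/dispreferred pattern; a small
    prior_a relative to prior_b encodes a prior bias against that pattern.
    """
    return (prior_a + dispreferred) / (prior_a + prior_b + dispreferred + preferred)

# A learner exposed to a 70/30 input mixture favouring the dispreferred pattern:
observed_dispreferred, observed_preferred = 70, 30

unbiased = posterior_mean(observed_dispreferred, observed_preferred, 1.0, 1.0)
biased = posterior_mean(observed_dispreferred, observed_preferred, 1.0, 20.0)

print(f"unbiased learner roughly reproduces the input mixture: {unbiased:.2f}")
print(f"biased learner shifts the mixture away from the input: {biased:.2f}")
```

Under these assumed numbers, the unbiased posterior stays near the 0.70 input rate while the biased posterior drops toward 0.59, illustrating how a prior bias systematically shifts learned mixtures.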
2011
Cognition, 2011
How recurrent typological patterns, or universals, emerge from the extensive diversity found across the world's languages constitutes a central question for linguistics and cognitive science. Recent challenges to a fundamental assumption of generative linguistics—that ...
1997
Science 275(5306):1604-1610, 1997
Can concepts from the theory of neural computation contribute to formal theories of the mind? Recent research has explored the implications of one principle of neural computation, optimization, for the theory of grammar. Optimization over symbolic linguistic structures provides the core of a new grammatical architecture, optimality theory. The proposition that grammaticality equals optimality sheds light on a wide range of phenomena, from the gulf between production and comprehension in child language, to language learnability, to the fundamental questions of linguistic theory: What is it that the grammars of all languages share, and how may they differ?
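As a rough illustration of the "grammaticality equals optimality" idea, here is a toy sketch of Optimality-Theoretic evaluation: candidates are compared by their constraint-violation profiles under a strict ranking, and the optimal candidate is the grammatical output. The constraints, candidates, and tableau below are hypothetical and are not drawn from the article.

```python
# Minimal sketch of Optimality-Theoretic evaluation over a hypothetical tableau.
from typing import Dict, List

def violations(candidate: str, ranking: List[str],
               table: Dict[str, Dict[str, int]]) -> List[int]:
    """Violation profile of `candidate`, ordered from highest- to lowest-ranked constraint."""
    return [table[candidate].get(constraint, 0) for constraint in ranking]

def optimal(candidates: List[str], ranking: List[str],
            table: Dict[str, Dict[str, int]]) -> str:
    """Grammaticality as optimality: the winner has the lexicographically smallest
    violation profile, so higher-ranked constraints take absolute priority."""
    return min(candidates, key=lambda cand: violations(cand, ranking, table))

# Hypothetical tableau: ONSET outranks DEP; epenthesis repairs the onsetless syllable.
ranking = ["ONSET", "DEP"]
tableau = {
    "a.pa": {"ONSET": 1, "DEP": 0},   # violates ONSET once
    "?a.pa": {"ONSET": 0, "DEP": 1},  # epenthetic onset satisfies ONSET, violates DEP
}

print(optimal(list(tableau), ranking, tableau))  # -> "?a.pa"
```

Because the comparison is lexicographic, a single violation of the top-ranked constraint can never be compensated by satisfying any number of lower-ranked ones, which is the core formal property the abstract appeals to.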