Language Evolution and Computation Bibliography

Our site (www.isrl.uiuc.edu/amag/langev) has been retired; please use https://langev.com instead.
Journal :: Cognition
2018
Cognition 173:43-59, 2018
Spectacular progress in the information processing sciences (machine learning, wearable sensors) promises to revolutionize the study of cognitive development. Here, we analyse the conditions under which 'reverse engineering' language development, i.e., building an effective system that mimics infants' achievements, can contribute to our scientific understanding of early language development. We argue that, on the computational side, it is important to move from toy problems to the full complexity of the learning situation, and to take as input reconstructions of the sensory signals available to infants that are as faithful as possible. On the data side, accessible but privacy-preserving repositories of home data have to be set up. On the psycholinguistic side, specific tests have to be constructed to benchmark humans and machines at different linguistic levels. We discuss the feasibility of this approach and present an overview of current results.
Cognition 176:174-183, 2018
Language acquisition and change are thought to be causally connected. We demonstrate a method for quantifying the strength of this connection in terms of the 'basic reproductive ratio' of linguistic constituents. It represents a standardized measure of reproductive success, which can be derived both from diachronic and from acquisition data. By analyzing phonotactic English data, we show that the results of both types of derivation correlate, so that phonotactic acquisition indeed predicts phonotactic change, and vice versa. After drawing that general conclusion, we discuss the role of utterance frequency and show that the latter exhibits destabilizing effects only on late-acquired items, which belong to the phonotactic periphery. We conclude that - at least in the evolution of English phonotactics - acquisition serves conservation, while innovation is more likely to occur in adult speech and affects items that are less entrenched but comparably frequent.
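In population dynamics, a basic reproductive ratio is standardly written as

\[ R_0 = \frac{\beta}{\gamma}, \]

where β is the rate at which current carriers of a constituent transmit it to new learners and 1/γ is the mean time a carrier retains and uses it, so that R_0 > 1 predicts persistence or spread while R_0 < 1 predicts loss. This is a sketch of the standard epidemiological form only, under the assumption that the paper's standardized measure parallels it; the paper's own derivations from acquisition and diachronic data are more specific.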
2017
Cognition 166:225-250, 2017
Nativist theories have argued that language involves syntactic principles which are unlearnable from the input children receive. A paradigm case of these innate principles is the structure dependence of auxiliary inversion in complex polar questions (Chomsky, 1968, 1975, 1980). Computational approaches have focused on the properties of the input in explaining how children acquire these questions. In contrast, we argue that messages are structured in a way that supports structure dependence in syntax. We demonstrate this approach within a connectionist model of sentence production (Chang, 2009) which learned to generate a range of complex polar questions from a structured message without positive exemplars in the input. The model also generated different types of error in development that were similar in magnitude to those in children (e.g., auxiliary doubling, Ambridge, Rowland, & Pine, 2008; Crain & Nakayama, 1987). Through model comparisons we trace how meaning constraints and linguistic experience interact during the acquisition of auxiliary inversion. Our results suggest that auxiliary inversion rules in English can be acquired without innate syntactic principles, as long as it is assumed that speakers who ask complex questions express messages that are structured into multiple propositions.
2015
Cognition 141:87-102, 2015
Language exhibits striking systematic structure. Words are composed of combinations of reusable sounds, and those words in turn are combined to form complex sentences. These properties make language unique among natural communication systems and enable our species to convey an open-ended set of messages. We provide a cultural evolutionary account of the origins of this structure. We show, using simulations of rational learners and laboratory experiments, that structure arises from a trade-off between pressures for compressibility (imposed during learning) and expressivity (imposed during communication). We further demonstrate that the relative strength of these two pressures can be varied in different social contexts, leading to novel predictions about the emergence of structured behaviour in the wild.
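One common way to formalize such a trade-off, given here as an illustrative Bayesian sketch rather than the paper's exact model, is a learner whose posterior over languages combines a simplicity prior with fit to communicatively filtered data:

\[ P(h \mid d) \propto P(d \mid h)\, P(h), \qquad P(h) \propto 2^{-L(h)}, \]

where L(h) is the coding length of language h, so the prior rewards compressibility, while the likelihood rewards languages whose data keep distinct meanings distinguishable (expressivity). Iterating learning (which applies the prior) and communication (which filters the data) then favours languages that satisfy both pressures at once.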
2012
Cognition, 2012
In this article we present a neural network model of sentence generation. The network has both technical and conceptual innovations. Its main technical novelty is in its semantic representations: the messages which form the input to the network are structured as ...
2011
Cognition, 2011
How recurrent typological patterns, or universals, emerge from the extensive diversity found across the world's languages constitutes a central question for linguistics and cognitive science. Recent challenges to a fundamental assumption of generative linguistics—that ...
Cognition 120(3):302-321, 2011
We present an introduction to Bayesian inference as it is used in probabilistic models of cognitive development. Our goal is to provide an intuitive and accessible guide to the what, the how, and the why of the Bayesian approach: what sorts of problems and data the ...
2010
Cognition 116(3):444-449, 2010
Human languages may be shaped not only by the (individual psychological) processes of language acquisition, but also by population-level processes arising from repeated language learning and use. One prevalent feature of natural languages is that they avoid ...
2009
Cognition 111(3):317-328, 2009
The regularization of linguistic structures by learners has played a key role in arguments for strong innate constraints on language acquisition, and has important implications for language evolution. However, relating the inductive biases of learners to regularization behavior in laboratory tasks can be challenging without a formal model. In this paper we explore how regular linguistic structures can emerge from language evolution by iterated learning, in which one person's linguistic output is used to generate the linguistic input provided to the next person. We use a model of iterated learning with Bayesian agents to show that this process can result in regularization when learners have the appropriate inductive biases. We then present three experiments demonstrating that simulating the process of language evolution in the laboratory can reveal biases towards regularization that might not otherwise be obvious, allowing weak biases to have strong effects. The results of these experiments suggest that people tend to regularize inconsistent word-meaning mappings, and that even a weak bias towards regularization can allow regular languages to be produced via language evolution by iterated learning.
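The core dynamics are compact enough to sketch. The following beta-binomial chain is a minimal illustration of iterated learning with Bayesian agents; the parameter names and values are assumptions for the sketch, not the paper's.

```python
# Minimal sketch of iterated learning with Bayesian agents: each learner
# observes two competing variants, infers the variant probability theta
# under a Beta(alpha, alpha) prior, and produces data for the next learner.
import random

def iterated_learning(alpha=0.5, n_utterances=10, generations=50, theta0=0.5):
    theta = theta0
    for _ in range(generations):
        # Data from the previous learner: k uses of variant 1 out of n.
        k = sum(random.random() < theta for _ in range(n_utterances))
        # Posterior is Beta(k + alpha, n - k + alpha); a sampling learner
        # draws its production probability from it. alpha < 1 is a weak
        # bias towards regular (all-or-none) languages.
        theta = random.betavariate(k + alpha, n_utterances - k + alpha)
    return theta

if __name__ == "__main__":
    finals = [iterated_learning() for _ in range(200)]
    regular = sum(t < 0.1 or t > 0.9 for t in finals) / len(finals)
    print(f"chains ending near a regular language: {regular:.0%}")
```

For a sampling learner the chain's long-run distribution over hypotheses mirrors the prior, which is one way to see how a weak individual bias can nevertheless produce strongly regular languages over generations.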
Cognition 113(2):226-233, 2009
A unique hallmark of human language is that it uses signals that are both learnt and symbolic. The emergence of such signals was therefore a defining event in human cognitive evolution, yet very little is known about how such a process occurs. Previous work provides some insights on how meaning can become attached to form, but a more foundational issue is presently unaddressed. How does a signal signal its own signalhood? That is, how do humans even know that communicative behaviour is indeed communicative in nature? We introduce an experimental game that has been designed to tackle this problem. We find that it is commonly resolved with a bootstrapping process, and that this process influences the final form of the communication system. Furthermore, sufficient common ground is observed to be integral to the recognition of signalhood, and the emergence of dialogue is observed to be the key step in the development of a system that can be employed to achieve shared goals.
2008
Cognition 107:479-500, 2008
There is a surprising degree of overlapping structure evident across the languages of the world. One factor leading to cross-linguistic similarities may be constraints on human learning abilities. Linguistic structures that are easier for infants to learn should predominate in human languages. If correct, then (a) human infants should more readily acquire structures that are consistent with the form of natural language, whereas (b) non-human primates' patterns of learning should be less tightly linked to the structure of human languages. Prior experiments have not directly compared laboratory-based learning of grammatical structures by human infants and non-human primates, especially under comparable testing conditions and with similar materials. Five experiments with 12-month-old human infants and adult cotton-top tamarin monkeys addressed these predictions, employing comparable methods (familiarization-discrimination) and materials. Infants rapidly acquired complex grammatical structures by using statistically predictive patterns, failing to learn structures that lacked such patterns. In contrast, the tamarins only exploited predictive patterns when learning relatively simple grammatical structures. Infant learning abilities may serve both to facilitate natural language acquisition and to impose constraints on the structure of human languages.
Cognition 107(2):603-622, 2008
Set representations are explicitly expressed in natural language. For example, many languages distinguish between sets and subsets (all vs. some), as well as between singular and plural sets (a cat vs. some cats). Three experiments explored the hypothesis that these representations are language specific, and thus absent from the conceptual resources of non-linguistic animals. We found that rhesus monkeys spontaneously discriminate sets based on a conceptual singular-plural distinction. Under conditions that do not elicit comparisons based on approximate magnitudes or one-to-one correspondence, rhesus monkeys distinguished between singular and plural sets (1 vs. 2 and 1 vs. 5), but not between two plural sets (2 vs. 3, 2 vs. 4, and 2 vs. 5). These results suggest that set-relational distinctions are not a privileged part of natural language, and may have evolved in a non-linguistic species to support domain-general quantitative computations.
2006
Cognition 100(1):173-215, 2006
Studies of the biology of music (as of language) are highly interdisciplinary and demand the integration of diverse strands of evidence. In this paper, I present a comparative perspective on the biology and evolution of music, stressing the value of comparisons both with human language, and with those animal communication systems traditionally termed 'song'. A comparison of the 'design features' of music with those of language reveals substantial overlap, along with some important differences. Most of these differences appear to stem from semantic, rather than structural, factors, suggesting a shared formal core of music and language. I next review various animal communication systems that appear related to human music, either by analogy (bird and whale 'song') or potential homology (great ape bimanual drumming). A crucial comparative distinction is between learned, complex signals (like language, music and birdsong) and unlearned signals (like laughter, ape calls, or bird calls). While human vocalizations clearly build upon an acoustic and emotional foundation shared with other primates and mammals, vocal learning has evolved independently in our species since our divergence with chimpanzees. The convergent evolution of vocal learning in other species offers a powerful window into psychological and neural constraints influencing the evolution of complex signaling systems (including both song and speech), while ape drumming presents a fascinating potential homology with human instrumental music. I next discuss the archeological data relevant to music evolution, concluding on the basis of prehistoric bone flutes that instrumental music is at least 40,000 years old, and perhaps much older. I end with a brief review of adaptive functions proposed for music, concluding that no one selective force (e.g., sexual selection) is adequate to explain all aspects of human music. I suggest that questions about the past function of music are unlikely to be answered definitively and are thus a poor choice as a research focus for biomusicology. In contrast, a comparative approach to music promises rich dividends for our future understanding of the biology and evolution of music.
2005
The nature of the language faculty and its implications for evolution of language (Reply to Fitch, Hauser, and Chomsky)
Cognition, 2005
In a continuation of the conversation with Fitch, Chomsky, and Hauser on the evolution of language, we examine their defense of the claim that the uniquely human, language-specific part of the language faculty (the "narrow language faculty") consists only of recursion, and that this part cannot be considered an adaptation to communication. We argue that their characterization of the narrow language faculty is problematic for many reasons, including its dichotomization of cognitive capacities into those that are utterly unique and those that are identical to nonlinguistic or nonhuman capacities, omitting capacities that may have been substantially modified during human evolution. We also question their dichotomy of the current utility versus original function of a trait, which omits traits that are adaptations for current use, and their dichotomy of humans and animals, which conflates similarity due to common function and similarity due to inheritance from a recent common ancestor. We show that recursion, though absent from other animals' communication systems, is found in visual cognition, hence cannot be the sole evolutionary development that granted language to humans. Finally, we note that despite Fitch et al.'s denial, their view of language evolution is tied to Chomsky's conception of language itself, which identifies combinatorial productivity with a core of "narrow syntax." An alternative conception, in which combinatoriality is spread across words and constructions, has both empirical advantages and greater evolutionary plausibility.
Cognition 95(2):201-236, 2005
We examine the question of which aspects of language are uniquely human and uniquely linguistic in light of recent suggestions by Hauser, Chomsky, and Fitch that the only such aspect is syntactic recursion, the rest of language being either specific to humans but not to language (e.g. words and concepts) or not specific to humans (e.g. speech perception). We find the hypothesis problematic. It ignores the many aspects of grammar that are not recursive, such as phonology, morphology, case, agreement, and many properties of words. It is inconsistent with the anatomy and neural control of the human vocal tract. And it is weakened by experiments suggesting that speech perception cannot be reduced to primate audition, that word learning cannot be reduced to fact learning, and that at least one gene involved in speech and language was evolutionarily selected in the human lineage but is not specific to recursion. The recursion-only claim, we suggest, is motivated by Chomsky's recent approach to syntax, the Minimalist Program, which de-emphasizes the same aspects of language. The approach, however, is sufficiently problematic that it cannot be used to support claims about evolution. We contest related arguments that language is not an adaptation, namely that it is 'perfect,' non-redundant, unusable in any partial form, and badly designed for communication. The hypothesis that language is a complex adaptation for communication which evolved piecemeal avoids all these problems.
Cognition 97(2):179-210, 2005
In this response to Pinker and Jackendoff's critique, we extend our previous framework for discussion of language evolution, clarifying certain distinctions and elaborating on a number of points. In the first half of the paper, we reiterate that profitable research into the biology and evolution of language requires fractionation of "language" into component mechanisms and interfaces, a non-trivial endeavor whose results are unlikely to map onto traditional disciplinary boundaries. Our terminological distinction between FLN and FLB is intended to help clarify misunderstandings and aid interdisciplinary rapprochement. By blurring this distinction, Pinker and Jackendoff mischaracterize our hypothesis 3, which concerns only FLN, not "language" as a whole. Many of their arguments and examples are thus irrelevant to this hypothesis. Their critique of the minimalist program is for the most part equally irrelevant, because very few of the arguments in our original paper were tied to this program; in an online appendix we detail the deep inaccuracies in their characterization of this program. Concerning evolution, we believe that Pinker and Jackendoff's emphasis on the past adaptive history of the language faculty is misplaced. Such questions are unlikely to be resolved empirically due to a lack of relevant data, and invite speculation rather than research. Preoccupation with the issue has retarded progress in the field by diverting research away from empirical questions, many of which can be addressed with comparative data. Moreover, offering an adaptive hypothesis as an alternative to our hypothesis concerning mechanisms is a logical error, as questions of function are independent of those concerning mechanism. The second half of our paper consists of a detailed response to the specific data discussed by Pinker and Jackendoff. Although many of their examples are irrelevant to our original paper and arguments, we find several areas of substantive disagreement that could be resolved by future empirical research. We conclude that progress in understanding the evolution of language will require much more empirical research, grounded in modern comparative biology, more interdisciplinary collaboration, and much less of the adaptive storytelling and phylogenetic speculation that has traditionally characterized the field.
1996
Cognition 61(1-2):1-38, 1996
This paper presents a computational study of part of the lexical-acquisition task faced by children, namely the acquisition of word-to-meaning mappings. It first approximates this task as a formal mathematical problem. It then presents an implemented algorithm for solving this problem, illustrating its operation on a small example. This algorithm offers one precise interpretation of the intuitive notions of cross-situational learning and the principle of contrast applied between words in an utterance. It robustly learns a homonymous lexicon despite noisy multi-word input, in the presence of referential uncertainty, with no prior knowledge that is specific to the language being learned. Computational simulations demonstrate the robustness of this algorithm and illustrate how algorithms based on cross-situational learning and the principle of contrast might be able to solve lexical-acquisition problems of the size faced by children, under weak, worst-case assumptions about the type and quantity of data available.
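The intersective core of cross-situational learning is easy to illustrate. The toy learner below is an illustration of the idea, not Siskind's actual algorithm, which additionally handles noise, homonymy, and the principle of contrast: for each word it keeps only those candidate meanings present in every situation where the word has occurred.

```python
# Toy cross-situational learner: intersect each word's candidate
# meanings across all situations in which the word occurs.

def cross_situational(corpus):
    """corpus: list of (words, meanings) pairs, where `meanings` is the
    set of candidate referents present in that situation."""
    hypotheses = {}  # word -> set of still-possible meanings
    for words, meanings in corpus:
        for w in words:
            if w not in hypotheses:
                hypotheses[w] = set(meanings)
            else:
                hypotheses[w] &= meanings  # keep only meanings seen every time
    return hypotheses

corpus = [
    ({"the", "dog", "ran"}, {"DOG", "RUN", "PARK"}),
    ({"the", "dog", "barked"}, {"DOG", "BARK", "CAT"}),
    ({"the", "cat", "ran"}, {"CAT", "RUN"}),
]
for word, meanings in sorted(cross_situational(corpus).items()):
    print(word, sorted(meanings))
```

Here 'dog' and 'ran' converge on single referents; 'cat' and 'barked', each seen only once, remain ambiguous; and 'the' ends with an empty candidate set, since no referent recurs across all of its situations. Coping with such words, and with noisy or homonymous input, is where the full algorithm goes beyond plain intersection.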
Cognition 61(1-2):161-193, 1996
This paper shows how to formally characterize language learning in a finite parameter space, for instance, in the principles-and-parameters approach to language, as a Markov structure. New language learning results follow directly; we can explicitly calculate how many positive examples on average ("sample complexity") it will take for a learner to correctly identify a target language with high probability. We show how sample complexity varies with input distributions and learning regimes. In particular we find that the average time to converge under reasonable language input distributions for a simple three-parameter system first described by Gibson and Wexler (1994) is psychologically plausible, in the range of 100-150 positive examples. We further find that a simple random step algorithm - that is, simply jumping from one language hypothesis to another rather than changing one parameter at a time - works faster and always converges to the right target language, in contrast to the single-step, local parameter setting method advocated in some recent work.
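The Markov-chain setup can be simulated directly. In the sketch below, three binary parameters define eight grammars and an error-driven learner moves among them; the parsing model, in which a random target sentence is analyzable by a grammar with a probability that decays with its Hamming distance from the target, is an illustrative stand-in for the Gibson-Wexler system, and none of the numbers are the paper's.

```python
# Sketch: language learning as a Markov chain over a 3-parameter space.
import random

TARGET = (1, 0, 1)  # the target setting of three binary parameters

def parsable(grammar, p_by_distance=(1.0, 0.5, 0.25, 0.1)):
    """Stand-in parsing model: a random target sentence is analyzable by
    `grammar` with a probability that falls with the number of parameters
    on which it differs from the target."""
    d = sum(a != b for a, b in zip(grammar, TARGET))
    return random.random() < p_by_distance[d]

def examples_to_converge(single_step=True, max_examples=100_000):
    g = [random.randint(0, 1) for _ in range(3)]
    for n in range(max_examples):
        if tuple(g) == TARGET:        # absorbed: the target parses everything
            return n
        if not parsable(tuple(g)):    # error-driven: move only on failure
            if single_step:           # flip one parameter at a time
                g[random.randrange(3)] ^= 1
            else:                     # jump to a uniformly random grammar
                g = [random.randint(0, 1) for _ in range(3)]
    return max_examples

for single, name in [(True, "single-parameter steps"), (False, "random jumps")]:
    runs = [examples_to_converge(single) for _ in range(2000)]
    print(f"{name}: mean positive examples = {sum(runs) / len(runs):.1f}")
```

Averaging over runs gives a sample-complexity estimate of the kind the paper computes analytically from the chain's transition matrix.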
1995
Cognition 56(1):61-98, 1995
An account is offered of change over time in English verb morphology, based on a connectionist approach to how morphological knowledge is acquired and used. A technique is first described that was developed for modeling historical change in connectionist networks, and that technique is applied to model English verb inflection as it developed from the highly complex past tense system of Old English towards that of the modern language, with one predominant "regular" inflection and a small number of irregular forms. The model relies on the fact that certain input-output mappings are easier than others to learn in a connectionist network. Highly frequent patterns, or those that share phonological regularities with a number of others, are learned more quickly and with lower error than low-frequency, highly irregular patterns. A network is taught a data set representative of the verb classes of Old English, but learning is stopped before reaching asymptote, and the output of this network is used as the teacher of a new net. As a result, the errors in the first network were passed on to become part of the data set of the second. Those patterns that are hardest to learn led to the most errors, and over time are "regularized" to fit a more dominant pattern. The results of the network simulations were highly consistent with the major historical developments. These results are predicted from well-understood aspects of network dynamics, which therefore provide a rationale for the shape of the attested changes.
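The generational regime itself is easy to emulate. In the toy reconstruction below, random feature vectors and a frequency-weighted softmax classifier stand in for the paper's phonological representations and network; each generation is trained short of asymptote and its outputs become the next generation's teaching data, so low-frequency, irregular items tend to drift into the dominant class.

```python
# Toy generational learning: each briefly trained model relabels the
# data for its successor, so hard-to-learn items regularize over time.
import numpy as np

rng = np.random.default_rng(0)
N_VERBS, N_FEATS, N_CLASSES = 200, 12, 3
X = rng.normal(size=(N_VERBS, N_FEATS))                     # stand-in phonology
y = rng.choice(N_CLASSES, size=N_VERBS, p=[0.7, 0.2, 0.1])  # inflection class
freq = rng.zipf(1.5, size=N_VERBS).clip(max=20)             # token frequency

def train_softmax(X, y, freq, epochs=30, lr=0.1):
    """Frequency-weighted softmax regression, stopped well short of
    asymptote so that rare, irregular items are learned imperfectly."""
    W = np.zeros((N_FEATS, N_CLASSES))
    onehot = np.eye(N_CLASSES)[y]
    for _ in range(epochs):
        logits = X @ W
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        W -= lr * X.T @ ((p - onehot) * freq[:, None]) / freq.sum()
    return W

labels = y
for gen in range(8):
    W = train_softmax(X, labels, freq)   # learn from the current 'language'
    labels = (X @ W).argmax(axis=1)      # this output teaches the next net
    print(f"generation {gen}: {(labels == 0).mean():.0%} in the majority class")
```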
1993
Cognition 48(1):71-99, 1993
It is a striking fact that in humans the greatest learning occurs precisely at that point in time - childhood - when the most dramatic maturational changes also occur. This report describes possible synergistic interactions between maturational change and the ability to learn a complex domain (language), as investigated in connectionist networks. The networks are trained to process complex sentences involving relative clauses, number agreement, and several types of verb argument structure. Training fails in the case of networks which are fully formed and 'adultlike' in their capacity. Training succeeds only when networks begin with limited working memory and gradually 'mature' to the adult state. This result suggests that rather than being a limitation, developmental restrictions on resources may constitute a necessary prerequisite for mastering certain complex domains. Specifically, successful learning may depend on starting small.
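The maturational schedule, though not the recurrent network itself, fits in a few lines. In the sketch below, a stand-in count-based predictor is trained with a context window that widens in stages towards the 'adult' value; all names and numbers are illustrative assumptions, not Elman's setup.

```python
# Sketch of the 'starting small' schedule: the learner's usable context
# window (its working memory) grows in stages during training.
from collections import Counter, defaultdict

class WindowedPredictor:
    """Stand-in learner that predicts the next token from however many
    preceding tokens its current working memory allows it to see."""
    def __init__(self):
        self.counts = defaultdict(Counter)

    def observe(self, sentence, window):
        for i in range(1, len(sentence)):
            context = tuple(sentence[max(0, i - window):i])
            self.counts[context][sentence[i]] += 1

def starting_small(corpus, learner, stages=(1, 2, 4), epochs_per_stage=3):
    for window in stages:                # working memory 'matures' in steps
        for _ in range(epochs_per_stage):
            for sentence in corpus:
                learner.observe(sentence, window)

corpus = [["the", "boy", "who", "runs", "smiles"],
          ["the", "girls", "smile"]]
learner = WindowedPredictor()
starting_small(corpus, learner)
print(learner.counts[("the",)])          # contexts learned under window=1
```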
1992
On the evolution of language and generativity
Cognition 44(3):197-226, 1992
One of the properties that most conspicuously distinguishes human language from any other form of animal communication is generativity. Language with this property therefore presumably evolved with the Homo line somewhere between H. habilis and H. sapiens sapiens. Some have suggested that it emerged relatively suddenly and completely with H. sapiens sapiens, and this view is consistent with (a) linguistic estimates as to when vocal language emerged, (b) the relatively late 'explosion' of manufacture and cultural artifacts such as body ornamentation and cave drawings, and (c) evidence on changes in the vocal apparatus. However, evidence on brain size and developmental patterns of growth suggests an earlier origin and a more continuous evolution. I propose that these scenarios can be reconciled if it is supposed that generative language evolved, perhaps from H. habilis on, as a system of manual gestures, but switched to a predominantly vocal system with H. sapiens sapiens. The subsequent 'cultural explosion' can then be attributed to the freeing of the hands from primary involvement in language, so that they could be exploited, along with generativity, for manufacture, art, and other activities.
1991
Cognition 40(3):159-201, 1991
Evidence suggests that there is a critical, or at least a sensitive, period for language acquisition, which ends around puberty. The existence of this period is explained by an evolutionary model which assumes that (a) linguistic ability is in principle (if not in practice) measurable, and (b) the amount of language controlled by an individual conferred selective advantage on it. In this model, the language faculty is seen as adaptive, favoured by natural selection, while the critical period for language acquisition itself is not an adaptation, but arises from the interplay of genetic factors influencing life-history characters in relation to language acquisition. The evolutionary model is implemented on a computer and simulations of populations evolving under various plausible, if idealized, conditions result in clear critical period effects, which end around puberty.
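A simulation in this spirit can be re-sketched compactly; every parameter choice below is an illustrative assumption, not Hurford's. Each agent's genome allocates a fixed acquisition budget across ten life stages, fitness accrues from the amount of language controlled at each stage, and selection decides where the capacity ends up.

```python
# Sketch: evolving the life-stage profile of language acquisition capacity.
import random

STAGES, POP, GENS = 10, 200, 300

def fitness(genome):
    """Language acquired so far confers benefit at every later stage, so
    capacity spent early pays off for longer."""
    acquired, total = 0.0, 0.0
    for capacity in genome:
        acquired = min(1.0, acquired + capacity)
        total += acquired
    return total

def mutate(genome, rate=0.1):
    g = [max(0.0, c + random.gauss(0, 0.02)) if random.random() < rate else c
         for c in genome]
    s = sum(g) or 1.0
    return [c / s * 0.5 for c in g]      # fixed total acquisition budget

pop = [mutate([0.05] * STAGES, rate=1.0) for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[: POP // 2]          # truncation selection
    pop = survivors + [mutate(random.choice(survivors)) for _ in range(POP // 2)]

best = max(pop, key=fitness)
print("capacity per life stage:", [round(c, 2) for c in best])
```

Because early-acquired language keeps paying off, the surviving genomes concentrate acquisition capacity in the first few stages, a critical-period-like profile of the kind the abstract describes from its richer model.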