Language Evolution and Computation Bibliography

Our site (www.isrl.uiuc.edu/amag/langev) has been retired; please use https://langev.com instead.
Journal :: Neural Networks
2012
Neural Networks, 2012
In this paper we present a neuro-robotic model that uses artificial neural networks to investigate the relations between the development of symbol manipulation capabilities and that of sensorimotor knowledge in the humanoid robot iCub. We describe a cognitive ...
2011
Neural Networks 24(4):311-320, 2011
We show that a Multiple Timescale Recurrent Neural Network (MTRNN) can acquire the capabilities to recognize, generate, and correct sentences by self-organizing in a way that mirrors the hierarchical structure of sentences: characters grouped into words, and words ...
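As a rough illustration of the multiple-timescale idea, the sketch below implements a leaky-integrator RNN whose context units carry two different time constants, so fast units can track character-level detail while slow units integrate over word-length spans. The layer sizes, time constants, and toy input are assumptions for illustration, not values from the paper.

```python
import numpy as np

# Minimal sketch of a Multiple Timescale RNN (MTRNN) update step.
# All names and sizes are illustrative: "fast" context units (small
# time constant tau) track fine-grained detail, "slow" units (large
# tau) integrate over longer spans.

rng = np.random.default_rng(0)

n_in, n_fast, n_slow = 30, 40, 10
tau = np.concatenate([np.full(n_fast, 2.0),    # fast units
                      np.full(n_slow, 50.0)])  # slow units
n_ctx = n_fast + n_slow

W_in  = rng.normal(0, 0.1, (n_ctx, n_in))   # input -> context
W_rec = rng.normal(0, 0.1, (n_ctx, n_ctx))  # context -> context
W_out = rng.normal(0, 0.1, (n_in, n_ctx))   # context -> prediction

def step(u, x):
    """One leaky-integrator update: membrane potentials u, input x."""
    y = np.tanh(u)
    u = (1.0 - 1.0 / tau) * u + (W_rec @ y + W_in @ x) / tau
    return u, np.tanh(W_out @ np.tanh(u))   # next potentials, prediction

u = np.zeros(n_ctx)
for t in range(5):                           # feed a toy one-hot sequence
    x = np.eye(n_in)[t]
    u, pred = step(u, x)
```

The per-unit division by tau is what separates the timescales: slow units change little per step and so carry sentence-level context across many character-level updates.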
2009
Neural Networks 22(2):161-172, 2009
In neural network research on language, the existence of discrete combinatorial rule representations is commonly denied. The combinatorial capacity of networks and brains is instead attributed to probability mapping and pattern overlay. Here, we demonstrate that networks incorporating relevant features of neuroanatomical connectivity and neuronal function give rise to discrete neuronal circuits that store combinatorial information and exhibit a function similar to elementary rules of grammar. Key properties of these networks are rich auto- and hetero-associative connectivity, availability of sequence detectors similar to those found in a range of animals, and unsupervised Hebbian learning. Input of specific word sequences establishes sequence detectors in the network, and substitutions of words and larger string segments from one syntactic category, occurring in the context of elements of a second syntactic class, lead to their binding into neuronal assemblies. Critically, these newly formed aggregates of sequence detectors now respond in a discrete generalizing fashion when members of specific substitution classes of string elements are combined with each other. The discrete combinatorial neuronal assemblies (DCNAs) even respond in the same way to learned strings and to word sequences that never appeared in the input but conform to a rule. We also show how combinatorial information interacts with information about functional and anatomical properties of the brain in the emergence of discrete neuronal circuits that may implement rules, and discuss the model in the wider context of brain mechanisms for syntax and grammar. Implications for the evolution of human language are discussed in closing.
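The following sketch illustrates, under strong simplifying assumptions, how unsupervised Hebbian learning over word-pair sequence detectors can generalize discretely to unseen strings that conform to a learned substitution pattern. The vocabulary, the two substitution classes, and the acceptance rule are invented for illustration and do not reproduce the paper's network.

```python
import numpy as np

# Toy sketch in the spirit of the DCNAs described above: Hebbian
# increments bind word-pair sequence detectors, and acceptance is
# decided at the level of substitution classes rather than exact pairs.

nouns = ["dog", "cat", "bird"]          # substitution class 1
verbs = ["runs", "sleeps", "sings"]     # substitution class 2
vocab = nouns + verbs
idx = {w: i for i, w in enumerate(vocab)}

# W[i, j] > 0 means a sequence detector for "word i followed by word j"
# has been bound into an assembly by Hebbian co-activation.
W = np.zeros((len(vocab), len(vocab)))

training = [("dog", "runs"), ("cat", "runs"), ("dog", "sleeps"),
            ("bird", "sleeps"), ("cat", "sings")]   # "bird sings" unseen
for a, b in training:
    W[idx[a], idx[b]] += 1.0                        # Hebbian increment

def accepts(a, b):
    # The assembly fires if word a has been bound as a first element
    # and word b as a second element anywhere in training (class-level
    # binding), not only if this exact pair was seen.
    return W[idx[a]].sum() > 0 and W[:, idx[b]].sum() > 0

print(accepts("bird", "sings"))   # True: never seen, but conforms
print(accepts("runs", "dog"))     # False: wrong order/classes
```

Because acceptance depends on class membership rather than on the exact trained pairs, the toy model accepts "bird sings" even though that string never occurred in training, mirroring the discrete generalization the abstract describes.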
Neural Networks 22(3):247-257, 2009
What is the role of language in cognition? Do we think with words, or do we use words to communicate made-up decisions? The paper briefly reviews ideas in this area since the 1950s. Then we discuss mechanisms of cognition, recent neuroscience experiments, and ...
Neural Networks 22(5):579-585, 2009
The issue of how children learn the meaning of words is fundamental to developmental psychology. The recent attempts to develop or evolve efficient communication protocols among interacting robots or virtual agents have brought that issue to a central place in ...
2008
Neural Networks 21(2):250-256, 2008
The relationship between thought and language and, in particular, the issue of whether and how language influences thought is still a matter of fierce debate. Here we consider a discrimination task scenario to study language acquisition in which an agent receives ...
2007
Neural Networks 20(2):236-244, 2007
Recurrent neural networks are often employed in the cognitive science community to process symbol sequences that represent various natural language structures. The aim is to study possible neural mechanisms of language processing and to aid in the development of artificial language processing systems. We used data sets containing recursive linguistic structures and trained the Elman simple recurrent network (SRN) on the next-symbol prediction task. Concentrating on neuron activation clusters in the recurrent layer of the SRN, we investigate the network's state space organization before and after training. Given an SRN and a training stream, we construct predictive models, called neural prediction machines, that directly employ the state space dynamics of the network. We demonstrate two important properties of representations of recursive symbol series in the SRN. First, the clusters of recurrent activations emerging before training are meaningful and correspond to Markov prediction contexts. We show that prediction states that naturally arise in an SRN initialized with small random weights approximately correspond to states of Variable Memory Length Markov Models (VLMMs) based on individual symbols (i.e. words). Second, we demonstrate that during training, the SRN reorganizes its state space according to word categories and their grammatical subcategories, and the next-symbol prediction is again based on the VLMM strategy. However, after training, the prediction is based on word categories and their grammatical subcategories rather than on individual words. Our conclusions hold for small depths of recursion that are comparable to human performance. The methods of SRN training and analysis of its state space introduced in this paper are of a general nature and can be used to investigate the processing of any other symbol time series by means of an SRN.
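A minimal sketch of the neural-prediction-machine construction is given below: drive an untrained SRN with a symbol stream, cluster its recurrent activations, and predict the next symbol from per-cluster counts, which is roughly how the Markov prediction contexts described above behave. The network sizes, the toy stream, and the crude K-means step are illustrative assumptions.

```python
import numpy as np

# Sketch: prediction contexts from an *untrained* SRN with small random
# weights. Cluster the recurrent states, then predict the next symbol
# from counts conditioned on the cluster.

rng = np.random.default_rng(1)
symbols = "abab" * 50                       # toy symbol stream
alphabet = sorted(set(symbols))
one_hot = {s: np.eye(len(alphabet))[i] for i, s in enumerate(alphabet)}

n_hid = 16
W_in  = rng.normal(0, 0.1, (n_hid, len(alphabet)))  # small random weights
W_rec = rng.normal(0, 0.1, (n_hid, n_hid))

# 1. Collect recurrent-layer states while driving the untrained SRN.
h, states = np.zeros(n_hid), []
for s in symbols[:-1]:
    h = np.tanh(W_in @ one_hot[s] + W_rec @ h)
    states.append(h.copy())
states = np.array(states)

# 2. Cluster states (crude K-means) into Markov-like prediction contexts.
k = 4
centers = states[rng.choice(len(states), k, replace=False)]
for _ in range(10):
    assign = np.argmin(((states[:, None] - centers) ** 2).sum(-1), axis=1)
    centers = np.array([states[assign == c].mean(0) if (assign == c).any()
                        else centers[c] for c in range(k)])

# 3. Predict the next symbol from counts conditioned on the cluster.
counts = np.zeros((k, len(alphabet)))
for t, c in enumerate(assign):
    counts[c, alphabet.index(symbols[t + 1])] += 1
pred = counts / np.maximum(counts.sum(1, keepdims=True), 1)
print(pred.round(2))   # per-context next-symbol distributions
```

Even with random, untrained weights, the contractive dynamics group histories with the same recent suffix into the same cluster, which is why the clusters behave like variable-memory Markov contexts.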
2004
Neural Networks 17(8-9):1345-1362, 2004
In this paper we present a self-organizing neural network model of early lexical development called DevLex. The network consists of two self-organizing maps (a growing semantic map and a growing phonological map) that are connected via associative links trained by Hebbian learning. The model captures a number of important phenomena that occur in early lexical acquisition by children, as it allows for the representation of a dynamically changing linguistic environment in language learning. In our simulations, DevLex develops topographically organized representations for linguistic categories over time, models lexical confusion as a function of word density and semantic similarity, and shows age-of-acquisition effects in the course of learning a growing lexicon. These results match up with patterns from empirical research on lexical development, and have significant implications for models of language acquisition based on self-organizing neural networks.
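The two-map architecture lends itself to a compact sketch: two self-organizing maps trained on meaning and sound features, with Hebbian links incremented whenever their best-matching units co-activate. Map sizes, feature dimensions, and learning details below are illustrative assumptions; in particular, the real DevLex maps grow over time and use neighborhood functions, which this sketch omits.

```python
import numpy as np

# Sketch of a DevLex-style architecture: a semantic map and a
# phonological map joined by Hebbian associative links.

rng = np.random.default_rng(2)
n_units, d_sem, d_pho = 25, 12, 8           # 5x5 maps, toy feature sizes

sem_map = rng.random((n_units, d_sem))      # semantic map prototypes
pho_map = rng.random((n_units, d_pho))      # phonological map prototypes
assoc = np.zeros((n_units, n_units))        # Hebbian sem -> pho links

def bmu(m, x):
    """Best-matching unit of map m for input x."""
    return np.argmin(((m - x) ** 2).sum(1))

def train_word(sem_x, pho_x, lr=0.3):
    i, j = bmu(sem_map, sem_x), bmu(pho_map, pho_x)
    sem_map[i] += lr * (sem_x - sem_map[i])  # SOM updates (neighborhood
    pho_map[j] += lr * (pho_x - pho_map[j])  # function omitted for brevity)
    assoc[i, j] += 1.0                       # Hebbian co-activation link

# Train on a toy lexicon of random (meaning, sound) feature pairs.
lexicon = [(rng.random(d_sem), rng.random(d_pho)) for _ in range(50)]
for epoch in range(20):
    for sem_x, pho_x in lexicon:
        train_word(sem_x, pho_x)

# Comprehension-to-production: meaning in, best-associated sound unit out.
sem_x, _ = lexicon[0]
j = np.argmax(assoc[bmu(sem_map, sem_x)])
print("predicted phonological unit:", j)
```

Lexical confusion in this setup falls out naturally: words whose meanings map to the same or neighboring semantic units compete over the same associative links, so confusion grows with word density and semantic similarity.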
2003
Neural Networks 16(9):1237-1260, 2003
This paper contributes to neurolinguistics by grounding an evolutionary account of the readiness of the human brain for language in the search for homologies between different cortical areas in macaque and human. We consider two hypotheses for this grounding, that of Aboitiz and García [Brain Res. Rev. 25 (1997) 381] and the Mirror System Hypothesis of Rizzolatti and Arbib [Trends Neurosci. 21 (1998) 188], and note the promise of computational modeling of neural circuitry of the macaque and its linkage to analysis of human brain imaging data. In addition to the functional differences between the two hypotheses, problems arise because they are grounded in different cortical maps of the macaque brain. In order to address these divergences, we have developed several neuroinformatics tools included in an on-line knowledge management system, the NeuroHomology Database, which is equipped with inference engines both to relate and translate information across equivalent cortical maps and to evaluate degrees of homology for brain regions of interest in different species.
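To make the inference-engine idea concrete, here is a toy sketch of translating a region of interest across two equivalent cortical maps by overlap fractions and looking up a degree of homology across species. All map names, mappings, and scores are placeholders invented for illustration, not contents of the NeuroHomology Database.

```python
# Toy sketch of cross-map inference: translate a region between two
# parcellation schemes and report a degree-of-homology score.

# overlap[(map, region)] -> list of (other map, region, overlap fraction)
overlap = {
    ("mapA", "F5"): [("mapB", "PMv", 0.8), ("mapB", "PMd", 0.2)],
}

# degree of homology in [0, 1]; entries are illustrative placeholders
homology = {("macaque", "F5", "human", "BA44"): 0.7}

def translate(src_map, region, dst_map):
    """Translate a region across equivalent cortical maps by overlap."""
    return [(r, f) for m, r, f in overlap.get((src_map, region), [])
            if m == dst_map]

print(translate("mapA", "F5", "mapB"))  # [('PMv', 0.8), ('PMd', 0.2)]
print(homology.get(("macaque", "F5", "human", "BA44")))  # 0.7
```

Graded overlap fractions, rather than one-to-one renamings, are what let such a system reconcile hypotheses that were stated over different cortical maps of the same brain.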