Language Evolution and Computation Bibliography

Our site (www.isrl.uiuc.edu/amag/langev) has been retired; please use https://langev.com instead.
Alan Blair
2000
Evolving learnable languages
Advances in Neural Information Processing Systems 12 (NIPS*99), pages 66-72, 2000
Traditional theories of child language acquisition center around the existence of a language acquisition device which is specifically tuned for learning a particular class of languages. More recent proposals suggest that language acquisition is assisted by the evolution of languages towards forms that are easily learnable. In this paper, we evolve combinatorial languages which can be learned by a simple recurrent network quickly and from relatively few examples. Additionally, we evolve languages for generalization in different "worlds", and for generalization from specific examples. We find that languages can be evolved to facilitate different forms of impressive generalization for a minimally biased learner. The results provide empirical support for the theory that the language itself, as well as the language environment of a learner, plays a substantial role in learning: that there is far more to language acquisition than the language acquisition device.
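The setup the abstract describes, evolving a language so that a minimally biased recurrent learner can acquire it quickly, can be illustrated with a small sketch. This is not the authors' code: the language representation (fixed-length strings per meaning), the network sizes, the output-layer-only weight update, and the (1+1) hill-climbing loop are all illustrative assumptions.

```python
# Sketch: evolve a language for learnability by an Elman-style recurrent net.
# Fitness = negative next-symbol prediction loss after a short training budget.
import numpy as np

rng = np.random.default_rng(0)
ALPHABET, MEANINGS, STR_LEN, HIDDEN = 4, 8, 5, 10  # illustrative sizes

def random_language():
    # A "language" here is one fixed-length symbol string per meaning.
    return rng.integers(0, ALPHABET, size=(MEANINGS, STR_LEN))

def mutate(lang):
    child = lang.copy()
    child[rng.integers(MEANINGS), rng.integers(STR_LEN)] = rng.integers(ALPHABET)
    return child

def learnability(lang, epochs=20, lr=0.1):
    """Train a fresh recurrent net briefly; return -final loss (noisy fitness)."""
    Wxh = rng.normal(0, 0.5, (ALPHABET, HIDDEN))
    Whh = rng.normal(0, 0.5, (HIDDEN, HIDDEN))
    Why = rng.normal(0, 0.5, (HIDDEN, ALPHABET))
    loss = 0.0
    for _ in range(epochs):
        loss = 0.0
        for string in lang:
            h = np.zeros(HIDDEN)
            for t in range(STR_LEN - 1):
                x = np.eye(ALPHABET)[string[t]]
                h = np.tanh(x @ Wxh + h @ Whh)           # Elman recurrence
                logits = h @ Why
                p = np.exp(logits - logits.max()); p /= p.sum()
                loss -= np.log(p[string[t + 1]] + 1e-12)
                # Crude gradient step on the output layer only; full truncated
                # backprop through time is omitted for brevity in this sketch.
                grad = p.copy(); grad[string[t + 1]] -= 1.0
                Why -= lr * np.outer(h, grad)
    return -loss

# (1+1)-style evolutionary loop: keep a mutant language if it is easier to learn.
lang = random_language()
fit = learnability(lang)
for gen in range(50):
    cand = mutate(lang)
    cfit = learnability(cand)
    if cfit > fit:
        lang, fit = cand, cfit
print("best fitness (negative loss):", round(fit, 3))
```

Because each fitness evaluation trains a freshly initialized learner, selection favors languages that are learnable quickly and from few examples rather than languages matched to one particular set of weights, which is the point the abstract makes.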
1998
Proceedings of the Second Asia-Pacific Conference on Simulated Evolution and Learning (SEAL98), pages 357-364, 1998
We develop a new framework for studying the biases that recurrent neural networks bring to language processing tasks. A semantic concept represented by a point in Euclidean space is translated into a symbol sequence by an encoder network. This sequence is then fed to a decoder network which attempts to translate it back to the original concept. We show how a pair of recurrent networks acting as encoder and decoder can develop their own symbolic language that is serially transmitted between them either forwards or backwards. The encoder and decoder bring different constraints to the task, and these early results indicate that the conflicting nature of these constraints may be reflected in the language that ultimately emerges, providing important clues to the structure of human languages.
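The encoder/decoder channel this abstract describes can also be sketched compactly. Again this is not the paper's implementation: the dimensions, the argmax symbol emission, and the weight-perturbation search standing in for training are all assumptions made for illustration.

```python
# Sketch: an encoder RNN unrolls a concept vector into a discrete symbol
# sequence; a decoder RNN reads it back into a vector. Both are tuned by
# simple perturb-and-keep search on reconstruction error.
import numpy as np

rng = np.random.default_rng(1)
DIM, HIDDEN, ALPHABET, LENGTH = 2, 8, 3, 4  # illustrative sizes

def init(shape): return rng.normal(0, 0.5, shape)

class Params:
    def __init__(self):
        # encoder: concept -> hidden recurrence -> symbol logits
        self.enc_in, self.enc_hh = init((DIM, HIDDEN)), init((HIDDEN, HIDDEN))
        self.enc_out = init((HIDDEN, ALPHABET))
        # decoder: symbol one-hot -> hidden recurrence -> concept estimate
        self.dec_in, self.dec_hh = init((ALPHABET, HIDDEN)), init((HIDDEN, HIDDEN))
        self.dec_out = init((HIDDEN, DIM))

def encode(p, concept):
    h, symbols = np.tanh(concept @ p.enc_in), []
    for _ in range(LENGTH):
        symbols.append(int(np.argmax(h @ p.enc_out)))  # emit a discrete symbol
        h = np.tanh(h @ p.enc_hh)
    return symbols

def decode(p, symbols):
    h = np.zeros(HIDDEN)
    for s in symbols:  # reading reversed(symbols) would model "backwards" transmission
        h = np.tanh(np.eye(ALPHABET)[s] @ p.dec_in + h @ p.dec_hh)
    return h @ p.dec_out

def error(p, concepts):
    return sum(np.sum((decode(p, encode(p, c)) - c) ** 2) for c in concepts)

concepts = [rng.uniform(-1, 1, DIM) for _ in range(10)]
params = Params()
best = error(params, concepts)
for step in range(500):  # perturb all weights; keep the change if error drops
    saved = {k: v.copy() for k, v in vars(params).items()}
    for v in vars(params).values():
        v += rng.normal(0, 0.05, v.shape)
    e = error(params, concepts)
    if e < best:
        best = e
    else:
        params.__dict__ = saved
print("reconstruction error:", round(best, 3))
```

The asymmetry the abstract points to is visible even in this toy: the encoder must commit to early symbols before it has unrolled the whole sequence, while the decoder's estimate is dominated by the symbols it reads last, so the direction of transmission changes which end of the sequence carries the most reliable information.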