Language Evolution and Computation Bibliography

Our site (www.isrl.uiuc.edu/amag/langev) has been retired; please use https://langev.com instead.
Journal :: IEEE Transactions on Cognitive and Developmental Systems
2018
IEEE Transactions on Cognitive and Developmental Systems 10(3):784-794, 2018
Language has evolved over centuries and has gradually been enriched and improved. The question of how people establish the mapping between meanings and referents remains unanswered. Many computational models are based on the statistical co-occurrence of meaning-reference pairs. Unfortunately, these mapping strategies show poor performance in environments with a larger number of objects or with noise. Therefore, we propose a more robust, noise-resistant algorithm. We tested the performance of this novel algorithm with simulated and physical iCub robots. We developed a testing scenario consisting of objects with varying visual properties presented to the robot, accompanied by utterances describing the given object. The results suggest that the proposed mapping procedure is robust and resistant to noise, shows better performance than one-step mapping at all levels of noise in the linguistic input, and degrades more slowly as noise increases. Furthermore, the proposed procedure increases the clustering accuracy of both modalities.
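For readers unfamiliar with the baseline this abstract compares against, the following is a minimal illustrative sketch (not the authors' code) of one-step co-occurrence mapping between words and referents: each learning episode pairs an utterance with the referent currently in view, and a word's meaning is simply the referent it has co-occurred with most often. All class, function, and object names here are hypothetical.

```python
from collections import defaultdict

class OneStepMapper:
    """Toy one-step co-occurrence mapper between words and referents."""

    def __init__(self):
        # counts[word][referent] -> number of co-occurrences observed
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, words, referent):
        """Record one episode: an utterance (iterable of words) heard while
        a single referent is in view."""
        for word in words:
            self.counts[word][referent] += 1

    def meaning(self, word):
        """Return the referent most frequently co-occurring with `word`,
        or None if the word has never been heard."""
        if word not in self.counts:
            return None
        return max(self.counts[word], key=self.counts[word].get)

# Example usage with hypothetical object labels:
mapper = OneStepMapper()
mapper.observe({"red", "ball"}, "obj_ball")
mapper.observe({"blue", "ball"}, "obj_ball")
mapper.observe({"red", "cube"}, "obj_cube")
print(mapper.meaning("ball"))  # -> "obj_ball"
```

Such a mapper degrades quickly when utterances are noisy or many objects are visible, which is the failure mode the proposed procedure is designed to mitigate.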
2016
IEEE Transactions on Cognitive and Developmental Systems 8:3-14, 2016
For robots to effectively bootstrap the acquisition of language, they must handle referential uncertainty: the problem of deciding what meaning to ascribe to a given word. Typically, when socially grounding terms for space and time, the underlying sensor or representation was specified within the grammar of a conversation, which constrained language learning to words for innate features. In this paper, we demonstrate that cross-situational learning resolves the issue of referential uncertainty when bootstrapping a language for episodic space and time, thereby removing the need to specify the underlying sensors or representations a priori. The requirements for robots to be able to link words to their designated meanings are presented and analyzed within the Lingodroids (language-learning robots) framework. We present a study that compares predetermined associations given a priori against unconstrained learning using cross-situational learning, investigating the long-term coherence, immediate usability, and learning time for each condition. Results demonstrate that for unconstrained learning, long-term coherence is unaffected, though at the cost of increased learning time and hence decreased immediate usability.
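As a rough illustration of the cross-situational learning idea the abstract relies on (a sketch under stated assumptions, not the Lingodroids implementation), the learner below hears a word in situations where several candidate meanings are plausible, counts the word against every candidate, and lets the consistently co-occurring meaning win out across situations. The word and place names are invented for the example.

```python
from collections import defaultdict

class CrossSituationalLearner:
    """Toy cross-situational learner that resolves referential uncertainty
    by accumulating word-meaning co-occurrence counts across situations."""

    def __init__(self):
        # counts[word][meaning] -> co-occurrence count across situations
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, word, candidate_meanings):
        """One situation: a word is heard while several candidate meanings
        (e.g. places or times the speaker could be referring to) are plausible."""
        for meaning in candidate_meanings:
            self.counts[word][meaning] += 1

    def best_meaning(self, word):
        """The meaning that has co-occurred with the word most consistently."""
        if word not in self.counts:
            return None
        return max(self.counts[word], key=self.counts[word].get)

# Over several ambiguous situations the intended meaning "kitchen" wins out:
learner = CrossSituationalLearner()
learner.observe("jara", {"kitchen", "hallway"})
learner.observe("jara", {"kitchen", "office"})
learner.observe("jara", {"kitchen", "hallway", "office"})
print(learner.best_meaning("jara"))  # -> "kitchen"
```

Because no candidate meaning is fixed in advance, this style of learning needs more exposures before a word becomes usable, which matches the abstract's finding of increased learning time for the unconstrained condition.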