David Ball
2016
IEEE Transactions on Cognitive and Developmental Systems 8:3-14, 2016
For robots to effectively bootstrap the acquisition of language, they must handle referential uncertainty: the problem of deciding what meaning to ascribe to a given word. Typically, when socially grounding terms for space and time, the underlying sensor or representation has been specified within the grammar of a conversation, which constrains language learning to words for innate features. In this paper, we demonstrate that cross-situational learning resolves the issue of referential uncertainty when bootstrapping a language for episodic space and time, thereby removing the need to specify the underlying sensors or representations a priori. The requirements for robots to be able to link words to their designated meanings are presented and analyzed within the Lingodroids (language-learning robots) framework. We present a study that compares predetermined associations given a priori against unconstrained learning using cross-situational learning. The study investigates the long-term coherence, immediate usability, and learning time for each condition. Results demonstrate that for unconstrained learning, long-term coherence is unaffected, though at the cost of increased learning time and hence decreased immediate usability.
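The core idea behind cross-situational learning is that a word's true referent co-occurs with it consistently across situations, while spurious candidates co-occur only sporadically. A minimal sketch of this co-occurrence counting is given below; it is illustrative only and not the Lingodroids implementation, and the class, word, and meaning names are hypothetical.

```python
# Minimal cross-situational learning sketch (illustrative, not the paper's code).
# Every candidate meaning present in a situation is credited when a word is heard;
# across many situations, only the true referent accumulates consistent evidence.
from collections import defaultdict

class CrossSituationalLearner:
    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))  # word -> meaning -> count
        self.word_totals = defaultdict(int)                   # word -> times heard

    def observe(self, word, candidate_meanings):
        """Record one situation: a word heard alongside a set of possible referents."""
        self.word_totals[word] += 1
        for meaning in candidate_meanings:
            self.counts[word][meaning] += 1

    def best_meaning(self, word):
        """Return the most frequently co-occurring meaning and its relative frequency."""
        if not self.counts[word]:
            return None
        meaning, count = max(self.counts[word].items(), key=lambda kv: kv[1])
        return meaning, count / self.word_totals[word]

# Hypothetical example: the word "kuzo" is heard in situations containing several
# candidate referents; only 'place-A' recurs, so it wins out over time.
learner = CrossSituationalLearner()
learner.observe("kuzo", {"place-A", "light-high"})
learner.observe("kuzo", {"place-A", "clock-0900"})
learner.observe("kuzo", {"place-A", "light-low"})
print(learner.best_meaning("kuzo"))  # ('place-A', 1.0)
```

Note that this unconstrained counting needs more exposures than a grammar-constrained mapping would, which is consistent with the reported trade-off between learning time and long-term coherence.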
2011
IEEE Transactions on Autonomous Mental Development 4(3):192-203, 2011
Time and space are fundamental to human language and embodied cognition. In our early work, we investigated how Lingodroids, robots with the ability to build their own maps, could evolve their own geopersonal spatial language. In subsequent studies, we extended the framework developed for learning spatial concepts and words to learning temporal intervals. This paper considers a new aspect of time: the naming of concepts such as morning, afternoon, dawn, and dusk, which are events within day-night cycles but are not defined by specific time points on a clock. Grounding such terms requires reference to events and features of the diurnal cycle, such as light levels. We studied event-based time in which robots experienced day-night cycles that varied with the seasons throughout a year. We then used meet-at tasks to demonstrate that the words learned were grounded: the times to meet were morning and afternoon rather than specific clock times. These studies show how words and concepts for a novel aspect of cyclic time can be grounded through experience with events rather than by times as measured by clocks or calendars.
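To make the clock-free grounding concrete, the toy sketch below maps a sensed light level and its trend to a diurnal event label; the thresholds and labels are assumptions for illustration only, not values from the paper.

```python
# Toy sketch of event-based (clock-free) time grounding, assuming the robot senses
# only ambient light (normalized 0-1) and whether it is rising or falling over the
# diurnal cycle. Thresholds are illustrative, not taken from the study.
def event_time_label(light_level, light_rising):
    """Map a light reading to a diurnal event label without consulting a clock."""
    if light_level < 0.2:
        return "night"
    if light_level < 0.5:
        return "dawn" if light_rising else "dusk"
    return "morning" if light_rising else "afternoon"

# Two robots agreeing to meet at "morning" would each wait until their own sensed
# light enters that regime, so the meeting time shifts with the seasons.
print(event_time_label(0.35, light_rising=True))   # dawn
print(event_time_label(0.70, light_rising=False))  # afternoon
```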