Anthony F. Morse
2017
Cognitive science 41(S1):32-51, 2017
Most theories of learning predict a gradual acquisition and refinement of skills as learning progresses, and while some highlight exponential growth, neither accounts for why natural cognitive development typically progresses in stages. Models that do span multiple developmental stages typically rely on parameters to "switch" between stages. We argue that by taking an embodied view, the interaction between learning mechanisms, the resulting behavior of the agent, and the opportunities for learning that the environment provides can account for the stage-wise development of cognitive abilities. We summarize work relevant to this hypothesis and suggest two simple mechanisms that account for some developmental transitions: neural readiness focuses on changes in the neural substrate resulting from ongoing learning, and perceptual readiness focuses on the perceptual requirements for learning new tasks. Previous work has demonstrated these mechanisms in replications of a wide variety of infant language experiments, spanning multiple developmental stages. Here we piece this work together as a single model of ongoing learning with no parameter changes at all. The model, an instance of the Epigenetic Robotics Architecture (Morse et al., 2010) embodied on the iCub humanoid robot, exhibits ongoing multi-stage development while learning pre-linguistic and then basic language skills.
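The core claim, that stage-like transitions can emerge from one unchanging learning rule once behavior gates the available learning opportunities, can be illustrated with a toy sketch. This is not the paper's model: the two "skills", the success probabilities, and the single learning rate are all hypothetical, chosen only to show a delayed, stage-like onset without any parameter switch.

```python
import random

def staged_learning_demo(steps=3000, seed=1):
    """Toy illustration of behavior-gated staged development.

    Two skills are trained by the same update rule with the same
    learning rate throughout. Skill B can only be practiced on steps
    where skill A's behavior succeeds (e.g. a successful reach creates
    the opportunity to learn about the reached object), so B's curve
    shows a stage-like delayed onset with no parameter change.
    """
    rng = random.Random(seed)
    skill_a = 0.05   # probability that skill A's behavior succeeds
    skill_b = 0.05   # probability that skill B's behavior succeeds
    rate = 0.01      # single learning rate, never changed
    history = []
    for _ in range(steps):
        if rng.random() < skill_a:            # A succeeds this step...
            skill_b += rate * (1 - skill_b)   # ...so B gets practiced
        skill_a += rate * (1 - skill_a)       # A is practiced every step
        history.append((skill_a, skill_b))
    return history

curve = staged_learning_demo()
```

Early on, A rarely succeeds, so B barely improves; once A becomes reliable, B's learning accelerates. The apparent "stage transition" in B is produced entirely by the interaction loop, echoing the abstract's argument against explicit switching parameters.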
2012
Advances in Complex Systems 15(03n04):1250031, 2012
The problem of how young learners acquire the meaning of words is fundamental to language development and cognition. A host of computational models exist which demonstrate various mechanisms by which words and their meanings can be transferred between a teacher and learner. However, these models often assume that the learner can easily distinguish between the referents of words, and do not show whether the learning mechanisms still function when there is perceptual ambiguity about the referent of a word. This paper presents two models that acquire word-meaning mappings in a continuous semantic space. The first model is a cross-situational learning model in which the learner induces word-meaning mappings through statistical learning from repeated exposures. The second model is a social model, in which the learner and teacher engage in a dyadic learning interaction to transfer word-meaning mappings. We show that the cross-situational learner, despite receiving no information about the exact referent of a word during learning, can still learn successfully. However, social learning outperforms cross-situational strategies in both speed of acquisition and performance. The results suggest that cross-situational learning is efficient in situations where referential ambiguity is limited, but that in more complex situations social learning is the better strategy.
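The cross-situational strategy the abstract describes can be sketched in a few lines. This is a minimal illustrative version, not the paper's continuous-space model: the lexicon, scene generator, and co-occurrence-counting decoder are all hypothetical simplifications. Each exposure pairs a word with an ambiguous scene (its true referent plus random distractors), and the learner resolves the ambiguity statistically across exposures.

```python
import random
from collections import defaultdict

# Hypothetical toy lexicon: each word's true referent. Every scene also
# contains distractor objects, so any single exposure is ambiguous.
TRUE_REFERENT = {"ball": "BALL", "cup": "CUP", "dog": "DOG", "shoe": "SHOE"}
OBJECTS = list(TRUE_REFERENT.values())

def make_scene(word, rng, n_distractors=2):
    """A scene: the word's true referent plus randomly drawn distractors."""
    distractors = rng.sample(
        [o for o in OBJECTS if o != TRUE_REFERENT[word]], n_distractors)
    return [TRUE_REFERENT[word]] + distractors

def cross_situational_learner(n_exposures=200, seed=0):
    """Accumulate word-object co-occurrence counts across ambiguous scenes,
    then map each word to its most frequent co-occurring object."""
    rng = random.Random(seed)
    counts = defaultdict(lambda: defaultdict(int))
    words = list(TRUE_REFERENT)
    for _ in range(n_exposures):
        word = rng.choice(words)
        for obj in make_scene(word, rng):
            counts[word][obj] += 1
    return {w: max(counts[w], key=counts[w].get) for w in words}

lexicon = cross_situational_learner()
```

Because a word's true referent co-occurs with it on every exposure while each distractor appears only sometimes, the counts separate with enough exposures, which is the sense in which the learner succeeds "despite there being no information as to the exact referent" on any single trial. It also hints at why the strategy degrades as ambiguity grows: more distractors per scene shrink the statistical gap, favoring the social strategy the paper compares against.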