Language Evolution and Computation Bibliography

Our site (www.isrl.uiuc.edu/amag/langev) has been retired; please use https://langev.com instead.
A. Greco
2004
Symbol grounding transfer with hybrid self-organizing/supervised neural networks
IJCNN04 International Joint Conference on Neural Networks, 2004
This paper reports new simulations on an extended neural network model for the transfer of symbol grounding. It uses a hybrid and modular connectionist model, consisting of an unsupervised, self-organizing map for stimulus classification and a supervised ...
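The unsupervised half of the hybrid model described above can be sketched as a small Kohonen self-organizing map that clusters stimuli before a supervised stage names them. This is a minimal illustration only: the map size, learning-rate schedule, and toy 2-D data are assumptions for demonstration, not the paper's settings (which use pixel images).

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.random((200, 2))    # toy 2-D stimuli (illustrative assumption)
weights = rng.random((10, 2))  # weight vectors of 10 map units on a 1-D grid

def quantization_error(w, xs):
    # Mean distance from each input to its best-matching unit.
    return float(np.mean([np.min(np.linalg.norm(w - x, axis=1)) for x in xs]))

err_before = quantization_error(weights, data)
for t, x in enumerate(data):
    frac = t / len(data)
    lr = 0.5 * (1 - frac) + 0.05 * frac     # learning rate decays over training
    sigma = 2.0 * (1 - frac) + 0.5 * frac   # neighborhood radius shrinks too
    bmu = np.argmin(np.linalg.norm(weights - x, axis=1))  # best-matching unit
    grid_dist = np.abs(np.arange(len(weights)) - bmu)     # distance on the map
    h = np.exp(-grid_dist**2 / (2 * sigma**2))            # neighborhood kernel
    weights += lr * h[:, None] * (x - weights)            # pull units toward x

err_after = quantization_error(weights, data)
# With decaying lr/sigma the map typically lowers quantization error;
# each unit's weight vector ends up prototyping a region of input space.
```

In the hybrid architecture, the map's winning units (or their activations) would then feed a supervised classifier that attaches category names, grounding the symbols in the unsupervised representation.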
2002
Symbol Grounding and the Symbolic Theft Hypothesis
Simulating the Evolution of Language 9.0:191-210, 2002
Scholars studying the origins and evolution of language are also interested in the general issue of the evolution of cognition. Language is not an isolated capability of the individual, but has intrinsic relationships with many other behavioral, cognitive, and social abilities. ...
2000
Connection Science 12(2):143-162, 2000
Neural network models of categorical perception (compression of within-category similarity and dilation of between-category differences) are applied to the symbol-grounding problem (of how to connect symbols with meanings) by connecting analogue sensorimotor projections to arbitrary symbolic representations via learned category-invariance detectors in a hybrid symbolic/non-symbolic system. Our nets are trained to categorize and name 50 × 50 pixel images (e.g. circles, ellipses, squares and rectangles) projected on to the receptive field of a 7 × 7 retina. They first learn to do prototype matching and then entry-level naming for the four kinds of stimuli, grounding their names directly in the input patterns via hidden-unit representations ('sensorimotor toil'). We show that a higher-level categorization (e.g. 'symmetric' versus 'asymmetric') can be learned in two very different ways: either (1) directly from the input, just as with the entry-level categories (i.e. by toil); or (2) indirectly, from Boolean combinations of the grounded category names in the form of propositions describing the higher-order category ('symbolic theft'). We analyse the architectures and input conditions that allow grounding (in the form of compression/separation in internal similarity space) to be 'transferred' in this second way from directly grounded entry-level category names to higher-order category names. Such hybrid models have implications for the evolution and learning of language.
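The toil-versus-theft distinction in this abstract can be sketched very compactly: once entry-level names are grounded, a higher-order category needs only a Boolean proposition over those names, never the raw pixels. The mapping below (circles and squares counting as 'symmetric') is an illustrative assumption standing in for the paper's actual propositions, and the trivial naming function stands in for the trained net.

```python
def entry_level_name(stimulus):
    # Stand-in for the trained net's entry-level naming stage, which in
    # the paper grounds names directly in 50 × 50 pixel images ("toil").
    return stimulus["shape"]  # "circle", "ellipse", "square", "rectangle"

def symbolic_theft(name):
    # Higher-order category as a Boolean combination of grounded names
    # (assumed rule: symmetric := circle OR square) — no pixel access.
    return "symmetric" if name in {"circle", "square"} else "asymmetric"

stimuli = [{"shape": s} for s in ("circle", "ellipse", "square", "rectangle")]
labels = [symbolic_theft(entry_level_name(s)) for s in stimuli]
print(labels)  # ['symmetric', 'asymmetric', 'symmetric', 'asymmetric']
```

The point of the sketch is that `symbolic_theft` consults only symbols already grounded by the toil stage, which is why the paper can speak of grounding being 'transferred' to the higher-order name.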
1999
Language and the acquisition of implicit and explicit knowledge: a pilot study using neural networks
Cognitive Systems 5(2):148-165, 1999