Les Gasser
2010
Lingua 120(8):2061-2079, 2010
Sociolinguistic studies have demonstrated that centrally-connected and peripheral members of social networks can both propel and impede the spread of linguistic innovations. We use agent-based computer simulations to investigate the dynamic properties of these network roles in a large social influence network, in which diffusion is modeled as the probabilistic uptake of one of several competing variants by agents of unequal social standing. We find that highly-connected agents, structural equivalents of leaders in empirical studies, advance on-going change by spreading competing variants. Isolated agents, or loners, holding on to existing variants are safe-keepers of variants considered old or new depending on the current state of the rest of the population. Innovations spread following a variety of S-curves and stabilize as norms in the network only if two conditions are simultaneously satisfied: (1) the network comprises extremely highly connected and very isolated agents, and (2) agents are biased to pay proportionally more attention to better connected, or popular, neighbors. These findings reconcile competing models of individual network roles in the selection and propagation process of language change, and support Bloomfield's hypothesis that the spread of linguistic innovations in heterogeneous social networks depends upon communication density and relative prestige.
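To make the diffusion mechanism concrete, here is a minimal sketch (not the authors' simulation) of probabilistic variant uptake on a scale-free network with a bias toward popular neighbors; all names and parameter values are illustrative, and it assumes the networkx library is available.

```python
import random
import networkx as nx

def simulate_diffusion(n_agents=200, n_variants=3, steps=100, seed=0):
    """Competing variants spreading on a scale-free network, with agents
    biased toward better-connected ('popular') neighbours. A rough sketch of
    the kind of model described above, not the authors' implementation."""
    rng = random.Random(seed)
    g = nx.barabasi_albert_graph(n_agents, 2, seed=seed)   # hubs plus peripheral nodes
    variant = {node: rng.randrange(n_variants) for node in g}

    for _ in range(steps):
        for node in g:
            neighbours = list(g[node])
            if not neighbours:
                continue
            # popularity bias: weight each neighbour by its degree
            weights = [g.degree(nb) for nb in neighbours]
            model = rng.choices(neighbours, weights=weights, k=1)[0]
            variant[node] = variant[model]                  # probabilistic uptake

    return [sum(1 for v in variant.values() if v == k) for k in range(n_variants)]

print(simulate_diffusion())   # final counts of each variant in the population
```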
Connection Science 22(1):1-24, 2010
We study the emergence of shared representations in a population of agents engaged in a supervised classification task, using a model called the classification game. We connect languages with tasks by treating the agents' classification hypothesis space as an information channel. We show that by learning through the classification game, agents can implicitly perform complexity regularisation, which improves generalisation. Improved generalisation also means that the languages that emerge are well adapted to the given task. The improved language-task fit springs from the interplay of two opposing forces: the dynamics of collective learning impose a preference for simple representations, while the intricacy of the classification task imposes a pressure towards representations that are more complex. The push-pull of these two forces results in the emergence of a shared representation that is simple but not too simple. Our agents use artificial neural networks to solve the classification tasks they face, and a simple counting algorithm to learn a language as a form-meaning mapping. We present several experiments to demonstrate that both compositional and holistic languages can emerge in our system. We also demonstrate that the agents avoid overfitting on noisy data, and can learn some very difficult tasks through interaction, which they are unable to learn individually. Further, when the agents use simple recurrent networks to solve temporal classification tasks, we see the emergence of a rudimentary grammar, which does not have to be explicitly learned.
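As an illustration of the counting-style form-meaning learner mentioned above, the sketch below keeps co-occurrence counts and produces or interprets by taking the maximum; the class and method names are hypothetical, not the paper's API, and the neural-network classification component is omitted.

```python
from collections import defaultdict

class CountingLexicon:
    """Form-meaning mapping learned by co-occurrence counting (a sketch of the
    'simple counting algorithm' referred to in the abstract)."""

    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))   # meaning -> form -> count

    def observe(self, meaning, form):
        self.counts[meaning][form] += 1

    def produce(self, meaning):
        """Speak: choose the form most often paired with this meaning."""
        forms = self.counts[meaning]
        return max(forms, key=forms.get) if forms else None

    def interpret(self, form):
        """Listen: choose the meaning whose counts most favour this form."""
        best, best_count = None, -1
        for meaning, forms in self.counts.items():
            if forms.get(form, 0) > best_count:
                best, best_count = meaning, forms.get(form, 0)
        return best

# two agents converging on a shared label for an internal category
a, b = CountingLexicon(), CountingLexicon()
for _ in range(10):
    a.observe("cat-0", "ba")
    b.observe("cat-0", "ba")
print(a.produce("cat-0"), b.interpret("ba"))
```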
2009
Adaptive Behavior 17(3):213-235, 2009
The iterated classification game (ICG) combines the classification game with the iterated learning model (ILM) to create a more realistic model of the cultural transmission of language through generations. It includes both learning from parents and learning from peers. Further, it eliminates some of the chief criticisms of the ILM: that it does not study grounded languages, that it does not include peer learning, and that it builds in a bias for compositional languages. We show that, over the span of a few generations, a stable linguistic system emerges that can be acquired very quickly by each generation, is compositional, and helps the agents to solve the classification problem with which they are faced. The ICG also leads to a different interpretation of the language acquisition process. It suggests that the role of parents is to initialize the linguistic system of the child in such a way that subsequent interaction with peers results in rapid convergence to the correct language.
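A rough skeleton of the two-phase transmission loop (learning from parents, then from peers) might look like the following; the "language" is reduced to a bare meaning-form dictionary and the classification task is omitted, so this only illustrates the generational structure, not the ICG itself.

```python
import random

def iterated_transmission(meanings, generations=5, pop_size=4, peer_rounds=100, seed=0):
    """Sketch of a two-phase cultural transmission loop: children are
    initialised from parents, then align with peers before becoming the
    next generation of parents."""
    rng = random.Random(seed)
    forms = ["ba", "di", "ku", "mo"]
    parents = [{m: rng.choice(forms) for m in meanings} for _ in range(pop_size)]

    for _ in range(generations):
        # phase 1: vertical transmission -- each child copies a parent's mapping
        children = [dict(rng.choice(parents)) for _ in range(pop_size)]
        # phase 2: peer learning -- a hearer adopts a random peer's form for a meaning
        for _ in range(peer_rounds):
            speaker, hearer = rng.sample(range(pop_size), 2)
            m = rng.choice(meanings)
            children[hearer][m] = children[speaker][m]
        parents = children
    return parents

print(iterated_transmission(meanings=["m0", "m1", "m2"])[0])
```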
2008
Language Scaffolding as a Condition for Growth in Linguistic Complexity
Proceedings of the 7th International Conference on the Evolution of Language, pages 187-194, 2008
It is widely assumed that, over their evolutionary history, languages increased in complexity from simple signals to protolanguages to complex syntactic structures. This paper investigates processes for increasing linguistic complexity while maintaining communicability across a population. We assume that linguistic communicability is important for reliably exchanging information critical for coordination-based tasks. Interaction, needed for learning others' languages and converging to communicability, bears a cost. There is a threshold of interaction (learning) effort beyond which convergence either doesn't pay or is practically impossible. Our central findings, established mainly through simulation, are: 1) There is an effort-dependent frontier of tractability for agreement on a language that balances linguistic complexity against linguistic diversity in a population. For a given maximum convergence effort, either a) languages must be simpler, or b) their initial average communicability must be greater. Thus, if either convergence cost or high average communicability over time is important, then even agents who have the capability for using complex languages must not invent them from the start; they must start simple and grow. 2) A staged approach to increasing complexity, in which agents initially converge on simple languages and then use these to scaffold greater complexity, can outperform initially-complex languages in terms of overall effort to convergence. This performance gain improves with more complex final languages.
Simple, but not too Simple: Learnability vs. Functionality in Language Evolution
Proceedings of the 7th International Conference on the Evolution of Language, pages 299-306, 2008
We show that artificial language evolution involves the interplay of two opposing forces: pressure towards simple representations imposed by the dynamics of collective learning, and pressure towards complex representations imposed by the requirements of agents' tasks. The push-pull of these two forces results in the emergence of a language that is balanced: simple but not too simple. We introduce the classification game to study the emergence of these balanced languages and their properties. Our agents use artificial neural networks to learn how to solve tasks, and a simple counting algorithm to simultaneously learn a language as a form-meaning mapping. We show that task-language coupling drives the simplicity-complexity balance, and that both compositional and holistic languages can emerge.
2007
Anticipatory Behavior in Adaptive Learning Systems, LNAI/LNCS, 2007
We review some of the main theories about how language emerged. We suggest that including the study of the emergence of artificial languages, in simulation settings, allows us to ask a more general question, namely, what are the minimal initial conditions for the emergence of language? This is a very important question from a technological viewpoint, because it is very closely tied to questions of intelligence and autonomy. We identify anticipation as being a key underlying computational principle in the emergence of language. We suggest that this is in fact present implicitly in many of the theories in contention today. Focused simulations that address precise questions are necessary to isolate the roles of the minimal initial conditions for the emergence of language.
2006
AAMAS, pages 1378-1380, 2006
We study how decentralized agents can develop shared vocabularies without global coordination. Answering this question can help us understand the emergence of many communication systems, from bacterial communication to human languages, as well as helping to design algorithms for supporting self-organizing information systems such as social tagging or ad-word systems for the web. We introduce a formal communication model in which senders and receivers can adapt their communicative behaviors through a type of win-stay lose-shift adaptation strategy. We find by simulations and analysis that for a given number of meanings, there exists a threshold for the number of words below which the agents can't converge to a shared vocabulary. Our finding implies that for a communication system to emerge, agents must have the capability of inventing a minimum number of words or sentences. This result also rationalizes the necessity for syntax, as a tool for generating unlimited sentences.
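The sender/receiver adaptation can be sketched roughly as follows, with win-stay lose-shift applied to production and interpretation tables; the update details and parameter names are assumptions for illustration, not the paper's exact model.

```python
import random

def win_stay_lose_shift(n_agents=20, n_meanings=5, n_words=8, rounds=20000, seed=0):
    """Sketch of a sender/receiver vocabulary game: each agent keeps a
    meaning->word production table and a word->meaning interpretation table,
    keeping them on success and shifting them on failure."""
    rng = random.Random(seed)
    prod = [[rng.randrange(n_words) for _ in range(n_meanings)] for _ in range(n_agents)]
    interp = [[rng.randrange(n_meanings) for _ in range(n_words)] for _ in range(n_agents)]

    successes = 0
    for _ in range(rounds):
        s, r = rng.sample(range(n_agents), 2)
        meaning = rng.randrange(n_meanings)
        word = prod[s][meaning]
        if interp[r][word] == meaning:
            successes += 1                                  # win: stay
        else:
            prod[s][meaning] = rng.randrange(n_words)       # lose: sender shifts
            interp[r][word] = meaning                       # lose: receiver re-maps the word
    return successes / rounds   # communicative success rate over the run

print(win_stay_lose_shift())
```

Varying n_words relative to n_meanings in such a sketch is one way to probe the convergence threshold the abstract describes.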
SAB06, pages 804-815, 2006
An important problem for societies of natural and artificial animals is to converge upon a similar language in order to communicate. We call this the language convergence problem. In this paper we study the complexity of finding the optimal (in terms of time to convergence) algorithm for language convergence. We map the language convergence problem to instances of a Decentralized Partially Observable Markov Decision Process to show that the complexity can vary from P-complete to NEXP-complete based on the scenario being studied.
Symbol Grounding and Beyond: Proceedings of the Third International Workshop on the Emergence and Evolution of Linguistic Communication, pages 180-191, 2006
We suggest that the primary motivation for an agent to construct a symbol-meaning mapping is to solve a task. The meaning space of an agent should be derived from the tasks that it faces during the course of its lifetime. We outline a process in which agents learn to solve multiple tasks and extract a store of "cumulative knowledge" that helps them to solve each new task more quickly and accurately. This cumulative knowledge then forms the ontology or meaning space of the agent. We suggest that by grounding symbols to this extracted cumulative knowledge agents can gain a further performance benefit because they can guide each other's learning process. In this version of the symbol grounding problem meanings cannot be directly communicated because they are internal to the agents, and they will be different for each agent. Also, the meanings may not correspond directly to objects in the environment. The communication process can also allow a symbol-meaning mapping that is dynamic. We posit that these properties make this version of the symbol grounding problem realistic and natural. Finally, we discuss how symbols could be grounded to cumulative knowledge via a situation where a teacher selects tasks for a student to perform.
SAB06, pages 765-776, 2006
We study the role of the agent interaction topology in distributed language learning. In particular, we utilize the replicator-mutator framework of language evolution for the creation of an emergent agent interaction topology that leads to quick convergence. In our system, it is the links between agents that are treated as the units of selection and replication, rather than the languages themselves. We use the Noisy Preferential Attachment (NPA) algorithm, which is a special case of the replicator-mutator process, for generating the topology. The advantage of the NPA algorithm is that, in the short term, it produces a scale-free interaction network, which is helpful for rapid exploration of the space of languages present in the population. A change of parameter settings then ensures convergence, because it guarantees the emergence of a single dominant node, which is almost always chosen as the teacher.
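A minimal sketch of a noisy preferential attachment process is given below: each new link attaches to an agent in proportion to how many links it already has, except for an occasional uniformly random choice playing the role of mutation. The parameters and the bare degree-count representation are illustrative, not the paper's implementation.

```python
import random

def noisy_preferential_attachment(n_agents=50, n_links=2000, noise=0.1, seed=0):
    """Links attach preferentially to already well-connected agents, with a
    small probability `noise` of attaching uniformly at random (the
    'mutation' step in the replicator-mutator view)."""
    rng = random.Random(seed)
    degree = [1] * n_agents                                  # every agent starts with weight 1

    for _ in range(n_links):
        if rng.random() < noise:
            target = rng.randrange(n_agents)                 # random (mutation) attachment
        else:
            target = rng.choices(range(n_agents), weights=degree, k=1)[0]
        degree[target] += 1

    return sorted(degree, reverse=True)

# with low noise a few hubs dominate; reducing the noise further drives the
# process toward a single dominant node that would serve as the teacher
print(noisy_preferential_attachment()[:5])
```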
2002
Proceedings of the First International Joint Conference on Autonomous Agents and Multi-Agent Systems, pages 362-369, 2002
To create multi-agent systems that are both adaptive and open, agents must collectively learn to generate their own concepts, interpretations, and even languages actively in an online fashion. The issue is that there is no pre-existing global concept to be learned; instead, agents are in effect collectively designing a concept that is evolving as they exchange information. This paper presents a framework of mutual online concept learning (MOCL) in a shared world. MOCL extends classical online concept learning from the single-agent to the multi-agent setting. Based on the Perceptron algorithm, we design a specific MOCL algorithm, called the mutual perceptron convergence algorithm, which can converge within a finite number of mistakes under some conditions. Analysis of the convergence conditions shows that the possibility of convergence depends on the number of participating agents and the quality of the instances they produce. Finally, we point out applications of MOCL and the convergence algorithm to the formation of linguistic knowledge in the form of a dynamically generated shared vocabulary and grammar structure for multiple agents.
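For illustration, a stripped-down version of mutual perceptron learning between two agents might look like this; the update scheme shown (the listener performs a standard perceptron update whenever it disagrees with the speaker) is an assumption for the sketch, not necessarily the paper's exact mutual perceptron convergence algorithm.

```python
import random

def sign(x):
    return 1 if x >= 0 else -1

def mutual_perceptron(dim=5, rounds=2000, seed=0):
    """Two agents with linear classifiers take turns labelling instances;
    the listening agent does a perceptron update on each disagreement,
    so their concepts are pulled toward a shared boundary."""
    rng = random.Random(seed)
    w = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(2)]

    mistakes = 0
    for t in range(rounds):
        speaker, listener = t % 2, 1 - (t % 2)
        x = [rng.uniform(-1, 1) for _ in range(dim)]
        y = sign(sum(wi * xi for wi, xi in zip(w[speaker], x)))        # speaker's label
        y_hat = sign(sum(wi * xi for wi, xi in zip(w[listener], x)))   # listener's guess
        if y_hat != y:                                                 # disagreement
            mistakes += 1
            w[listener] = [wi + y * xi for wi, xi in zip(w[listener], x)]
    return mistakes

print(mutual_perceptron())   # total disagreements observed during the run
```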