Jun Wang
2006
AAMAS, pages 1378-1380, 2006
We study how decentralized agents can develop shared vocabularies without global coordination. Answering this question can help us understand the emergence of many communication systems, from bacterial communication to human languages, and can also help us design algorithms for supporting self-organizing information systems such as social tagging or ad-word systems for the web. We introduce a formal communication model in which senders and receivers can adapt their communicative behaviors through a type of win-stay lose-shift adaptation strategy. We find through simulation and analysis that, for a given number of meanings, there exists a threshold on the number of words below which the agents cannot converge to a shared vocabulary. Our finding implies that for a communication system to emerge, agents must have the capability of inventing a minimum number of words or sentences. This result also rationalizes the necessity of syntax as a tool for generating unlimited sentences.
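The abstract does not spell out the adaptation rule or the interaction protocol; the sketch below is one plausible reading of a win-stay lose-shift vocabulary game, written to illustrate why too few words blocks convergence. The pairing scheme, the re-sampling rule, and all names and parameters (e.g. `simulate`, `n_words`) are illustrative assumptions, not the paper's model.

```python
import random

def simulate(n_agents=10, n_meanings=5, n_words=8, rounds=50000, seed=0):
    """Toy win-stay lose-shift vocabulary game (illustrative only).

    Each agent keeps a production map meaning -> word and an
    interpretation map word -> meaning, both initialised at random.
    On a successful exchange both parties keep their entries
    (win-stay); on a failure both re-sample them (lose-shift).
    """
    rng = random.Random(seed)
    prod = [[rng.randrange(n_words) for _ in range(n_meanings)]
            for _ in range(n_agents)]
    interp = [[rng.randrange(n_meanings) for _ in range(n_words)]
              for _ in range(n_agents)]

    for _ in range(rounds):
        s, r = rng.sample(range(n_agents), 2)       # sender, receiver
        meaning = rng.randrange(n_meanings)
        word = prod[s][meaning]
        if interp[r][word] == meaning:              # success: win-stay
            continue
        prod[s][meaning] = rng.randrange(n_words)   # failure: lose-shift
        interp[r][word] = rng.randrange(n_meanings)

    # Fraction of (sender, receiver, meaning) triples that now communicate.
    ok = total = 0
    for s in range(n_agents):
        for r in range(n_agents):
            if s == r:
                continue
            for m in range(n_meanings):
                ok += interp[r][prod[s][m]] == m
                total += 1
    return ok / total

if __name__ == "__main__":
    # With too few words relative to meanings, success rates stay low.
    for w in (3, 5, 10, 20):
        print(f"words={w:2d}  success={simulate(n_words=w):.3f}")
```

In this toy version a word count below the number of meanings caps the achievable success rate, which is consistent with (though far simpler than) the threshold phenomenon the paper reports.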
Computational Approaches to Linguistic Consensus
Graduate School of Library and Information Science, University of Illinois at Urbana-Champaign, 2006
The main question we ask is how a common language might come about in complex adaptive language systems comprising many agents. Our primary objective is to analyze and design complex language models so that a group of agents can converge on ...
2002
Proceedings of the First International Joint Conference on Autonomous Agents and Multi-Agent Systems, pages 362-369, 2002
To create multi-agent systems that are both adaptive and open, agents must collectively learn to generate their own concepts, interpretations, and even languages actively in an online fashion. The issue is that there is no pre-existing global concept to be learned; instead, agents are in effect collectively designing a concept that evolves as they exchange information. This paper presents a framework of mutual online concept learning (MOCL) in a shared world. MOCL extends classical online concept learning from the single-agent to the multi-agent setting. Based on the Perceptron algorithm, we design a specific MOCL algorithm, called the mutual perceptron convergence algorithm, which can converge within a finite number of mistakes under some conditions. Analysis of the convergence conditions shows that the possibility of convergence depends on the number of participating agents and the quality of the instances they produce. Finally, we point out applications of MOCL and the convergence algorithm to the formation of linguistic knowledge in the form of a dynamically generated shared vocabulary and grammar structure for multiple agents.
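As a rough illustration of the mutual-learning setup (this is not the paper's mutual perceptron convergence algorithm, whose exact update rule and convergence conditions are given in the paper), the sketch below has each agent hold its own linear concept, label instances with it when speaking, and apply a standard perceptron update when it disagrees as a listener. All names and parameters are hypothetical.

```python
import numpy as np

def mutual_perceptron(n_agents=3, dim=10, rounds=20000, seed=0):
    """Toy mutual online concept learning loop (illustrative sketch).

    In each round a randomly chosen speaker generates an instance,
    labels it with its own concept, and a randomly chosen listener
    applies the standard perceptron update whenever it disagrees.
    Whether the group converges depends on the agents' concepts and
    the instances they produce, echoing the conditions in the paper.
    """
    rng = np.random.default_rng(seed)
    w = rng.normal(size=(n_agents, dim))            # each agent's concept

    for _ in range(rounds):
        speaker, listener = rng.choice(n_agents, size=2, replace=False)
        x = rng.normal(size=dim)
        x /= np.linalg.norm(x)                      # bounded instances
        y = np.sign(w[speaker] @ x) or 1.0          # speaker's label
        if np.sign(w[listener] @ x) != y:           # mistake-driven update
            w[listener] += y * x

    # Pairwise agreement on fresh instances as a rough convergence proxy.
    test = rng.normal(size=(1000, dim))
    preds = np.sign(test @ w.T)
    return float(np.mean(preds[:, 0:1] == preds))

if __name__ == "__main__":
    print("mean agreement with agent 0:", round(mutual_perceptron(), 3))
```

The key design point this sketch tries to convey is that the "target concept" is not fixed: every agent is simultaneously a teacher (when producing labeled instances) and a learner (when updating on mistakes), so the group's concepts co-evolve rather than converge to an external ground truth.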