Journal :: Artificial Intelligence
2005
Artificial Intelligence 167(1-2):1-12, 2005
How does language relate to the non-linguistic world? If an agent is able to communicate linguistically and is also able to directly perceive and/or act on the world, how do perception, action, and language interact with and influence each other? Such questions are surely ...
Artificial Intelligence 167(1-2):206-242, 2005
This paper describes a new model of the evolution and induction of compositional structures in the language of a population of (simulated) robotic agents. The model builds on recent work in language evolution modelling, including the iterated learning model, the language game model and the Talking Heads experiment, and further adopts techniques recently developed in the field of grammar induction. The paper reports on a number of experiments performed with this model and identifies conditions under which compositional structures can emerge. It confirms previous findings that a transmission bottleneck serves as a pressure mechanism for the emergence of compositionality, and that a communication strategy of guessing the referents of utterances aids the development of qualitatively 'good' languages. In addition, the results show that the emerging languages reflect the structure of the world to a large extent, and that developing a semantics together with a competitive selection mechanism produces a faster emergence of compositionality than a predefined semantics without such a selection mechanism.
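The transmission-bottleneck effect the abstract describes can be illustrated with a toy simulation. This is not the paper's model (which involves grammar induction in robotic agents), just a minimal sketch under invented names: meanings are (color, shape) pairs, a compositional lexicon factors the signal into one word per feature, and a holistic lexicon memorises a whole signal per meaning. When a learner observes only a subset of meaning-signal pairs (the bottleneck), the compositional learner can generalise to unseen meanings while the holistic learner cannot.

```python
import itertools
import random

COLORS = ["red", "green", "blue"]
SHAPES = ["circle", "square", "triangle"]
MEANINGS = list(itertools.product(COLORS, SHAPES))

def holistic_language():
    # one arbitrary, unanalysable signal per whole meaning
    return {m: f"w{i}" for i, m in enumerate(MEANINGS)}

def compositional_language():
    # signal = color word + shape word, joined by "-"
    return {(c, s): f"{c[0]}-{s[0]}" for c, s in MEANINGS}

def learn(observed, compositional):
    if not compositional:
        return dict(observed)  # holistic learner: rote memorisation only
    # compositional learner: induce per-feature word maps, then generalise
    cmap, smap = {}, {}
    for (c, s), signal in observed.items():
        cword, sword = signal.split("-")
        cmap[c], smap[s] = cword, sword
    return {(c, s): f"{cmap[c]}-{smap[s]}"
            for c, s in MEANINGS if c in cmap and s in smap}

def transmit(lang, bottleneck, compositional):
    """One generation: the learner sees only `bottleneck` random pairs."""
    sample = dict(random.sample(sorted(lang.items()), bottleneck))
    return learn(sample, compositional)
```

With a bottleneck of 5 out of 9 meanings, the holistic learner's language covers exactly the 5 observed meanings, while the compositional learner typically recovers more, since any observed color can combine with any observed shape.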
2004
Artificial Intelligence 154(1-2):1-42, 2004
We consider the problem of linguistic agents that communicate with each other about a shared world. We develop a formal notion of a language as a set of probabilistic associations between form (lexical or syntactic) and meaning (semantic) that has general applicability. Using this notion, we define a natural measure of the mutual intelligibility, F(L,L'), between two agents, one using the language L and the other using L'. We then proceed to investigate three important questions within this framework: (1) Given a language L, what language L' maximizes mutual intelligibility with L? We find surprisingly that L' need not be the same as L and we present algorithms for approximating L' arbitrarily well. (2) How can one learn to optimally communicate with a user of language L when L is unknown at the outset and the learner is allowed a finite number of linguistic interactions with the user of L? We describe possible algorithms and calculate explicit bounds on the number of interactions needed. (3) Consider a population of linguistic agents that learn from each other and evolve over time. Will the community converge to a shared language and what is the nature of such a language? We characterize the evolutionarily stable states of a population of linguistic agents in a game-theoretic setting. Our analysis has significance for a number of areas in natural and artificial communication where one studies the design, learning, and evolution of linguistic communication systems.
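One common way to make the mutual-intelligibility measure concrete (a sketch of the general idea, not necessarily the paper's exact definition) represents a language as a row-stochastic meaning-to-form production matrix, derives a comprehension matrix by Bayes' rule with a uniform prior over meanings, and takes F(L, L') as the symmetrized probability that a uniformly chosen meaning is transmitted correctly between the two agents:

```python
import numpy as np

def comprehension(P):
    """Meaning-given-form matrix via Bayes' rule with a uniform prior.

    P is a meanings-by-forms production matrix; the result has shape
    forms-by-meanings."""
    col = P.sum(axis=0, keepdims=True)
    return (P / np.where(col == 0, 1, col)).T

def intelligibility(P_speaker, P_hearer):
    """Probability a uniformly random meaning is decoded correctly
    when the speaker produces with P_speaker and the hearer decodes
    with the comprehension matrix derived from P_hearer."""
    n_meanings, n_forms = P_speaker.shape
    Q = comprehension(P_hearer)
    return sum(P_speaker[m, f] * Q[f, m]
               for m in range(n_meanings)
               for f in range(n_forms)) / n_meanings

def F(P1, P2):
    """Symmetrized mutual intelligibility F(L, L')."""
    return 0.5 * (intelligibility(P1, P2) + intelligibility(P2, P1))
```

Under this formalization F(L, L) = 1 for a deterministic one-to-one language, and two languages that pair the same forms with disjoint meanings score 0, which gives the measure the intuitive endpoints one expects.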
1998
Artificial Intelligence 103(1-2):133-156, 1998
The paper proposes a set of principles and a general architecture that may explain how language and meaning originate and complexify in a group of physically grounded, distributed agents. An experimental setup is introduced for concretising and validating specific mechanisms based on these principles. The setup consists of two robotic heads that watch static or dynamic scenes and engage in language games, in which one robot describes to the other what it sees. The first experimental results, showing the emergence of distinctions, of a lexicon, and of primitive syntactic structures, are reported.
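The language games mentioned in the abstract can be sketched as a minimal guessing game in the spirit of this line of work. This is an illustrative toy, not the paper's architecture: the speaker names a topic (inventing a word if it has none), the hearer guesses which meaning in the shared context was intended, and the speaker's pointing feedback lets the hearer reinforce the word-meaning association, with lateral inhibition of competing synonyms. The score values and update constants are invented for the sketch.

```python
import itertools
import random

_new_word = (f"w{i}" for i in itertools.count())  # collision-free word supply

class Agent:
    def __init__(self):
        self.lexicon = {}  # meaning -> {word: score}

    def speak(self, meaning):
        words = self.lexicon.setdefault(meaning, {})
        if not words:
            words[next(_new_word)] = 0.5  # invent a word for a new meaning
        return max(words, key=words.get)

    def interpret(self, word, context):
        # guess the context meaning whose score for this word is highest
        best, best_score = random.choice(context), -1.0
        for m in context:
            s = self.lexicon.get(m, {}).get(word, 0.0)
            if s > best_score:
                best, best_score = m, s
        return best

    def update(self, meaning, word, success):
        words = self.lexicon.setdefault(meaning, {})
        delta = 0.1 if success else -0.1
        words[word] = min(1.0, max(0.0, words.get(word, 0.0) + delta))
        if success:  # lateral inhibition of competing synonyms
            for w in words:
                if w != word:
                    words[w] = max(0.0, words[w] - 0.1)

def play(speaker, hearer, context):
    topic = random.choice(context)
    word = speaker.speak(topic)
    guess = hearer.interpret(word, context)
    success = guess == topic
    speaker.update(topic, word, success)
    # the speaker points at the topic, so the hearer can associate the
    # word with the intended meaning whether or not its guess was right
    hearer.update(topic, word, True)
    return success
```

Played repeatedly over a small meaning set, communicative success rises quickly: once the hearer has seen each meaning named once, the pointing feedback makes subsequent guesses reliable.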