James Pustejovsky, Professor of Computer Science, Language and Linguistics, Brandeis University

The demand for more sophisticated human-computer interactions is rapidly increasing, as users become accustomed to conversation-like interactions with their devices. In this paper, we examine this changing landscape in the context of human-machine interaction in a shared workspace, where people and machines collaborate to achieve a common goal. In our prototype system, people and avatars cooperate to build blocks world structures through the interaction of language, gesture, vision, and action, providing a platform for studying the computational issues involved in multimodal communication. To establish elements of the common ground in discourse between speakers, we have created an embodied 3D simulation that enables both the generation and interpretation of multiple modalities, including language, gesture, and the visualization of objects moving and agents acting in their environment. The simulation is built on the modeling language VoxML, which encodes objects with rich semantic typing and action affordances, and encodes actions themselves as multimodal programs, enabling contextually salient inferences and decisions in the environment. We illustrate this with a walk-through of multimodal communication in a shared task.
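To make the VoxML encoding concrete, the following is a minimal Python sketch of how a "block" object and a "put on" action might be represented. The attribute names, affordance notation, and program structure here are illustrative simplifications loosely patterned on published VoxML examples; they are not the actual VoxML schema.

```python
from dataclasses import dataclass, field

# Illustrative sketch of a VoxML-style object encoding ("voxeme").
# Attribute names and the affordance notation are simplified
# approximations for exposition, not the real VoxML specification.

@dataclass
class Voxeme:
    pred: str                    # lexical predicate, e.g. "block"
    head: str                    # geometric form primitive
    concavity: str               # surface property, e.g. "flat"
    rotational_sym: list = field(default_factory=list)
    affordances: list = field(default_factory=list)  # habitat-conditioned actions

# An object voxeme: a block is a rigid cuboid whose flat top surface
# affords stacking and whose body affords grasping.
block = Voxeme(
    pred="block",
    head="rectangular_prism",
    concavity="flat",
    rotational_sym=["X", "Y", "Z"],
    affordances=[
        "H[1] -> [grasp(agent, this)]",  # graspable in its default habitat
        "H[2] -> [put(x, on(this))]",    # top surface supports placement
    ],
)

# An action encoded as a multimodal program: "put x on y" pairs the
# symbolic steps with the motion to be visualized in the simulation.
put_on = {
    "pred": "put_on",
    "args": ["agent", "x", "y"],
    "body": [
        "grasp(agent, x)",
        "move(x, top_surface(y))",
        "release(agent, x)",
    ],
}

if __name__ == "__main__":
    print(block.pred, block.affordances)
    print(put_on["pred"], put_on["body"])
```

In this sketch, the habitat labels (H[1], H[2]) stand in for the contextual conditions under which an affordance is available, which is what lets the simulation make contextually salient inferences about what can be done with an object in its current configuration.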
