Situated Interaction with a Virtual Human
In Virtual Reality environments, real humans can meet virtual humans to collaborate on tasks. The agent Max is such a virtual human, providing the human user with a face-to-face collaboration partner in the SFB 360 construction tasks.
This paper describes how Max can assist by combining manipulative capabilities for assembly actions with conversational capabilities for mixed-initiative dialogue. During the interaction, Max employs speech, gaze, facial expression, and gesture and is able to initiate assembly actions. We present the underlying model of Max’s competences for managing situated interactions, and we show how the required faculties of perception, action, and cognition are realized and connected in his architecture.
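The paper itself does not include code; the following is a minimal, hypothetical Python sketch of how a perceive, deliberate, act loop connecting perception, cognition, and multimodal action could be organized. All class and method names (Percept, Action, SituatedAgent, perceive, deliberate, act) are illustrative assumptions and do not reflect Max's actual architecture.

```python
from __future__ import annotations
from dataclasses import dataclass


@dataclass
class Percept:
    """A sensed event, e.g. the user's speech, gaze, or a gesture."""
    modality: str   # "speech", "gaze", "gesture", ...
    content: str


@dataclass
class Action:
    """An output act: an utterance, a nonverbal behavior, or an assembly step."""
    kind: str       # "utterance", "gaze", "facial_expression", "assembly"
    payload: str


class SituatedAgent:
    """Hypothetical agent connecting perception, cognition, and action in one loop."""

    def __init__(self) -> None:
        self.beliefs: list[Percept] = []
        self.agenda: list[Action] = []

    def perceive(self, percepts: list[Percept]) -> None:
        # Integrate new sensory input into the agent's beliefs.
        self.beliefs.extend(percepts)

    def deliberate(self) -> None:
        # Decide whether to respond verbally or act on the assembly task.
        for p in self.beliefs:
            if p.modality == "speech":
                self.agenda.append(Action("utterance", f"Acknowledge: {p.content}"))
            elif p.modality == "gesture":
                self.agenda.append(Action("assembly", f"Attend to {p.content}"))
        self.beliefs.clear()

    def act(self) -> list[Action]:
        # Emit the scheduled multimodal actions (speech, gaze, gesture, assembly).
        actions, self.agenda = self.agenda, []
        return actions


if __name__ == "__main__":
    agent = SituatedAgent()
    agent.perceive([Percept("speech", "Please insert the screw."),
                    Percept("gesture", "pointing at the red bar")])
    agent.deliberate()
    for action in agent.act():
        print(action.kind, "->", action.payload)
```

This sketch only illustrates the control flow the abstract alludes to: sensory input is collected, deliberation maps it to conversational or assembly actions, and the resulting multimodal acts are emitted.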
On the basis of this paper, Anne Schumacher from the University of Potsdam prepared the following scientific presentation: Situated Interaction with a Virtual Human: Perception, Action, and Cognition.