Modeling Emotional Expressions as Sequences of Behaviors
In this paper we present a system that enables a virtual character to display multimodal sequential expressions, i.e., expressions composed of different signals that are partially ordered in time and belong to different nonverbal communicative channels. The system consists of a language for describing such expressions from real data and an algorithm that uses these descriptions to automatically generate emotional displays. We explain in detail the process of creating multimodal sequential expressions, from the annotation of real data to the synthesis of the behavior.
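To make the notion of a multimodal sequential expression concrete, the following is a minimal sketch, under assumed names, of how such an expression might be represented: signals on different nonverbal channels, each with an onset and duration, whose temporal order forms the sequence. None of these class or field names come from the paper's specification language; they are purely illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class Signal:
    # Hypothetical representation of one behavior on one channel.
    channel: str          # e.g. "face", "gaze", "gesture", "head"
    name: str             # behavior label, e.g. "raise_eyebrows"
    start: float          # onset, in seconds from expression start
    duration: float       # how long the signal lasts

    @property
    def end(self) -> float:
        return self.start + self.duration

@dataclass
class SequentialExpression:
    # An emotional display composed of partially ordered signals
    # spread across several nonverbal channels.
    emotion: str
    signals: list = field(default_factory=list)

    def add(self, signal: Signal) -> None:
        self.signals.append(signal)

    def timeline(self) -> list:
        # A total order by onset, consistent with the partial
        # temporal order of the annotated signals.
        return sorted(self.signals, key=lambda s: s.start)

# Illustrative example: a possible display of embarrassment
expr = SequentialExpression("embarrassment")
expr.add(Signal("gaze", "look_away", start=0.0, duration=1.2))
expr.add(Signal("face", "smile", start=0.3, duration=1.0))
expr.add(Signal("head", "bow_head", start=0.5, duration=0.8))

for s in expr.timeline():
    print(f"{s.start:.1f}s {s.channel}: {s.name}")
```

A generation algorithm could then walk such a timeline and trigger each signal on the character's animation system at its onset; the sketch above only captures the data structure, not the scheduling constraints the paper's language would express.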