Automated Generation of Emotive Virtual Humans
Emotive virtual humans (VHs) are important for affective interactions with embodied conversational agents. However, creating emotive VHs requires significant time and resources; for example, the VHs in movies and video games require teams of animators and months of work. VHs can also be imbued with emotion through appraisal theory methods, which apply psychology-based models to generate emotions by evaluating external events against the VH's goals and beliefs. Evaluating these events, however, requires manual tagging or natural language understanding. As an alternative, we propose tagging VH responses with emotions using textual affect sensing. The method developed by Neviarouskaya et al. uses syntactic parses and a database of words with associated emotion intensities. We use this database, and because the emotions are associated with specific words, we can combine them with audio timing information to generate lip-synched facial expressions. Our approach, AutoEmotion, automatically adds basic emotions to VHs without manual animation, tagging, or natural language understanding.
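
To make the word-level pipeline concrete, the following is a minimal Python sketch, under our own assumptions, of how per-word emotion lookups might be combined with audio timing to produce expression keyframes. The database entries, helper names, and Keyframe structure are hypothetical illustrations, not the paper's implementation; a real system would use the full Neviarouskaya et al. lexicon and a forced aligner for word timings.

from dataclasses import dataclass

# Hypothetical word -> (emotion, intensity) entries standing in for a
# full affect lexicon such as that of Neviarouskaya et al.
EMOTION_DB = {
    "wonderful": ("joy", 0.9),
    "afraid":    ("fear", 0.8),
    "sorry":     ("sadness", 0.6),
}

@dataclass
class Keyframe:
    time: float       # seconds into the audio
    emotion: str      # basic emotion label
    intensity: float  # 0..1 blend weight for the facial expression

def emotion_keyframes(words, timings):
    """Combine word-level emotions with audio timing information.

    words   -- tokenized VH response, e.g. ["I", "am", "so", "sorry"]
    timings -- start time (s) of each word, e.g. from forced alignment
    """
    keyframes = []
    for word, start in zip(words, timings):
        entry = EMOTION_DB.get(word.lower())
        if entry is not None:
            emotion, intensity = entry
            # Each emotive word yields a time-stamped expression keyframe,
            # which can be blended with the lip-synch animation.
            keyframes.append(Keyframe(start, emotion, intensity))
    return keyframes

if __name__ == "__main__":
    words = ["I", "am", "so", "sorry"]
    timings = [0.00, 0.25, 0.45, 0.70]  # assumed per-word start times
    for kf in emotion_keyframes(words, timings):
        print(f"{kf.time:.2f}s: {kf.emotion} @ {kf.intensity}")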