The Rapport 1.0 system is a set of components for investigating how to build rapport with virtual humans using audio and visual cues from the human participant. It has been used in several user studies here at ICT. It consists of five components:
*core\rapport\audio-laun - An audio detection app that listens to the microphone for key signals such as loudness and backchannel feedback, and sends out vhmsgs for whatever it detects.
*core\rapport\gesturedetection - An app that communicates with Watson and, based on the data received, detects head nods, head shakes, and other features. It sends out vhmsgs for the features it detects.
*core\Watson - The vision system; it uses a webcam or a FireWire stereo webcam.
*core\rapport\response - An app that receives the vhmsgs from ‘audio-laun’ and ‘gesturedetection’ and, based on rules in the given config file, sends out vhmsgs to control the character’s behavior.
*core\rapport\soundwoz - A helper app for testing ‘response’, to make sure it is working correctly and the rules are set up properly.
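To illustrate the kind of mapping ‘response’ performs, here is a minimal sketch in Python: detected features (as reported by ‘audio-laun’ and ‘gesturedetection’) are matched against rules loaded from a config file and mapped to behavior messages for the character. The rule-file syntax, feature names, and behavior names below are invented for illustration; they are not the actual vhmsg vocabulary or config format used by Rapport 1.0.

```python
def load_rules(lines):
    """Parse hypothetical 'feature -> behavior' rule lines into a lookup table.

    Blank lines and '#' comments are skipped.
    """
    rules = {}
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        feature, behavior = (part.strip() for part in line.split("->"))
        rules[feature] = behavior
    return rules


def respond(rules, feature):
    """Return the behavior message for a detected feature, or None if no rule matches."""
    return rules.get(feature)


if __name__ == "__main__":
    # Hypothetical rule file contents.
    config = [
        "# rapport response rules (illustrative only)",
        "head_nod -> agent_nod",
        "loudness_high -> agent_attend",
    ]
    rules = load_rules(config)
    print(respond(rules, "head_nod"))  # agent_nod
```

In the real system this lookup would be driven by incoming vhmsgs rather than function calls, and ‘soundwoz’ would be used to inject test messages and confirm the rules fire as expected.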
http://vhtoolkit.ict.usc.edu/index.php/Rapport