A Combined Semantic and Motion Capture Database for Real-Time Sign Language Synthesis
Over the past decade, motion capture data has become a popular research tool, and motion databases have grown exponentially. Indexing, querying, and retrieving these data have consequently become more difficult, prompting innovative approaches to exploiting such databases. Our aim is to adapt these approaches to virtual agents signing in French Sign Language (LSF), taking into account the semantic information implicitly contained in language data. We therefore structure our database as two autonomous units, taking advantage of a different indexing method within each. This allows us to efficiently retrieve captured motions and produce LSF animations. We describe our methods for querying motions in the semantic database, computing transitory segments between concatenated signs, and producing realistic animations of a virtual LSF signer.
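To make the idea of transitory segments between concatenated signs concrete, the sketch below interpolates from the final pose of one captured sign clip to the initial pose of the next using a smoothstep easing profile. The clip format (frames x joints x 3D positions), the transition length, and the easing curve are illustrative assumptions, not the system described in the paper.

```python
# Hypothetical sketch: splicing two captured sign clips with a generated
# transitory segment. The data layout and easing profile are assumptions.
import numpy as np


def transition_segment(clip_a, clip_b, n_frames=15):
    """Interpolate from the last pose of clip_a to the first pose of clip_b."""
    pose_a = clip_a[-1]            # final pose of the outgoing sign
    pose_b = clip_b[0]             # initial pose of the incoming sign
    t = np.linspace(0.0, 1.0, n_frames)
    ease = 3 * t**2 - 2 * t**3     # smoothstep easing to avoid velocity jumps
    return np.array([(1 - w) * pose_a + w * pose_b for w in ease])


def concatenate_signs(clip_a, clip_b, n_transition=15):
    """Concatenate two sign clips with a computed transition in between."""
    return np.concatenate(
        [clip_a, transition_segment(clip_a, clip_b, n_transition), clip_b]
    )


if __name__ == "__main__":
    # Two toy clips: 40 and 60 frames, 20 joints, 3D joint positions.
    rng = np.random.default_rng(0)
    sign_1 = rng.normal(size=(40, 20, 3))
    sign_2 = rng.normal(size=(60, 20, 3))
    sequence = concatenate_signs(sign_1, sign_2)
    print(sequence.shape)          # (40 + 15 + 60, 20, 3)
```

In practice, rotational data would typically be blended with quaternion interpolation rather than the linear blend of joint positions shown here; this example only illustrates the overall concatenation-with-transition structure.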