Body movements for virtual actors
This paper proposes two simple and robust algorithmic solutions to two frequently encountered problems in inhabited virtual heritage worlds: a) the 'still-frozen' behavior of virtual actors once their predefined scripted animation has finished, and b) their real-time rendering, which is most often based on non-physically-principled local illumination models. Our framework thus provides a) a procedural keyframe generator for realistic idle motions blended with the virtual actor's scripted behavior, and b) an extension of precomputed radiance transfer to multi-segmented hierarchies, enabling real-time global illumination. On the one hand, the visual discrepancy between the actor's dynamic illumination model and the environment's static high-dynamic-range light maps is greatly reduced; on the other hand, once the virtual actors complete their speech, they continue to perform subtle, realistic body motions in a non-repetitive manner while waiting for their next verse, instead of freezing in a specific pose. Our case study on verses of ancient Greek tragedies in the virtually restituted ancient theatre of Aspendos illustrates the above integrated algorithms and overall methodology.
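The idle-motion idea sketched in the abstract, procedurally generated offsets blended onto the last scripted pose so the actor never freezes, can be illustrated as follows. This is not the paper's implementation; it is a minimal sketch under assumed conventions (a pose as a joint-to-angle dictionary, two incommensurate sinusoids as a stand-in for the procedural keyframe generator, and hypothetical function names `idle_offset` and `blend_pose`):

```python
import math

def idle_offset(t, frequency=0.25, amplitude=2.0, phase=0.0):
    """Small pseudo-periodic joint-angle offset (degrees) for idle sway.

    Summing two sinusoids with incommensurate frequencies avoids a
    visibly repeating loop (a stand-in for the paper's generator).
    """
    return amplitude * (0.6 * math.sin(2 * math.pi * frequency * t + phase)
                        + 0.4 * math.sin(2 * math.pi * frequency * 1.7 * t + 0.5 * phase))

def blend_pose(scripted_pose, t, t_end, fade=1.0):
    """Blend the final scripted pose with procedural idle offsets.

    scripted_pose: dict joint name -> angle (degrees) at the end of the clip.
    t: current time (s); t_end: time the scripted clip finished.
    fade: seconds over which idle motion ramps in, avoiding a visible pop.
    """
    if t <= t_end:
        return dict(scripted_pose)      # scripted animation still playing
    w = min((t - t_end) / fade, 1.0)    # ease idle motion in over `fade` s
    return {joint: angle + w * idle_offset(t, phase=float(i))
            for i, (joint, angle) in enumerate(sorted(scripted_pose.items()))}
```

Because the blend weight is zero at the moment the scripted clip ends, the transition from scripted to idle motion is continuous, which is the key requirement the abstract's "blended with the virtual actor's scripted behavior" implies.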