Immersive interfaces for building parameterized motion databases
IVA'12 Proceedings of the 12th international conference on Intelligent Virtual Agents
While interactive virtual humans are becoming widely used in education, training and the delivery of instructions, building the animations required by such interactive characters in a given scenario remains a complex and time-consuming task. A key problem is that most systems controlling virtual humans rely on pre-defined animations, which must be rebuilt by skilled animators specifically for each scenario. To address this problem, this paper proposes a framework based on the direct demonstration of motions via a simplified and easy-to-wear set of motion capture sensors. The proposed system integrates motion segmentation, clustering and interactive motion blending in order to provide a seamless interface for programming motions by demonstration.
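The motion blending step mentioned in the abstract can be illustrated with a common parameterized-blending scheme: each demonstrated example motion is associated with a point in a parameter space (for instance, the target location of a reaching motion), and a new motion for an arbitrary parameter value is synthesized as a weighted combination of the examples. The sketch below uses inverse-distance weighting and a linear pose average; this is an illustrative assumption, not the specific interpolation method of the paper (the function names `blend_weights` and `blend_poses` are hypothetical).

```python
import numpy as np

def blend_weights(param, example_params, eps=1e-8):
    """Inverse-distance blending weights over example motions.

    param:          query point in the motion parameter space
    example_params: (n, d) array, one parameter point per example motion
    Returns n normalized weights summing to 1.
    """
    d = np.linalg.norm(example_params - np.asarray(param), axis=1)
    if np.any(d < eps):
        # query coincides with an example: reproduce it exactly
        w = (d < eps).astype(float)
    else:
        w = 1.0 / d  # closer examples receive larger weights
    return w / w.sum()

def blend_poses(weights, example_poses):
    """Weighted combination of corresponding poses from each example.

    example_poses: (n, k) array of pose vectors (e.g. joint angles for
    one frame of each time-aligned example). A full system would blend
    joint rotations with quaternion interpolation; a linear average is
    shown here for simplicity.
    """
    return np.tensordot(weights, example_poses, axes=1)
```

For example, with three reaching demonstrations parameterized by 2D target positions, querying at one of the demonstrated targets returns that demonstration unchanged, while intermediate targets yield smoothly varying blends.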