The last few years have seen great maturation in the computational speed and control methods needed to portray 3D virtual humans suitable for real interactive applications. We first describe the state of the art, then focus on the particular approach taken at the University of Pennsylvania with the Jack system. Various aspects of real-time virtual humans are considered, such as their appearance and motion, interactive control, autonomous actions, gestures, attention, locomotion, and multiple individuals. The underlying architecture consists of a sense-control-act structure that permits reactive behaviors to adapt locally to the environment, and a PaT-Net (Parallel Transition Network) parallel finite-state machine controller that can drive virtual humans through complex tasks. We then argue for a deep connection between language and animation and describe current efforts to link them through two systems: the Jack Presenter and the JackMOO extension to LambdaMOO. Finally, we outline a parameterized action representation for mediating between language instructions and animated actions.
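The sense-control-act structure and PaT-Net controller described above are, at heart, finite-state machines stepped in parallel: each node executes an action, and arcs fire when sensed conditions hold. The following Python sketch illustrates only the general idea; the names (Node, PatNet, run_parallel) and the world dictionary are illustrative assumptions, not the Jack system's actual API.

```python
# Minimal sketch of a PaT-Net-style parallel finite-state machine.
# All names here are illustrative, not the Jack system's API.

class Node:
    def __init__(self, name, action, transitions):
        self.name = name
        self.action = action            # callable executed while in this node
        self.transitions = transitions  # list of (condition, next_node_name)

class PatNet:
    """One finite-state machine; several may run in parallel."""
    def __init__(self, nodes, start):
        self.nodes = {n.name: n for n in nodes}
        self.current = start

    def step(self, world):
        node = self.nodes[self.current]
        node.action(world)                   # act
        for condition, target in node.transitions:
            if condition(world):             # sense
                self.current = target        # control: take the transition
                break

def run_parallel(nets, world, ticks):
    # Each tick advances every net once, giving parallel behavior.
    for _ in range(ticks):
        for net in nets:
            net.step(world)

# Toy usage: walk forward until an obstacle is sensed, then stand.
world = {"obstacle_ahead": False, "position": 0}

walk = Node("walk",
            action=lambda w: w.update(position=w["position"] + 1),
            transitions=[(lambda w: w["obstacle_ahead"], "stand")])
stand = Node("stand",
             action=lambda w: None,
             transitions=[(lambda w: not w["obstacle_ahead"], "walk")])

run_parallel([PatNet([walk, stand], start="walk")], world, ticks=5)
print(world["position"])  # 5, since no obstacle ever appeared
```

Because each net only reads the shared world state and takes local transitions, reactive behaviors remain locally adaptive while higher-level task nets run alongside them.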
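The parameterized action representation mentioned at the end binds a language-level instruction to an animatable action by filling in agent, object, and manner slots. Below is one hedged way such a record might look in Python; every field name here is a hypothetical placeholder, since the actual schema is defined in the paper.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a parameterized action representation (PAR).
# Field names are illustrative assumptions, not the paper's schema.

@dataclass
class PAR:
    action: str                                        # verb, e.g. "walk"
    agent: str                                         # performing virtual human
    objects: list = field(default_factory=list)        # participants, e.g. a door
    preconditions: list = field(default_factory=list)  # tests before acting
    manner: dict = field(default_factory=dict)         # adverbial modifiers
    subactions: list = field(default_factory=list)     # decomposition into motor steps

# An instruction like "walk quickly to the door" could then bind to:
walk_to_door = PAR(action="walk", agent="jack",
                   objects=["door1"],
                   manner={"speed": "fast"},
                   preconditions=["standing(jack)"])
```

A structure of this kind gives the language side a fixed set of slots to fill and the animation side a fixed set of parameters to consume, which is the mediation role the abstract describes.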