This paper explains methods to provide autonomous virtual humans with the skills necessary to perform a stand-alone role in films, games, and interactive television. We present current research developments in the Virtual Life of autonomous synthetic actors. After a brief description of our geometric, physical, and auditory Virtual Environments, we introduce the perception-action principles with a few simple examples. We emphasize the concept of virtual sensors for virtual humans. In particular, we describe our experiences in implementing virtual sensors such as vision sensors, tactile sensors, and hearing sensors. We then describe knowledge-based navigation, knowledge-based locomotion, and, in more detail, sensor-based tennis.
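The perception-action principle mentioned above can be illustrated with a minimal sketch: an actor perceives the world only through a restricted virtual vision sensor (a field-of-view and range test) and chooses its action from that percept alone. This is not the paper's implementation; all names, parameters, and the steering rule below are illustrative assumptions.

```python
import math
from dataclasses import dataclass


@dataclass
class Actor:
    # Hypothetical actor state: 2D position and a heading in radians.
    x: float
    y: float
    heading: float


def vision_sensor(actor, target, fov=math.pi / 2, max_range=10.0):
    """Virtual vision sensor (illustrative): return the target's relative
    bearing if it lies within the field of view and range, else None."""
    dx, dy = target[0] - actor.x, target[1] - actor.y
    dist = math.hypot(dx, dy)
    bearing = math.atan2(dy, dx) - actor.heading
    # Wrap the bearing into [-pi, pi].
    bearing = math.atan2(math.sin(bearing), math.cos(bearing))
    if dist <= max_range and abs(bearing) <= fov / 2:
        return bearing
    return None


def act(actor, percept, turn_rate=0.3, speed=0.5):
    """Action step: turn toward a perceived target (rate-limited),
    then advance along the current heading."""
    if percept is not None:
        actor.heading += max(-turn_rate, min(turn_rate, percept))
    actor.x += speed * math.cos(actor.heading)
    actor.y += speed * math.sin(actor.heading)


# Perception-action loop: the actor walks toward a visible target,
# stopping once it gets close (or after a step budget).
actor = Actor(0.0, 0.0, 0.0)
target = (5.0, 2.0)
steps = 0
while math.hypot(target[0] - actor.x, target[1] - actor.y) > 0.6 and steps < 50:
    act(actor, vision_sensor(actor, target))
    steps += 1
print(steps, round(math.hypot(target[0] - actor.x, target[1] - actor.y), 2))
```

The key design point the sketch mirrors is that `act` never reads the target's coordinates directly; it sees only what the sensor returns, so sensory limits (field of view, range) directly shape behavior.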