Populated virtual environments need to be simulated with as much variety as possible. By identifying the most salient parts of the scene and its characters, available resources can be concentrated where they are needed most. In this paper, we investigate which body parts of virtual characters attract the most attention in scenes containing duplicate characters, or clones. Using an eye-tracking device, we recorded fixations on body parts while participants indicated whether or not clones were present. We found that the head and upper torso attract the majority of first fixations in a scene and are attended to most. This holds regardless of the character's orientation, presence or absence of motion, sex, age, size, and clothing style. We developed a selective variation method to exploit this knowledge and validated it perceptually. We found that selective colour variation is as effective at generating the illusion of variety as full colour variation. We then evaluated the effectiveness of four variation methods that varied only the salient parts of the characters. We found that head-accessory, top-texture, and face-texture variation are all equally effective at creating variety, whereas facial geometry alterations are less so. Performance implications and guidelines are presented.