The Flexible Action and Articulated Skeleton Toolkit (FAAST) is middleware that facilitates the integration of full-body control with virtual reality applications and video games using OpenNI-compliant depth sensors (currently the PrimeSensor and the Microsoft Kinect). FAAST incorporates a VRPN server for streaming the user's skeleton joints over a network, which provides a convenient interface for custom virtual reality applications and games. This body pose information can be used for goals such as realistically puppeting a virtual avatar or controlling an on-screen mouse cursor. The toolkit also provides a configurable input emulator that detects human actions and binds them to virtual mouse and keyboard commands, which are sent to the actively selected window. Thus, FAAST can enable natural interaction with existing off-the-shelf video games that were not explicitly developed to support input from motion sensors. The actions and input bindings are configurable at run-time, allowing the user to customize the controls and sensitivity to adjust for individual body types and preferences. In the future, we plan to substantially expand FAAST's action lexicon, provide support for recording and training custom gestures, and incorporate real-time head tracking using computer vision techniques.
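The run-time binding of detected actions to virtual key commands described above can be illustrated with a minimal sketch. This is not FAAST's actual implementation or configuration format; all class and method names here are hypothetical, and the threshold-based dispatch merely approximates the kind of per-user sensitivity adjustment the abstract mentions:

```python
from dataclasses import dataclass
from typing import Dict, Optional


@dataclass
class Binding:
    """One user-configurable rule: an action name, a sensitivity
    threshold, and the virtual key to emit when the action fires."""
    action: str      # e.g. "lean_left" (hypothetical action name)
    threshold: float # magnitude required to trigger (e.g. degrees of lean)
    key: str         # virtual key sent to the active window


class InputEmulator:
    """Hypothetical sketch of a FAAST-style input emulator: bindings
    can be added or changed at run time to suit individual users."""

    def __init__(self) -> None:
        self.bindings: Dict[str, Binding] = {}

    def bind(self, action: str, threshold: float, key: str) -> None:
        # Rebinding an existing action simply overwrites the old rule.
        self.bindings[action] = Binding(action, threshold, key)

    def process(self, action: str, magnitude: float) -> Optional[str]:
        # Return the key to emit, or None if the action is unbound or
        # below the configured sensitivity threshold. A real emulator
        # would inject this as an OS-level keyboard event.
        b = self.bindings.get(action)
        if b is not None and magnitude >= b.threshold:
            return b.key
        return None
```

For example, binding a 15-degree left lean to the "a" key means `process("lean_left", 20.0)` yields `"a"`, while a 5-degree lean yields nothing; lowering the threshold at run time makes the same gesture trigger for a user with a smaller range of motion.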