Watch what I do: programming by demonstration
KidSim: end user programming of simulations
CHI '95 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
Programming by example
a CAPpella: programming by demonstration of context-aware applications
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
Turn it this way: grounding collaborative action with remote gestures
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
Rapid Prototyping of Activity Recognition Applications
IEEE Pervasive Computing
Programming-by-Demonstration of reaching motions-A next-state-planner approach
Robotics and Autonomous Systems
Tracking free-weight exercises
UbiComp '07 Proceedings of the 9th international conference on Ubiquitous computing
Robot Programming by Demonstration
Topobo: programming by example to create complex behaviors
ICLS '10 Proceedings of the 9th International Conference of the Learning Sciences - Volume 2
Gesture coder: a tool for programming multi-touch gestures by demonstration
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
Supporting hand gestures in mobile remote collaboration: a usability evaluation
BCS-HCI '11 Proceedings of the 25th BCS Conference on Human-Computer Interaction
Proceedings of the 2013 ACM international joint conference on Pervasive and ubiquitous computing
A tutorial on human activity recognition using body-worn inertial sensors
ACM Computing Surveys (CSUR)
YouMove: enhancing movement training with an augmented reality mirror
Proceedings of the 26th annual ACM symposium on User interface software and technology
Particularly in sports and physical rehabilitation, users must perform body movements in a specific manner for the exercises to be most effective. It remains a challenge for experts to specify how such a movement should be performed so that an automated system can analyse subsequent performances of it. In a user study with 10 participants we show that experts' explicit specifications do not correspond to their own demonstrated performances. To address this issue we present MotionMA, a system that (1) automatically extracts a model of a movement demonstrated by one user, e.g. a trainer, (2) assesses the performances of other users repeating this movement in real time, and (3) provides real-time feedback on how to improve them. We evaluated the system in a second study in which 10 further participants used it to demonstrate arbitrary movements. Our results show that MotionMA extracts accurate movement models and is able to spot mistakes and variations in movement execution.
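The demonstration-to-feedback pipeline the abstract describes can be illustrated with a minimal sketch: learn per-axis tolerance bands from a trainer's repeated demonstrations, then flag deviations in a new performance. This is a hypothetical simplification for illustration, not the authors' actual MotionMA algorithm; the function names, the mean-plus/minus-k-standard-deviations model, and the per-axis feedback encoding are all assumptions.

```python
# Hypothetical sketch of programming-by-demonstration for movement
# assessment; NOT the actual MotionMA model.
from statistics import mean, stdev

def learn_model(demonstrations, k=2.0):
    """Learn per-axis tolerance bands (mean +/- k * std) from repeated
    demonstrations. Each demonstration is a list of per-axis readings
    (e.g. joint angles) captured at a key point of the movement."""
    axes = zip(*demonstrations)  # group readings by axis
    return [(mean(a) - k * stdev(a), mean(a) + k * stdev(a)) for a in axes]

def assess(model, performance):
    """Compare one performance against the learned bands.
    Returns per-axis corrections: 0.0 if within the band, otherwise the
    signed distance to the nearest band edge (negative = too low)."""
    feedback = []
    for (lo, hi), value in zip(model, performance):
        if value < lo:
            feedback.append(value - lo)
        elif value > hi:
            feedback.append(value - hi)
        else:
            feedback.append(0.0)
    return feedback

# Example: three demonstrations of a two-joint movement.
demos = [[10.0, 40.0], [12.0, 38.0], [11.0, 42.0]]
model = learn_model(demos)           # bands: (9, 13) and (36, 44)
print(assess(model, [11.0, 60.0]))   # second joint overshoots
```

Real-time use would apply this per frame (or per detected repetition) and render the signed corrections as feedback, e.g. "raise the arm less".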