The objective of this research effort is to integrate therapy instruction with child-robot play interaction in order to better assess upper-arm rehabilitation. Using computer vision techniques such as Motion History Images (MHI), edge detection, and Random Sample Consensus (RANSAC), movements can be quantified through robot observation. In addition, by incorporating prior knowledge of exercise data, physical-therapy metrics, and novel approaches, a mapping to therapist instructions can be created, enabling robotic feedback and intelligent interaction. The results are compared with ground-truth data retrieved via the Trimble 5606 Robotic Total Station and from visual experts in order to assess the efficacy of this approach. We performed a series of upper-arm exercises with two male subjects, captured via a simple webcam. The specific exercises involved adduction/abduction and lateral/medial movements. The analysis shows that our algorithmic results compare closely to the results obtained from the ground-truth data, with an average algorithmic error of less than 9% for the range of motion and less than 8% for the peak angular velocity of each subject.
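As an illustration of the motion-quantification step, a Motion History Image keeps, per pixel, the timestamp of the most recent motion and clears pixels whose motion is older than a decay window. The following is a minimal NumPy sketch of that idea; the frame-differencing threshold `thresh` and decay window `duration` are illustrative assumptions, not parameters from this work:

```python
import numpy as np

def update_mhi(mhi, prev_frame, frame, timestamp, duration=1.0, thresh=30):
    """Update a Motion History Image: pixels that changed between frames
    receive the current timestamp; pixels whose last motion is older than
    `duration` are cleared to zero. (Threshold/decay values are assumptions.)"""
    motion = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16)) > thresh
    mhi = mhi.copy()
    mhi[motion] = timestamp
    mhi[~motion & (mhi < timestamp - duration)] = 0.0
    return mhi

# Toy demo: a bright blob sweeping right across a dark background.
h, w = 32, 32
mhi = np.zeros((h, w), dtype=np.float32)
prev = np.zeros((h, w), dtype=np.uint8)
for t in range(1, 5):
    cur = np.zeros((h, w), dtype=np.uint8)
    cur[10:20, 5 * t:5 * t + 5] = 255   # blob shifted 5 px per frame
    mhi = update_mhi(mhi, prev, cur, timestamp=float(t))
    prev = cur
# The resulting MHI fades from bright (recent motion) to zero (stale motion),
# giving a per-pixel trace from which range of motion and angular velocity
# of a limb segment could in principle be estimated.
```

In a full pipeline, edge detection and RANSAC line fitting would then be applied to such motion templates to recover the arm's orientation per frame.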