We consider the problem of autonomous robotic laundry folding, and propose a solution to the perception and manipulation challenges inherent to the task. At the core of our approach is a quasi-static cloth model, which allows us to neglect the complex dynamics of cloth over significant portions of the state space and to reason instead in terms of simple geometry. We present an algorithm which, given a 2D cloth polygon and a desired sequence of folds, outputs a motion plan for executing the corresponding manipulations, termed g-folds, using a minimal number of robot grippers. We define parametrized fold sequences for four clothing categories: towels, pants, short-sleeved shirts, and long-sleeved shirts, each represented as a polygon. We then devise a model-based optimization approach for visually inferring the class and pose of a spread-out or folded clothing article from a single image, such that the resulting polygon provides a parse suitable for these folding primitives. We test the manipulation and perception tasks individually, and combine them to implement an autonomous folding system on the Willow Garage PR2. This enables the PR2 to identify a clothing article spread out on a table, execute the computed folding sequence, and visually track its progress over successive folds.
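The geometric core of a fold can be sketched concretely: a fold line splits the cloth polygon, and the vertices on one side are reflected across the line onto the other. The snippet below is a minimal illustration of this reflection step only; the function names (`reflect`, `apply_fold`) are our own, and it deliberately omits the g-fold feasibility conditions (gravity stability and gripper assignment) that the paper's planner enforces.

```python
import numpy as np

def reflect(points, a, b):
    """Reflect 2D points across the line through a and b."""
    d = (b - a) / np.linalg.norm(b - a)
    v = points - a
    # Split each vector into components along and perpendicular to the
    # fold line, then negate the perpendicular component.
    along = v @ d
    perp = v - np.outer(along, d)
    return a + np.outer(along, d) - perp

def apply_fold(polygon, a, b):
    """Fold the part of the polygon left of the directed line a->b
    onto the right side, returning the new vertex positions.
    (Illustrative only: no g-fold feasibility checks.)"""
    polygon = np.asarray(polygon, float)
    a, b = np.asarray(a, float), np.asarray(b, float)
    d = b - a
    # Positive cross product => vertex lies left of the directed fold line.
    cross = d[0] * (polygon[:, 1] - a[1]) - d[1] * (polygon[:, 0] - a[0])
    folded = polygon.copy()
    left = cross > 0
    folded[left] = reflect(polygon[left], a, b)
    return folded
```

For a unit-square towel folded in half along the vertical midline, the two left vertices land on top of the two right ones:

```python
towel = [[0, 0], [1, 0], [1, 1], [0, 1]]
apply_fold(towel, [0.5, 0], [0.5, 1])
# -> [[1, 0], [1, 0], [1, 1], [1, 1]]
```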