Despite recent successes, pose estimators remain somewhat fragile and frequently rely on precise knowledge of the object's location. Unfortunately, articulated objects are also very difficult to detect. Knowledge of the articulated structure of these objects, however, can contribute substantially to finding them in an image. It is therefore somewhat surprising that these two tasks are usually treated entirely separately. In this paper, we propose an Articulated Part-based Model (APM) for jointly detecting objects and estimating their poses. APM recursively represents an object as a collection of parts at multiple levels of detail, from coarse to fine, where parts at each level are connected to the coarser level through a parent-child relationship (Fig. 1(b)-Horizontal). Parts are further grouped into part-types (e.g., left-facing head, long stretching arm) to model appearance variations (Fig. 1(b)-Vertical). By sharing appearance models across part-types and decomposing complex poses into parent-child pairwise relationships, APM strikes a good balance between model complexity and model richness. Extensive quantitative and qualitative experimental results on public datasets show that APM outperforms state-of-the-art methods. We also show results on two highly challenging articulated object categories from PASCAL 2007: cats and dogs.
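The parent-child decomposition described above is what makes joint inference tractable: because the parts form a tree, the best placement and part-type for every part can be found exactly by max-sum (Viterbi) dynamic programming over the tree. The sketch below illustrates that generic tree-model inference under assumed, simplified interfaces (dictionary-based `appearance` and `pairwise` score tables, a root part with id 0); it is not the authors' implementation, and the score functions and toy data are hypothetical.

```python
# Hedged sketch: exact max-sum inference in a tree-structured part model
# with part-types, in the spirit of the coarse-to-fine APM described above.
# All names, score tables, and the toy example are illustrative assumptions.
from itertools import product

def best_configuration(tree, appearance, pairwise, locations, types):
    """
    tree: dict parent -> list of child part ids; part 0 is the root.
    appearance[(part, loc, t)]: unary score for placing `part` at
        location `loc` with part-type `t`.
    pairwise[(parent, child)][((lp, tp), (lc, tc))]: parent-child
        compatibility score for the two placements/types.
    Returns (best_score, assignment) with assignment: part -> (loc, type).
    """
    memo = {}    # (part, loc, t) -> best score of the subtree rooted there
    choice = {}  # (part, loc, t) -> {child: chosen (loc, type)}

    def score(part, loc, t):
        key = (part, loc, t)
        if key in memo:
            return memo[key]
        total = appearance[key]
        picks = {}
        for child in tree.get(part, []):
            best, best_pick = float("-inf"), None
            for lc, tc in product(locations, types):
                s = (pairwise[(part, child)][((loc, t), (lc, tc))]
                     + score(child, lc, tc))
                if s > best:
                    best, best_pick = s, (lc, tc)
            total += best
            picks[child] = best_pick
        memo[key] = total
        choice[key] = picks
        return total

    # Optimize over root placements, then backtrack the argmax choices.
    best_score, l0, t0 = max((score(0, l, t), l, t)
                             for l, t in product(locations, types))
    assignment, stack = {0: (l0, t0)}, [(0, l0, t0)]
    while stack:
        part, loc, t = stack.pop()
        for child, (lc, tc) in choice[(part, loc, t)].items():
            assignment[child] = (lc, tc)
            stack.append((child, lc, tc))
    return best_score, assignment
```

Because each part's subtree is scored independently given the parent's placement, the cost is linear in the number of parts and quadratic in the number of (location, type) states per edge, which is what allows APM-style models to add part-types without an exponential blow-up.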