What, Where and How Many? Combining Object Detectors and CRFs
ECCV'10: Proceedings of the 11th European Conference on Computer Vision, Part IV
In the last few years, substantially different approaches have been adopted for segmenting and detecting "things" (object categories with a well-defined shape, such as people and cars) and "stuff" (object categories with an amorphous spatial extent, such as grass and sky). This paper proposes a framework for scene understanding that relates things and stuff through a novel way of modelling higher-order potentials. This representation allows us to enforce labelling consistency between hypotheses of detected objects (things) and image segments (stuff) in a single graphical model. We show that an efficient graph-cut algorithm can be used to perform maximum a posteriori (MAP) inference in this model. We evaluate our method on the Stanford dataset [1], comparing it against state-of-the-art methods for object segmentation and detection.
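To illustrate the graph-cut MAP inference the abstract refers to, the sketch below solves the simplest case exactly: a binary pairwise MRF with non-negative Potts smoothness terms, reduced to an s-t min cut. The `max_flow` routine and the `map_binary_labels` helper are illustrative stand-ins, not the authors' implementation (the paper handles multi-label energies with higher-order potentials, typically minimized via moves such as alpha-expansion).

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Edmonds-Karp max-flow; returns (flow value, source-side node set)."""
    residual = {u: dict(nbrs) for u, nbrs in capacity.items()}
    flow = 0
    while True:
        # Breadth-first search for an augmenting path in the residual graph.
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, cap in residual.get(u, {}).items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            break
        # Recover the path and its bottleneck capacity, then augment.
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)
        for u, v in path:
            residual[u][v] -= bottleneck
            residual.setdefault(v, {})
            residual[v][u] = residual[v].get(u, 0) + bottleneck
        flow += bottleneck
    # The minimum cut separates the nodes still reachable from the source.
    reachable, queue = {source}, deque([source])
    while queue:
        u = queue.popleft()
        for v, cap in residual.get(u, {}).items():
            if cap > 0 and v not in reachable:
                reachable.add(v)
                queue.append(v)
    return flow, reachable

def map_binary_labels(unary, pairwise):
    """Exact MAP labelling for a binary pairwise MRF with Potts terms.

    unary:    {node: (cost_if_label_0, cost_if_label_1)}
    pairwise: {(i, j): weight} -- non-negative penalty when labels differ
    """
    S, T = '__source__', '__sink__'
    cap = {S: {}, T: {}}
    for i, (c0, c1) in unary.items():
        cap.setdefault(i, {})
        cap[S][i] = c1  # cutting s->i puts i on the sink side (label 1), paying c1
        cap[i][T] = c0  # cutting i->t keeps i on the source side (label 0), paying c0
    for (i, j), w in pairwise.items():
        cap[i][j] = cap[i].get(j, 0) + w
        cap[j][i] = cap[j].get(i, 0) + w
    energy, source_side = max_flow(cap, S, T)
    return {i: 0 if i in source_side else 1 for i in unary}, energy

# Example: two pixels with opposing unary preferences and one smoothness link.
labels, energy = map_binary_labels({'a': (0, 5), 'b': (5, 0)}, {('a', 'b'): 1})
# The cut cost equals the minimum energy of the labelling.
```

The construction is the standard one: each t-link carries the unary cost of the label that cutting it assigns, each n-link carries the Potts weight, so the cost of any s-t cut equals the energy of the corresponding labelling, and the minimum cut gives the MAP assignment.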