Learning Spectral Clustering, With Application To Speech Separation
The Journal of Machine Learning Research
Learning Probabilistic Models for Contour Completion in Natural Images
International Journal of Computer Vision
Multi-Class Segmentation with Relative Location Prior
International Journal of Computer Vision
Fast Generalized Belief Propagation for MAP Estimation on 2D and 3D Grid-Like Markov Random Fields
Proceedings of the 30th DAGM Symposium on Pattern Recognition
Benchmarking Image Segmentation Algorithms
International Journal of Computer Vision
A coarse-and-fine Bayesian belief propagation for correspondence problems in computer vision
MICAI'07: Proceedings of the 6th Mexican International Conference on Artificial Intelligence (Advances in Artificial Intelligence)
Conditional random field for text segmentation from images with complex background
Pattern Recognition Letters
FastInf: An Efficient Approximate Inference Library
The Journal of Machine Learning Research
Norm-product belief propagation: primal-dual message-passing for approximate inference
IEEE Transactions on Information Theory
Efficient combination of probabilistic sampling approximations for robust image segmentation
DAGM'06: Proceedings of the 28th DAGM Symposium on Pattern Recognition
Fast memory-efficient generalized belief propagation
ECCV'06: Proceedings of the 9th European Conference on Computer Vision, Part IV
Learning segmentation of documents with complex scripts
ICVGIP'06: Proceedings of the 5th Indian Conference on Computer Vision, Graphics and Image Processing
Significant progress in image segmentation has been made by viewing the problem in the framework of graph partitioning. In particular, spectral clustering methods such as "normalized cuts" (ncuts) can efficiently calculate good segmentations using eigenvector calculations. However, spectral methods applied to images with local connectivity often oversegment homogeneous regions. More importantly, they lack a straightforward probabilistic interpretation, which makes it difficult to set parameters automatically using training data.

In this paper we revisit the typical cut criterion proposed in [1, 5]. We show that computing the typical cut is equivalent to performing inference in an undirected graphical model. This equivalence allows us to use the powerful machinery of graphical models for learning and inferring image segmentations. For inferring segmentations, we show that the generalized belief propagation (GBP) algorithm can give excellent results with a runtime that is usually faster than the ncut eigensolver. For learning segmentations, we derive a maximum likelihood algorithm to learn affinity matrices from labelled datasets. We illustrate both learning and inference on challenging real and synthetic images.
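To make concrete the spectral baseline the abstract contrasts with, the following is a minimal sketch of the normalized-cut relaxation: partition a graph by thresholding the second eigenvector of the symmetric normalized Laplacian built from an affinity matrix. The function name `ncut_bipartition` and the median thresholding rule are illustrative choices, not details taken from the paper.

```python
import numpy as np

def ncut_bipartition(W):
    """Bipartition a graph via the normalized-cut spectral relaxation.

    W: symmetric (n x n) affinity matrix with positive row sums.
    Returns a boolean membership vector for one side of the cut.
    """
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    # Symmetric normalized Laplacian: L = I - D^{-1/2} W D^{-1/2}
    L = np.eye(len(d)) - D_inv_sqrt @ W @ D_inv_sqrt
    # eigh returns eigenvalues in ascending order for symmetric matrices
    vals, vecs = np.linalg.eigh(L)
    # The second-smallest eigenvector carries the partition information;
    # map it back to the generalized eigenproblem via D^{-1/2}
    fiedler = D_inv_sqrt @ vecs[:, 1]
    return fiedler > np.median(fiedler)

# Illustrative usage: two tightly connected blocks with weak cross-links
W = np.full((6, 6), 0.01)
W[:3, :3] = 1.0
W[3:, 3:] = 1.0
labels = ncut_bipartition(W)
```

On a two-block affinity matrix like the one above, the thresholded eigenvector recovers the block structure; on images with only local connectivity, the abstract notes, the same machinery tends to oversegment homogeneous regions.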