An unsupervised multi-spectral, multi-resolution, multiple-segmenter method for textured images with an unknown number of classes is presented. The segmenter is based on a weighted combination of several unsupervised segmentation results, each computed at a different resolution and fused using a modified sum rule. Multi-spectral textured image mosaics are locally represented by four causal directional multi-spectral random field models, recursively evaluated for each pixel. The single-resolution segmentation part of the algorithm is based on an underlying Gaussian mixture model; it starts from an over-segmented initial estimate, which is adaptively refined until the optimal number of homogeneous texture segments is reached. The performance of the presented method is extensively tested on the Prague segmentation benchmark using the most common segmentation criteria, and it compares favourably with several leading alternative image segmentation methods.
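The weighted sum-rule fusion at the core of the combination step can be sketched as follows. This is a minimal illustration only: the function name, the uniform weights, and the toy posterior maps are assumptions for the example, not details taken from the paper.

```python
import numpy as np

def combine_segmentations(posteriors, weights):
    """Weighted sum-rule fusion of per-resolution class posterior maps.

    posteriors: list of (H, W, K) arrays, each holding per-pixel class
        probabilities from one single-resolution segmenter.
    weights: one scalar confidence weight per segmenter.
    Returns an (H, W) label map via argmax over the fused posterior.
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # normalise so the fused values stay a convex combination
    fused = sum(wi * p for wi, p in zip(w, posteriors))
    return fused.argmax(axis=-1)

# Toy example: two segmenters on a 2x2 image with K = 2 classes.
p1 = np.array([[[0.9, 0.1], [0.2, 0.8]],
               [[0.6, 0.4], [0.3, 0.7]]])
p2 = np.array([[[0.7, 0.3], [0.4, 0.6]],
               [[0.2, 0.8], [0.1, 0.9]]])
labels = combine_segmentations([p1, p2], weights=[0.5, 0.5])
# labels is [[0, 1], [1, 1]]: each pixel takes the class with the
# highest weighted-average probability across the two segmenters.
```

In the paper's setting, each posterior map would come from one single-resolution Gaussian-mixture segmentation, and the weights would reflect the modified sum rule rather than the uniform values used here.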