Self-organization and associative memory: 3rd edition
Machine Learning
Combining Pattern Classifiers: Methods and Algorithms
Maximum and Minimum Likelihood Hebbian Learning for Exploratory Projection Pursuit
Data Mining and Knowledge Discovery
Complexity Pursuit: Separating Interesting Components from Time Series
Neural Computation
A Projection Pursuit Algorithm for Exploratory Data Analysis
IEEE Transactions on Computers
Outlier resistant PCA ensembles
KES'06 Proceedings of the 10th international conference on Knowledge-Based Intelligent Information and Engineering Systems - Volume Part III
Maximum likelihood topology preserving ensembles
IDEAL'06 Proceedings of the 7th international conference on Intelligent Data Engineering and Automated Learning
Automated Ham Quality Classification Using Ensemble Unsupervised Mapping Models
KES '07 Knowledge-Based Intelligent Information and Engineering Systems and the XVII Italian Workshop on Neural Networks on Proceedings of the 11th International Conference
ViSOM ensembles for visualization and classification
IWANN'07 Proceedings of the 9th international work conference on Artificial neural networks
Boosting unsupervised competitive learning ensembles
ICANN'07 Proceedings of the 17th international conference on Artificial neural networks
Statistical re-sampling techniques have been used extensively and successfully in machine learning for generating classifier and predictor ensembles. It has been shown repeatedly that combining so-called unstable predictors has a stabilizing effect on, and improves the performance of, the resulting prediction system. In this paper we apply re-sampling techniques to a topology-preserving map that can be used for scale-invariant classification: it models the residual after feedback with a family of distributions and finds the filters that make the residuals most likely under this model. The model is applied to artificial data sets and compared with a similar version based on the Self-Organising Map (SOM).
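To illustrate the re-sampling idea in the abstract, the sketch below trains several minimal Self-Organising Maps, each on a bootstrap re-sample of the data, and averages their quantization error over the full data set. This is a simplified stand-in, not the paper's maximum-likelihood topology-preserving model: the SOM training loop, learning-rate/neighbourhood schedules, and the `bagged_som_quantization` combination rule are all assumptions made for the example.

```python
import numpy as np

def train_som(data, grid=(5, 5), epochs=20, lr0=0.5, sigma0=2.0, rng=None):
    """Train a minimal Self-Organising Map on data (n_samples x n_features)."""
    if rng is None:
        rng = np.random.default_rng(0)
    # grid coordinates of each unit, used by the Gaussian neighbourhood function
    coords = np.array([(i, j) for i in range(grid[0]) for j in range(grid[1])], float)
    weights = rng.standard_normal((len(coords), data.shape[1]))
    n_steps = epochs * len(data)
    step = 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            t = step / n_steps
            lr = lr0 * (1.0 - t)                 # linearly decaying learning rate
            sigma = sigma0 * (1.0 - t) + 1e-3    # shrinking neighbourhood width
            bmu = np.argmin(((weights - x) ** 2).sum(axis=1))  # best-matching unit
            d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
            h = np.exp(-d2 / (2.0 * sigma ** 2))
            weights += lr * h[:, None] * (x - weights)
            step += 1
    return weights

def bagged_som_quantization(data, n_maps=5, seed=0):
    """Bagging: train each map on a bootstrap re-sample, average quantization error."""
    rng = np.random.default_rng(seed)
    errors = []
    for _ in range(n_maps):
        sample = data[rng.integers(0, len(data), size=len(data))]  # bootstrap re-sample
        w = train_som(sample, rng=np.random.default_rng(rng.integers(1 << 30)))
        # mean squared distance from each point to its nearest map unit
        e = np.min(((data[:, None, :] - w[None, :, :]) ** 2).sum(-1), axis=1).mean()
        errors.append(e)
    return float(np.mean(errors))
```

The stabilizing effect described in the abstract comes from the fact that each map sees a slightly different re-sample, so averaging over the ensemble smooths out the variance of any single unstable map.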