Computation with infinite neural networks
Neural Computation
Kernel principal component analysis
Advances in kernel methods
Exploiting generative models in discriminative classifiers
Proceedings of the 1998 conference on Advances in neural information processing systems II
Bayesian approach for neural networks—review and case studies
Neural Networks
Learning More Accurate Metrics for Self-Organizing Maps
ICANN '02 Proceedings of the International Conference on Artificial Neural Networks
Learning the Kernel Matrix with Semidefinite Programming
The Journal of Machine Learning Research
Learning a kernel matrix for nonlinear dimensionality reduction
ICML '04 Proceedings of the twenty-first international conference on Machine learning
Gaussian Processes for Machine Learning (Adaptive Computation and Machine Learning)
A Nonlinear Mapping for Data Structure Analysis
IEEE Transactions on Computers
Discriminatory Data Mapping by Matrix-Based Supervised Learning Metrics
ANNPR '08 Proceedings of the 3rd IAPR workshop on Artificial Neural Networks in Pattern Recognition
The goal of this work is to improve visualizations by using a task-related metric in dimension reduction. In a supervised setting, the metric can be learned directly from the data or extracted from a model fitted to the data. Here, two model-based approaches are evaluated: extracting a global metric from classifier parameters, and performing dimension reduction in the feature space of a classifier. Both approaches are tested with four dimension reduction methods and four real data sets, and both are found to improve visualization results. Working in the classifier feature space is especially beneficial for revealing possible cluster structure in the data.
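The two approaches described above can be sketched as follows. This is a minimal illustration, not the paper's actual method: the choice of a quadratic metric M = WᵀW built from logistic-regression weights, and the use of `decision_function` outputs as the "classifier feature space", are both assumptions made for the sake of a runnable example; the paper's own classifiers and metrics may differ.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.decomposition import PCA

X, y = load_iris(return_X_y=True)
X = (X - X.mean(axis=0)) / X.std(axis=0)  # standardize features

clf = LogisticRegression(max_iter=1000).fit(X, y)

# Approach 1: global metric extracted from classifier parameters.
# Hypothetical choice: M = W^T W from the weight matrix W, then
# transform the data by a Cholesky factor L of M before reducing.
W = clf.coef_                                   # (n_classes, n_features)
M = W.T @ W                                     # positive semidefinite metric
L = np.linalg.cholesky(M + 1e-6 * np.eye(X.shape[1]))
emb_metric = PCA(n_components=2).fit_transform(X @ L)

# Approach 2: dimension reduction in the classifier's feature space.
# Here the per-class decision scores stand in for that feature space.
feats = clf.decision_function(X)                # (n_samples, n_classes)
emb_feat = PCA(n_components=2).fit_transform(feats)

print(emb_metric.shape, emb_feat.shape)         # two 2-D embeddings
```

Any dimension reduction method (the paper tests four) could replace PCA in either branch; the key point is that the classifier, not the raw Euclidean geometry, determines which directions are preserved.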