We develop gain adaptation methods that improve the convergence of the kernel Hebbian algorithm (KHA) for iterative kernel PCA (Kim et al., 2005). KHA has a scalar gain parameter that is either held constant or decreased according to a predetermined annealing schedule, leading to slow convergence. We accelerate convergence by incorporating the reciprocal of the current estimated eigenvalues into a gain vector; an additional normalization term then lets us eliminate a tuning parameter from the annealing schedule. Finally, we derive and apply stochastic meta-descent (SMD) gain-vector adaptation (Schraudolph, 1999, 2002) in reproducing kernel Hilbert space to speed up convergence further. Experiments on kernel PCA and spectral clustering of USPS digits, motion-capture and image denoising, and image super-resolution confirm that our methods converge substantially faster than conventional KHA. To demonstrate scalability, we perform kernel PCA on the entire MNIST data set.
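To make the idea concrete, the following is a minimal sketch of KHA-style iterative kernel PCA with a per-component gain vector built from the reciprocal of running eigenvalue estimates. The kernel choice (`gamma`), base gain `eta0`, the EMA rate for the eigenvalue estimates, and the instantaneous-`y**2` guard on the gain are illustrative assumptions, not the published parameterization; the SMD step is omitted.

```python
import numpy as np

def rbf_kernel(X, gamma=0.5):
    """RBF Gram matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2)."""
    sq = np.sum(X * X, axis=1)
    return np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))

def center_kernel(K):
    """Center the Gram matrix in feature space."""
    n = K.shape[0]
    J = np.ones((n, n)) / n
    return K - J @ K - K @ J + J @ K @ J

def kha(K, r=2, epochs=50, eta0=0.2, seed=0):
    """Kernel Hebbian algorithm with reciprocal-eigenvalue gains (sketch).

    Returns A (r x n): expansion coefficients of the first r kernel
    principal components in terms of the n training points.
    """
    n = K.shape[0]
    rng = np.random.default_rng(seed)
    A = rng.normal(scale=0.01, size=(r, n))  # component coefficients
    lam = np.ones(r)                         # running eigenvalue estimates
    for _ in range(epochs):
        for t in rng.permutation(n):
            y = A @ K[:, t]                # projections of sample t
            lam = 0.9 * lam + 0.1 * y * y  # EMA eigenvalue estimate
            # Reciprocal-eigenvalue gain vector; the instantaneous y**2
            # guard below is a stability hedge added for this sketch.
            eta = eta0 / np.maximum(np.maximum(lam, y * y), 1e-4)
            # GHA update in RKHS: A += eta * (y e_t^T - tril(y y^T) A)
            delta = -np.tril(np.outer(y, y)) @ A
            delta[:, t] += y
            A += eta[:, None] * delta
    return A
```

Projections of the training points onto the learned components are then `A @ K`, with one row of scores per component; the reciprocal-eigenvalue gains equalize the effective step size across components of very different magnitude, which is the source of the speed-up the abstract describes.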