Recent years have seen great interest in using deep architectures to learn features from data. One drawback of the commonly used unsupervised deep feature learning methods is that, in supervised or semi-supervised tasks, the information in the target variables is not used until the final stage, when a classifier or regressor is trained on the learned features. This can lead to over-generalized features that are not competitive on the specific supervised or semi-supervised task at hand. In this work, we describe a new learning method that performs deep feature learning on mixed labeled and unlabeled data sets. Specifically, we describe a weakly supervised learning method, prior supervised convolutional stacked auto-encoders (PCSA), in which information in the target variables is represented probabilistically using a Gaussian-Bernoulli restricted Boltzmann machine (RBM). We apply this method to the decoding problem of an ECoG-based Brain Computer Interface (BCI) system. Our experimental results show that PCSA achieves significant improvements in decoding performance on benchmark data sets compared both to unsupervised feature learning and to current state-of-the-art algorithms based on manually crafted features.
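As a rough illustration of the probabilistic component mentioned above, the following is a minimal NumPy sketch of a Gaussian-Bernoulli RBM trained with one-step contrastive divergence (CD-1). This is a generic textbook formulation with unit-variance Gaussian visible units, not the authors' PCSA implementation; all class and variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

class GaussianBernoulliRBM:
    """RBM with real-valued (Gaussian, unit-variance) visible units
    and binary (Bernoulli) hidden units, trained by CD-1."""

    def __init__(self, n_vis, n_hid, lr=0.05):
        self.W = 0.01 * rng.standard_normal((n_vis, n_hid))
        self.b_vis = np.zeros(n_vis)   # visible biases
        self.b_hid = np.zeros(n_hid)   # hidden biases
        self.lr = lr

    def hidden_probs(self, v):
        # P(h_j = 1 | v) = sigmoid(v W + b_hid)
        return 1.0 / (1.0 + np.exp(-(v @ self.W + self.b_hid)))

    def visible_mean(self, h):
        # E[v | h] = h W^T + b_vis  (Gaussian visibles, unit variance)
        return h @ self.W.T + self.b_vis

    def cd1_step(self, v0):
        """One CD-1 update on a batch v0 of shape (batch, n_vis).
        Returns the mean squared reconstruction error."""
        # Positive phase: hidden activations driven by the data.
        h0 = self.hidden_probs(v0)
        h0_sample = (rng.random(h0.shape) < h0).astype(float)
        # Negative phase: mean-field reconstruction, then hiddens again.
        v1 = self.visible_mean(h0_sample)
        h1 = self.hidden_probs(v1)
        # Contrastive-divergence gradient estimates.
        batch = len(v0)
        self.W += self.lr * (v0.T @ h0 - v1.T @ h1) / batch
        self.b_vis += self.lr * (v0 - v1).mean(axis=0)
        self.b_hid += self.lr * (h0 - h1).mean(axis=0)
        return float(np.mean((v0 - v1) ** 2))
```

In a pipeline like the one the abstract describes, a model of this kind would sit alongside stacked auto-encoder layers so that label information shapes the learned features before the final decoder is trained.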