This paper describes a unified approach, based on Gaussian Processes, for achieving sensor fusion under the problematic conditions of missing channels and noisy labels. Under the proposed approach, a separate Gaussian Process generates class labels for each individual modality. The final classification is based on a hidden random variable that probabilistically combines the sensors. Given both labeled and test data, inference on the unknown variables, the parameters, and the class labels of the test data is performed using a variational bound and Expectation Propagation. We apply this method to the challenge of classifying a student's interest level from observations of the face and posture, together with information about the task the student is performing. Classification with the proposed approach achieves an accuracy of over 83%, significantly outperforming classification using individual modalities as well as other common classifier combination schemes.
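The overall scheme — one Gaussian Process classifier per modality, with the per-channel posteriors probabilistically fused — can be sketched as follows. This is a minimal illustration on synthetic data, not the paper's method: the hidden combination variable and its EP/variational inference are replaced here by fixed mixture weights derived from each channel's training accuracy, and the three synthetic channels merely stand in for the face, posture, and task observations.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
n = 120
y = rng.integers(0, 2, n)  # binary interest labels (synthetic)

# Three synthetic sensor channels with decreasing class separability,
# standing in for face, posture, and task information.
channels = [rng.normal(0.0, 1.0, (n, 2)) + s * y[:, None]
            for s in (1.5, 1.0, 0.5)]

train, test = slice(0, 80), slice(80, n)
probs, weights = [], []
for X in channels:
    # One GP classifier per modality.
    gp = GaussianProcessClassifier(kernel=1.0 * RBF(1.0))
    gp.fit(X[train], y[train])
    probs.append(gp.predict_proba(X[test])[:, 1])
    # Training accuracy as a crude stand-in for the learned
    # sensor-reliability variable in the paper.
    weights.append(gp.score(X[train], y[train]))

w = np.array(weights) / np.sum(weights)
fused = w @ np.vstack(probs)        # mixture of per-channel posteriors
pred = (fused > 0.5).astype(int)
print(f"fused accuracy: {np.mean(pred == y[test]):.2f}")
```

The design point the abstract makes survives even in this simplification: because fusion happens at the level of per-channel class posteriors, a missing or unreliable channel only down-weights one term of the mixture rather than invalidating the whole feature vector.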