The under-sampled classification problem is discussed for 21 normal children and 21 children with reading disability. We first removed the data of one subject from each group, producing 441 sub-datasets of 40 subjects each. For each sub-dataset, we extracted features from event-related potentials (ERPs) via nonnegative Tucker decomposition (NTD) and performed classification under the leave-one-out paradigm. Accuracies averaged over the 441 sub-datasets were 77.98% (linear discriminant analysis), 73.55% (support vector machine), and 76.97% (adaptive boosting). In summary, given K observations with known labels and one new observation without group information, the feature of the new observation can be extracted by applying NTD jointly to the data of all K+1 observations. Since the group labels of the first K observations are known, their features train the classifier, and the trained classifier then recognizes the new feature to determine the group of the new observation.
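The leave-one-out scheme described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: as an assumption, a simple nonnegative matrix factorization (scikit-learn's `NMF`) on the unfolded subject data stands in for the full nonnegative Tucker decomposition, and the function name `loo_classify` and its parameters are hypothetical. The essential point it preserves is the transductive step: the decomposition is recomputed over all K+1 observations for every held-out subject, and only the K labeled feature vectors train the classifier.

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def loo_classify(tensors, labels, n_components=2):
    """Leave-one-out classification as described in the text.

    tensors : list of K+1 nonnegative arrays (one ERP array per subject)
    labels  : array of K+1 group labels (the held-out label is used
              only to score the prediction)

    For each held-out subject, features are extracted jointly from all
    K+1 observations, then a classifier is trained on the K labeled
    feature vectors and applied to the held-out one. NMF on the
    subject-by-(unfolded data) matrix is a simplified stand-in for NTD.
    """
    n = len(tensors)
    # Unfold each subject's array into a row vector (subjects x features).
    X = np.stack([t.ravel() for t in tensors])
    correct = 0
    for i in range(n):
        # Joint feature extraction over all K+1 observations.
        W = NMF(n_components=n_components, init='nndsvda',
                max_iter=500).fit_transform(X)
        train = np.delete(np.arange(n), i)
        # Train on the K labeled subjects only.
        clf = LinearDiscriminantAnalysis().fit(W[train], labels[train])
        correct += int(clf.predict(W[i:i + 1])[0] == labels[i])
    return correct / n
```

The same loop could swap in an SVM or AdaBoost classifier, as in the reported comparison; only the classifier line changes.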