In this work we employ ensemble classifiers for the problem of non-intrusive appliance load monitoring. In practical scenarios the question arises of how to efficiently and automatically learn statistical models for appliance recognition, which is an important step in various problems in process recognition, healthcare, and energy consulting. This work is an application study that analyzes multi-class support vector machines (SVMs) and k-nearest neighbors (k-NN) in the problem domain of automatically recognizing appliances. By combining the two types of classifiers, with varying parameterizations, into ensembles, we reduce the classification error and increase the robustness of the classifier. In the experimental part we consider a field study with household appliances and compare the classifiers with respect to various training set and neighborhood sizes. It turns out that the ensembles are among the best classifiers in all training set scenarios.
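The ensemble idea sketched in the abstract, combining differently parameterized base classifiers by majority vote, can be illustrated roughly as follows. This is a minimal pure-Python sketch, not the authors' implementation: the k-NN members use varying neighborhood sizes k as described, while a simple nearest-centroid classifier stands in for the second classifier type (a full SVM is beyond the scope of this illustration), and the toy appliance labels are invented for the example.

```python
from collections import Counter
import math


def knn_predict(train_X, train_y, x, k):
    """Predict the label of x by majority vote among its k nearest training points."""
    dists = sorted((math.dist(x, p), y) for p, y in zip(train_X, train_y))
    votes = Counter(y for _, y in dists[:k])
    return votes.most_common(1)[0][0]


def centroid_predict(train_X, train_y, x):
    """Predict the label of the nearest class centroid (stand-in for a second classifier type)."""
    by_class = {}
    for p, y in zip(train_X, train_y):
        by_class.setdefault(y, []).append(p)
    best_label, best_dist = None, float("inf")
    for y, pts in by_class.items():
        centroid = [sum(coord) / len(coord) for coord in zip(*pts)]
        d = math.dist(x, centroid)
        if d < best_dist:
            best_label, best_dist = y, d
    return best_label


def ensemble_predict(train_X, train_y, x, ks=(1, 3, 5)):
    """Majority vote over k-NN members with varying k plus the centroid member."""
    preds = [knn_predict(train_X, train_y, x, k) for k in ks]
    preds.append(centroid_predict(train_X, train_y, x))
    return Counter(preds).most_common(1)[0][0]


# Toy feature vectors for two hypothetical appliance classes.
train_X = [(0.0, 0.0), (0.5, 0.2), (0.1, 0.6), (5.0, 5.0), (5.2, 4.8), (4.9, 5.3)]
train_y = ["kettle", "kettle", "kettle", "fridge", "fridge", "fridge"]

print(ensemble_predict(train_X, train_y, (0.2, 0.1)))  # -> kettle
print(ensemble_predict(train_X, train_y, (5.1, 5.0)))  # -> fridge
```

Because the members disagree only near class boundaries, the vote smooths out errors of any single parameterization, which is the robustness effect the study reports for its SVM/k-NN ensembles.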