C4.5: Programs for Machine Learning
Generating Accurate Rule Sets Without Global Optimization
ICML '98 Proceedings of the Fifteenth International Conference on Machine Learning
Fine-Grained Activity Recognition by Aggregating Abstract Object Usage
ISWC '05 Proceedings of the Ninth IEEE International Symposium on Wearable Computers
Analyzing features for activity recognition
Proceedings of the 2005 joint conference on Smart objects and ambient intelligence: innovative context-aware services: usages and technologies
Gesture spotting with body-worn inertial sensors to detect user activities
Pattern Recognition
Accurate activity recognition in a home setting
UbiComp '08 Proceedings of the 10th international conference on Ubiquitous computing
Improving the recognition of interleaved activities
UbiComp '08 Proceedings of the 10th international conference on Ubiquitous computing
PERCOM '09 Proceedings of the 2009 IEEE International Conference on Pervasive Computing and Communications
A long-term evaluation of sensing modalities for activity recognition
UbiComp '07 Proceedings of the 9th international conference on Ubiquitous computing
MARS: a muscle activity recognition system enabling self-configuring musculoskeletal sensor networks
Proceedings of the 12th international conference on Information processing in sensor networks
Dynamic sensor data segmentation for real-time knowledge-driven activity recognition
Pervasive and Mobile Computing
Human activity recognition aims to infer the actions of one or more persons from observations captured by sensors. Feature extraction is usually performed with a fixed-length sliding window, for which two parameters must be set: the window size and the shift. In this paper we propose a different approach based on dynamic, event-driven windows, in which the window size and the shift are adjusted at every step. We have generated models with both approaches in order to compare them. Experiments on public datasets show that our method, using simpler models and fewer instances, accurately recognizes the activities and obtains better results than the approaches used by the datasets' authors.
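The contrast between the two segmentation strategies can be sketched as follows. This is an illustrative sketch only, not the paper's implementation: the window size, shift, and the definition of an "event" (here, any change in the observed sensor value) are assumptions for the example.

```python
def fixed_windows(samples, size, shift):
    """Classic fixed-length sliding window: every window has the
    same size and advances by a constant shift."""
    return [samples[i:i + size]
            for i in range(0, len(samples) - size + 1, shift)]

def dynamic_windows(events):
    """Event-driven segmentation: a new window starts whenever the
    sensor state changes, so both window size and shift vary at
    every step (a simplifying assumption for illustration)."""
    windows, start = [], 0
    for i in range(1, len(events)):
        if events[i] != events[i - 1]:       # sensor state changed
            windows.append(events[start:i])  # close current window
            start = i                        # next window starts here
    windows.append(events[start:])
    return windows

stream = ["off", "off", "on", "on", "on", "off"]
print(fixed_windows(stream, size=2, shift=2))  # three equal-size windows
print(dynamic_windows(stream))                 # windows sized by events
```

With a fixed window, a single activity boundary can be split across two windows, whereas the event-driven segmentation aligns window boundaries with sensor changes, which is the intuition behind adjusting size and shift dynamically.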