Combining crowd-generated media and personal data: semi-supervised learning for context recognition
Proceedings of the 1st ACM international workshop on Personal data meets distributed multimedia
Human activity recognition systems traditionally require manual annotation of massive training data, which is laborious and does not scale. An alternative approach is to mine existing online crowd-sourced repositories for open-ended, freely annotated training data. However, differences across data sources and in the observed contexts prevent a crowd-sourced model from reaching user-dependent recognition rates. To make better use of crowd-sourced data in activity recognition, we take an essential step forward by adapting a generic model built on crowd-sourced data into a personalized model. In this work, we investigate two adaptation approaches: 1) semi-supervised learning, which combines crowd-sourced data with unlabeled user data, and 2) active learning, which queries the user to label samples that the crowd-sourced model fails to recognize. We evaluate both approaches on data from 7 users, collected through the auditory modality on mobile phones, totaling 14 days of data and up to 9 daily context classes. Experimental results indicate that the semi-supervised model improves recognition accuracy by up to 21%, but is still significantly outperformed by a supervised model trained on user data. In the active learning scheme, the crowd-sourced model reaches the performance of the supervised model while requesting labels for only 0.7% of the user data. Our work illustrates a promising first step towards an unobtrusive, efficient, and open-ended context recognition system that adapts freely available crowd-sourced data into a personalized model.
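The abstract does not specify the classifiers used; as a rough illustration of the two adaptation schemes it describes, the sketch below uses a hypothetical nearest-centroid model. Self-training stands in for the semi-supervised combination of crowd-sourced and unlabeled user data, and uncertainty sampling stands in for the active-learning queries; both are assumptions, not the paper's actual method.

```python
import numpy as np

def fit_centroids(X, y):
    # Nearest-centroid classifier: one mean feature vector per class.
    classes = np.unique(y)
    centroids = np.stack([X[y == c].mean(axis=0) for c in classes])
    return classes, centroids

def predict_with_confidence(X, classes, centroids):
    # Confidence = softmax over negative distances to the class centroids.
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    p = np.exp(-d) / np.exp(-d).sum(axis=1, keepdims=True)
    idx = p.argmax(axis=1)
    return classes[idx], p.max(axis=1)

def self_train(X_crowd, y_crowd, X_user, threshold=0.9, rounds=3):
    # Semi-supervised adaptation (self-training variant): iteratively add
    # confidently pseudo-labeled user samples to the crowd-sourced set.
    X, y = X_crowd.copy(), y_crowd.copy()
    for _ in range(rounds):
        classes, cents = fit_centroids(X, y)
        pred, conf = predict_with_confidence(X_user, classes, cents)
        keep = conf >= threshold
        if not keep.any():
            break
        X = np.vstack([X, X_user[keep]])
        y = np.concatenate([y, pred[keep]])
        X_user = X_user[~keep]
    return fit_centroids(X, y)

def query_indices(X_user, classes, centroids, budget):
    # Active learning (uncertainty sampling): ask the user to label only
    # the samples the crowd-sourced model is least confident about.
    _, conf = predict_with_confidence(X_user, classes, centroids)
    return np.argsort(conf)[:budget]
```

In this sketch the label budget plays the role of the 0.7% of user data reported in the abstract: only the lowest-confidence samples are sent to the user for annotation.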