In this paper, we propose a latent multi-task learning algorithm to solve the multi-device indoor localization problem. Traditional indoor localization systems often assume that the distributions of collected signal data are fixed, so that a localization model learned on one device can be used on other devices without adaptation. However, by empirically studying signal variation across devices, we found this assumption to be invalid in practice. To address this problem, we treat the different devices as different learning tasks and propose a multi-task learning algorithm. Unlike algorithms that assume the hypotheses learned in the original data space for related tasks are similar, we only require that the hypotheses learned in a latent feature space be similar. To realize our algorithm, we employ an alternating optimization approach that iteratively learns the feature mappings and the multi-task regression models for the devices. We apply our latent multi-task learning algorithm to real-world indoor localization data and demonstrate its effectiveness.
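The alternating-optimization scheme sketched in the abstract can be illustrated on synthetic data. The sketch below is a hypothetical reconstruction, not the authors' implementation: each device (task) t has signal data (X_t, y_t); a shared linear mapping U projects signals into a latent feature space; per-device regression weights w_t are learned in that space and shrunk toward their mean so the device hypotheses stay similar. All dimensions, regularization strengths (lam, mu), and the shrinkage rule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for multi-device signal data: 3 devices (tasks)
# whose true regression weights are similar but not identical.
d, k, n = 10, 3, 60               # signal dim, latent dim, samples per device
U_true = rng.normal(size=(d, k))  # shared latent feature mapping (unknown to learner)
w_base = rng.normal(size=k)
tasks = []
for _ in range(3):
    X = rng.normal(size=(n, d))
    w = w_base + 0.1 * rng.normal(size=k)      # per-device weights, nearly shared
    y = X @ U_true @ w + 0.01 * rng.normal(size=n)
    tasks.append((X, y))

# Alternating optimization: iterate between per-device regressions in the
# latent space and a refit of the shared mapping U.
U = rng.normal(size=(d, k)) * 0.1
lam, mu = 1e-3, 0.1               # ridge strength, task-similarity strength (assumed)
for _ in range(30):
    # Step 1: fix U; ridge-regress each device in latent features Z = X U,
    # then shrink each w_t toward the task mean to couple the devices.
    Ws = []
    for X, y in tasks:
        Z = X @ U
        Ws.append(np.linalg.solve(Z.T @ Z + lam * np.eye(k), Z.T @ y))
    w_bar = np.mean(Ws, axis=0)
    Ws = [(w + mu * w_bar) / (1 + mu) for w in Ws]
    # Step 2: fix the w_t; refitting U is linear least squares in vec(U),
    # since X @ U @ w == kron(X, w^T) @ U.ravel() (row-major vec).
    A = np.vstack([np.kron(X, w[None, :]) for (X, _), w in zip(tasks, Ws)])
    b = np.concatenate([y for _, y in tasks])
    U = np.linalg.lstsq(A, b, rcond=None)[0].reshape(d, k)

# Average training error across devices; small if the latent structure is recovered.
mse = np.mean([np.mean((X @ U @ w - y) ** 2) for (X, y), w in zip(tasks, Ws)])
```

Because both subproblems are (regularized) least squares, each alternation step is solved in closed form; the shrinkage toward w_bar is what distinguishes this from fitting each device independently.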