Discriminant Adaptive Nearest Neighbor Classification
IEEE Transactions on Pattern Analysis and Machine Intelligence
SIAM Review
Artificial Intelligence Review - Special issue on lazy learning
A class-dependent weighted dissimilarity measure for nearest neighbor classification problems
Pattern Recognition Letters
MindReader: Querying Databases Through Multiple Examples
VLDB '98 Proceedings of the 24th International Conference on Very Large Data Bases
Local feature extraction and its applications using a library of bases
An Adaptable Time Warping Distance for Time Series Learning
ICMLA '06 Proceedings of the 5th International Conference on Machine Learning and Applications
Chromosome classification using dynamic time warping
Pattern Recognition Letters
Toward accurate dynamic time warping in linear time and space
Intelligent Data Analysis
Proceedings of the VLDB Endowment
Faster retrieval with a two-pass dynamic-time-warping lower bound
Pattern Recognition
Learning instance specific distances using metric propagation
ICML '09 Proceedings of the 26th Annual International Conference on Machine Learning
A method of learning weighted similarity function to improve the performance of nearest neighbor
Information Sciences: an International Journal
Distance Metric Learning for Large Margin Nearest Neighbor Classification
The Journal of Machine Learning Research
Large margin nearest local mean classifier
Signal Processing
Journal of Computational and Applied Mathematics
Histogram distance for similarity search in large time series database
IDEAL'10 Proceedings of the 11th international conference on Intelligent data engineering and automated learning
Dynamic time warping constraint learning for large margin nearest neighbor classification
Information Sciences: an International Journal
Weighted dynamic time warping for time series classification
Pattern Recognition
To classify time series by nearest neighbors, we need to specify or learn one or several distance measures. We consider variations of the Mahalanobis distance, which relies on the inverse covariance matrix of the data. Unfortunately, for time series data, the covariance matrix often has low rank. To alleviate this problem, we can use a pseudoinverse, apply covariance shrinkage, or restrict the matrix to its diagonal. We review these alternatives and benchmark them against competitive methods such as the related Large Margin Nearest Neighbor Classification (LMNN) and the Dynamic Time Warping (DTW) distance. As expected, we find that DTW is superior, but the Mahalanobis distance measures are one to two orders of magnitude faster. To get the best results with Mahalanobis distance measures, we recommend learning one distance measure per class using either covariance shrinkage or the diagonal approach.
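As a rough sketch of the per-class approach described above (not the authors' code), the three ways of handling a low-rank covariance matrix, and a 1-NN classifier that uses one learned matrix per class, might look like this in Python with NumPy. Function names and the shrinkage weight are illustrative assumptions.

```python
import numpy as np

def class_inverse_covariances(X, y, method="shrinkage", alpha=0.1):
    """Learn one (inverse) covariance matrix per class.

    method: "pinv"      -- Moore-Penrose pseudoinverse of the covariance,
            "shrinkage" -- shrink toward a scaled identity, then invert,
            "diagonal"  -- keep only the diagonal (per-feature variances).
    alpha is an assumed shrinkage weight; the paper does not fix a value here.
    """
    inv_covs = {}
    for c in np.unique(y):
        S = np.cov(X[y == c], rowvar=False)
        d = S.shape[0]
        if method == "pinv":
            M = np.linalg.pinv(S)
        elif method == "shrinkage":
            target = (np.trace(S) / d) * np.eye(d)
            M = np.linalg.inv((1.0 - alpha) * S + alpha * target)
        elif method == "diagonal":
            M = np.diag(1.0 / np.maximum(np.diag(S), 1e-12))
        else:
            raise ValueError(method)
        inv_covs[c] = M
    return inv_covs

def mahalanobis(a, b, M):
    """Mahalanobis distance between vectors a and b under matrix M."""
    diff = a - b
    return float(np.sqrt(diff @ M @ diff))

def predict_1nn(q, X, y, inv_covs):
    """1-NN prediction: each training point is compared to the query
    using the matrix learned for that point's own class."""
    dists = [mahalanobis(q, X[i], inv_covs[y[i]]) for i in range(len(y))]
    return y[int(np.argmin(dists))]

# Tiny synthetic demo: two well-separated classes of 5-dimensional series.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (20, 5)),
               rng.normal(10.0, 1.0, (20, 5))])
y = np.array([0] * 20 + [1] * 20)
inv_covs = class_inverse_covariances(X, y, method="shrinkage")
pred = predict_1nn(np.full(5, 10.0), X, y, inv_covs)
```

The per-class matrices are what makes this fast relative to DTW: each distance evaluation is a single quadratic form rather than a dynamic-programming alignment.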