Landmark multidimensional scaling (LMDS) applies classical multidimensional scaling (MDS) to a subset of the data (landmark points), which improves scalability but makes the approximation sensitive to noise. In this paper we present an LMDS ensemble that applies classical MDS to portions of the input in a piecewise manner, combining individual LMDS solutions computed on different partitions of the input. Ground control points (GCPs), shared by the partitions in the ensemble, allow us to align the individual LMDS solutions in a common coordinate system through affine transformations. We incorporate priors when combining the multiple LMDS solutions, so that prior-weighted averaging improves the noise robustness of our method. The resulting LMDS ensemble is far less noise-sensitive while retaining the scalability and speed of LMDS. Experiments on synthetic data (a noisy grid) and real-world data (similar-image retrieval) confirm the high performance of the proposed LMDS ensemble.
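The pipeline described above — classical MDS solved piecewise on partitions of the input, alignment of the partial solutions through shared GCPs via an affine transformation, and averaging of the aligned solutions — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names are invented, and plain unweighted averaging stands in for the paper's prior-weighted combination.

```python
import numpy as np

def classical_mds(D, dim=2):
    """Classical MDS: double-center the squared distance matrix and
    embed using the top eigenvectors of the resulting Gram matrix."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J
    w, V = np.linalg.eigh(B)
    top = np.argsort(w)[::-1][:dim]
    return V[:, top] * np.sqrt(np.maximum(w[top], 0.0))

def affine_align(src, dst):
    """Least-squares affine map taking GCP coordinates `src` onto `dst`;
    returns a function that applies the map to any point set."""
    S = np.hstack([src, np.ones((src.shape[0], 1))])
    T, *_ = np.linalg.lstsq(S, dst, rcond=None)
    return lambda X: np.hstack([X, np.ones((X.shape[0], 1))]) @ T

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 10.0, size=(60, 2))   # toy ground-truth configuration
gcp = np.arange(4)                          # GCP indices shared by every partition
rest = np.arange(4, 60)
parts = [np.concatenate([gcp, p]) for p in np.array_split(rest, 3)]

ref_gcp = None
recon = np.zeros_like(X)
counts = np.zeros(len(X))
for idx in parts:
    # MDS on this partition only (the piecewise step)
    D = np.linalg.norm(X[idx, None] - X[None, idx], axis=-1)
    Y = classical_mds(D)
    if ref_gcp is None:
        ref_gcp, aligned = Y[:4], Y         # first solution fixes the common frame
    else:
        aligned = affine_align(Y[:4], ref_gcp)(Y)
    # unweighted averaging here; the paper weights each solution by priors
    recon[idx] += aligned
    counts[idx] += 1
recon /= counts[:, None]
```

With noise-free distances each partition's MDS solution is a rigid motion of the true points, so the GCP-based affine alignment recovers a single consistent embedding; the averaging step is what buys robustness once the distances are noisy.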