An important problem in cognitive psychology is quantifying the perceived similarities between stimuli. Previous work addressed this problem with multidimensional scaling (MDS) and its variants; however, the MDS approaches have several shortcomings. We propose Yada, a novel general metric-learning procedure based on two-alternative forced-choice behavioral experiments. Our method learns forward and backward nonlinear mappings between an objective space, in which stimuli are defined by the standard feature-vector representation, and a subjective space, in which the distance between a pair of stimuli corresponds to their perceived similarity. We conduct experiments on both synthetic and real human behavioral datasets to assess the effectiveness of Yada. The results show that Yada outperforms several standard embedding and metric-learning algorithms, in terms of both likelihood and recovery error.
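To make the two-alternative forced-choice (2AFC) setup concrete, here is a minimal, hypothetical sketch of the core likelihood idea the abstract describes: on each trial a participant sees a reference stimulus and two alternatives and picks the one that seems more similar, and a mapping into subjective space is fit by maximizing the likelihood of those choices. Yada learns nonlinear forward and backward mappings; this sketch substitutes a single linear map for simplicity, and every name and parameter in it (W, refs, learning rate, trial counts) is an assumption for illustration, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def dist2(W, x, y):
    """Squared distance between x and y after the linear map into subjective space."""
    z = (x - y) @ W.T
    return np.sum(z * z, axis=-1)

# Hypothetical synthetic setup: stimuli live in a 5-D objective feature
# space; a hidden linear map into a 2-D subjective space generates choices.
d, k, n_trials = 5, 2, 2000
W_true = rng.normal(size=(k, d))
refs = rng.normal(size=(n_trials, d))    # reference stimulus per trial
alt_a = rng.normal(size=(n_trials, d))   # first alternative
alt_b = rng.normal(size=(n_trials, d))   # second alternative

# Simulate 2AFC responses: choose alternative `a` with probability
# sigmoid(d(r, b) - d(r, a)) under the hidden subjective metric.
p_a = sigmoid(dist2(W_true, refs, alt_b) - dist2(W_true, refs, alt_a))
chose_a = rng.random(n_trials) < p_a
chosen = np.where(chose_a[:, None], alt_a, alt_b)
rejected = np.where(chose_a[:, None], alt_b, alt_a)

# Fit W by gradient descent on the 2AFC negative log-likelihood:
#   sum_i -log sigmoid(d(r_i, rejected_i) - d(r_i, chosen_i))
W = 0.1 * rng.normal(size=(k, d))
lr = 0.05
for step in range(1000):
    u = refs - chosen      # difference vectors for the chosen pair
    v = refs - rejected    # difference vectors for the rejected pair
    z = dist2(W, refs, rejected) - dist2(W, refs, chosen)
    s = sigmoid(z)
    c = s - 1.0            # d/dz of -log sigmoid(z)
    # d/dW ||W u||^2 = 2 W u u^T, accumulated over all trials at once.
    grad = 2.0 * W @ ((v * c[:, None]).T @ v - (u * c[:, None]).T @ u) / n_trials
    W -= lr * grad
    if step % 200 == 0:
        print(f"step {step:4d}  mean NLL {-np.log(s).mean():.4f}")
```

The actual method replaces this linear map with nonlinear forward and backward mappings learned jointly; presumably the 2AFC choice likelihood plays a similar role there, with the backward mapping letting the model translate subjective-space points back into objective feature vectors.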