We address the problem of estimating the ratio of two probability density functions, often referred to as the importance. The estimated importance values can be used in various downstream tasks such as covariate shift adaptation and outlier detection. In this paper, we propose a new importance estimation method that has a closed-form solution; moreover, its leave-one-out cross-validation score can be computed analytically. The proposed method is therefore computationally highly efficient and simple to implement. We also elucidate theoretical properties of the method, such as its convergence rate and approximation error bounds. Numerical experiments show that the proposed method matches the accuracy of the best existing method while being computationally more efficient than competing approaches.
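To illustrate why a closed-form solution is possible, the following is a minimal sketch of least-squares density-ratio (importance) estimation with a Gaussian kernel basis. It is an illustrative simplification, not the paper's exact estimator: the kernel width, regularization parameter, and choice of kernel centers are assumptions here, whereas the paper selects such quantities via its analytic leave-one-out cross-validation score.

```python
import numpy as np

def estimate_importance(x_nu, x_de, sigma=1.0, lam=0.1):
    """Least-squares density-ratio estimation (illustrative sketch).

    Models the importance r(x) = p_nu(x) / p_de(x) as a linear
    combination of Gaussian kernels centered at the numerator samples,
    and fits the coefficients by a regularized least-squares criterion,
    which reduces to solving one linear system (a closed-form solution).
    """
    def phi(x, centers):
        # Gaussian kernel design matrix, shape (n_samples, n_centers).
        d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-d2 / (2.0 * sigma ** 2))

    centers = x_nu                       # kernel centers (an assumption)
    Phi_de = phi(x_de, centers)          # basis evaluated on denominator samples
    Phi_nu = phi(x_nu, centers)          # basis evaluated on numerator samples
    H = Phi_de.T @ Phi_de / len(x_de)    # empirical second moment under p_de
    h = Phi_nu.mean(axis=0)              # empirical mean under p_nu
    # Closed-form solution of the regularized least-squares problem.
    alpha = np.linalg.solve(H + lam * np.eye(len(centers)), h)
    return lambda x: phi(x, centers) @ alpha

# Usage: estimate the ratio between two 1-D Gaussian distributions.
rng = np.random.default_rng(0)
x_nu = rng.normal(0.0, 1.0, size=(200, 1))   # samples from the numerator density
x_de = rng.normal(0.5, 1.5, size=(200, 1))   # samples from the denominator density
ratio = estimate_importance(x_nu, x_de)
```

Because the fit is a regularized linear least-squares problem in the kernel coefficients, training amounts to a single `np.linalg.solve` call; this is the structural reason a closed-form solution, and hence an analytic cross-validation score, is attainable.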