Correspondence analysis with least absolute residuals
Computational Statistics & Data Analysis - Special issue on statistical data analysis based on the L1 norm and related methods
Resistant lower rank approximation of matrices by iterative majorization
Computational Statistics & Data Analysis
The nature of statistical learning theory
Bayesian parameter estimation via variational methods
Statistics and Computing
Principal component analysis of binary data by iterated singular value decomposition
Computational Statistics & Data Analysis
A majorization algorithm for simultaneous parameter estimation in robust exploratory factor analysis
Computational Statistics & Data Analysis
Majorization methods solve minimization problems by replacing a complicated problem with a sequence of simpler ones. Solving this sequence of simple optimization problems guarantees convergence to a solution of the complicated original problem, provided each approximating function majorizes the original function at the current solution, i.e. touches it there and lies above it everywhere else. The leading examples of majorization are the EM algorithm and the SMACOF algorithm used in multidimensional scaling. The simplest possible majorizing subproblems are quadratic, because a quadratic is easy to minimize. In this paper quadratic majorizations for real-valued functions of a real variable are analyzed, and the concept of sharp majorization is introduced and studied. Applications to logit, probit, and robust loss functions are discussed.
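The quadratic-majorization idea described above can be illustrated with a small sketch (names such as `mm_median` are illustrative, not from the paper). A standard example is the robust L1 loss f(x) = Σᵢ |x − dᵢ|: at the current point y, each term |x − dᵢ| is majorized by the quadratic (x − dᵢ)²/(2|y − dᵢ|) + |y − dᵢ|/2, which touches it at y and lies above it elsewhere. Minimizing the sum of these quadratics is a weighted least-squares step, and iterating drives x toward a minimizer of the original L1 objective (the median):

```python
def mm_median(data, x0=None, n_iter=100, eps=1e-9):
    """Minimize sum_i |x - d_i| by iterative quadratic majorization (MM).

    At the current point y, |x - d_i| <= (x - d_i)^2 / (2|y - d_i|)
    + |y - d_i| / 2, with equality at x = y.  Minimizing the sum of
    these quadratics gives a weighted mean with weights 1/|y - d_i|;
    each iteration can only decrease the original L1 objective.
    """
    x = sum(data) / len(data) if x0 is None else x0
    for _ in range(n_iter):
        # Weight from the current majorizer; eps guards division by zero
        # when the iterate coincides with a data point.
        w = [1.0 / max(abs(x - d), eps) for d in data]
        # Minimizer of the quadratic majorizer: a weighted mean.
        x = sum(wi * d for wi, d in zip(w, data)) / sum(w)
    return x

# With an odd number of points the L1 minimizer is the sample median.
print(mm_median([1.0, 2.0, 3.0, 4.0, 100.0]))
```

Note how the large outlier (100.0) barely influences the result: its weight 1/|y − 100| stays small throughout, which is exactly the resistance property the robust loss is designed to provide.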