A stochastic conjugate gradient method for the approximation of a function is proposed. The method avoids computing and storing the covariance matrix that appears in the normal equations of the least-squares solution; instead, it performs the conjugate gradient steps using an inner product estimated by stochastic sampling. Theoretical analysis shows that the method converges in probability. The method has applications in fields such as predistortion for the linearization of power amplifiers.
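A minimal sketch of the idea (not the authors' implementation) in Python: fitting basis-function coefficients by conjugate gradient on the normal equations, where the covariance matrix E[φφᵀ] is never formed and every inner product in the CG recursion is replaced by a Monte-Carlo estimate from fresh samples. The monomial basis, the target function, and the batch size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def target(x):
    # hypothetical target, chosen to lie exactly in the basis span
    return 1.0 + 2.0 * x + 0.5 * x**2

def basis(x, degree=2):
    # monomial basis phi_j(x) = x**j
    return np.vander(x, degree + 1, increasing=True)

def stochastic_cg(n_iters=50, batch=2000, degree=2, tol=1e-16):
    """Least-squares fit w of target(x) ~ w . phi(x) without ever forming
    the covariance matrix R = E[phi phi^T]: each quantity in the CG
    recursion is a sampled estimate computed from a fresh batch."""
    w = np.zeros(degree + 1)
    x = rng.uniform(-1.0, 1.0, batch)
    Phi = basis(x, degree)
    r = Phi.T @ (target(x) - Phi @ w) / batch      # sampled estimate of p - R w
    d = r.copy()
    rs_old = r @ r
    for _ in range(n_iters):
        x = rng.uniform(-1.0, 1.0, batch)
        Phi = basis(x, degree)
        Rd = Phi.T @ (Phi @ d) / batch             # sampled estimate of R d
        alpha = rs_old / (d @ Rd)                  # CG step length
        w = w + alpha * d
        x = rng.uniform(-1.0, 1.0, batch)          # fresh batch for the residual
        Phi = basis(x, degree)
        r = Phi.T @ (target(x) - Phi @ w) / batch
        rs_new = r @ r
        if rs_new < tol:                           # sampled residual has vanished
            break
        d = r + (rs_new / rs_old) * d              # Fletcher-Reeves direction update
        rs_old = rs_new
    return w
```

Because the matrix-vector product Φᵀ(Φd)/batch only touches one batch of samples at a time, storage is O(batch × basis size) rather than O(basis size²), which is the point of avoiding the explicit covariance matrix.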