Root-refining for a polynomial equation
CASC'12 Proceedings of the 14th international conference on Computer Algebra in Scientific Computing
A typical iterative polynomial root-finder begins with a relatively slow process of computing a crude but sufficiently close initial approximation to a root, and then rapidly refines it. Using the same iterative process at both stages, computing an initial approximation and refining it, is however neither necessary nor most effective. The efficiency of an iteration at the former stage resists formal study and is usually assessed empirically, whereas a formal study of the efficiency at the latter stage of refinement is not hard and is the subject of the current paper. We define this local efficiency as log_10 q^(1/d) = (1/d) log_10 q, where q is the convergence order and d is the number of function evaluations per iteration; it is inversely proportional to the number of flops involved. Assuming that about 2n flops are needed per evaluation of a polynomial of degree n at a single point, we extend the definition to cover the recent matrix methods for polynomial root-finding as well as some methods that combine n approximations to all n roots in order to refine them simultaneously. For the approximation of a single root of a polynomial of degree n, the maximum local efficiency achieved so far is log_10 2 ≈ 0.301..., but we show that it grows to infinity for the simultaneous approximation of all n roots as n grows to infinity.
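The efficiency measure above is easy to compute for concrete methods. The sketch below evaluates log_10 q^(1/d) for two illustrative (q, d) pairs; the particular pairings (Newton's method with q = 2 and d = 2 evaluations, p and p', and an order-2^(d-1) method with d = 4 evaluations) are our own illustrative assumptions, not taken from the paper.

```python
import math

def local_efficiency(q, d):
    """Local efficiency log_10(q**(1/d)), where q is the convergence
    order and d is the number of function evaluations per iteration,
    following the definition in the abstract."""
    return math.log10(q) / d

# Illustrative examples (the method/(q, d) pairings are assumptions):
# Newton's method: quadratic convergence (q = 2), two evaluations (p, p').
newton = local_efficiency(2, 2)        # log_10(2)/2 ≈ 0.1505
# A hypothetical method of order 2**(d-1) with d = 4 evaluations;
# as d grows, efficiency approaches log_10(2) ≈ 0.301, the stated maximum.
high_order = local_efficiency(2**3, 4)  # 3*log_10(2)/4 ≈ 0.2258

print(newton, high_order)
```

Note that for any fixed q the efficiency decays as 1/d, so raising the order only pays off when it does not require proportionally more evaluations.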