We consider the problem of estimating the eigenvector corresponding to the largest eigenvalue of a stochastic matrix. This problem has numerous applications in ranking search results, coordination of multi-agent systems, network control, and data analysis. The standard solution technique is the power method applied to a regularized version of the original matrix. We propose a new randomized algorithm and establish an upper bound on its convergence rate that holds uniformly over the entire class of stochastic matrices of a given size: C√(ln(N)/n), where C is an absolute constant, N is the matrix size, and n is the number of iterations. This bound is promising because ln(N) remains small even for very large N. The algorithm is based on the mirror descent method for convex stochastic optimization. We also discuss the applicability of the method to the PageRank problem of ranking Internet pages.
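For context, the "standard technique" the abstract refers to can be sketched as follows: the regularization replaces the stochastic matrix P by the damped (Google) matrix M = αP + (1−α)·(1/N)·𝟙𝟙ᵀ, and the power method iterates x ← Mx. This is a minimal illustrative sketch of that baseline, not the randomized mirror-descent algorithm proposed in the paper; the damping factor α = 0.85 and the convergence tolerance are conventional choices assumed here, not taken from the text.

```python
import numpy as np

def pagerank_power(P, alpha=0.85, tol=1e-10, max_iter=1000):
    """Power method on the regularized (damped) matrix
    M = alpha * P + (1 - alpha)/N * ones((N, N)),
    where P is a column-stochastic N x N matrix.
    Returns an approximation of the dominant eigenvector of M."""
    N = P.shape[0]
    x = np.full(N, 1.0 / N)          # start from the uniform distribution
    for _ in range(max_iter):
        # damped update; avoids forming the dense matrix M explicitly
        x_new = alpha * (P @ x) + (1.0 - alpha) / N
        if np.abs(x_new - x).sum() < tol:   # l1 convergence check
            return x_new
        x = x_new
    return x
```

Note that each iteration costs one matrix-vector product, and the regularization guarantees a spectral gap of at least 1 − α, so the iterates converge geometrically; the paper's point is that its randomized method instead gives a dimension-friendly C√(ln(N)/n) guarantee uniform over all stochastic matrices of size N.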