Database theory and database practice are typically the domain of computer scientists who adopt what may be termed an algorithmic perspective on their data. This perspective is very different from the more statistical perspective adopted by statisticians, scientific computing researchers, machine learners, and others who work on what may broadly be termed statistical data analysis. In this article, I will address fundamental aspects of this algorithmic-statistical disconnect, with an eye to bridging the gap between these two very different approaches. A concept that lies at the heart of this disconnect is that of statistical regularization, a notion that has to do with how robust the output of an algorithm is to the noise properties of the input data. Although it is nearly completely absent from computer science, which historically has taken the input data as given and modeled algorithms discretely, regularization in one form or another is central to nearly every application domain that applies algorithms to noisy data. By using several case studies, I will illustrate, both theoretically and empirically, the nonobvious fact that approximate computation, in and of itself, can implicitly lead to statistical regularization. This and other recent work suggests that, by exploiting in a more principled way the statistical properties implicit in worst-case algorithms, one can in many cases satisfy the bicriteria of having algorithms that are scalable to very large-scale databases and that also have good inferential or predictive properties.
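One well-known instance of the phenomenon described above is early stopping in iterative least-squares solvers: truncating gradient descent after a few iterations is an "approximate computation" whose effect resembles explicit ridge-style regularization. The following is a minimal illustrative sketch of this effect (the problem sizes, step size, and iteration count are arbitrary choices for illustration, not taken from the article):

```python
import numpy as np

rng = np.random.default_rng(0)

# A noisy, ill-conditioned least-squares problem: recover x_true from
# b = A @ x_true + noise, where A has rapidly decaying singular values.
n, d = 100, 50
A = rng.normal(size=(n, d)) @ np.diag(np.linspace(1.0, 1e-3, d))
x_true = rng.normal(size=d)
b = A @ x_true + 0.1 * rng.normal(size=n)

# Exact (unregularized) least-squares solution: it aggressively fits the
# noise along the small singular directions, so its norm blows up.
x_exact = np.linalg.lstsq(A, b, rcond=None)[0]

# Approximate solution: a fixed, small number of gradient-descent steps
# on ||A x - b||^2, started from zero. Stopping early is the only
# "regularization" applied, yet it tames the ill-conditioned directions.
x_approx = np.zeros(d)
step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / (largest singular value)^2
for _ in range(50):
    x_approx -= step * (A.T @ (A @ x_approx - b))

# The approximate solution has a much smaller norm than the exact one:
# the approximation has acted as an implicit regularizer.
print(np.linalg.norm(x_exact), np.linalg.norm(x_approx))
```

Gradient descent started from zero converges quickly along the well-conditioned singular directions but barely moves along the ill-conditioned ones, which is exactly where the exact solution amplifies noise; this is the sense in which the approximation itself supplies the regularization.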