At Google, experimentation is practically a mantra; we evaluate almost every change that potentially affects what our users experience. Such changes include not only obvious user-visible changes, such as modifications to a user interface, but also more subtle changes, such as different machine learning algorithms that might affect ranking or content selection. Our insatiable appetite for experimentation has led us to tackle the problems of how to run more experiments, how to run experiments that produce better decisions, and how to run them faster. In this paper, we describe Google's overlapping experiment infrastructure, which is a key component in solving these problems. Because an experiment infrastructure alone is insufficient, we also discuss the associated tools and educational processes required to use it effectively. We conclude by describing trends that show the success of this overall experimental environment. While the paper specifically describes the experiment system and experimental processes we have in place at Google, we believe they can be generalized and applied by any entity interested in using experimentation to improve search engines and other web applications.
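The core idea behind an overlapping infrastructure — running many experiments on the same traffic by assigning users independently in each layer — can be illustrated with a minimal sketch. All names and the hashing scheme here are illustrative assumptions, not Google's actual implementation: each layer hashes the user's cookie together with a layer identifier, so a user's bucket in one layer is statistically independent of their bucket in any other layer.

```python
import hashlib

def assign_bucket(cookie_id: str, layer: str, num_buckets: int = 1000) -> int:
    """Deterministically map a (cookie, layer) pair to a bucket.

    Because the layer id is mixed into the hash, bucket assignments in
    different layers are effectively independent, which is what allows
    experiments in separate layers to overlap on the same users.
    (Illustrative sketch only; not Google's production scheme.)
    """
    digest = hashlib.sha256(f"{cookie_id}:{layer}".encode()).hexdigest()
    return int(digest, 16) % num_buckets

# A hypothetical "ui" layer experiment might own buckets 0-99, while a
# "ranking" layer experiment independently owns its own bucket range.
ui_bucket = assign_bucket("cookie-123", "ui")
ranking_bucket = assign_bucket("cookie-123", "ranking")
```

The same cookie always lands in the same bucket within a layer, so users get a consistent experience, while the per-layer hashing keeps cross-layer assignments uncorrelated.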