Online controlled experiments are often utilized to make data-driven decisions at Amazon, Microsoft, eBay, Facebook, Google, Yahoo, Zynga, and many other companies. While the theory of a controlled experiment is simple, and dates back to Sir Ronald A. Fisher's experiments at the Rothamsted Agricultural Experimental Station in England in the 1920s, deploying and mining online controlled experiments at scale--thousands of experiments now--has taught us many lessons. These exemplify the proverb that the difference between theory and practice is greater in practice than in theory. We present our learnings as they happened: puzzling outcomes of controlled experiments that we analyzed deeply to understand and explain. Each took multiple person-weeks to months to analyze properly and trace to the often surprising root cause. The root causes behind these puzzling results are not isolated incidents; the issues generalized to multiple experiments. The heightened awareness should help readers increase the trustworthiness of results coming out of controlled experiments. At Microsoft's Bing, it is not uncommon to see experiments that impact annual revenue by millions of dollars, so getting trustworthy results is critical, and investing in understanding anomalies has tremendous payoff: reversing a single incorrect decision based on the results of an experiment can fund a whole team of analysts. The topics we cover include the OEC (Overall Evaluation Criterion), click tracking, effect trends, experiment length and statistical power, and carryover effects.
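As one concrete illustration of the experiment length and power topic: experiment duration is typically driven by how many users are needed to detect the smallest effect worth acting on. A common approximation (not the paper's own methodology, but a standard two-sided z-test sample-size formula; the function name and defaults below are illustrative) is a sketch like this:

```python
import math
from statistics import NormalDist

def samples_per_variant(sigma, delta, alpha=0.05, power=0.8):
    """Approximate users needed per variant to detect an absolute
    difference `delta` in a metric with standard deviation `sigma`,
    using a two-sided z-test at significance `alpha` with the given power.

    n = 2 * (z_{1-alpha/2} + z_{power})^2 * sigma^2 / delta^2
    """
    z = NormalDist().inv_cdf
    n = 2 * (z(1 - alpha / 2) + z(power)) ** 2 * sigma ** 2 / delta ** 2
    return math.ceil(n)

# Example: a conversion rate around 5%, aiming to detect a 1% relative
# change (an absolute delta of 0.0005). Variance of a Bernoulli metric
# is p * (1 - p).
p = 0.05
sigma = math.sqrt(p * (1 - p))
n = samples_per_variant(sigma, delta=0.01 * p)
print(n)  # roughly three million users per variant
```

With the defaults above, the constant 2 * (z_{0.975} + z_{0.8})^2 is about 15.7, which matches the familiar "16 sigma^2 / delta^2" rule of thumb; small relative effects on low-rate metrics quickly demand millions of users, which is why experiment length and power receive so much attention.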