Metrics such as click counts are vital to online businesses, but their measurement has been problematic due to the inclusion of high-variance robot traffic. We posit that by applying statistical methods more rigorous than those employed to date, we can build a robust model of the distribution of clicks, from which we can set probabilistically sound thresholds to address outliers and robots. Prior research in this domain has used inappropriate statistical methodology to model these distributions, and current industrial practice eschews this research in favor of conservative, ad-hoc click-level thresholds. The prevailing belief is that such distributions are scale-free power-law distributions, but using more rigorous statistical methods we find that the data are best described by a scale-sensitive Zipf-Mandelbrot mixture distribution. Our results are based on ten data sets from various verticals in the Yahoo domain. Since mixture models can overfit the data, we take care to use the BIC log-likelihood method, which penalizes overly complex models. Using a mixture model in the web-activity domain makes sense because there are likely multiple classes of users. In particular, we have noticed a significantly large set of "users" that visit the Yahoo portal exactly once a day; we surmise these may be robots testing internet connectivity by pinging the Yahoo main website. Backing up our quantitative analysis is a graphical analysis in which empirical distributions are plotted against theoretical distributions in log-log space using robust cumulative distribution plots. This methodology has two advantages: plotting in log-log space allows one to visually differentiate the various exponential distributions, and cumulative plots are much more robust to outliers. We plan to apply the results of this work to robot removal in web-metrics business intelligence systems.
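The model-selection step described above can be sketched as follows. This is a minimal illustration, not the paper's actual procedure: the data are synthetic stand-ins for click counts, the parameter values (s = 1.2, q = 5.0) and all variable names are assumptions, and for brevity a single scale-sensitive Zipf-Mandelbrot law is compared against a pure power law rather than a full mixture. The key idea it demonstrates is that BIC charges each extra parameter ln(n)/2 nats of log-likelihood, so a richer model is preferred only when it fits substantially better.

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic stand-in for click-count data; real Yahoo data sets are not available.
rng = np.random.default_rng(0)
N = 1000                      # number of distinct ranks considered
ranks = np.arange(1, N + 1)

def zm_neg_loglik(s, q, data):
    """Negative log-likelihood of a truncated Zipf-Mandelbrot law,
    p(k) proportional to 1/(k+q)^s over ranks 1..N.
    Setting q = 0 recovers a pure (scale-free) power law."""
    if s <= 0 or q < 0:
        return np.inf         # penalize invalid parameters for the optimizer
    log_z = np.log(np.sum((ranks + q) ** -s))    # log normalizing constant
    return np.sum(s * np.log(data + q) + log_z)

# Draw a sample from a Zipf-Mandelbrot law with illustrative parameters.
weights = (ranks + 5.0) ** -1.2
data = rng.choice(ranks, size=5000, p=weights / weights.sum())

# Fit a pure power law (1 free parameter) and a Zipf-Mandelbrot law
# (2 free parameters) by maximum likelihood.
fit_pl = minimize(lambda p: zm_neg_loglik(p[0], 0.0, data), x0=[1.5],
                  method="Nelder-Mead")
fit_zm = minimize(lambda p: zm_neg_loglik(p[0], p[1], data), x0=[1.5, 1.0],
                  method="Nelder-Mead")

# BIC = k * ln(n) - 2 * ln(L): the extra parameter must earn its keep.
n = len(data)
bic_pl = 1 * np.log(n) + 2 * fit_pl.fun
bic_zm = 2 * np.log(n) + 2 * fit_zm.fun
print(f"BIC, power law:       {bic_pl:.1f}")
print(f"BIC, Zipf-Mandelbrot: {bic_zm:.1f}")   # lower BIC = preferred model
```

On data generated with q > 0, the scale-sensitive model should achieve the lower BIC despite its complexity penalty, mirroring the abstract's finding; the same comparison run on genuinely scale-free data would favor the power law.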