Toppling Top Lists: Evaluating the Accuracy of Popular Website Lists
Kimberly Ruth, Deepak Kumar, Brandon Wang, Luke Valenta, Zakir Durumeric

Abstract
Researchers rely on lists of popular websites like the Alexa Top Million both to measure the state of the web and to evaluate proposed protocols and systems. Prior work has questioned the correctness and consistency of these lists, but without any “ground truth” to compare against, there has been no direct evaluation of the lists themselves. In this paper, we evaluate the relative accuracy of seven top lists of websites. We derive a set of popularity metrics from server-side requests seen at a major CDN that authoritatively serves a significant portion of the most popular websites. We evaluate the top lists against these metrics and show that most capture web popularity poorly. The exception is the Chrome User Experience Report dataset, which differs from our CDN-derived metrics by roughly as much as different popularity metrics computed from the same CDN data differ from one another. We explore the biases that lower the accuracy of the other lists and, finally, develop recommendations for researchers studying the web in the future.