Social media have been employed to assess public opinion on events, markets, and policies. Most current work focuses either on developing aggregated measures or on opinion-extraction methods such as sentiment analysis. Both approaches suffer from unpredictable turnover in the participants and in the information they react to, making it difficult to distinguish meaningful opinion shifts from those that merely follow known information. We propose a novel approach that tames these sources of uncertainty by introducing "computational focus groups" to track opinion shifts in social media streams. Our approach uses prior user behavior to detect users' biases, then groups users with similar biases together. We track the behavior streams from these like-minded sub-groups and present time-dependent collective measures of their opinions. These measures control for the response rate and base attitudes of the users, making shifts in opinion both easier to detect and easier to interpret. We test the effectiveness of our system by tracking groups' Twitter responses to a common stimulus set: the 2012 U.S. presidential election debates. While our groups' behavior is consistent with their biases, there are numerous moments and topics on which they behave "out of character," suggesting precise targets for follow-up inquiry. We also demonstrate that tracking elite users with well-established biases does not yield such insights, as they are insensitive to the stimulus and simply reproduce expected patterns. The effectiveness of our system suggests a new direction for both researchers and data-driven journalists interested in identifying opinion-shifting processes in real time.
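The pipeline the abstract describes (detect per-user biases, partition users into like-minded groups, then track windowed collective sentiment per group) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the two-group split on the sign of a bias score, the 60-second window, and all function names are assumptions for the sake of the example.

```python
from collections import defaultdict

def assign_groups(user_bias):
    """Partition users into two groups by the sign of a precomputed
    bias score (hypothetical scheme; the paper's grouping may differ)."""
    groups = defaultdict(set)
    for user, bias in user_bias.items():
        groups["left" if bias < 0 else "right"].add(user)
    return dict(groups)

def group_sentiment_series(messages, groups, window=60):
    """messages: iterable of (timestamp_seconds, user, sentiment in [-1, 1]).
    Returns, per group and per time window, the mean sentiment of that
    group's messages. Averaging within each group controls for differences
    in response rate and baseline attitude between groups."""
    user_to_group = {u: g for g, users in groups.items() for u in users}
    buckets = defaultdict(lambda: defaultdict(list))
    for ts, user, sent in messages:
        g = user_to_group.get(user)
        if g is not None:
            buckets[g][int(ts // window)].append(sent)
    return {g: {w: sum(v) / len(v) for w, v in ws.items()}
            for g, ws in buckets.items()}

# Toy usage: two users with opposite biases reacting to one stimulus.
groups = assign_groups({"alice": -0.8, "bob": 0.5})
messages = [(0, "alice", -1.0), (10, "alice", -0.5), (70, "bob", 0.6)]
series = group_sentiment_series(messages, groups)
```

Moments where a group's series moves against its own baseline (the "out of character" behavior the abstract mentions) would then be flagged for follow-up inquiry.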