Written by: Josh Rosenberg
Primary Source: Joshua M. Rosenberg, November 5, 2015
Tonight, Spencer Greenhalgh and I are continuing our exploration of AECT Twitter data (you can get some more background info and see yesterday’s work here). Again, we’ve focused on the hashtags #aect, #aect15, and #aect2015. Although #aect15 is the official hashtag, we noticed that #aect2015 is still getting a lot of attention, which raises some interesting possibilities for what we could look at. That’s not our focus for today, though. Instead, we thought we’d see what people like and dislike at AECT, as seen in their tweets.

Well, sort of. We’re using a relatively crude form of sentiment analysis (i.e., measuring levels of emotion in a text) that counts up “positive words” and “negative words” in each tweet and assigns the tweet a score based on those counts. There are more sophisticated forms of sentiment analysis out there, and we think they might work better for this data, but this is still a fun way to look at what you can do with Twitter data. Just don’t read too much into anything!

We downloaded all of the tweets that our tracker has snagged so far and did some basic cleaning. This time around, we got rid of anything before November 1st. We also noticed that the duplicate tweet catcher we set up yesterday isn’t working 100% properly: when our tracker grabs URLs from tweets, it assigns them short URLs, which can differ each time it snags the same URL. As a result, there’s at least one instance where our code didn’t filter out a duplicate, but it’s late, so we’ll leave the fine-tuning for another day.
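To give a sense of what the word-counting approach looks like, here’s a minimal sketch in Python. The word lists and the function name are purely illustrative, not the actual lexicon or code we used:

```python
# Toy positive/negative word lists -- stand-ins for a real sentiment lexicon.
POSITIVE = {"great", "love", "awesome", "thanks", "excellent"}
NEGATIVE = {"bad", "boring", "hate", "terrible", "awful"}

def sentiment_score(tweet: str) -> int:
    """Score a tweet as (# positive words) minus (# negative words)."""
    words = tweet.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
```

A tweet with more positive words than negative ones gets a score above zero, and most tweets (with no lexicon words at all) score exactly zero, which is why so many end up “neutral.”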
First things first: here’s a breakdown of tweets by sentiment level. The graph isn’t perfect (again: late), so here’s one important thing to keep in mind: the sentiment scores for these tweets are all whole numbers, so it would be more accurate for the values along the x-axis to sit at the center of the bar to their right. Keeping that in mind, the vast majority of these tweets are neutral (i.e., have a sentiment score of 0). A small number of them are negative, and a larger number are positive.
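Since the scores are integers, tallying tweets per whole-number score (rather than binning over a continuous range) sidesteps the x-axis issue mentioned above. A quick sketch, using made-up scores rather than our real data:

```python
from collections import Counter

# Illustrative integer sentiment scores; the real ones came from our tracker.
scores = [0, 0, 1, 0, 2, -1, 0, 1, 0, 0]

# Tally tweets per exact score, so each bar sits on a whole number
# instead of a bin edge.
distribution = Counter(scores)
for score in sorted(distribution):
    print(score, "#" * distribution[score])
```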
So, from here, we can do some interesting things. We created two subsets of tweets, one for positive tweets and one for negative ones. Then, taking a cue from yesterday, we made wordclouds to see which words were associated with positive tweets and which ones were associated with negative ones. We reproduce them here for your viewing pleasure; again, though, don’t read too much into anything.
Here are the words that appeared frequently (at least 5 times; bigger words appear more frequently) in positive tweets using AECT hashtags:
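The subsetting and frequency-thresholding steps above can be sketched roughly like this. The tweets and scores here are made up, and the threshold is lowered to 2 just so the toy data produces output (we used 5):

```python
from collections import Counter

# Hypothetical (tweet, score) pairs standing in for our real scored tweets.
scored = [
    ("loved the keynote", 1),
    ("great great session", 2),
    ("wifi was terrible", -1),
    ("heading to lunch", 0),
]

# Split into positive and negative subsets by score.
positive = [tweet for tweet, score in scored if score > 0]
negative = [tweet for tweet, score in scored if score < 0]

# Count word frequencies in positive tweets, keeping only words that
# clear the minimum-frequency threshold (2 here; 5 in our analysis).
freqs = Counter(word for tweet in positive for word in tweet.lower().split())
frequent = {word: n for word, n in freqs.items() if n >= 2}
```

The resulting word-to-count mapping is what a wordcloud generator scales word sizes from.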
So, this is a start. We can’t really draw any firm conclusions from this, but we think it could give us some interesting ideas for where to take this further.