By Daniel Lundgaard
Bots and their impact on online conversations are rapidly becoming an important problem on social media. In the conversation around the current Coronavirus pandemic, somewhere between 45% and 60% of the Twitter accounts that promoted disinformation were identified as bots; in the anti-vaccine debate, researchers have found that bots are used to “weaponize” online health communication and create discord; and in the climate change debate, research suggests that about a quarter of all tweets are produced by bots.
These bots are used in a wide range of misinformation “strategies”. Based on findings from my own research and a review of the current research on the topic, I have summarized what I perceive as the three main strategies in which bots are known to have been used:
Amplifying certain opinions. The simplest strategy is to use bots to amplify a specific opinion, often by continuously re-tweeting the same tweet or link, or by only endorsing posts shared by people with similar interests.
Flooding the discourse. Malicious actors often seek to increase confusion and challenge the current status quo, e.g. the scientific consensus that climate change is man-made. In this strategy, bots are used to spread large volumes of information and start multiple conversations (often covering both sides of the debate), which makes it easier to question the current consensus. A similar tactic is often seen in disinformation campaigns, where a large number of “fake news” outlets create a new media ecosystem; because of the increased volume of information, the voice of credible outlets is “drowned out”, which empowers the fake news outlets.
Linking issues to current tensions. Efforts to link debates to current tensions seek to polarize opinions and cause division, as seen in the vaccine debate, which has been tied to current racial/ethnic divisions. Here, bots are mainly used either to make the connection explicitly in their own tweets, or to comment on content shared by others, suggesting a link to certain socioeconomic tensions.
With these strategies in mind, identifying the users that are in fact bots seems like a crucial task. However, detecting and adequately handling these bots has proven to be a challenge for the major social networking sites such as Facebook and Twitter.
Nonetheless, after reviewing the bot-detection tools currently available, the current research on the topic, and my own findings from an analysis of roughly 5 million tweets about climate change, I have identified a few tips that might help you spot these bots – and potentially their impact on the conversation. For this list, I have left out bot-detection approaches that rely on patterns not normally visible to most users, e.g. network features such as detecting whether the same group of users follows and re-tweets/likes another group of users with similar language and messaging.
The user profile
Reviewing the user profile appears to be one of the best ways for “normal” users to detect a bot. The simplest indicator is a missing profile picture; however, sophisticated bots might use stolen photos, and here a quick reverse image search (right-clicking the profile image and selecting “Search Google for image”) might reveal something about the source of the image, e.g. that it was taken from someone else. A generic (or poorly worded) profile description might also be an indicator, and in my own research I have found that, for predicting opinions, reviewing the content of user profile descriptions works even better than reviewing the content of the tweets shared on a specific topic.
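For readers comfortable with a bit of code, these profile checks can be roughly automated. The sketch below is only a minimal illustration: it assumes you already have a profile as a dictionary shaped like the Twitter v1.1 user object (field names such as default_profile_image, description and statuses_count come from that format), and the thresholds are arbitrary placeholders rather than validated cut-offs.

    from datetime import datetime, timezone

    def profile_red_flags(user):
        """Return a list of simple red flags for a Twitter user profile.

        `user` is assumed to be a dict shaped like the Twitter v1.1 user
        object (e.g. fetched with a client library or from an export).
        Thresholds are illustrative, not tuned.
        """
        flags = []

        # No custom profile picture is the simplest indicator.
        if user.get("default_profile_image", False):
            flags.append("uses the default profile image")

        # A missing or very short description is another weak signal.
        description = (user.get("description") or "").strip()
        if len(description) < 10:
            flags.append("missing or very generic profile description")

        # A young account with a very high tweet rate is suspicious.
        created = user.get("created_at")
        if created is not None:
            age_days = max((datetime.now(timezone.utc) - created).days, 1)
            tweets_per_day = user.get("statuses_count", 0) / age_days
            if tweets_per_day > 50:
                flags.append(f"posts roughly {tweets_per_day:.0f} tweets per day")

        return flags

    # Hypothetical example profile, just to show the output format.
    example_user = {
        "default_profile_image": True,
        "description": "",
        "created_at": datetime(2020, 1, 1, tzinfo=timezone.utc),
        "statuses_count": 40000,
    }
    print(profile_red_flags(example_user))

None of these flags is proof on its own; they are simply the same cues described above, expressed as a checklist you could run over many profiles at once.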
Different or “stiff” language
The conversation on Twitter is often informal, and people frequently use abbreviations or structure their sentences in unusual ways, which can be difficult to copy. As a result, bots might appear mechanical or rigid in their language – often returning to the same topic, sharing the same link over and over again, or revisiting a topic long after it has outlived the rather short life-cycle typical of topics on Twitter.
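One rough way to quantify this “stiffness” is to measure how often an account repeats the same link or the same text. The snippet below is a small sketch along those lines; it only assumes you have an account’s recent tweets as a list of strings, and the URL pattern and the toy timeline are my own illustrative choices.

    import re
    from collections import Counter

    URL_PATTERN = re.compile(r"https?://\S+")

    def repetition_score(tweets):
        """Share of tweets whose text or link duplicates an earlier tweet.

        `tweets` is a list of tweet texts from one account. A score close
        to 1.0 suggests the account keeps pushing the same message or the
        same link, one trait of "stiff", bot-like behavior.
        """
        texts = Counter()
        links = Counter()
        duplicates = 0

        for tweet in tweets:
            # Normalise lightly so trivial differences do not hide repeats.
            text = URL_PATTERN.sub("", tweet).lower().strip()
            urls = URL_PATTERN.findall(tweet)

            if texts[text] > 0 or any(links[u] > 0 for u in urls):
                duplicates += 1

            texts[text] += 1
            for u in urls:
                links[u] += 1

        return duplicates / len(tweets) if tweets else 0.0

    # Hypothetical timeline: the same link pushed over and over again.
    timeline = [
        "Read the truth here https://example.com/article",
        "They don't want you to see this https://example.com/article",
        "Read the truth here https://example.com/article",
    ]
    print(repetition_score(timeline))  # roughly 0.67 for this toy example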
Lack of humor
Granted, everyone misunderstands a joke sometimes, and people can have trouble understanding sarcasm. Even so, humor, and especially sarcasm, remains one of the major challenges for bots, both to understand and to respond to appropriately. This is particularly relevant on Twitter, where conversations may refer to shared understandings, inside jokes or memes used in a certain way within a community, which even sophisticated bots may have trouble understanding and adapting to.
Temporal behavior
Reviewing past activity, in particular with a focus on patterns in temporal behavior, might also be useful, e.g. spotting that a user seems to tweet at the same hour every day, shares multiple tweets per minute, or immediately retweets or comments on other posts, which can be an indicator of an automated, pre-defined response.
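These temporal patterns are also easy to check programmatically if you can get hold of the timestamps of an account’s recent tweets. The sketch below is a minimal, illustrative version: it only assumes a list of datetime objects, and the thresholds (how concentrated the posting hours are, how close together consecutive posts may be) are arbitrary placeholders to adjust for your own data.

    from collections import Counter
    from datetime import datetime

    def temporal_red_flags(timestamps, burst_seconds=5, hour_share=0.5):
        """Flag simple temporal patterns that may indicate automation.

        `timestamps` is a list of datetime objects for one account's
        tweets. The thresholds are illustrative, not tuned.
        """
        flags = []
        if len(timestamps) < 2:
            return flags

        times = sorted(timestamps)

        # 1. Does most activity fall within the same hour of the day?
        hours = Counter(t.hour for t in times)
        top_hour, top_count = hours.most_common(1)[0]
        if top_count / len(times) >= hour_share:
            flags.append(f"{top_count} of {len(times)} tweets sent around {top_hour}:00")

        # 2. Are there bursts only seconds apart (e.g. instant retweets)?
        gaps = [(b - a).total_seconds() for a, b in zip(times, times[1:])]
        bursts = sum(1 for g in gaps if g <= burst_seconds)
        if bursts:
            flags.append(f"{bursts} gaps of {burst_seconds} seconds or less between tweets")

        return flags

    # Hypothetical activity log: daily posts at 09:00 plus a rapid burst.
    log = [datetime(2020, 3, d, 9, 0, 0) for d in range(1, 8)]
    log += [datetime(2020, 3, 8, 9, 0, s) for s in (0, 2, 4)]
    print(temporal_red_flags(log))

Again, a single flag does not prove anything, but an account that trips several of these checks at once deserves a closer look.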
It is important to acknowledge that not all bots are seeking to manipulate political conversations on social media. However, while some bots certainly are created for noble purposes, bots are increasingly becoming an important tool for various (potentially malicious) actors in their efforts to shape conversations on social media – especially Twitter. As a result, we, as a society, need to become better at detecting bots and limiting their power to shape the online debate. I hope this blog post has broadened your understanding of bots – and that you have picked up a few tricks to spot potential bots appearing in your Twitter feed.
About the author
Daniel Lundgaard is a PhD Fellow at the Department of Management, Society and Communication at Copenhagen Business School. His research investigates how communication on social media (e.g. the use of emotions, certain forms of framing or linguistic features) shapes the ways we discuss and think about organizational and societal responsibilities.
Photo by Claudio Schwarz | @purzlbaum on Unsplash