The presidential election is less than six months away, and as the race to the Oval Office continues, campaign initiatives — especially political advertising and commentary on the current issues of 2020 — are kicking into high gear.
Many Texans may be tuning into social media to research candidates and their platforms, or noticing President Donald Trump turning to Twitter again and again to share his thoughts. That’s not to mention that social media — especially Twitter — has been a hot spot of activity amid ongoing conversations about the coronavirus pandemic and racial injustice.
But are the tweets and posts you’re reading written by a real person? Or are they posted by a bot?
In 2018, the Pew Research Center did a study that found that accounts suspected of being bots are responsible for as many as two-thirds of all tweets that link to popular websites.
One study by researchers at the University of Southern California analyzed election-related tweets sent in September and October 2016 and found that 19 percent were sent by bots.
And most recently, research from Carnegie Mellon University published in May reported that nearly half of the Twitter accounts spreading messages about the coronavirus pandemic on the platform are likely bots.
“Twitter has become the main information source for a significant fraction of Americans. It’s the platform many people use to get their daily news, spanning many topics, but importantly, including politics, policy and social issues,” said Emilio Ferrara, a research assistant professor at the USC Viterbi School of Engineering’s Information Sciences Institute and an author of the study. “We need to guarantee that this platform is reliable and that it does not compromise the democratic political process by fostering the spread of rumors or misinformation.”
In addition to their use — and strong influence in elections — bots are all over the current news about the coronavirus.
In a study of 200 million tweets, Carnegie Mellon researchers found that 45 percent of them were from accounts that acted more like computers than humans, though they weren’t sure what groups or individuals were behind the bots.
Experts at Carnegie Mellon University, where research into bots is ongoing, say they are seeing up to twice as much bot activity as they expected based on previous natural disasters, crises and elections.
If you’re worried about encountering a bot online, experts say there are a few key things to look for. The Carnegie Mellon researchers are using a “bot hunter” tool that flags accounts that might be run by computers, not humans. Twitter is also attempting to flag tweets or accounts that might be spreading misinformation, regardless of the topic.
A social media bot may post far more often than is humanly possible, post at odd hours such as the middle of the night, or appear to be in several different countries within a few hours. Experts also consider who follows the account, its general presence on a platform, its hashtag usage and whether it relies on copied-and-pasted messaging. The best thing to do, they say, is “stay vigilant” and fact-check with multiple sources.
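As a rough illustration of how those red flags could be checked programmatically, here is a minimal sketch in Python. The function, thresholds and field names are hypothetical choices for this example — they are not the Carnegie Mellon “bot hunter” tool or Twitter’s actual detection rules, which are far more sophisticated.

```python
from collections import Counter
from datetime import datetime

def bot_signals(timestamps, texts, odd_hours=range(2, 5), max_daily_posts=72):
    """Score an account against the simple red flags described above.

    timestamps: list of datetime objects, one per post
    texts: list of post texts, aligned with timestamps
    Thresholds are illustrative assumptions, not real platform rules.
    """
    posts_per_day = Counter(t.date() for t in timestamps)
    flags = {
        # Posting far more often than a typical human could
        "high_volume": max(posts_per_day.values()) > max_daily_posts,
        # Most posts land in the middle of the night
        "odd_hours": sum(t.hour in odd_hours for t in timestamps) / len(timestamps) > 0.5,
        # Copied-and-pasted messaging: many exact duplicate posts
        "duplicates": max(Counter(texts).values()) > 3,
    }
    # Two or more flags together make the account look suspicious
    flags["suspicious"] = sum(flags.values()) >= 2
    return flags
```

A real detector would also weigh follower networks and hashtag usage, which this toy sketch omits because they require data about other accounts, not just the one being examined.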
With the 2020 election approaching while many outcomes of the coronavirus pandemic remain unknown, social media is particularly vulnerable to bots attempting to spread misinformation.
Reuters reported in March that Russian media outlets had recently engaged in a widespread “disinformation campaign” to sow panic and distrust in the West about the coronavirus.
Analysis of the 2016 presidential election made a strong case that Russian bots had influenced the outcome and election of President Trump. Experts from Carnegie Mellon say it’s possible that this could happen again in November.
And although not all bots — accounts that tweet or post automatically — act with bad intentions, Ferrara emphasized that awareness is important. A key takeaway in understanding bots is that because they are computerized, they are always evolving and getting better at what they do, which may make them harder to recognize as the 2020 elections draw closer.
“Our study further corroborates this idea that there is an arms race between bots and detection algorithms. As social media companies put more efforts to mitigate abuse and stifle automated accounts, bots evolve to mimic human strategies. Advancements in AI enable bots to produce more human-like content,” Ferrara said. “With the upcoming 2020 U.S. elections, the integrity of social media discourse is of paramount importance to allow a democratic process free of external influences.”