The UK is going to the polls next week and election campaigning is in full swing. We all agree that for our democracy to be healthy, people need to be able to express themselves freely. But when the conversation we’re seeing online isn’t authentic because it comes from bots – accounts run by computers that have been programmed to look like humans – then there is cause for huge concern.  

We therefore looked to see if there are accounts that might be bots spreading political messages during the UK electoral period. We chose to focus on X because it’s a platform where there are a lot of political conversations.

What we found was concerning: a small group of accounts that appear to be bots have posted more than 60,000 tweets since the general election was announced. It is estimated that these tweets have been seen a staggering 150 million times.

This is a snap investigation following the snap election, so the number of accounts we looked at was necessarily limited. As a result, the number of bot-like accounts we uncovered is also limited, but they have an outsized influence given how prolifically they post and how many people are seeing their content. 

Why are bots a problem?

Bots that are particularly dangerous for our democracies hide the fact that they are bots and spread political messages that are frequently and intentionally divisive and hateful. Accounts like these threaten our democracies by drowning out the voices of real voters and subverting the conversation. 

It takes time and money to set up fake accounts, programme them to amplify particular topics and give them at least a bit of original content to tweet alongside all their re-tweets. When done at scale, these are the sort of operations that have been carried out by foreign governments. 

We know that these dangers are real. For example, tweets from confirmed Russian bots were seen over 10 million times during the Brexit referendum. Indeed, the problem is likely to get worse as advances in generative AI make it quick and cheap to create text that looks like it might have been written by a human. 

Social media corporations bear a responsibility to prevent their platforms from being abused in this way. The UK’s new Online Safety Act requires platforms to protect against the risk of foreign interference – and bots are one of the ways that a foreign power can attempt to sow division.



As the UK prepares for its next general election, a new investigation has identified several bot-like accounts that may be sowing the seeds of division. Paul Hanson / Alamy Stock Photo


What did we do?

We focused on two topics that are central to the current UK election debates: climate change and migration [1]. We gathered all the tweets since the election was announced that used specific hashtags related to those topics. 

The hashtags deliberately covered a wide spectrum of views, from #welcomerefugees to #migration and #stoptheboats, and from #climatecrisis to #netzero to #endnetzero [2].   
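As a rough illustration of this collection step, the logic might look like the following sketch in Python. This is not the actual pipeline (the analysis was done with the help of Information Tracer, see endnote [1]); the search_recent_tweets function and the field names are hypothetical stand-ins for whatever data source is used, and the 500-tweet cap per hashtag is described later in this piece.

```python
# Sketch of the collection step, for illustration only.
# `search_recent_tweets` and its parameters are hypothetical stand-ins
# for whatever data source is actually used.

HASHTAGS = [
    "#welcomerefugees", "#migration", "#stoptheboats",
    "#climatecrisis", "#netzero", "#endnetzero",
]
ELECTION_ANNOUNCED = "2024-05-22"   # the date the election was called (22 May)
MAX_TWEETS_PER_HASHTAG = 500        # sample cap described later in the piece

def collect_candidate_accounts(search_recent_tweets):
    """Pool the authors of tweets that used any of the tracked hashtags."""
    authors = set()
    for tag in HASHTAGS:
        tweets = search_recent_tweets(
            query=tag,
            since=ELECTION_ANNOUNCED,
            limit=MAX_TWEETS_PER_HASHTAG,
        )
        authors.update(t["author_handle"] for t in tweets)
    return authors
```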

We then searched for evidence that any of the accounts posting tweets with these hashtags might be bots. For researchers outside of the platforms, it’s often not possible to know for sure that an account is a bot, but it is possible to look for [3]: 

  • Accounts that post enormous numbers of tweets per day
  • Accounts that rarely write any of their own content but almost always retweet others
  • Accounts whose handle ends in a long string of numbers, indicating that the account holder used the default account name provided by X instead of creating their own unique account name
  • Accounts without a profile picture that appears to show the person running the account, or with a profile picture that shows signs of being generated by an AI tool or stolen from elsewhere on the web
  • Accounts with low numbers of followers

One or two of these red flags by themselves do not create any suspicion of a bot – there are plenty of accounts that don’t have many followers and there’s nothing wrong with not including a photo of yourself in your bio. 

Indeed, anonymous accounts are often an important way for people to be able to participate in online conversations safely. 

But when there are three or more of these red flags, and when at least one of those red flags is that the account tweets practically all the time, then it’s reasonable to suspect that the account could be partially or fully automated. 
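To make that screening rule concrete, here is a minimal sketch in Python of how such a check could be expressed. It is illustrative only and not the code used in this investigation; the flag names are our own, and the thresholds behind each flag are listed in the appendix.

```python
# Illustrative sketch of the screening rule described above, not the
# code used in this investigation. Flag names are our own.

RED_FLAGS = [
    "high_tweet_volume",      # posts enormous numbers of tweets per day
    "mostly_retweets",        # rarely posts original content
    "numeric_handle_suffix",  # handle ends in a long string of numbers
    "no_personal_photo",      # no genuine-looking photo of the account holder
    "few_followers",          # low follower count
]

def looks_bot_like(flags: dict) -> bool:
    """Treat an account as bot-like only if it shows three or more red
    flags, one of which is the high-volume tweeting flag."""
    count = sum(1 for name in RED_FLAGS if flags.get(name, False))
    return count >= 3 and flags.get("high_tweet_volume", False)

# Example: three flags including high tweet volume -> flagged for manual review
print(looks_bot_like({
    "high_tweet_volume": True,
    "mostly_retweets": True,
    "few_followers": True,
}))  # True
```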

We then manually checked each bot-like account and removed any we thought showed evidence of being controlled by a real person. 

Bots are often set up to work in a coordinated way, amplifying the same type of content as each other. So to deepen our analysis, we looked at the most common hashtags used by a shortlist of possible bots that we had drawn up early in the investigation. 

This analysis showed that an anti-Meghan Markle hashtag and #labourlosing were the two most popular hashtags used. Because the second of those hashtags directly relates to the UK election, we added it to our analysis.

How many possible bots did we find?

Our focused analysis of a handful of hashtags used in tweets since 22 May turned up evidence of 10 accounts that look like they might be bots, from a sample of at most 500 tweets per hashtag. 

This isn’t a giant number of accounts, but the point is that a small number of prolific accounts can have an outsize influence on the electoral debate. 

Together, these 10 bot-like accounts have published more than 60,000 tweets in the few weeks since the election was called and those tweets are estimated to have been shown to people more than 150 million times.

Our searches for tweets about migration found four possible bots, three using #stoptheboats and one using #refugeeswelcome. Our searches on climate change turned up fewer possible bots: one using #endnetzero [4] and one using #climatecrisis. The other possible bots were found via #labourlosing. 

All of these accounts have had days when they’ve posted more than 200 tweets, and four of them have had days when they’ve posted an extraordinary 500+ tweets. 

What sort of content are these possible bots amplifying?

The overwhelming majority of the bot-like accounts we found (8/10) are overtly party political – they don’t just express political opinions, but clearly align themselves for or against a particular political party [5]. 

For example, they use party logos as their profile picture, they regularly retweet the party or they use hashtags that encourage or discourage voting for a particular party.

Two of the three bot-like accounts we found using #stoptheboats encourage people to vote for Reform UK. 

The one bot-like account we found using #climatecrisis encouraged people not to vote Conservative by, for example, including the hashtag #GetTheToriesOut in their account bio. 

All five of the bot-like accounts we found using #labourlosing promoted Reform UK.

Some of the bot-like accounts spread extreme and violent Islamophobia and homophobia. Some spread anti-Semitism and transphobia. Some state that climate change is a “hoax”, that vaccines have created a “genocide” and that the so-called great replacement theory is a fact. 

One account is clearly a fan of President Putin, retweeting content that says he is “the greatest President ever.”

Who might be behind these bot-like accounts?

When you find accounts that very much look like they’re bots, the first question that comes to mind is who might have set them up. Who is paying for them? 

It is not possible for us to answer that. What we can say is that if these accounts really are bots, then given the amount of disinformation and hate that they post, they could well have been paid for by someone with a vested interest in dividing us or getting particular political parties into power. 

What should be done about this?

This is a problem that is made far worse by social media platforms: they are designed in ways which can be easily exploited and weaponised to drive political conversations in divisive and harmful directions. 

The responsibility should therefore lie with the social media corporations to make sure that their platforms are not being manipulated and our democracies are not being put at risk. 

The EU has recently passed legislation, the Digital Services Act, which requires platforms to mitigate the risks that their services pose to electoral processes, with the threat of huge fines if they don’t.

The major social media platforms already ban harmful bots. For example, X’s policies state that you may not “artificially amplify […] information or engage in behavior that manipulates or disrupts people’s experience” and that users that violate this policy may have the visibility of their posts limited and, in severe cases, their accounts suspended. 

The problem, however, is that the policies aren’t sufficiently enforced.

We call upon X to investigate whether the potential bots on the list we have provided to them violate their policies, and to invest more in protecting our democratic debate from manipulation. 

We wrote to X to give them the opportunity to comment on these findings, but they did not respond.

Appendix

These are the red flags which, when they appear in combination, suggest that an account might be a bot. 

Overall, these flags indicate a low investment in setting up a profile (which is easier to automate en masse) and a high investment in sharing content across a platform at high volumes, which is consistent with the aims of setting up automated accounts.

The account is a prolific tweeter and tweets in high volume:

  • The account has tweeted more than 200 times in a single day at some point in the last year
  • The account has tweeted more than 60 times a day on average over the lifetime of the account

The account posts little original or high-quality content and predominantly retweets (retweets are easier to automate; this may generate some following but is unlikely to lead to a mass following):

  • The account retweets other accounts’ tweets more than 90% of the time
  • The account has fewer than 1,000 followers

The profile could have been set up quickly:

  • The account’s handle ends in a long string of apparently random numbers, suggesting the handle was generated automatically by X rather than chosen by the account holder
  • The account does not have a profile picture that depicts the account owner: for example, there is no profile picture at all, or a cartoon or logo is used instead, which is easier than sourcing an original photo
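As a hedged sketch of how these thresholds could be applied to an account’s metadata, the checks might look like the Python below. The field names are hypothetical, the way the two volume criteria combine into a single flag is our own assumption, and the "long string of numbers" cut-off is illustrative.

```python
import re
from dataclasses import dataclass

@dataclass
class AccountStats:
    # Hypothetical field names; the real data source may label these differently.
    handle: str
    followers: int
    total_tweets: int
    account_age_days: int
    max_tweets_in_a_day_last_year: int
    retweet_share: float        # fraction of posts that are retweets, 0..1
    has_personal_photo: bool    # profile picture appears to show the owner

def red_flags(a: AccountStats) -> dict:
    """Apply the appendix thresholds to one account. Either volume
    criterion is assumed to be enough to raise the high-volume flag."""
    avg_per_day = a.total_tweets / max(a.account_age_days, 1)
    return {
        "high_tweet_volume": (
            a.max_tweets_in_a_day_last_year > 200 or avg_per_day > 60
        ),
        "mostly_retweets": a.retweet_share > 0.9,
        "few_followers": a.followers < 1000,
        # "Long string of numbers": five or more trailing digits is our assumption.
        "numeric_handle_suffix": bool(re.search(r"\d{5,}$", a.handle)),
        "no_personal_photo": not a.has_personal_photo,
    }
```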

Endnotes

[1] We used Information Tracer to help with our analysis. 

[2] In addition, we searched #migrantcrisis, #smallboatscrisis, #ltn and #climatescam. None of those turned up examples of accounts with three or more red flags for bot-like activity. 

[3] For further details on the red flags we used, see the appendix.

[4] The account using #endnetzero also uses #stoptheboats.

[5] We do not have any evidence to suggest that any UK political party is paying for, using or promoting bots as part of their election campaigns.