• Global Witness tested the election integrity commitments of major social media platforms by submitting advertisements containing harmful disinformation
  • TikTok approved 50% of the submitted ads – breaching its own policies, which prohibit all ads containing political content
  • Facebook accepted one ad containing harmful disinformation – an improvement on a previous test during the 2022 Midterms
  • YouTube initially approved 50% of the ads but blocked publication of all ads until the submission of formal identification – a significantly more robust barrier for disinformation-spreaders compared to other platforms    
  • The advertisements were removed by Global Witness prior to publication online


[17 October 2024, Washington DC, US] – Just weeks ahead of the US presidential election, major social media platforms failed to detect harmful disinformation, according to a Global Witness investigation released today. 

American voters increasingly make their voting decisions based on information gathered online, primarily through social media platforms. In light of this, three of the most popular platforms – TikTok, YouTube, and Facebook – have made public commitments to protect the integrity of the election. 

We submitted eight advertisements containing false election claims and threats to put these commitments to the test. We translated them into ‘algospeak’ (using numbers and symbols as stand-ins for letters) as this has become an increasingly common method of bypassing content moderation filters. All ads were specifically designed to clearly breach existing publisher policies (see notes to editors for more information on the content of the ads). 

After the platforms informed us whether the ads had been accepted for publication, we deleted them to ensure no disinformation was spread.

The results of our investigation show that despite improvements, major social media companies are still risking the proliferation of disinformation in exchange for profit. Our findings are consistent with our previous investigations in the US (2022 Midterms), Brazil (2022 General Election) and India (2024 General Election).


Ava Lee, Digital Threats Campaign Lead at Global Witness, said: 

“Days away from a tightly fought US presidential race, it is shocking that social media companies are still approving thoroughly debunked and blatant disinformation on their platforms. 

“For several years, we have tested numerous times whether platforms have the safeguards in place to catch this type of harmful content. While we’re pleased to see YouTube improve on their previous performance, it is disappointing that TikTok and Facebook appear to be dropping the ball, yet again, in such an important moment. 

“In 2024, everyone knows the danger of electoral disinformation and how important it is to have quality content moderation in place. There’s no excuse for these platforms to still be putting democratic processes at risk.” 


Although TikTok has improved since approving 90% of submitted disinformation ads in 2022, its continued poor performance is particularly noteworthy given the platform’s strict publisher policy, which prohibits all political content in ads and should have made it easy to reject our submissions. Despite this clear-cut policy, TikTok performed the worst in this test. With the platform already under scrutiny from US officials over potential foreign interference operations, our findings put a further spotlight on TikTok’s role in US political discourse. 

Political debates are increasingly playing out online and, in the US, primarily on the three platforms we put to the test. With two out of three platforms accepting ads containing blatant disinformation, the integrity of the US presidential election is at risk. We are calling on Facebook and TikTok, in particular, to increase their efforts to protect political debate in the US from harmful disinformation (see notes to editors for specific recommendations).

We approached TikTok, Meta and Google for comment. TikTok confirmed that all of the ads submitted by Global Witness breached its advertising policies. It investigated why some of the ads were accepted and found that its machine moderation system had approved them in error, though it noted that the ads were never published. TikTok said it would use our findings to help retrain its moderation system to better detect similar violative ads in the future. Both TikTok and Google said that their ad review processes have multiple layers and that they are constantly improving their policy enforcement systems. Meta did not respond to our request for comment.