TikTok and YouTube have passed a second test of their ability to detect election disinformation, showing how vital independent investigations are to raising enforcement standards
In case you missed it: in the month before the US presidential election, we published an investigation into whether key social media platforms could effectively block election disinformation.
We tested this by submitting eight ads to platforms including TikTok and YouTube. All of the ads violated the platforms’ policies, for example by containing outright false election information (such as claiming that you can vote online) or by promoting content that threatened electoral workers (such as calling for a repeat of the 6 January 2021 attack on the Capitol).
We translated these ads into "algospeak", using numbers and symbols as stand-ins for letters, to mimic what bad actors might do and better test the effectiveness of the platforms’ moderation systems.
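To illustrate what this kind of obfuscation looks like in practice, here is a minimal Python sketch. The substitution map below is a hypothetical example of our own, for illustration only; it is not the exact scheme used in the ads themselves:

```python
# Hypothetical "algospeak" substitution map, for illustration only;
# this is not the scheme used in the actual ads.
SUBSTITUTIONS = {"a": "@", "e": "3", "i": "1", "o": "0", "s": "$"}

def to_algospeak(text: str) -> str:
    """Replace selected letters with look-alike numbers and symbols."""
    return "".join(SUBSTITUTIONS.get(ch.lower(), ch) for ch in text)

print(to_algospeak("vote online"))  # prints: v0t3 0nl1n3
```

Even a trivial substitution like this can defeat a naive keyword filter that only matches exact strings, which is why it makes a useful probe of how robust the platforms’ moderation systems really are.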
Our findings back in October were that:
- TikTok performed worst: it approved 50% of our ads outright
- YouTube performed best: although it also approved 50% of our ads, it told us it would not publish any of them until we provided formal identification (such as a passport or driver’s licence), creating a higher barrier to publishing disinformation on the platform
In response to our October investigation, TikTok confirmed that all of our ads breached its policies and claimed that some had been accepted in error. It added that our ads “may” have undergone additional stages of review, and that it would use our findings to help detect similar violative ads in the future.
We decided to test this.
We resubmitted the exact same eight ads to both YouTube and TikTok to see what would happen. Both platforms passed our second test:
- TikTok disapproved all our ads (including the ones they had approved previously).
- YouTube suspended our entire account under its suspicious payments policy and flagged half of our ads as making unreliable claims or as containing US election advertising (which would require further verification). All of the ads were listed as “not eligible” and “under review”, so none were approved.
As always, we deleted all of the ads before they went live, to ensure no harmful disinformation was actually published.
These results are encouraging and show that platforms are capable of improving their standards when pushed.
However, this was not a difficult test for them to pass, considering the ads were word-for-word copies of those submitted the previous month.
Nevertheless, these results demonstrate how important it is that independent actors such as journalists, academics and NGOs are allowed and encouraged to test platforms’ moderation systems.
At a time when the incoming US President has threatened to clamp down on efforts to fight disinformation, and platforms are shutting down access to vital transparency tools, as Meta did earlier this year with CrowdTangle, work like this becomes even more important.
Users deserve to be able to trust that platforms will serve them ads containing accurate and reliable election information.
It shouldn’t be up to everyday citizens to fact-check every piece of election information they are shown, especially when it is served to them in paid advertising.
Organisations like ours must continue to test whether the policies that platforms claim to enforce actually work in practice.
Finally, it would be remiss of us to celebrate this small win without acknowledging that the situation outside of the US is very different.
As we have reported in the past, TikTok and YouTube have failed similar tests in other elections, such as in India and Ireland.
These threats to democracy will continue until social media platforms dedicate sufficient resources to content moderation in every jurisdiction in which they operate.