Facebook, X/Twitter, TikTok and YouTube all approved adverts featuring extreme and violent misogynistic hate speech against women journalists in South Africa, a new investigation by Global Witness and the South African public interest law firm the Legal Resources Centre (LRC) can reveal.
For the investigation, we submitted 10 adverts containing hate speech targeting women journalists to Facebook/Meta, TikTok, X/Twitter and YouTube, each translated into four languages: English, Afrikaans, Xhosa and Zulu (40 adverts in total). The adverts, based on real-world examples of abuse received by women journalists, were violent, sexualised and dehumanising, referring to women as vermin, prostitutes or psychopaths and calling for them to be beaten and killed. They included statements such as “they’re just all sheep and should be slaughtered” and “they all need to die”.
Despite the extreme content of the adverts and the fact that they clearly breach the social media platforms’ own policies on hate speech, all four platforms approved the vast majority of the adverts:
- Meta and TikTok approved all 40 adverts within just 24 hours
- YouTube also approved all 40, though it flagged 21 of them with an approved but ‘limited’ status, meaning they were still deemed appropriate for some audiences
- X/Twitter approved them all apart from two English adverts, whose publication was ‘halted’ after we conducted further tests of the platform’s approval process
None of the adverts were coded or difficult to interpret, and all fell within the platforms’ own definitions of prohibited hate speech. The adverts also included video footage that further breached the platforms’ policies on hate speech. After capturing these results, we deleted all the adverts before they were published, so none will appear on the platforms.
These findings come as a new global UN survey shows that the vast majority of people polled around the world are concerned about hate speech and disinformation on social media. Against this backdrop, journalists who identify as women are facing escalating threats online, both in South Africa and globally, with this violence often moving offline[1]. These are often calculated assaults that arise out of political tensions or come in response to journalists exposing corruption, and they can be instigated by politicians, their associates and their supporters with the goal of undermining and silencing those who hold them to account. As well as being a serious threat to women’s freedom of speech, livelihoods and personal safety, this gendered hate speech is therefore also a threat to media freedom and democracy. The issue has never been more urgent: South Africa is one of at least 65 countries holding elections next year, together involving some 2 billion voters.
Sherylle Dass, Regional Director at the LRC, said:
“As we approach the biggest election year so far this century, the stakes have never been higher - protecting press freedom and the safety of journalists is essential to uphold the democratic process.
We are deeply concerned that social media platforms appear to be neglecting to enforce their content moderation policies in global majority countries in particular, such as South Africa, which are often outside the international media spotlight.
The platforms must act to properly resource content moderation and protect the rights and safety of users, wherever they are in the world, especially during critical election periods.”
Global Witness has previously conducted numerous investigations, from Myanmar to Ethiopia to the US, demonstrating that social media platforms’ failure to enforce their hate speech policies is a widespread, systemic and ongoing problem, with often devastating real-world consequences. In South Africa, our recent joint investigation with the LRC showed that Facebook, TikTok and YouTube all approved adverts containing violent xenophobic hate speech. These tests also reveal failures in the platforms’ content moderation systems, which use AI to automate much of the work. In light of the recent AI Safety Summit in the UK and the growing discussion of potential future risks at the ‘frontier’ of AI, this shows that there are real and present failures in how AI is being used right now. Despite the overwhelming evidence, social media corporations have not taken action to address the issue, and Meta/Facebook, X/Twitter and Google (which owns YouTube) have drastically cut the teams and contracting firms responsible for dealing with hate speech and mis/disinformation over the past year.[2]
Ferial Haffajee, Associate Editor of the Daily Maverick and former Editor-at-large at HuffPost South Africa, said:
“As a female journalist in South Africa, I have been targeted and abused online, simply for doing my job. This has taken a huge toll on me and my loved ones.
Global Witness and the LRC’s latest exposé shows that social media corporations are not practising what they preach, allowing even the most extreme and violent forms of content to be published and risking complicity with perpetrators of online violence.
Along with many other journalists, I have tried to use the social media platforms’ reporting mechanisms and even contacted the companies directly, but it is to no avail. They knowingly turn a blind eye while playing host to assaults on women’s rights and media freedom.”
Hannah Sharpe, Digital Threats Campaigner at Global Witness, said:
“In the age of the ‘manosphere’, women are under constant threat from misogynistic attacks online, and our investigation shows that platforms continue to enable and even profit from this hate speech.
To protect women and minoritised communities, press freedom, and democracy, together we have to challenge Big Tech’s predatory business model, in which billionaire social media CEOs are raking in huge sums through platforms designed to promote enraging, extreme and hateful content.
We all want an online world that connects us rather than dividing us - to achieve this we need social media corporations to build safety-by-design into their platforms and governments to bring forward balanced regulation grounded in human rights that holds platforms accountable.”
In response to Global Witness’ investigation, a Meta spokesperson said: “These ads violate our policies and have been removed. Despite our ongoing investments, we know that there will be examples of things we miss or we take down in error, as both machines and people make mistakes. That’s why ads can be reviewed multiple times, including once they go live.”
A TikTok spokesperson said that hate has no place on TikTok and that their policies prohibit hate speech. They said that their auto-moderation technology correctly flagged the submitted adverts as potentially violating their policies, but that a second review, by a human moderator, incorrectly overrode the decision. They said that their content moderators speak English, Afrikaans, Xhosa and Zulu and that they are investing in Trust and Safety globally, including expanding operations for the Africa-based TikTok community[3].
Google and X/Twitter were approached for comment but did not respond.