- Meta greenlights hate-filled Facebook ads, including calls for forced sterilisation of immigrants and trans people.
- Ads targeting teens with extreme dieting messaging and gay conversion therapy also approved.
- Campaigners urge Norwegian politicians to back proposed protections for citizens, including full ban on surveillance advertising.
Thursday 3 November, London - Meta is failing to block Facebook ads containing extreme hate speech and disinformation in Norway, a new investigation has found, underscoring the need for urgent regulatory action to rein in Big Tech’s broken business model.
Between 6 and 7 October, researchers from global campaign groups SumOfUs and Global Witness submitted 12 highly inflammatory Facebook ads, including three quotes from the manifesto of Anders Behring Breivik, the far-right terrorist who murdered 77 people in Norway in July 2011. Meta approved all 12 ads within 24 hours, two of them almost instantly. The researchers removed the ads before publication, meaning they were never seen by Facebook users.
The shocking findings come as Norway’s parliament considers a package of measures to beef up protection for citizens online, including a full ban on surveillance advertising and the establishment of an algorithmic oversight board. Other countries are also considering a crackdown on targeted advertising, including the US, where the Federal Trade Commission is running a public consultation on new rules to constrain commercial surveillance.
Campaigners are calling on politicians in Norway, and across the world, to back these measures as part of urgently needed action to address the harms being inflicted on individuals and wider society by global technology platforms.
All 12 ads used in the investigation violated Meta’s own policies, Norwegian law, or both [1]. As well as the Breivik quotes, the ads included: calls for forced sterilisation of immigrants and trans people; anti-semitic and anti-muslim hate speech; LGBTQ hate speech, including an ad for gay conversion therapy targeting teen boys; Covid health disinformation; and extreme dieting messaging targeting teen girls.
Vicky Wyatt, campaigns manager at SumOfUs, said: “This is beyond dystopian. Meta isn’t just ushering in this kind of horror content, its systems also allow it to be targeted at those most vulnerable to the messaging, upending elections, harming children and supercharging hate and division. Warning lights are flashing red all over the world – we need action right now to end Big Tech’s abuses.”
Rosie Sharpe, Senior Campaigner at Global Witness, said: “We’re shocked by the results of this experiment, but unfortunately not surprised. These kinds of disgraceful views proliferate online but have dire real-world consequences. Time and time again Meta’s content moderation systems are failing to stop hate-filled and violence-inducing content. It’s well past time for Meta to act and finally do something about its hate speech problem.”
This investigation follows a string of reports exposing Meta’s failures to protect users in regions across the world. Recent SumOfUs research in Brazil uncovered an ecosystem of ads and posts peddling conspiracy theories about the integrity of the election and supporting far-right calls for a coup. Successive Global Witness investigations have also shown Meta is failing to detect ads containing hate speech and electoral disinformation in Myanmar, Kenya, Ethiopia, Brazil and the United States.
Despite the wealth of evidence of systemic failures and real-world harms, Meta has failed to take substantive corrective measures. That is why SumOfUs and Global Witness are calling on Norway and other governments to tackle the algorithmic systems underpinning this destructive business model. Specific recommendations include: banning surveillance advertising; ensuring robust data protection laws; and requiring tech platforms to carry out independent human rights assessments that must be made public. The groups are also calling on Meta to overhaul its content moderation systems so it can live up to the standards it sets for itself.
In response to our findings, a Meta spokesperson said: “Hate speech and harmful content have no place on our platforms, and these types of ads should not be approved. That said, these ads never went live, and our ads review process has several layers of analysis and detection, both before and after an ad goes live. We continue to improve how we detect violating ads and behavior and make changes based on trends in the ads ecosystem.”
For more information, contact Andrea Desky: [email protected] and Rewan Al-Haddad: [email protected]
Or Dominic Kavakeb, [email protected]