New investigation into hateful content in the US and India reveals ineffective reporting processes
London, February 1 - Flagged misogynistic, hate-filled content was kept live on YouTube and the Indian microblogging site Koo despite violating the companies’ own policies, according to a new test of platform reporting mechanisms carried out by Global Witness and the Internet Freedom Foundation (IFF).
The joint investigation identified and reported real-life examples of hate speech on YouTube and Koo in both the US and India that targeted women on the basis of gender, some of which included Islamophobic, racist, and casteist hate. These included posts that described women as “dogs”, “whores” and “100 percent worthless”, stated that their “genes are trash, absolute trash”, referred to a prominent Muslim journalist as a “terrorist”, demeaned black women, and targeted Dalits (a marginalised protected community in India) with denigrating slurs. All of these violated the platforms’ hate speech policies and should therefore have been taken down once reported, according to both companies’ official processes. [1] To test these processes, we reported 79 videos on YouTube and 23 posts on Koo containing prohibited hate speech. [2]
Although the content clearly contravenes the platforms’ policies, our test found that:
- YouTube did not remove any of the 79 reported videos containing hate speech. A full month after the content was reported, only one video’s status had changed: it was age-restricted to viewers over 18. All 79 videos remained live on the platform.
- Koo also failed to act on most of the policy-violating content we reported. Of the 23 reported posts, the platform removed six, just over a quarter. It reviewed but took no action on 15 others, and failed to provide any response for the remaining two.
- While both platforms left the vast majority of reported content live on their sites, Koo did review and respond to most reports within a day, in contrast to YouTube’s lack of response after more than a month.
Previous research by Global Witness and others has repeatedly demonstrated that social media platforms’ failure to moderate banned hate speech content is a widespread, systemic and ongoing problem, with often devastating consequences, from Myanmar to Ethiopia to the US. [3] In light of these exposés, many social media platforms have pointed to the reporting tools they give users, which allow harmful content to be reviewed and removed if it violates their policies. Yet our latest findings show that these reporting mechanisms are flawed and ineffective: the lax responses of YouTube and Koo show that both platforms are failing to properly review and act on material they say has no place on their sites, even once it is reported.
Not only are social media platforms failing to address this online hate, but their data-hungry, engagement-driven business model may in fact be amplifying it by favouring expressions of outrage and polarising content.
Our findings that social media platforms are enabling misogynistic hate online come against the backdrop of a surge of online violence against women and girls in recent years, threatening women’s safety, leading to serious and long-lasting mental health impacts, silencing women in online spaces and creating a chilling effect on their engagement in public and political life, from journalism to leadership roles.
Moreover, our evidence of platform inaction is particularly concerning in a year in which both the US and India, together with over sixty other countries, are due to hold national elections. Online hate speech and disinformation have already led to attacks on journalists and the targeting and murders of religious minorities in both countries, as well as the offline harassment of election workers in the US.
Prateek Waghre, Executive Director at Internet Freedom Foundation, said:
“Our investigation demonstrates that social media platforms continue to leave the door open for hateful speech to flourish, endangering women and minorities, and enabling a toxic information ecosystem in a critical year for global democracy.”
“Social media corporations’ failure to respond to content that is in violation of their own policies in the world’s two largest democracies shows their alarming lack of preparedness around elections, where the febrile political climate risks amplifying the threat and impact of extreme and harmful online content. The burgeoning social media user base and rapidly evolving digital landscape in India further heighten these risks in the lead-up to the elections.”
Henry Peck, Digital Threats to Democracy Campaigner at Global Witness, said:
“Time and time again, we have seen how online hate speech causes real-world harms, putting the lives of its targets at risk and fuelling broader conflict and violence, with a disproportionate impact on women and marginalised communities.”
“Instead of continuing to generate revenue from hate, YouTube, Koo, and other social media platforms must urgently act to properly resource content moderation, enforce their reporting processes, and disincentivise extreme and hateful content.”
“As close to half the world’s population potentially go to the polls in 2024, it has never been more crucial for social media corporations to learn from previous mistakes and protect their users’ safety and spaces for democratic engagement, both online and offline.”
In response to Global Witness and the Internet Freedom Foundation’s investigation, a Koo spokesperson said the company is committed to making the platform safe for users and endeavours to keep developing systems and processes to detect and remove harmful content. The spokesperson said Koo conducts an initial screening of content using an automated process that identifies problematic content and reduces its visibility; content that is subsequently reported is then evaluated by a manual review team, following several guiding principles, to determine whether deletion is warranted.
Google was approached for comment but did not respond.