Big Tech platforms failing to assess risks to users despite new EU rules

Big Tech companies published the first round of risk assessments required under the Digital Services Act. Alex Mihis / Global Witness

Large tech platforms like TikTok and Facebook must assess potential harms to users under the EU's Digital Services Act, but a civil society coalition finds the first reports wanting

Together with the DSA Civil Society Coordination Group, Global Witness’s Digital Threats campaign contributed to an analysis of the first round of risk assessment reports under the EU Digital Services Act.

The EU rules are the most sweeping to date for regulating tech companies anywhere in the world. Under the Digital Services Act (DSA), "very large online platforms" (such as Facebook, YouTube, X and TikTok) must identify and mitigate the risks they pose to users at least once every year. These risks include the spread of illegal content, as well as societal harms such as negative impacts on people’s fundamental rights, elections, and public health.

Risk assessment reports offer a way to systematically lift the lid on the inner workings of powerful tech companies that play an increasingly dominant role in how we access information about the world around us.

At a time when tech companies are announcing sweeping changes to how they keep people safe online, including gutting fact-checking and retiring vital transparency tools, this type of scrutiny is all the more important.

In this joint briefing, prepared by 40 civil society organisations from across Europe, we identify both promising practices and critical gaps, offering recommendations to improve future iterations of these reports and ensure meaningful compliance with the DSA.

Overall, our analysis finds tech companies have failed to adequately assess and address the actual and foreseeable risks posed by their services. The point of the Digital Services Act is to make platforms safer, but if platforms haven’t properly assessed the risks they pose, they cannot possibly mitigate them.

For risk assessment to be a genuine tool for accountability, the reports must be meaningful. Tech companies cannot be allowed to simply gloss over potential problems without substantiating their claims or consulting with impacted communities and external experts.

Recommendations

Platforms should:

  • Focus on design-related risks, particularly those tied to recommender systems.
  • Enhance transparency by providing verifiable data on mitigation measures.
  • Engage meaningfully with stakeholders to ensure risk assessments reflect real-world harms.

Read the full briefing here
