The Digital Services Act and Online Content Regulation: A Slippery Slope for Human Rights?

Richard Wingfield
Published in The GNI Blog
Jul 15, 2020


Vint Cerf, one of the “fathers of the Internet,” has said that “the Internet is a reflection of our society and that mirror is going to be reflecting what we see.” Few would disagree with this. It means, however, that the Internet reflects both the very best elements of society and the very worst. Indeed, the Internet can sometimes feel more like a carnival mirror, distorting and exaggerating some of the worst aspects of human behaviour: extremism, bullying, hate speech.

Vint went on to say, however, that “if we do not like what we see in that mirror the problem is not to fix the mirror, we have to fix society.” But for many policymakers, there is a sense that the mirror does, in fact, need fixing. They argue that the Internet is not simply a reflection of society but something significantly different: the rules of what we can and cannot do are set, in large part, by private companies, not necessarily by governments; algorithms, which determine what we see online, are increasingly being manipulated for harmful ends; the Internet provides new means for individuals to disguise or anonymise themselves; and its universal and border-free nature makes it far harder for governments to enforce their laws online.

Whether you agree or disagree with Cerf, governments around the world are increasingly taking steps to force tech companies to take more responsibility for the platforms they have created and the harms that manifest therein. Unfortunately, many of the “harmful” things that happen in the real world can also manifest online, and so the question arises: which of them require new regulation?

Different governments are taking very different approaches: some focusing on a very narrow set of illegal material, such as terrorist and violent content; others seeking to address a broader range of illegal material, which could also include hate speech; and some also trying to address certain harms that are legal, but which governments still feel they should tackle, such as disinformation or content that encourages eating disorders. The question remains an open one for the European Commission as it starts work on the Digital Services Act. The three parliamentary own-initiative reports (OIRs) developed so far offer some thoughts on what its scope should be, and put forward quite different proposals:

  • The IMCO report proposes focusing narrowly on “illegal content”, noting that the scope of this term should be set out in the DSA and should include “violations of EU rules on consumer protection, product safety or the offer or sale of food or tobacco products and counterfeit medicines;”
  • The LIBE report proposes a broader scope, namely “harmful and illegal content,” with “opaque political advertising and disinformation on COVID-19 causes and remedies” as examples of the former;
  • The JURI report focuses on ensuring transparency and fairness in the application of platforms’ own content moderation policies, which can and often do prohibit a broader swath of content than domestic laws; although it would prohibit the removal of user-generated content based on characteristics such as race, sex, disability or sexual orientation. It specifically states that companies should not be making decisions about whether content is legal or not, and proposes a simplified legal procedure to address disputes instead.

The fact that these three OIRs make different proposals reflects the extent to which policymakers in different EU institutions (and elsewhere) are struggling with the appropriate scope of content and conduct to regulate. GNI’s forthcoming “Content Regulation and Human Rights in the Digital Age” policy brief, developed following a broad range of conversations with different stakeholders, will provide a helpful tool, setting out recommendations on how to design content-related regulations so as to achieve legitimate public policy goals while mitigating unintended human rights consequences.

Two themes among the many that have emerged from these discussions are perhaps particularly relevant when it comes to the scope of the content covered. First, there is a strong belief that policymakers should ensure that any categories of prohibited content fall within one of the enumerated “legitimate purposes” in Article 19(3) of the ICCPR (e.g., to protect public safety). It is critical that content is not prohibited by law simply because it is controversial, shocking, or offensive, or because it makes certain audiences uncomfortable. Indeed, it is perhaps more important than ever, during times of real-world crises, that we are able to challenge each other robustly on political, social and other issues; raise awareness about issues that concern us; and push for a more tolerant society, all of which can cause discomfort to some.

Second, many stakeholders believe strongly that regulation should not create new distinctions between what is permissible online and offline. While proposals covering clearly defined forms of illegal content raise fewer concerns from a freedom of expression perspective, many governments are considering following the path outlined in the LIBE report and introducing new requirements that would restrict forms of speech which, even if considered harmful, are legal. This is much more difficult to reconcile with international human rights law and standards on freedom of expression. In addition to this principled concern, the practical, and often illogical, consequence of such an approach is that a particular “harmful” statement could be permitted when made in person, by telephone, or in print, for example, but not when expressed on social media. For both speakers and those affected by “harmful” statements, such a distinction would feel wholly arbitrary.

Policymakers at the European Commission, and beyond, face difficult choices around the scope of regulation when it comes to the types of content targeted. There is pressure from certain quarters to set a broad scope, beyond narrow categories of illegal content, in order to increase the ambition and impact of the Digital Services Act. Such an approach, however, raises real challenges, not only from a human rights perspective but also in terms of its feasibility and effectiveness in tackling the very harms it intends to address.

Richard Wingfield is the Head of Legal at Global Partners Digital.
