Decision №2020–801 DC of 18 June 2020

Richard Wingfield
22 June 2020

Introduction

On 18 June 2020, the French Constitutional Council (CC) declared parts of a recently passed piece of legislation to be unconstitutional. The law in question is the “law aimed at combatting hateful content on the internet” (or, in the original French, la “loi visant à lutter contre les contenus haineux sur internet”), nicknamed the “Avia law”, after Laetitia Avia, a Member of the National Assembly who proposed it.

In this post, I want to examine the Constitutional Council’s decision as it relates to two parts of the law — paragraphs I and II of Article 1 — which would have established new requirements for online service providers to remove various forms of illegal content. (While many other provisions of the law were found to be unconstitutional, it is these provisions that went to the very heart of the law, with the majority of the other unconstitutional provisions being consequential on these core ones.) I also want to look at the impact that the decision may have on the human rights compliance of other, similar pieces of legislation which are being proposed, and at its potential broader impact.

For those who wish to read the decision in full, it can be found on the Constitutional Council’s website, in French, here. I have made my own attempt at translating the decision into English here, but there may well be errors.

What did the Avia law say?

Before looking at the decision, it’s probably helpful to summarise what those two provisions proposed to do:

Article 1, paragraph I would have amended an existing piece of legislation, Law №2004–575 of 21 June 2004 on confidence in the digital economy (loi n° 2004–575 du 21 juin 2004 pour la confiance dans l’économie numérique). Under the existing law, the French administrative authorities have the power to direct any online service provider to remove specified pieces of content which they consider to glorify or encourage acts of terrorism, or to constitute child sexual abuse imagery. Where the online service provider fails to do so within 24 hours, or if the authority is not able to identify and notify the responsible online service provider, the authority can request that ISPs and search engines block access to the web addresses containing the content in question. Article 1, paragraph I of the Avia law would have reduced this time period from 24 hours to one hour, and increased the potential sanctions for failure to comply to one year’s imprisonment and a fine of 250,000 euros (up from the current sanction of one year’s imprisonment and a fine of 75,000 euros).

Article 1, paragraph II would also have amended Law №2004–575 of 21 June 2004 on confidence in the digital economy, by introducing an entirely new regime to complement the existing one. Under this new regime, online service providers whose activity exceeded a particular size (to be set down in a decree) would have been under a new legal obligation to remove or make inaccessible certain forms of “manifestly illegal” content within 24 hours of being notified of them by one or more persons. These forms of illegal content include:

  • Glorifying (faire l’apologie de) the commission of certain crimes;
  • Encouraging discrimination, hatred or violence against a person or group of people on grounds of ethnicity, nationality, race, religion, sex, sexual orientation, gender identity or disability, or causing discrimination against them;
  • Denying a crime against humanity;
  • Outrageously minimising, degrading or trivialising the existence of a crime of genocide or crime against humanity, a crime of slavery or a war crime;
  • Insults against a person or group of persons due to their sex, sexual orientation, gender identity or disability;
  • Sexual harassment;
  • Images or representations of a minor which are pornographic;
  • Direct encouragement or support for acts of terrorism; and
  • Dissemination of a pornographic message likely to be seen by a minor.

What did the Constitutional Council decide?

The CC found both of these provisions to be unconstitutional on the basis that they amounted to interferences with the right to freedom of expression (as protected by the 1789 Declaration of the Rights of Man and the Citizen) which were not necessary and proportionate to the aim being pursued. (The CC did accept that the aims being pursued by the legislature were legitimate ones, namely to protect public order and the rights of third parties.)

The reasoning, understandably, differs in relation to paragraphs I and II, since they provide for different regimes and requirements. In relation to paragraph I of Article 1, the CC highlighted the factors of the provisions that gave it concern; there appear to be four in total.

  1. Whether or not a particular piece of content glorified or encouraged acts of terrorism, or constituted child sexual abuse imagery, did not depend on any inherent quality of the content, but was determined solely by the administrative authorities.
  2. It was not possible for online service providers to appeal against removal requests after the one-hour period had elapsed, or to suspend that one-hour period.
  3. The one-hour period meant that online service providers were unable to obtain a decision from a judge before having to remove content.
  4. The penalties for non-compliance were potentially one year’s imprisonment and a 250,000 euro fine.

While the CC found that these factors meant that the interference was not necessary and proportionate, there isn’t any indication of the weight it gave to each of these factors. As such, it’s not clear whether any of those factors in and of themselves meant that there was a disproportionate interference, or whether it was only their cumulative impact.

In relation to paragraph II of Article 1, the CC listed five factors that gave it concern:

  1. The decision as to whether content is “manifestly illegal” or not is not one that a judge makes, but is left solely to the online service provider.
  2. Determining whether content is “manifestly illegal” may be difficult: there may be very technical legal issues to consider, particularly when it comes to offences related to the press, where the context of any expression must be taken into account.
  3. The online service provider has only 24 hours to make a decision. Given the difficulties noted above, and the fact that providers may have many reports to review, this is a very brief period.
  4. Article 6–2 appears to include an exemption from liability where “the intentional nature of the infringement … may result from the absence of a proportionate and necessary examination of the notified content”, but it is not clear what the scope of this exemption is. There are no other exemptions, such as where an online service provider has to review multiple reports simultaneously.
  5. The penalties for non-compliance were potentially one year’s imprisonment and a 250,000 euro fine.

Again, while the CC found that these factors meant that the interference was not necessary and proportionate, there isn’t any indication of the weight it gave to each of these factors, or whether it was only their cumulative impact (as opposed to the existence of a specific factor or set of factors) that led to the conclusion of disproportionality.

Why is the decision important?

The reasoning of the CC is important, since a number of governments around the world are looking at introducing legislation which would impose obligations on social media platforms, search engines and other online service providers to take steps to limit the availability of different forms of illegal (and, in some cases, legal but harmful) content. Many of these proposals are similar to those in the Avia law, particularly obligations to remove content when directed to do so by a particular public authority, and to proactively remove certain forms of content within a specified period of time (whether notified of their existence or otherwise), with sanctions for failing to do so. Indeed, Germany adopted a comparable piece of legislation in 2017, the Netzwerkdurchsetzungsgesetz (also known, in English, as the Network Enforcement Act, or simply the NetzDG). The European Commission is considering something similar, albeit more narrowly focused on terrorist content. In Australia, the Criminal Code Amendment (Sharing of Abhorrent Violent Material) Act 2019 requires online content service providers to remove “abhorrent violent material” expeditiously.

The decision of the CC provides an example of a human rights-based analysis of proposals such as these. But while the CC found that such provisions constituted unjustified interferences with freedom of expression, there are, of course, caveats to assuming that the same conclusion would be reached in other jurisdictions. Most importantly, the CC made its decision solely on the basis of the French human rights framework, rather than the European Convention on Human Rights or the International Covenant on Civil and Political Rights. Other courts examining these, or similar, proposals may well come to a different conclusion, particularly given the differences between the French legislative proposals and those that exist in other jurisdictions.

But the CC’s decision is still helpful in two regards. For one thing, this is the first decision of such an authoritative body on provisions such as these, and the French human rights framework is not wholly different from other human rights frameworks (particularly in that it requires any interference with freedom of expression to pursue a legitimate aim, and to be necessary and proportionate, as is the case under both European and international human rights law). Secondly, the CC’s decision helps to identify the key factors that should be considered when analysing legislative proposals such as these from a human rights perspective, for example:

  • Who decides whether content is illegal or not? Is it the online service provider itself, or an independent judicial authority?
  • How difficult is it to make that determination? Does it require a consideration of context, or are technical legal questions involved which would make it difficult (or even impossible) for a service provider to decide?
  • How long do online service providers have to make determinations, and is there the possibility of lengthening this time, or seeking a judicial decision on the content’s legality?
  • Are there any exemptions, for example where the online service provider has a large number of reports to consider, or where it wouldn’t be reasonable to expect an online service provider to be able to decide?
  • Can online content service providers appeal against decisions to remove content when so directed by a public authority?
  • Are any penalties for non-compliance proportionate, or will they incentivise the over-removal of content?

While these are already the questions that many human rights experts are asking of the legislative proposals they examine, confirmation from a national constitutional body that these are, indeed, the ones to ask strengthens such an approach. It should also give hope to those concerned about the risks to freedom of expression from legislative proposals such as these that their concerns carry weight, and should receive proper consideration from legislators and, failing success there, judges.
