Poland: Draft Law on the protection of freedom of speech on online social networking sites

Richard Wingfield
Feb 8, 2021 · 14 min read

In recent years, governments, particularly in Europe, have increasingly introduced regulation of online platforms to address different types of illegal and harmful content and activity. In 2020, however, a new phenomenon emerged. In response to certain online platforms taking a more interventionist approach towards breaches of their terms of service by President Trump and other political figures, a small number of governments began considering regulation which would in some way restrict the ability of online platforms to remove or moderate certain forms of content. Early examples include President Trump’s own Executive Order Preventing Online Censorship and UK Prime Minister Boris Johnson’s suggestion that online platforms be subject to a legal “duty of impartiality”. But it is the Polish government that is furthest advanced, having recently published draft legislation “on the protection of freedom of speech on online social networking sites”.

In terms of process, the draft law, prepared by the Ministry of Justice, has now been submitted to the Chancellery of the Prime Minister. Once approved and added to the list of legislative works by the Council of Ministers, it will be subject to consultation before being formally adopted by the government and brought to the Parliament.

While, in some contexts, legislation aimed at protecting freedom of expression online would be greeted with enthusiasm by the human rights community, the current Polish government does not have a strong track record on this issue. Indeed, freedom of expression in Poland has continued to weaken in recent years, and this legislation is likely aimed at ensuring that members of the government and their supporters have legal recourse if they consider that they are in any way being censored by online platforms.

In any event, in this blog post, I want to take a look at the draft law and set out some first thoughts on what it would mean from a human rights perspective. If you want to read the draft law, you can find it here (in Polish) and I’ve put together a Google Doc with an attempt at a translation into English here.

What would the draft law do?

The draft law does two main things. First, it introduces new obligations on online platforms in relation to their content moderation policies and enforcement, including a requirement to allow users to make complaints that legal content has been removed or that illegal content is accessible on the platform. Second, it establishes a new public body, the Freedom of Speech Council, which would be able to hear appeals from users dissatisfied with the outcome of that complaint procedure, and with the power to order the reinstatement of content which was not illegal. The draft law also contains provisions setting out administrative sanctions for non-compliance with its requirements.

After summarising the scope of the draft law (both the companies it would apply to and the types of online content it relates to), I will look at each of those elements in turn.

1. Scope

The law would apply to those that provide electronic services through “online social networking sites” which allow users to share any content with other users or the general public, and which are used by at least one million registered users in the country (Arts. 2 and 3). As such, the draft law targets a relatively small number of the largest platforms in the country, including, for example, Facebook, Twitter and YouTube. There are no exceptions, and so it would appear that even companies like Amazon that simply allow users to post comments reviewing products would be within scope.

The scope of the types of content that the draft law would apply to is “illegal content”, which is then defined in Art. 3 as comprising four categories: (i) content that violates personal rights, (ii) disinformation, (iii) criminal content, and (iv) content that violates decency, in particular, disseminating or praising violence, suffering or humiliation. Two of these categories are further defined. Disinformation is “false or misleading information, produced, presented and disseminated for profit or in violation of the public interest”. “Criminal content” is content which “praises or incites to commit” certain acts prohibited by the Penal Code, as well as content which “constitutes a prohibited act”. That list of specified acts prohibited by the Penal Code is extensive, and includes, among others:

  • crimes of aggression;
  • terrorist offences and offences against the state;
  • “insulting the Nation or the Republic of Poland”;
  • attacks on the armed forces;
  • murder;
  • deprivation of liberty and human trafficking;
  • harassment of another person which leads to fear or which violates their privacy;
  • offences against freedom of conscience and religion;
  • sexual offences;
  • offences against public officials in the exercise of their duties;
  • interference with elections; and
  • inciting the commission of an offence, or participating in a group whose purpose is to commit an offence.

2. Obligations on service providers

The draft law would impose three broad obligations on service providers.

a. Appointment of “representatives in the country”

Under Art. 16, service providers would need to appoint between one and three “representatives in the country”. The duties of these individuals would include representing the service provider in all court and out-of-court activities; considering complaints in the internal review proceedings (see below); providing answers and any information to institutions and bodies in relation to proceedings being conducted; and participating in training on the law relating to complaints considered in internal review proceedings.

Service providers would need to provide the Office of Electronic Communications (Poland’s telecommunications regulator) with details of these representatives, including their email addresses and addresses for service. If the representative is a legal person, the service provider would also need to provide details of natural persons authorised to act on its behalf. This information would need to be published on the social network website.

b. Transparency reports

Within three months of the law entering into force, all service providers would have to submit a report for the previous year containing information on their methods to date of counteracting disinformation and the dissemination of illegal content (Art. 46).

Separately, under Art. 15, any service provider which receives over 100 user complaints in a calendar year regarding the dissemination of illegal content, the restriction of access to content, or the limitation of access to a user’s profile would need to prepare a report (in Polish) every six months on how these complaints were resolved. The report would need to be published on the provider’s website within one month from the end of the relevant six-month period. The provider would also need to send the report to the Office of Electronic Communications, which would publish it in the regulator’s Official Journal. A template for the report would be developed by the Minister of Justice, in consultation with the minister responsible for computerisation.

c. Internal review proceedings

The draft law would require service providers to establish an “internal review procedure” which would allow users to make complaints about the restriction of access to content, restriction of access to their profile, and the dissemination of illegal content (Art. 19). As well as their own content moderation policies, the service provider would also need to publish the rules for conducting internal review proceedings on their website. They would also need to ensure that users could make complaints through a clear and permanently accessible mechanism.

Where a user submits a complaint, a representative in the country would confirm receipt to the email address indicated in the complaint. They would then need to examine the complaint and inform the user of the decision reached within 48 hours. If the review concluded that the complaint was justified, they would need to restore access to the restricted content, restore access to the user’s profile, or prevent the distribution of the illegal content, as the case may be (Art. 20). They would need to inform the user of the reasons for the decision, including the legal grounds, with reference to the rules of the social network site. Additionally, they would need to provide information on how the user could submit a complaint to the Council if dissatisfied with the decision, and inform them that they could pursue a claim through civil proceedings or notify the authorities of the commission of a criminal offence. Under Art. 21, the Office of Electronic Communications would “exercise supervision over internal review proceedings”, although it is not clear what form this supervision would take.

Service providers would be required to provide regular training (in Polish), at least every six months, to any persons who conduct internal review proceedings (Art. 17). The Minister of Justice would issue regulations relating to the training and its content.

3. Freedom of Speech Council

The draft law would establish a new public body, the Freedom of Speech Council (Art. 45). The Council would comprise five individuals, appointed to terms of six years by the Polish Sejm (the lower house of Parliament) (Arts. 6 and 7). While the draft law provides that an individual would need to receive a 3/5 majority of votes, it then goes on to say that where no candidate receives such a majority, a simple majority of votes will suffice. Given that the government currently has a majority of seats in the Sejm, but less than 3/5 of all seats, in practice this means that the members of the Council would be chosen by the government. The Council would meet in closed sessions (Art. 11).

Any user of an online social networking site within scope who is dissatisfied with the way that their complaint was handled during an internal review procedure would be able to submit a complaint to the Council (Art. 22). Details of the complaint would be sent by the Council to the representative(s) of that site, who would then be required to provide any relevant materials from the internal review proceedings to the Council within 24 hours (Art. 24).

Based on the materials presented to it, the Council would determine whether the content was illegal within seven days (Art. 25). If the Council decided that the relevant content was illegal content, it would issue a decision declining the restoration of the content or of access to the user’s profile. If, however, it decided that the content was not illegal content, it would issue a decision ordering the restoration of the content or of access to the user’s profile.

The decision would be made solely on the basis of the materials provided by the user and the site. The Council would not hear from any witnesses, ask any questions of the parties, or obtain any expert opinion or evidence (Art. 26) and, as noted above, any meetings would take place in closed sessions. The decision would be limited to an indication of the facts which the Council considered to be proven and a citation of the legal provisions constituting the legal basis for the decision (Art. 27). A decision ordering the restoration of content or access to the user’s profile would need to be complied with by the service provider within 24 hours (Art. 28), and the service provider would be prohibited from again limiting access to the content that was the subject of the Council’s examination (Art. 29).

4. Other provisions

As well as giving users the ability to report illegal content, thereby triggering the service provider’s internal review procedure, the draft law would also introduce a new procedure to speed up the removal of certain types of criminal content. Where criminal content is identified by the authorities, a prosecutor would be able to ask the service provider or the representative in the country to provide necessary information, including identification of the user and of the material posted (Art. 36).

If, based on this information, the criminal content contains pornographic material involving a minor or content that praises or incites the commission of acts of a terrorist nature, or if further access to the material creates a risk of significant damage or of effects that would be difficult to reverse, the prosecutor can immediately issue a decision, delivered to the electronic address of the representative in the country, ordering the service provider to prevent access to the content. The service provider must comply with that order immediately, though it may appeal the decision to the local court.

5. Enforcement and sanctions

Under Art. 32, a service provider which failed to comply with any obligation in the law would be sanctioned by way of a fine of between PLN 50,000 and PLN 50,000,000, taking into account:

  • the impact of the service provider’s omission on the scale of the resulting disinformation;
  • the degree of violation of the public interest;
  • the frequency of previous breaches of an obligation or prohibition of the same type as the one for which the penalty is to be imposed;
  • prior punishment for the same behaviour; and
  • any actions taken voluntarily by the party to avoid the consequences of violating the law.

Similarly, under Art. 33, a representative in the country who failed to comply with any obligation in the law would face a fine of the same amount, determined by reference to the same factors. Service providers and representatives in the country would be able to appeal to the Council for a reconsideration of the decision (Art. 34).

What are the human rights impacts of the draft law?

Despite the draft law’s title, this legislation creates more risks to freedom of expression than it addresses, not least because of the context in which it is being proposed. Four risks in particular stand out to me.

The broad scope of criminal content

The argument that what is illegal offline should be illegal online is one commonly heard when governments talk about regulating online platforms, and the argument is instinctively logical. From a freedom of expression perspective, however, two considerations are critical: first, that the laws that restrict freedom of expression offline are themselves consistent with international human rights law; and, second, that the process for determining that online content is illegal meets the procedural requirements necessary under international human rights law.

Here, there are concerns in relation to both considerations. First, the scope of “criminal content” for the purposes of this law is extensive, and arguably many of the relevant provisions of the Penal Code are themselves inconsistent with international human rights law: insulting the Nation or the Republic of Poland (Art. 133), insulting the President (Art. 135), and offending a person’s religious feelings (Art. 196), for example. Indeed, recent years have seen individuals investigated, charged and convicted under provisions such as these for placing a t-shirt reading “constitution” on a statue of former president Lech Kaczyński; making a flag which added a rainbow to the national coat of arms; and singing the national anthem with an added reference to refugees.

When these offences are prosecuted by public officials, there is at least some degree of transparency and public accountability. Individuals have the ability to challenge decisions in court, for example. But under this new law, determinations as to whether something is illegal online will be made either by online platforms (not something they want to do, I’m sure) or by a public body which is closed, non-transparent and political. Despite the fact that this law purports to enhance freedom of expression, it is inevitable that a broader range of content will now be removed on the basis that it is “criminal”, either by the platforms or, more likely, by order of the Freedom of Speech Council, with almost no transparency.

Additional forms of “illegal content”

Content which is “criminal” is only one of four categories of content that the draft law would define as “illegal”. One of the other three is a type of content which can be the basis for civil proceedings: content that violates personal rights. Two others — as far as I can tell — are not currently prohibited under Polish law at all, but would now be considered “illegal” on online platforms: disinformation and content that violates decency.

For these two categories, despite the content being “illegal”, no court will ever determine whether a particular piece of content falls within them. Such determinations will only be made by online platforms (again, not a responsibility they want to have) and by the Freedom of Speech Council. There is a strong likelihood that, even if online platforms make efforts to avoid making determinations, the Freedom of Speech Council will designate content which is critical of the government, or which the government simply doesn’t like, as “disinformation” or “content that violates decency” and order online platforms to remove it. With financial sanctions for non-compliance with these orders, it is unrealistic to expect platforms to refuse to follow them.

Rushed decisionmaking by platforms

It’s important to restate that none of the online platforms within the scope of this draft law wants to have to make determinations, within 48 hours, of whether particular pieces of content are legal or illegal. But that is what the draft law would require. Even with the best legal experts in the world, making determinations within 48 hours will be difficult in all but the most obvious cases (for example, child sexual abuse material). Exacerbating this challenge is the fact that platforms will only be able to have a maximum of three representatives in the country to make such determinations, meaning a large number of decisions may have to be made with almost no time for meaningful consideration.

Online platforms could simply refuse to remove or reinstate content or accounts, even following an appeal by a user under the procedure that the draft law would require, and only take action when ordered to do so by the Freedom of Speech Council. Indeed, companies which take their human rights responsibilities seriously (such as members of the Global Network Initiative) may well do the absolute minimum that the law requires and take such a course of action. It’s a risky strategy, however. Online platforms seen not to take their new legal responsibilities seriously will come under significant political pressure, and the draft law gives the Office of Electronic Communications the power to “exercise supervision” over those internal review proceedings, which could include taking action where it considered that a platform wasn’t sufficiently complying. Online platforms may well feel, therefore, that they need to do their best to make determinations of legality, but with the limited time and resources available to them, it is highly likely that errors will be made and legal content removed.

The lack of transparency and the potential for political bias of the Freedom of Speech Council

In theory, the establishment of an independent body to review the decisionmaking of online platforms, with some authority to intervene when decisions lead to restrictions on the right to freedom of expression, could be a good thing. Facebook, of course, has established a body with such a mandate, the Oversight Board, and the idea of a public body established at the national level is worth considering.

The Freedom of Speech Council is not such a body, and there is little point looking for potential opportunities for freedom of expression in its creation. It will be highly politicised, with its members inevitably chosen by the government, and will order the removal or reinstatement of content and accounts based on what the government wants. It is wholly non-transparent, with meetings taking place in closed sessions and no other person able to be present during them. It is prohibited from taking into account any external evidence or information relating to the content or accounts it is considering, and is only required to give minimal justification for its decisions.

This will not be a body that protects freedom of expression online; it will be a body that does the government’s bidding, meaning more content will be taken down, probably material which is critical of the government or which relates to issues the government dislikes, such as LGBT+ rights or pro-choice activism. To the extent that content and accounts are reinstated, this will only happen when they belong to government members or supporters, making it more difficult for online platforms to enforce critical policies aimed at preventing individuals from inciting hatred or violence, which will inevitably have been the reason why that content was removed or those accounts suspended in the first place.

Already we have seen the government of Hungary announce plans to introduce similar legislation. If this trend continues across the more authoritarian governments of Europe and other parts of the world, it bodes ill for the future of freedom of expression online.

Richard Wingfield

Head of Legal at Global Partners Digital. Posts are written in a professional capacity. @rich_wing on Twitter.