Knowledge Hub
EU Approaches To Disinformation And Online Hate Speech
How do countries tackle online hate speech and disinformation? This knowledge hub provides an overview of the approaches EU member states have taken to tackle disinformation and online hate speech. Our database shows which approaches have been adopted and where, from binding laws to national task forces and media campaigns.
Disinformation
Disinformation is considered a matter of national security and social cohesion in Sweden, and government initiatives follow this logic.
On 1 January 2022, Sweden's Psychological Defence Agency (MPF) was established, with the express aim of preserving “open society’s free exchange of knowledge and information”. Aside from counteracting online deception and disinformation, it also ensures that government authorities effectively communicate with the public, both in peacetime and in high-alert situations. Despite being a new agency, the Swedish government proposed bolstering the MPF’s budget by nearly EUR 1 million (SEK 10 million) given that Sweden continues to be a primary target for Russian disinformation campaigns, and that general elections would take place in September 2022.
The MPF follows the work of other Swedish authorities previously tasked with researching domestic trends in disinformation, such as the Swedish Civil Contingencies Agency (MSB), which, in 2020, monitored disinformation campaigns about the COVID-19 pandemic across various social media platforms.
In March 2022, the government instructed county administrative boards to regularly provide security situation reports, which include cybersecurity readiness and salient disinformation campaigns affecting local populations. In late 2021, the Swedish government also approved a proposal to amend the nation's Radio and Television Act to strengthen independent media throughout the country and reduce the risk of disinformation. Under the proposal, service providers must supply more transparent information regarding their owners, and authorities are granted more leeway in revoking broadcasting licences in cases of a breach of advertising rules. The amendments came into effect on 1 July 2022.
Even so, the Swedish government has been reluctant to implement regulations that would put the onus of content moderation on large online platforms. In a 2021 response to the European Commission's Code of Practice on Disinformation, the government insisted that "measures aimed at online platforms take the form of self-regulation rather than legislation". Sweden also sought to limit the scope of the DSA to illegal content and, instead, enhance the transparency of actions taken by social media companies, according to the Carnegie Endowment for International Peace. Freedom of speech is strongly protected in the Swedish constitution; current laws therefore focus on defamation and incitement to hatred.
Sweden also allocates funds to neighbouring countries as part of a larger, holistic strategy to bolster freedom of expression and counteract disinformation in the region. Since 2017, the Swedish government has designated nearly EUR 1 million (SEK 10 million) in the annual budget for investment into civil society initiatives that strengthen freedom of expression and reduce disinformation. The Swedish Ministry of Foreign Affairs also regularly engages in high-level dialogues with regional governments and Big Tech platforms on the topic of disinformation.
Hate Speech
Hate speech is criminalised under Chapter 16, Section 8 of the Swedish Penal Code (expression of threat or contempt based on protected characteristics), and Chapter 29, Section 2.7 (discriminatory motive as an aggravating circumstance). Following a 2018 amendment, Chapter 16, Section 9 of the Penal Code now also prohibits discrimination. Specifically, a merchant may not refuse to provide services or sell items, nor may an organiser of events or gatherings refuse entry, based on protected individual characteristics, which include race, nationality, sexual orientation, gender and religion. Activists, however, are currently petitioning the government to amend established discrimination and hate speech provisions to include explicit protections for women as well.
Statistics reported by the Swedish Police indicate a high ratio of hate crime reports to prosecutions (for 2020: 3,150 reports; 334 prosecutions). While this marks an improvement in absolute terms over the previous year on record (2018: 5,858 recorded hate crimes; 218 prosecutions), ODIHR notes that the change could be due to a change in data methodology.
The considerably lower prosecution rates demonstrate a restrictive interpretation of hate speech provisions, as well as a balancing of free speech against other rights. Sweden has historically taken a stance against blocking or limiting internet access, arguing that “crimes should be prosecuted, not hidden”. Hence, there is no legislation mandating online platforms to restrict access to hate speech materials without a prior court order.
The Swedish government has expressed openness to the European Commission's proposed initiative to criminalise online hate speech and hate crimes more extensively at the EU level, including the expectation that online companies such as Google and Meta take greater responsibility for monitoring hate speech on their platforms. Online hate speech is currently prosecuted under the same provisions as offline hate speech in Sweden. According to the Swedish Crime Survey, online harassment affects 2.5% of the population and disproportionately affects women aged 16 to 19. In the past three years, survey data has also revealed that 70% of journalists had experienced insults, 40% had experienced slander, and 23% had been the targets of harassment and unlawful threats online.
Several national preventative initiatives have been launched in recent years to combat online hate speech. The Swedish Media Council launched the No Hate Speech Movement, an educational campaign to curb hate speech against youth, and the Swedish Defence Research Agency has been tasked with analysing violent extremist propaganda in digital environments. Meanwhile, several public investigations have been launched to analyse current legal provisions on online hate against journalists and public servants, terrorist content on the internet, and the prevalence of racist organisations, with the aim of suggesting policy changes in line with EU regulations. Special budgetary funds have also been allocated in recent years to combat discrimination against LGBTQI+ communities, and government forums have been held to step up protections for Jewish communities online.
Disinformation
The last two years have seen significant momentum in Spain in grappling with the growing spread of disinformation.
In 2020, the Spanish government adopted a ministerial decree that updated the existing frameworks to prevent, detect and respond to disinformation campaigns. The decree established the Permanent Commission against Disinformation to facilitate coordination among the ministries responsible for operational activities to combat disinformation.
In October 2020, Spain’s National Security Council approved the Procedure for Intervention against Disinformation. The Procedure establishes four action levels to detect disinformation campaigns, analyse their impact on national security, and provide managerial support during crisis situations.
The Spanish government has responded to concerns regarding the effect of the ministerial order on freedom of expression by stating that under no circumstances would the procedure be used to “limit the free and legitimate right of the media to offer information” or to monitor or censor it.
Hate Speech
Spain has extensive laws on both hate speech and terrorism. Provisions on the latter have been criticized by civil society as disproportionate and too broad, as they overlap with hate speech provisions in their subject matter.
Hate speech is criminalised under Articles 510.1.a (incitement to hatred based on protected grounds), 510.1.c (denying or trivializing genocide and crimes against humanity), 510.2.a (harming people’s dignity based on protected grounds) and 510.2.b (exalting or justifying crimes against a group or individuals based on protected characteristics) of the Spanish Criminal Code. Meanwhile, Article 573.3 expands the definition of terrorism to include the “glorification” of terrorist activity and “incitement” to terrorism.
More specifically, Article 578.1 makes it a criminal offence to engage in any "public praise or justification" of terrorism, with aggravated penalties if such speech is expressed online (Article 578.2). A similar rationale exists in the Czech Penal Code, and it drew criticism from the British human rights organisation Article 19, which noted that "the government has not made the case" for increased sentences for online speech. Article 579 of the Criminal Code outlaws incitement to terrorism separately from "praise and justification", albeit without differentiating between online and offline expression.
Article 578.4 of the Criminal Code explicitly states that the removal of illegal content (related to terrorism or its glorification/incitement) may be only ordered by a court or a judge. The number of hate crimes reported by the Spanish Police is consistently high: 1,401 in 2020, of which 675 were prosecuted and 144 led to sentences.
In June 2021, the European Court of Human Rights (ECtHR) overturned a Spanish court's conviction of the Basque politician Erkizia Almandoz. The Criminal Chamber of the Audiencia Nacional had found him guilty of condoning terrorism (punishable under Articles 578 and 579.2 of the Criminal Code), a ruling subsequently upheld by the Spanish Constitutional Court. The Spanish courts had ruled that Almandoz's actions amounted to hate speech, but the ECtHR held that the speech in question fell within the ambit of free expression under Article 10 of the European Convention on Human Rights.
Provisions on hate speech have been complemented by a law penalising the intimidation, harassment and threatening of women considering an abortion. The law seeks to dissuade anti-abortion demonstrators camped outside abortion clinics and, more broadly, to protect abortion rights.
The Council for the Elimination of Racial or Ethnic Discrimination, a Spanish body in charge of promoting non-discrimination and equal treatment, coordinates nine CSOs, through the programme “Assistance and Orientation Service to Victims of racial or ethnic Discrimination”, to provide legal representation to victims of hate speech.
Disinformation
No concrete measures have been taken to tackle disinformation in Slovenia.
In early 2022, the government adopted a bill to promote digital literacy. Its focus remains on enhancing digital inclusion, however, and does not reference online safety. Media literacy efforts are active in the country, but only within the non-governmental sector and academia.
Hate Speech
The Slovenian Constitution prohibits the spread of hatred or violence on ethnic, national, racial or religious lines (Article 63). Article 8 of the Mass Media Act further prohibits the dissemination of programmes that promote discrimination and intolerance on similar grounds.
Hate speech is also criminalised under Article 297 (incitement to hatred based on protected grounds) of the Slovenian Penal Code. Cases of hate speech are referred to the State Prosecutor's Office. According to ECRI, prosecutors interpret the Article too narrowly, which leads to a "significant impunity gap". In 2020, 94 instances of hate crimes were reported under Article 297, with seven prosecuted and six resulting in convictions.
The State Prosecutor’s Office’s Working Group for Hate Speech has also been expanded; since March 2021, it doubles as a working group for both hate speech and hate crime. The Working Group seeks to standardize and streamline prosecutorial practices for hate speech and hate crimes in Slovenia.
There is currently no legislation mandating online platforms to restrict access to hate speech materials without a prior court order.
Disinformation
Similar to the situation in the Baltic states, public discourse regarding disinformation in Slovakia focuses on Russian disinformation campaigns.
The Slovakian Criminal Code prohibits the intentional dissemination of "false alarming news" that can give rise to grave or dangerous concern among members of the population (Article 361). Furthermore, if such information was intentionally shared with police, public authorities or the media, the offending individual can face imprisonment (Article 361.2). Negligently spreading "false alarming news" that incites a dangerous situation or a sense of "defeatism" in certain populations is also prohibited under the Criminal Code (Article 362). Meanwhile, Article 373 prohibits the spread of false information that amounts to defamation.
In November 2021, the Slovakian government launched a Centre for Countering Hybrid Threats under the auspices of the Ministry of Interior. The Centre, among other activities, uses AI tools to detect disinformation.
In Slovakia, the Communication and Prevention Department of the Presidium of the Police Force is primarily responsible for monitoring disinformation. Police units monitor online hoaxes and fraud through a Facebook page and use constant fact-checking to prevent the spread of disinformation. According to its 2021 report, 189 hoaxes were debunked on the site, 151 of which concerned pandemic-related disinformation.
In February 2022, against the backdrop of Russia's invasion of Ukraine, the Slovak government passed a law (No. 55/2022, Coll.) empowering the National Security Office (NSA) to block websites and online service providers that spread "malicious content". The office can do so at the behest of an authorised entity or of its own accord. The law also amends the Cyber Security Act to provide the NSA with greater powers.
“Malicious content” under the 2022 law constitutes “a program or data that causes or may cause a cybersecurity incident”. Meanwhile, “malicious activity” pertains to “any activity that causes or may cause a cybersecurity incident, fraudulent activity, theft of personal or sensitive data, serious misinformation, and other forms of hybrid threats”. Any decision taken by the NSA in this regard does not require a court order; the aggrieved party can, however, appeal in court. Critics of the law see it as a tool for censorship and note that it contradicts a recent ruling by the European Court of Human Rights (ECtHR), which deemed website blocking a violation of Article 10 of the European Convention on Human Rights (OOO Flavus and Others vs. Russia, June 2020).
In August 2022, Slovakia adopted a new media law, which includes the regulation of social media platforms sharing video content. Under the law, platforms are responsible for protecting minors from harmful content, particularly hate speech and incitement to violence. If a platform violates this provision, fines of up to EUR 100,000 may be imposed. Although the law does not prohibit disinformation as such, it mandates the Slovak media regulator to research harmful content on platforms, including disinformation.
Slovak civil society is also a strong player in working to counter disinformation campaigns. Notable examples include Konspiratori.sk and research by the Bratislava-based think tank GLOBSEC.
Hate Speech
Hate speech is criminalised under Articles 423 (defamation based on protected grounds), 424 (incitement to hatred based on protected grounds) and 140(e) (specific motivation for an offence based on protected grounds) of the Slovakian Criminal Code.
In January 2017, an amendment to the Criminal Code was passed to ensure the more effective investigation of extremist and racially motivated crimes. Since then, such crimes fall under the purview of the Special Prosecutor’s Office, the General Prosecutor’s Office and the Specialized Criminal Court. The following month, a special 125-person police unit was established within the National Criminal Agency (NAKA) to investigate crimes related to the support of terrorism, extremism and hate speech.
In the three years following NAKA's creation, the number of extremism- and hate crime-related proceedings rose sharply, from 30 cases before 2017 to 176 in 2017, before settling at 109 in 2019. The Specialized Criminal Court's public database suggests that judges deal regularly with hate crimes, including those that take place online. Even so, the 2021 Media Pluralism Monitor reports that Slovakia remains at high risk (81%) regarding its ability to protect the public from illegal and harmful speech.
In 2019, the Slovakian Supreme Court found Milan Mazurek, a member of parliament from the right-wing People's Party Our Slovakia, guilty of making racist remarks about the Roma minority. Although his mandate was revoked, he was subsequently re-elected in 2020. Meanwhile, in 2020, prosecutors brought a case against the party's leader, Marian Kotleba, to Slovakia's Specialized Criminal Court on charges of promoting neo-Nazi ideology, a violation of Article 423 of the Penal Code. He was found guilty and sentenced to four years of imprisonment. In April 2022, however, the Supreme Court reduced his prison term to six months while leaving his conviction as a neo-Nazi sympathiser in place.
Additionally, the new media law obliges platforms to protect minors from harmful content, including hate speech.
Disinformation
Romania criminalises only certain types of disinformation in its Criminal Code. False information that undermines national security (Article 404), or incites a war, war of aggression or internal conflict through “invented news” (Article 405), carries severe punishment under the Romanian Criminal Code. An individual found guilty of violating Article 404 faces up to five years of imprisonment. Article 405 mandates that convicted offenders spend from two to seven years behind bars.
During the pandemic, Romania experienced widespread disinformation campaigns against COVID-19 vaccines. Critics report that the Romanian government failed to launch an effective communication campaign to counter such disinformation or to establish resources providing the public with credible information.
On 16 March 2020, President Klaus Iohannis issued a decree declaring a state of emergency (initially for 30 days, followed by an extension for another 30 days) due to the pandemic. Article 54 of the decree limited the right to freedom of information. Romania’s Interior Ministry unilaterally invoked the article to suspend access to 15 online resources accused of “spreading false news”. This was performed without a court order or a clear means for judicial redress.
When the emergency period came to an end, on 15 May 2020, access to the 15 websites was restored, per a decision from the National Authority for Administration and Regulation in Communications (ANCOM).
Hate Speech
Hate speech is criminalised in Romania under Article 369 (incitement to hatred or discrimination based on protected grounds) and 77(h) (intent to discriminate on protected grounds as an aggravating circumstance) of the Romanian Criminal Code. Protected characteristics under Article 77(h) include language, social origin, health ailments and diseases, age, disability, financial status and political affiliation or opinion. Article 78 recommends maximum punishment if the offence was committed under “aggravating circumstances”.
Furthermore, Article 297 of the Criminal Code states that if a public employee, acting in the course of their duty, “creates a situation of inferiority based on race, nationality, ethnic origin, language, religion, sex, sexual orientation, political affiliation, wealth, age, disability, non-contagious disease or HIV/AIDS”, they face imprisonment of from two to seven years. Government Ordinance no. 31/2002 prohibits xenophobic or fascist organisations, and bans publicly condoning genocide, crimes against humanity and war, or honouring those responsible for such atrocities.
There is currently no legislation directly regulating online hate speech in Romania. The law regulating audio-visual communications (504/2002), however, obliges specific online service providers (which have some form of editorial control) to interrupt the transmission of racist or discriminatory content by blocking or restricting access when it has been brought to their attention. It also mandates that users have access to a notice and appeal mechanism. This inherently means that such online service providers also face liabilities under Romania’s Civil and Criminal Code (Articles 11 to 15).
The National Council for Combating Discrimination (CNCD), established upon the adoption of Government Ordinance No. 137/2000, is a quasi-judicial body under parliamentary authority. Since 2001, it has been tasked with combating discrimination by implementing anti-discrimination legislation adopted at both the national and EU levels. Among other responsibilities, the CNCD is also required to report to the Ministry of Communications and Information Society about any decision concerning online service providers.
In 2021, the Romanian government adopted the National Strategy for preventing and combating antisemitism, xenophobia, radicalization and hate speech 2021-2023. One of the strategy’s primary objectives is to develop a cohesive process to collate information on antisemitic and other hate crimes in the country, and to catalogue public records to glean the rate of prosecutions and convictions within 18 months of its adoption.
Romania has not, however, submitted hate crime statistics to ODIHR since 2018. According to the report of the "eMORE - Monitoring and Reporting Online Hate Speech in Europe" project, both online and offline hate speech prosecutions in Romania remain low.
The European Court of Human Rights (ECtHR) observed in a recent ruling (Association Accept & Others vs. Romania, June 2021) that police and enforcement authorities are vested with positive obligations to uphold the dignity and the right to privacy of individuals under the European Convention on Human Rights. The ECtHR further stated that, while not every comment amounts to hate speech, in instances where hate speech or incitement to violence is clear, the state has a positive obligation to act, which includes taking complaints to trial. Article 369 of the Romanian Criminal Code was not in force at the time of the offence, and the ECtHR was therefore bound to examine the case through the lens of existing legislation (i.e., Article 317, incitement to discrimination or nationalist/chauvinist propaganda).
Disinformation
Portugal has repeatedly acknowledged concerns over the lack of a legal framework to combat disinformation. Responding to a rise in disinformation, the Portuguese parliament, in May 2021, introduced the Portuguese Charter on Human Rights in the Digital Age (Law No. 27/2021). The legislation, enacted in July 2021, creates an obligation on the state to protect its citizens from the effects of the dissemination of disinformation, in line with the European Democracy Action Plan. It also seeks to provide a complaint redress mechanism through which citizens can complain about disinformation to the state's media regulatory authority.
The nation's media regulatory authority (Entidade Reguladora para a Comunicação Social) has been the body responsible for campaigning activities and initiatives in the field of media literacy since 2009. In April 2017, the cross-ministerial national initiative "Portugal INCoDe.2030" (National Digital Competences Initiative) was launched. The initiative aims to strengthen information and communication technology competencies among the Portuguese population to better equip citizens to take part in the new digital labour market; it also addresses digital literacy.
Hate Speech
Hate speech is criminalised under Article 240 of the Portuguese Penal Code (discrimination, defamation, or incitement to hatred and violence on protected grounds). The Article also contains a list of protected characteristics, including discrimination on grounds of mental or physical disability, age, sexual orientation or gender identity, race, ethnicity, religion and nationality. Furthermore, Article 251 of the Penal Code penalises slander on religious grounds.
There is no legislation mandating online platforms to restrict access to hate speech materials without a prior court order.
Before Article 240 was amended in 2017, an act was only criminal if intent to incite racial or religious discrimination could be established, which made enforcement difficult. In part to address this issue and to adhere to international commitments, Portugal's National Plan to Combat Racism and Discrimination 2021-2025 proposes further expanding Article 240.
In August 2020, the Criminal Policy Framework Law (Law no. 55/2020), which seeks to prevent crimes motivated by bias, was approved. The law also recognises the internet’s role in catalysing such offences. Preventing and combating cybercrimes is expressly listed as one of the law’s commitment areas.
According to hate crime records periodically submitted by the Portuguese Ministry of Justice and the Prosecutor's Office to ODIHR, Portugal recorded 132 hate crimes in 2020. There is no information on prosecutions and convictions.
“HATE COVID-19.PT” is a joint project of the Portuguese National Cybersecurity Centre and the Lusa News Agency to detect online hate speech on social media platforms. The ten-month project commenced in May 2021 and seeks to build a “prototype” to proactively identify and detect messages conveying hate speech online.
Disinformation
In the wake of Russia’s invasion of Ukraine, Poland has witnessed relentless disinformation campaigns about its refugee intake policies. Aside from a few sector-specific initiatives, however, there has been no cohesive effort made to tackle the challenges posed by disinformation.
Previously, ahead of the 2018 local elections, the Polish government launched “Safe Elections” (Bezpieczne Wybory), a website created by an intergovernmental, cross-sectoral team that included representatives from social media platforms.
Responding to pandemic-related disinformation, in 2020, the Polish Press Agency also launched the online platform “Fake Hunter”, in collaboration with GovTech Polska, which works as both a fact-checking site, where individuals can submit information to be verified, and a resource hub.
Hate Speech
Poland's Penal Code criminalises hate speech under Articles 119 (violence or unlawful threats against individuals based on protected grounds), 256 (incitement to hatred based on protected grounds, including hatred based on not adhering to a religion, and condoning fascism, communism or any other totalitarian system), and 257 (public insult based on protected grounds). Actions in the spirit of educational, scientific or artistic expression constitute exceptions to Article 256.
The enforcement of existing hate speech laws is effective in Poland, but instances of online hate speech remain high. In 2015, for example, prosecutors opened 1,548 legal proceedings concerning hate crimes, half of which pertained to online content.
According to data collected by the Polish Police for 2020, 374 of 826 reported hate crimes reached the prosecution stage, with 276 resulting in convictions. Of these incidents, 215 pertained to racist and xenophobic hate crimes, and 42 concerned homophobic and transphobic offences. According to a 2019 ODIHR study, the decrease in reported hate crimes stems from underreporting of hate crimes in Poland.
According to the Media Pluralism Monitor, which conducts risk assessments of EU countries on "pluralism in the digital era", Poland was at high risk (73%) in 2020 on the indicator of "protection against illegal and harmful speech". It is important to note that the Polish Penal Code does not criminalise homophobic or transphobic speech. Furthermore, age and disability are not recognised as protected characteristics. This has led to a stalemate between the Polish government and social media platforms over removal procedures under existing community standards versus local laws. There is no legislation mandating online platforms to restrict access to hate speech materials without a prior court order.
In 2021, the Polish Government announced the expansion of “freedom of speech protections” in response to the blocking of former United States President Donald Trump’s social media accounts and heightened publicity about social media platforms’ powers to delete posts or suspend accounts. The proposal would establish a “Freedom of Speech Council”, which would assess complaints about blocking or deletion decisions by platforms and issue decisions regarding their restoration. The draft also introduces a complaint mechanism for users in cases where access to their content is blocked or their accounts are deleted without having violated any Polish law. Users can make an appeal before the Council if they find the platforms’ response to their complaints unsatisfactory. The Council’s decision may be appealed before a court.
In 2023, the Polish Supreme Court overturned the hate speech conviction of a man who had posted hateful comments on Facebook, setting a precedent in the interpretation of the law. The decision was condemned by the Polish anti-racism NGO OMZRiK, which pointed to problems in the judiciary.
Disinformation
Since 2019, the Netherlands has launched a series of measures to combat the spread of online disinformation, primarily through government campaigns, public-private partnerships, and the anticipated transposition of EU regulations.
According to the Dutch Criminal Code, the propagation of online disinformation is only illegal when libellous (Article 261), slanderous (Article 262), hateful (Article 137d) or defamatory (Article 137c), or when it aims to manipulate elections (Article 127). Such unlawful statements can also be subject to tort law (Article 6:162 of the Dutch Civil Code) if damages can be proven for which compensation would be appropriate.
In October 2019, the Dutch government adopted a strategy against disinformation, emphasising critical media literacy, the transparency of social media platforms and political parties (preferably through self-regulation), and the maintenance of a pluriform media landscape. Fact-checking is deemed important as a means of countering disinformation, but addressing the content of disinformation is, according to the government, primarily the task of journalists and academics.
In February 2021, just weeks before the Dutch parliamentary elections, the Dutch interior minister provided an update on the measures outlined in the strategy. The international organisation International IDEA, together with Dutch political parties and large internet platforms, launched a code of conduct for political advertisements to increase transparency and combat misleading information. Disinformation detection during the election period was spearheaded by public-private partnerships and published on a joint, publicly accessible website. According to the Dutch interior minister, further action to combat disinformation will come in the form of co-regulation with EU partners, such as the European Democracy Action Plan and a strengthened EU Code of Practice on Disinformation.
Building upon the success of its 2019 media literacy campaign titled “Stay curious. Stay critical”, in February 2022, the Dutch Interior Ministry published a Handbook on Disinformation, which includes tips for organisations and individuals dealing with disinformation. It published a similar document specifically on pandemic-related disinformation in 2021. According to the Dutch Media Authority’s 2021 Digital News Report, some 40% of people are concerned about what is real and fake on the internet, an increase of 10% from the previous year.
Hate Speech
Hate speech is criminalised under Articles 137c (group defamation) and 137d (incitement to hatred or discrimination) of the Dutch Criminal Code. In December 2019, the maximum penalties for incitement to violence, hatred, and discrimination, as well as abuse of sexual imagery and revenge pornography, were increased. In June 2021, lawmakers also lowered the burden of evidence needed by prosecutors per Book 2 of the Dutch Civil Code to disband hateful or undemocratic entities, organizations, groups or associations.
The 2022 Dutch Coalition Agreement promises a multiyear plan to combat discrimination, hate speech, racism and misogyny. In July 2023, the Dutch Ministry of Justice and Security announced that the government wants to amend the Dutch Criminal Code to make the denial or downplaying of the Holocaust, genocide and war crimes illegal.
There is no legislation mandating online platforms to restrict access to hate speech materials without a prior court order.
According to ODIHR statistics, authorities reported 2,133 hate crimes in 2020, of which only 409 were prosecuted. These figures mark a 6% increase in reported hate crimes from 2019 and include hate speech and discrimination offences, as well as physical assaults, threats and damage to property.
Disinformation
Spreading false news likely to alarm the public or disturb the peace is punishable under Article 82 of the Maltese Penal Code, which was added via amendment to the Penal Code in 2018 and carries a penalty of imprisonment for up to three months. The Article further states that if the instance of false news in question leads to a public disturbance, an individual could face up to six months in prison and/or a fine of up to EUR 1,000.
The 2020 Broadcasting (Amendment) Act (amendment to the Broadcasting Act to transpose the EU’s revised Audiovisual Media Services Directive 2018) includes measures to bolster media literacy, defined as “skills, knowledge and understanding that allow citizens to use media effectively and safely”. The Act aims to foster “critical thinking skills” to “enable citizens to access information and to use, critically assess and create media content responsibly and safely”. Media literacy campaigns have been active since 2016 in the country and are dedicated to training teachers, children and youth. A few initiatives focus on online safety, but none specifically address disinformation.
Hate Speech
The 2018 amendment to the Maltese Criminal Code also criminalised hate speech under Article 82A (threat, abuse or insulting behaviour based on protected grounds), carrying a penalty of imprisonment for up to 18 months. Articles 82B (condoning genocide) and 82C (condoning crimes against peace) are also punishable under the Code. Meanwhile, Articles 222A, 251D and 325A enhance sanctions for offences motivated by bias. There is no legislation mandating online platforms to restrict access to hate speech materials without a prior court order.
A dedicated Hate Crime and Hate Speech Unit was established in 2019 to provide victims of hate speech and hate crimes with legal and psychological assistance. The Stop Hate initiative, partially funded by the EU, under the Rights, Equality and Citizenship Programme, in collaboration with the Maltese government, strives to implement policies and strategies dedicated to addressing hate speech. Government departments and agencies, together with the Malta Police Force - Victim Support Unit, train officials and responders, sensitise the public through awareness campaigns and support victims after they have lodged complaints.
The Maltese government has not submitted periodic reports on hate crimes to ODIHR. The European Commission has notified the Maltese government that it has failed to transpose the EU Framework Decision on the exchange of information on criminal records (2009/315/JHA), as well as the Council Decision (2009/316/JHA), which allows for the electronic exchange of criminal records among EU authorities.
According to a recent report submitted by the Home Ministry of Malta, at least 75 people were prosecuted between 2020 and March 2022 on cases related to online hate speech.
Disinformation
There is no cohesive framework to address disinformation in Luxembourg. Only a limited number of agencies or initiatives at the state, counterintelligence or non-governmental level deal specifically with disinformation.
There are, however, government efforts to expand media literacy. In 2015, for example, the Ministry of Education, Children and Youth launched the national strategy Digital (4) Education. The strategy aims to enable students to develop those skills necessary for appropriately and responsibly using information and communications technologies, while also promoting innovative educational projects using digital technology in schools. Meanwhile, since 2010, the government initiative “BEE SECURE” has also addressed media literacy and the safe use of new media in the country. BEE SECURE’s 2021 annual report attributed the lack of legal framework on disinformation in Luxembourg to legislators’ deferral of decision-making on the matter to Brussels.
To supplement government responses, the European Digital Media Observatory for Belgium and Luxembourg (EDMO BELUX) was launched in 2021. The hub is funded by the European Union and includes a wide network of experts from different sectors to detect, monitor and analyse disinformation campaigns.
Hate Speech
Hate speech in Luxembourg is criminalised under Articles 454 (prohibition of discrimination based on protected grounds) and 457-1 (incitement to hatred based on protected grounds) of the country’s Penal Code. There have been previous prosecutions of instances of online hate speech (e.g., a 2013 case resulted in conviction for comments made on Facebook).
The Penal Code, however, has never directly addressed racist or xenophobic speech, which led the European Commission to launch infringement proceedings against Luxembourg in 2021. The specific cause of the infringement was Luxembourg’s failure to transpose the EU framework decision on combating racism and xenophobia (2008/913/JHA) into national law.
In April 2022, the Committee on the Elimination of Racial Discrimination, in its 18th to 20th combined periodic report on Luxembourg, called on the government to step up efforts to address online hate speech. The rapporteur for the country also observed that the criminal provisions on hate speech mentioned above do not apply to video-streaming platforms, a legal gap, among others, that requires attention.
Meanwhile, the government continues to engage in media literacy efforts. Under the auspices of the flagship "BEE SECURE" initiative, mentioned above, the campaign "SHARE RESPECT - Stop Online Hate Speech" was organised to educate children and youth on discerning the differences between hate speech and free speech.
Disinformation
Lithuania is one of the few EU countries with legal measures in place to tackle alleged sources of disinformation. Article 8.11 of the 2018 Law on Cyber Security empowers the National Cyber Security Centre to order the temporary shutdown (for up to 48 hours) of electronic communications providers, such as servers, without a court order, where a site is suspected of mounting a "cyber incident", such as a disinformation attack. There have been concerns that such measures can stifle public debate, a claim Lithuanian officials deny, insisting instead that the measures are employed only to stop blatant fabrications of fact.
The Lithuanian Constitution, under Article 25, guarantees the freedom of expression and the right to information, but also recognizes that incitement to hatred or discrimination through disinformation is unlawful and not protected. In line with this constitutional tradition, a 2019 amendment to the Law on Public Information makes possible the suspension of television and radio broadcasters via a court order in instances where disinformation (“intentionally disseminated false information”) and incitement of hatred can be shown to have occurred (Article 19).
Most efforts dedicated to tackling disinformation in Lithuania are focused on Russia's manipulation of information and do not address other threats, such as pandemic-related disinformation. The bulk of such efforts come from media companies and professionals committed to building fact-checking competencies. The "Lithuanian Elves", who report false information on Facebook and other platforms, are well known for aggressively responding to Russian disinformation.
Hate Speech
The Lithuanian Constitution establishes the right to express "convictions", except for those that constitute criminal offences, including incitement to hatred, discrimination based on protected characteristics and defamation (Article 25).
Lithuania transposed the EU Framework Decision (2008/913/JHA) on combating racism and xenophobia into national law in 2009, embedding hate crime and hate speech provisions into the Lithuanian Criminal Code. Hate speech has since been criminalised, under Article 170 of the Lithuanian Criminal Code (incitement against any national, racial, ethnic, religious, or other group of persons). The offence of inciting violence or hatred is punishable by up to three years of imprisonment, while publicly expressing such sentiments carries a sentence of up to one year of imprisonment. Article 169 further elaborates on protected characteristics. The application of these criminal provisions, however, has rarely led to successful prosecutions. According to statistics from the Lithuanian National Police, in 2020, only eight instances of hate crime were reported, with four prosecuted and one ultimately sentenced.
In January 2020, the European Court of Human Rights (ECtHR), in Beizaras and Levickas v. Lithuania, held that the Lithuanian government violated its obligation to enforce hate speech laws by refusing to launch a pre-trial investigation into Facebook comments targeting the plaintiffs, a homosexual couple. This was the first ECtHR judgement to directly address a state's failure to prosecute online hate speech. There is no legislation mandating online platforms to restrict access to hate speech materials without a prior court order.
Meanwhile, myriad government initiatives have been launched in recent years to identify and reduce online hate speech. The Lithuanian government's Department of National Minorities, in collaboration with researchers from Vytautas Magnus University and two CSOs, is currently developing a tool to automatically detect online hate speech. The Office of the Equal Opportunities Ombudsperson, a statutory body, oversees the implementation of anti-hate speech campaigns, such as #NoPlace4Hate and #HateFree. The 2018 #NoPlace4Hate project was created with the objective of streamlining public procedures for addressing hate speech for authorities and the criminal justice system. It also seeks to assist victims of hate speech and raise awareness through sensitisation and education campaigns. The #HateFree campaign, a bilateral initiative between Lithuania and Norway, focuses on building the capacity of stakeholders specialised in and responsible for addressing hate speech and assisting victims. Expanding upon these initiatives, in 2021, the Office launched the awareness campaign "Hate Speech is a Crime". The portal "nepyka.lt" was also recently launched, accompanied by wide publicity, to further assist victims of hate speech and provide additional resources.
Disinformation
The 2020 COVID-19 pandemic and the full-scale Russian invasion of Ukraine in 2022 catalysed a raft of government initiatives and legislative measures to stanch the spread of disinformation in Latvia.
In March 2022, Latvia’s parliament, the Saeima, amended the longstanding Electronic Communications Law, empowering the National Electronic Media Council (NEPLP) to restrict access to websites available in Latvia that distribute content deemed a threat to national security, public order or safety, including disinformation. Once a complaint has been submitted to the authorities, the NEPLP decides whether to restrict access to the website by denying entry via the domain name or IP address. According to the law, if the merchant does not comply with the NEPLP’s decision, the Council has the authority to restrict access, without notice, until compliance is satisfied. The amendment marks the first time that the Latvian government has acted against websites promulgating disinformation or misinformation.
Those who continue to attempt to access such websites via satellite decoders or other devices will also face government resistance, following the March 2022 amendment to Latvia’s Law on Protected Services. The new regulation imposes a fine of up to EUR 700 for the installation or personal use of technologies that facilitate access to known foreign television sources of disinformation. The amendment is a direct response to Russia’s 2022 invasion of Ukraine and a subsequent uptick in Kremlin propaganda in Latvia and the greater Baltic region, according to government communications.
The Latvian Criminal Code does not list the spread of misinformation or false news as a crime per se, though the Ministry of Justice is currently working on such amendments to the Criminal Code, according to information from the State Police (VP). As such, spreading false news is often treated by authorities as "hooliganism" or "disturbing the peace", in line with Article 231 of the Criminal Code. The VP regularly monitors false news and disinformation on social media platforms and prosecutes where warranted; according to VP statistics, in 2021, authorities launched three serious investigations and made one prosecution.
In line with these developments, the Latvian government has sought regional cooperation in combating disinformation, which it views, first and foremost, as a national security threat, perpetrated specifically by Russia. Since 2020, Latvia, together with its Baltic neighbours Estonia and Lithuania, has met regularly with NATO’s Centre for Strategic Communication Excellence to discuss those nations’ specific challenges with disinformation. Directly following the 2022 Russian invasion of Ukraine, the Baltic leaders penned a joint communique with their Polish counterpart calling for social media platforms to be more proactive in halting the spread of Russian-backed disinformation about the war.
Given Latvia's demographic makeup and geographical proximity to Russia, it sees its population as particularly prone to Kremlin-backed disinformation campaigns. As such, it has launched a slate of initiatives to strengthen media literacy and cultural cohesion as part of its Development Plan for a Cohesive and Civically Active Society for 2022-2023. Public media has been freed from financial dependence on advertisements, and a Public Electronic Media Council (SEPLP) has been established to ensure the protection of the Latvian information space and bolster media literacy. Meanwhile, the 2022 budget foresees a 5% value-added tax on print media and books, with the proceeds funnelled directly into financial support for local media and publishing. The goal of the measure is to strengthen cultural cohesion, national identity and media literacy.
Hate Speech
Latvia has extensive hate speech laws, codified in Articles 78 (incitement to national, ethnic and racial hatred), 150 (incitement to social hatred and discord), 74 (glorification of genocide or public denial or acquittal of genocide), and 88.2 (public incitement to terrorism) of the Penal Code. ECRI has commented that the scope of activity falling under such provisions includes any verbal or written expression, which would include online hate speech.
There is no legislation mandating online platforms to restrict access to hate speech materials without a prior court order. Section 71 of the Electronic Communications Law, however, obliges online platforms to provide personal data to relevant investigators in cases of suspected criminal offences. In 2020, the VP developed guidelines for police officers on investigating hate crimes, but these do not include online hate speech.
According to Latvian public media, in 2020, the State Security Service (SSD), one of the Latvian institutions investigating hate speech, launched four criminal cases for online hate speech, while the State Police initiated 16 such proceedings, twice as many as the previous year. In early 2021, a Zemgale District Court sentenced an individual to four months of imprisonment for posting a homophobic comment with a gun emoji, deemed as inciting hatred against same-sex couples. That same year, the Latgale Suburb Court of the Latvian capital of Riga sentenced an individual who posted violent, racialised comments on Twitter to 200 hours of forced labour.
Local rights organizations note that online hate speech against women has greatly increased in Latvia in recent years. Such speech is especially targeted towards female politicians, journalists and activists, as well as younger women between the ages of 15 and 29. Hate speech related to the COVID-19 pandemic and based on gender identity and sexual orientation has also risen in the past few years.
Disinformation
Disinformation in Italy has mostly been dealt with through non-legislative measures spearheaded by the Italian government.
In 2019, the Italian parliament led discussions on a Bill (No. 1900) to establish a Parliamentary Commission of Inquiry on false information and its subsequent dissemination. While the lower house of the Italian parliament approved the measure, it was shelved in the Senate.
Responding to the spread of pandemic-related misinformation, in April 2020, the Italian prime minister set up a task force to address online disinformation with a specific focus on social networking platforms. The task force is vested with the responsibility to monitor false information, identify its sources, and track dissemination in a collaborative effort with users and citizens, and to raise awareness through effective campaigning. The task force was set up under the Department for Information and Publishing, but it has mostly remained dormant.
Prior to that, in October 2017, the Italian Ministry of Education introduced “Enough with the Hoaxes” (Bastabufale.it), a media literacy campaign for primary and secondary schools. Before the following year’s general elections, a so-called “red button portal” was launched, where citizens could report disinformation to a special cyber police unit. The police unit was tasked with investigating the content in question, assisting citizens with reporting disinformation to social media platforms and, in cases of defamatory or otherwise illegal content, filing a lawsuit. The Italian communications regulator, AGCOM, also published guidelines prior to the 2018 elections to ensure equal treatment of all political parties and transparency in political advertisements. It also encouraged online fact-checking.
Outside of such government interventions, in September 2021, the EU-funded European Digital Media Observatory opened the affiliated Italian Digital Media Observatory. The Italian hub brings together the Italian public broadcaster RAI, universities, telecommunications service providers, fact-checking organisations and other media outlets.
Hate Speech
Hate speech is generally criminalised under Articles 604-bis (propaganda and incitement to crime based on protected characteristics) and 595 (criminal defamation) of the Italian Penal Code.
In June 2020, however, the Italian Constitutional Court recommended legislative penalty reforms in a review of the legitimacy of Article 595 of the Criminal Code and Article 13 of the Press Law (47/1948). The Court directed the Italian parliament to act within a year before it would ultimately decide on the constitutionality of both provisions. In June 2021, the Constitutional Court declared in its follow-up decision that Article 13 of the Press Law, which allowed for the imprisonment of journalists for defamation, violated the Italian Constitution. Article 595 of the Criminal Code, however, was found to be consistent with constitutional provisions, given that it recommends imprisonment only in cases of "exceptional severity". Separately, a 2013 decision of Italy's Supreme Court extended the application of Article 416 of the Penal Code (criminal conspiracy) to hate speech perpetrated within virtual communities, blogs, chats and social networks.
In 2015, the Italian Charter of Rights on the Web was adopted. Article 13 of the Charter addresses online hate speech, providing that “no limitation of freedom of expression is allowed”, but that “the protection of the people’s dignity from abuses related to behaviours such as incitement to hatred, discrimination and violence must be guaranteed”.
Regarding sector-specific legislative interventions, Italy has fully implemented the EU Framework Decision on combating racism and xenophobia (2008/913/JHA). The Italian anti-discrimination legislation (205/1993, popularly known as the “Mancino Law”) was amended to incorporate the EU Framework Decision. The law makes it an offence to “propagate ideas based on racial superiority or racial or ethnic hatred, or to instigate to commit or commit acts of discrimination for racial, ethnic, national or religious motives.” In 2017, Law no. 71 expanded the application of “hate speech” to include cyberbullying. The law is designed to safeguard minors and defines cyberbullying as including any form of harassment, psychological pressure, identity theft or unlawful processing of the data of minors through electronic means.
The so-called Zan Bill, introduced in October 2020, sought to prohibit homophobia and transphobia, but failed to secure a majority in the Senate in October 2021. The bill would have criminalised discrimination on the basis of sex, gender and disability status, and introduced educational measures against discrimination and hatred, as well as assistance programmes for victims. The bill would have expanded the existing scope of Articles 604-bis and 604-ter of the Italian Penal Code.
Despite the bill being voted down, some parliamentarians are pushing for its reintroduction. Legislators are, however, pursuing separate amendments to Article 604-bis, which would punish the propagation of ideas subverting the democratic order, or those promoting fascist or Nazi ideology, with up to one-and-a-half years of imprisonment or a fine of EUR 6,000. The bill, which does not deal with homophobic or transphobic speech, was introduced in January 2022 and is currently making its way through parliament.
There have been a few notable cases of Italian politicians being convicted of hate speech. In 2015, the Supreme Court convicted a councillor from the district of Padua of incitement to racial violence against Cécile Kyenge, the former Minister for Integration, over a comment posted on her Facebook profile. And, in 2016, the Court of Appeal of Trento found a district councillor guilty of posting an offensive comment on his Facebook page referring to Kyenge's Congolese origin.
As to the responsibility of online platforms, there is no legislation mandating that they restrict access to hate speech materials without a prior court order or notice. Italian case law, however, paints a conflicting picture, as previous rulings have not specified whether platforms are obliged to remove illegal content upon a mere notice of its existence (e.g., reports by users) or a formal notice from the police or courts.
The Vivi Down association, for example, sued Google for failing to promptly remove defamatory content after Google had received numerous reports from users; it only removed the content upon receiving formal notice from the Postal Police. The Italian Supreme Court, in its final judgement in 2013, ruled that “the position of Google is that of a mere host provider”, and noted that Google removed the content upon receiving official communications from the competent authorities. Conversely, in a more recent case, the Supreme Court in 2016 ruled that the online blog agenziacalcio.it was complicit in defamation for having removed a user comment only after a judge had ordered a preventative sequester of the website. The website had previously been alerted via email about the post.
Aside from legislative efforts and judicial developments, national-level institutions also exist to monitor hate speech and to develop measures to respond to related challenges. The National Office Against Racial Discrimination (UNAR), established in 2004 and accountable to the Italian parliament, assists victims of discrimination, processes complaints, develops training manuals and carries out campaign efforts to dissuade discriminatory practices. The Observatory for Security against Acts of Discrimination (OSCAD) also assists victims in seeking redress for their claims and in pursuing prosecution.
Disinformation
Newly drafted legislation in Ireland seeks to build upon previous, non-legislative interventions to deal with threats posed by disinformation.
The Online Safety and Media Regulation Act (OSMR), adopted in December 2022, does not cover disinformation under its definition of “harmful online content”. A recommendation by the Irish parliament’s Joint Committee on Tourism, Culture, Arts, Sport and Media to include disinformation within the ambit of the legislation was rejected. This effectively means that the regulation of disinformation has been deferred to EU-level legislation and the Code of Practice on Disinformation. The OSMR also established a new regulatory body, the Media Commission, responsible for overseeing Ireland’s media and online safety regulatory framework.
During the June 2021 discussions on the DSA, Ireland, together with Sweden and Finland, released a joint non-paper cautioning against the risks of over-blocking. Concerns remain over possible overlaps between the DSA and the OSMR.
The “Be Media Smart” campaign, a media literacy initiative, is supported by the Broadcasting Authority of Ireland, among other patrons. The campaign was launched to equip citizens with the resources and knowledge necessary to debunk disinformation and rely on credible information. The initiative includes providing accurate resources, fact-checking tools, and tips to address online disinformation. The campaign has been widely publicised on television and radio networks.
Outside of government responses, the European Digital Media Observatory (EDMO) has established a hub in Ireland to tackle online disinformation. The hub constitutes a coordinated effort to build resilience against the challenges posed by disinformation, and includes a network of universities, CSOs, fact-checking organisations and media literacy practitioners.
Hate Speech
Legislative developments in Ireland over the past two years have sought to bolster existing measures prohibiting incitement to hatred.
In December 2022, the Irish government adopted the Online Safety and Media Regulation Act. The Act transposes into national law the revised EU Audiovisual Media Services Directive of 2018, which brings video-streaming platforms into the regulatory fold. It establishes an online safety regime to be supervised and enforced by a new Media Commission, headed by an Online Safety Commissioner, and obliges television and radio broadcasters, as well as providers of on-demand video services, to prevent the spread of “harmful online content” through their services. The Act defines harmful online content to include certain offences under the Criminal Code and the 1989 Prohibition of Incitement to Hatred Act, and codifies three new offences: cyberbullying, the promotion of self-harm and the promotion of suicide. The Prohibition of Incitement to Hatred Act of 1989 penalises the publication and propagation of hate speech.
In 2021, Ireland’s National Police and Security Service (An Garda Síochána) established the Garda National Diversity and Integration Unit. The Unit is responsible for monitoring hate crimes, devising policies and strategies to foster diversity, and assisting in the investigation and prosecution of hate crimes. The Unit also provides a complaint redressal system, through which individuals and CSOs can report hate crimes online.
Disinformation
The COVID-19 pandemic served as a catalyst in Hungary for a slate of legislative initiatives seeking to regulate the online information space.
Already in 2013, the Hungarian government amended the Penal Code to make “scaremongering” and spreading rumours that can disturb public order punishable under Article 338 of the Penal Code. At the beginning of the COVID-19 pandemic, a YouTube vlogger was arrested for spreading false news about the closure of Budapest to entry from non-residents. Hungarian police have also launched investigations into four other cases of spreading false news online, under Article 338.
In March 2020, the government then passed the Coronavirus Control Act, which introduced Article 337 into the Penal Code. According to the Act, stating or disseminating false or misrepresented facts before the public during a state of emergency (referred to as a “special legal order” in Hungary, as was the case during the pandemic) is punishable by one to five years of imprisonment.
These legal developments have been criticised by CSOs, who claim that the measure is aimed at further weakening press freedoms. More than 100 individuals have reportedly been prosecuted based on Article 337 since its entry into force.
In 2020, the Hungarian Ministry of Justice also established the Digital Freedom Committee, with the stated purpose of monitoring the operations of technology companies against the backdrop of the rule of law and fundamental rights. There have hitherto been no concrete steps, however, towards regulating content moderation.
The Hungarian government’s comments on the DSA indicate that it does not seek to create an overlapping framework to assess tech platforms and moderate online content.
Hate Speech
Hate speech in Hungary is criminalised under Articles 332A (public incitement to hatred or violence based on protected characteristics), 333 (public denial of the crimes of the national-socialist and communist regimes), 334 (desecration of state symbols) and 335 (the use of communist symbols) of the Penal Code. Defamation and libel provisions are present in both the Criminal (Articles 226 and 227) and Civil Codes.
There is no legislation mandating online platforms to take down hate speech materials without a prior court order or notice. The Hungarian government lost a case at the European Court of Human Rights (ECtHR) in 2016, after Hungarian courts held online news portals liable for user comments on published content. In 2018, the Hungarian government lost another case before the ECtHR, which found that Hungarian courts had overstepped their powers in holding a news site liable for content hyperlinked in an article.
The Council of Europe’s 2021 Country Memorandum on Freedom of Expression and Media Freedom for Hungary noted the deteriorating conditions of media pluralism in the country. It observed that, despite relentless rhetoric lobbed “against specific groups including the migrant and LGBTI community on the pro-government media and their portals, the Media Council has not once initiated ex officio proceedings in line with Section 14 of the Press Act”.
The Hungarian government’s attitude towards the LGBTQI+ community has significantly deteriorated in recent years: same-sex adoption and marriage rights are prohibited, and the discussion of homosexuality in school education has been restricted. The government has also imposed restrictions on exposing minors to programmes or content (including books) that depict homosexuality in a positive light. Following these developments, the European Commission launched infringement proceedings against Hungary in July 2021, and again in December 2021. The Hungarian government stands accused of violating the EU Charter of Fundamental Rights (Articles 1, 7, 11 and 21), certain provisions of the EU Audiovisual Media Services Directive (free cross-border flow of audiovisual media services), and the e-Commerce Directive (the so-called “country of origin” principle).
There are no sector-specific laws criminalising xenophobic, homophobic, or racist speech in Hungary. The European Commission has condemned this lack of efforts to combat racism and xenophobia, and launched other infringement procedures against Hungary, namely for failing to transpose the Council Framework Decision (2008) on combatting racism and xenophobia, and for failing to effectively criminalise racist and xenophobic hate crimes.
Disinformation
Greece ranked 27th of 35 European countries in the 2021 Media Literacy Index’s ranking of nations’ vulnerability to disinformation, despite myriad initiatives meant to halt its propagation and to raise public awareness.
A November 2021 amendment to Article 191 of the Greek Penal Code makes the public dissemination of (online) “false news” a criminal offence. The offence of spreading information that “causes concern or fear among citizens” or “disturbs public confidence in the national economy, defence or public health” now carries a maximum sentence of five years of imprisonment. The provision on “public health” was previously not included in the penal code, and the maximum sentence was only three years.
The new regulation also penalises media outlets that own or publish such offensive content, as well as those who republish corresponding links. The amendment has come under severe scrutiny; opponents contend that it causes a “chilling effect” on journalists and media professionals.
In Greece, the National Centre for Audiovisual Media and Communication (Εθνικού Κέντρου Οπτικοακουστικών Μέσων και Επικοινωνίας, EKOME) plays a leading role in identifying disinformation. In October 2018, the Centre published a white paper on media and information literacy as a contribution towards a national strategy on media literacy.
Hate Speech
Greece has comprehensive provisions against hate speech, codified in Article 81A of the Penal Code and the 2014 Law on Combating Race Discrimination. Article 1 of the 2014 Law prohibits the incitement of hatred, discrimination or violence based on a person’s racial, religious, national, or ethnic identity. The article applies to hate speech disseminated through any medium, including the internet.
The Greek National Police recorded 171 instances of hate crimes, 34 prosecutions and zero convictions in 2020, according to official statistics. Until recently, Greece had harsh civil defamation laws on the books, which critics alleged stifled journalistic independence through self-censorship. These laws were amended in December 2015, a move hailed as a much-needed improvement.
In June 2021, the European Commission called on Greece to step up efforts to transpose EU rules on combating racism and xenophobia (Council Framework Decision 2008/913/JHA) into criminal law. In December 2021, ECRI announced its evaluation of existing efforts in Greece to identify gaps in current laws against racism, and stated that it would adopt a report and make new recommendations in 2022, in view of the measures taken in the country to counter intolerance.
There is no legislation mandating that online platforms take down hate speech or disinformation materials without a court order.
Disinformation
The German federal government has combined legislative initiatives with media literacy campaigns in a dual-pronged approach to protect the public from the spread of disinformation.
At the legislative level, the Interstate Media Treaty (Medienstaatsvertrag, MStV) transposed the EU’s revised Audiovisual Media Services Directive into national law. It goes further than the Directive, however, by requiring media intermediaries (such as social media platforms) to keep the following information easily and permanently accessible: (1) the criteria that determine the accessibility of content on a platform; and (2) the central criteria of aggregation, selection and presentation of content and platforms’ recommendation procedures, including information on the functioning of the algorithms used. Intermediaries are prohibited from discriminating against editorial/journalistic content on a systematic basis, and online platforms will also be required to identify and label “social bots”, albeit without prohibiting their use.
Measures to specifically combat election-related disinformation were taken during the 2021 federal elections. The Federal Election Commissioner (Bundeswahlleiter) was tasked with identifying and monitoring disinformation related to election procedures. Information on other aspects of elections, such as the strategies and policies of the federal government, electoral campaigns and political parties, is handled by other governmental departments and agencies.
Meanwhile, the Cyber Security Strategy for Germany 2021, which replaced the 2016 strategy of the same name, identifies information manipulation as a hybrid threat. It also recognizes the need for a comprehensive framework to combat disinformation.
Germany’s federal government has also launched several media literacy campaigns to prime citizens against disinformation. Examples include “Ein Netz für Kinder” (An Internet for Children), which provides guidelines for parents and caretakers on how to introduce children to the internet; and “Schau hin! Was dein Kind mit Medien macht” (Look out! How your child handles media), an online guide for parents on traditional media, the internet, social media and smartphones. The Federal Agency for Civic Education also offers resources to foster adult media competency, with a specific focus on disinformation.
Contrary to popular belief, Germany’s Network Enforcement Act (NetzDG, described in detail below) does not apply to false news or political disinformation unless these also constitute defamation or libel. However, as the NetzDG will be superseded by the DSA, the aspect of misleading information will be reflected in the new rules.
Hate Speech
Germany is viewed as a European leader in legal provisions barring hate speech in both the physical and digital realms.
Section 130 of the German Criminal Code prohibits incitement to hatred (Volksverhetzung), discrimination, and the violation of human dignity based on religion, race, ethnicity, or nationality. Dozens of other provisions in the Criminal Code outlaw speech that defames, glorifies terrorist actions, or celebrates the ideology or symbolism of the National Socialist regime. Meanwhile, the NetzDG, in force since 2018, pertains specifically to online hate speech.
The NetzDG sets its sights specifically on for-profit social networks that have more than two million users in Germany. They are obliged to set up an accessible complaint mechanism for users and to ensure that “obviously unlawful content” is deleted or blocked within 24 hours. For content that is not “obviously illegal”, seven days are allocated for consultations. Failure to comply can result in fines of up to EUR 50 million.
In 2021, lawmakers proposed expanding the scope of the Act through the introduction of an amendment package. If the package had come into force, companies would have been additionally required to report the illegal content they removed, as well as the offending user’s registered IP address, to the Federal Criminal Police Office (BKA). Harmful and abusive content would have then been passed along to a new Central Reporting Unit for Criminal Content on the Internet (ZMI).
The amendment package was met, however, with staunch resistance from tech companies, including Meta, Alphabet and Twitter, which challenged its legality in court. They asserted that the new obligation to provide authorities with data prior to ascertaining the extent of harm was in direct contravention of data protection laws, the German Constitution and international covenants. In March 2022, the Administrative Court in Cologne overturned key provisions of the proposed amendment, ruling in favour of the tech companies.
In its current form, the NetzDG does not create new categories of illegal online content. Instead, its purpose is to better implement 22 provisions of the German Criminal Code online by holding large social media platforms responsible for their enforcement. These 22 provisions include “incitement to hatred”, “dissemination of depictions of violence”, “forming terrorist organizations”, and “the use of symbols of unconstitutional organizations”, such as the swastika. The NetzDG also applies to other categories, such as the distribution of child abuse imagery, “insults”, “defamation”, “defamation of religions, religious and ideological associations in a manner capable of disturbing public peace”, “violation of intimate privacy by taking photographs”, “threats to commit a felony” and “forgery of data intended to provide proof”.
Under the law, any platform that receives more than 100 complaints per calendar year must also publish biannual reports on its moderation activities. Proponents of the law argue that platforms make such decisions anyway, and that it is preferable for a democratically elected legislator to provide guidance. The government argued that online hate speech was not seriously addressed as a problem by platforms, as users are unable to receive meaningful redress. Civil society groups, however, are critical of the NetzDG for vesting the responsibility for tackling online hate speech with social media platforms. The Freedom House 2021 country report on Germany observed that the NetzDG’s stringent obligations on platforms have led to an over-blocking of content over the last two years, especially of content that would normally be considered satire.
The Federal Court of Justice ruled in July 2021 that Facebook’s decision to remove posts and delete user accounts due to violations of its community standards on hate speech was unlawful. In its judgement, the Court noted that the decision was unilateral, and that the user had the right to know the reasoning behind the removal decision, at the very least after its execution. According to the DSA proposal, users would have the right to be informed of the reasoning behind removal decisions, and platforms would be required to provide redressal mechanisms for reviewing removal decisions (Article 15).
Outside of legislative interventions, comprehensive media literacy programmes and robust redressal systems are in place across Germany’s federal system. The “German No Hate Speech Movement”, a branch of the Europe-wide “No Hate Speech Movement”, provides links to educational resources on different types of hate speech, and staffs a help desk to advise hate speech victims. “Prosecute and Delete” and “Prosecute instead of Delete” are track-and-delete initiatives working in tandem with the authorities to expedite investigations into online hate speech. Meanwhile, the federal state of Hesse’s Cyber Competence Centre (Hessen3C) operates the hotline “Hessen gegen Hetze” (Hesse against Hate) for citizens to collaborate with the police, public prosecutors and CSOs. The hotline is paired with an online forum, where aggrieved individuals can post a screenshot or link of the offending post in question.
As part of an investigation into an increase in politically motivated hate speech during the run-up to the September 2021 federal elections, German authorities conducted a series of raids on people’s homes in March 2022. Raids were conducted in 13 states after sifting through over 600 social media posts found to be “criminal” and to constitute “hate crimes”. The BKA assesses and prioritises which posts are harmful and require prosecutorial escalation.
Disinformation
Over the past four years, France has significantly stepped up both legislative and non-legislative efforts to control manipulated information disseminated online by domestic and foreign actors.
The 2018 Law on the Fight against Manipulation of Information bans the dissemination of disinformation on social media networks and news portals during election periods. Since January 2022, the authority to enforce this provision lies with the recently established Autorité de régulation de la communication audiovisuelle et numérique (ARCOM). This new regulator has a broader scope of powers and responsibilities than its predecessors, the Conseil supérieur de l’audiovisuel (CSA) and the Hadopi online copyright authority, which merged to form ARCOM in line with an October 2021 statute.
The 2018 law enables public authorities to regulate online political advertising and to authorise swift removal mechanisms (within 48 hours upon referral) through court injunctions during election periods (Article 1). It also creates transparency obligations for platforms (Article 11). The CSA, in its September 2021 report, reviewed the measures undertaken by 11 platforms in 2020 to comply with the law. It identified seven areas in need of improvement, including algorithmic transparency and constructing reporting mechanisms.
ARCOM was established for the “regulation and protection of access to cultural works in the digital age”. Its creation also came with an update to the authority’s internal structure to include a directorate for online platforms. ARCOM is vested with the responsibility to monitor the “activity of online platforms, particularly in terms of the fight against the manipulation of information or against online hatred”, as required under the 2018 Law on the Fight Against Manipulation of Information.
In June 2021, the French government also established VigiNum, an agency to combat “foreign disinformation and fake news”. The agency is operated under the Secretariat-General for National Defence and Security (SGDSN) and is responsible for detecting and monitoring the circulation of harmful online content.
Ahead of the 2022 French elections, the Foreign Affairs Ministry launched a toolbox of “open-source software and open resources” to tackle online disinformation. Since December 2021, any internet user can access the software to identify fake Twitter accounts and political advertisements, and to review resources on best practices to counter online disinformation. The toolbox, available on GitHub, is a work in progress.
A report by the Bronner Commission, established by the French government in 2021, recommended the imposition of criminal sanctions on those actors who publish and disseminate disinformation in bad faith. Specifically, it recommends retaining the section on false news in the 1881 Law on the Freedom of the Press (Section 27), which prohibits false news propagated in bad faith or to disturb public order.
The Bronner Commission report underscores the importance of freedom of speech and expression and shies away from criminalising the conduct of social media platforms. The report recommends, inter alia, enhancing media literacy efforts and setting higher transparency benchmarks for online platforms.
Hate Speech
The French government has introduced new limitations to online freedom of speech since the Charlie Hebdo and November 2015 attacks in Paris. Terrorist content, electoral disinformation, gambling and child abuse imagery are all subject to legally mandated speedy removal by hosting services.
More broadly, Article 225-1 of the French Criminal Code prohibits offences against persons based on their place of origin, race, ethnicity, sex, economic situation, disability status, sexual orientation and gender identity. Similarly, Article 24 of the Law on the Freedom of the Press (1881) prohibits public speeches, engravings or similar public expressions that incite hatred, discrimination or violence against individuals or groups on the basis of their religion, ethnicity and race. Convicted offenders face a punishment of up to one year of imprisonment and/or a fine of up to EUR 45,000. Both provisions pertain to hate speech in general and are not specific to the online environment.
The Law on Confidence in Digital Economy (2004) stipulates that authorities may ask hosts or publishers to remove certain terrorist content or child abuse imagery and, in the absence of removal within 24 hours, may send notification about the website to internet service providers, who must then immediately prevent access to these sites. And, in February 2015, an implementing decree outlined administrative measures to make hosting websites restrict access to materials that incite or condone terrorism, as well as to sites that display child abuse imagery.
In January 2021, however, the French government proposed an amendment to the Law on Confidence in the Digital Economy to strengthen platforms’ reporting duties regarding content moderation practices. The amendment mirrors stipulations in the proposed DSA, and the government presented it as a move “in anticipation” of the DSA’s entry into force; the amendment is set to expire on that date.
A controversial Anti-Separatism Bill was approved by the French Parliament in 2021 and seeks to protect “secular” values at the core of the French Constitution. One of the key sections of the law seeks to tackle online hate speech by laying down procedures to block “mirror sites” (Article 19), i.e., sites hosting hate speech content that was flagged on social media platforms and has resurfaced elsewhere. Article 19 implies that legal removal decisions for a piece of content on one platform could be extended to other platforms. France also has sector-specific regulations on hate speech. Holocaust denial, for example, is prohibited and is punishable under the Gayssot Act of 1990.
Although the previously adopted Avia Law (Law on Fighting Hateful Content on the Internet, La loi visant à lutter contre les contenus haineux sur internet) was overturned, it led to the establishment of an “Online Hate Observatory” by France’s audiovisual regulator (Conseil supérieur de l’audiovisuel, CSA) in July 2020. The observatory aims to monitor and analyse hateful online content, in collaboration with online platforms, associations and researchers.
French courts have also made strides in applying the aforementioned regulations to offending platforms and individuals. In January 2022, the Paris Appeals Court ruled that Twitter’s measures to counter hate speech in France are inadequate, and that the company had failed to make any concrete strides towards curbing racist and discriminatory speech on its platform. The Court further directed Twitter to compensate the six plaintiffs in the case with EUR 1,500 each. A French court also found former presidential candidate Eric Zemmour guilty of hate speech for referring to immigrant children as “thieves and rapists”. Meanwhile, the European Court of Human Rights, in September 2021, upheld the Nîmes Court of Appeal’s judgment that politician Julien Sanchez was guilty of failing to delete posts considered hate speech directed at Muslims.
Ahead of the 2021 G7 summit, President Emmanuel Macron also called upon members of the forum to build a comprehensive framework to combat all forms of online hate speech, including racist, antisemitic or any other type of speech constituting harassment.
Disinformation
Finland is often referenced as a good model for fighting disinformation through a systematic approach, primarily based on education. Finland has ranked first for media literacy among 35 European countries every year since 2017, according to an index developed by Open Society Foundations.
Following a spike in disinformation in the wake of Russia’s illegal annexation of Crimea in 2014, the Finnish government launched an anti-false-news initiative to educate citizens on disinformation campaigns beginning in primary school. As such, media literacy is a cross-departmental priority, and a key strategic aim of the Finnish Ministry of Education and Culture. In 2019, the Ministry published a set of media literacy guidelines and funding schemes for educational programmes and other initiatives, with the express aim of promoting human rights, equality and non-discrimination in the media. An uptick in cyber threats and disinformation is also a reason given for launching the National Democracy Programme 2025, a cross-sectoral legislative initiative to strengthen Finnish democracy against modern threats.
Since the outset of the coronavirus pandemic in 2020, Finland has also stepped up efforts to target disinformation and bolster social resilience.
In 2021, the Finnish government expanded the expertise of its strategic communications team to include digital behavioural science research and psychological resilience. Bolstering media literacy and healthy digital discourse is also a core tenet of Finland’s Digital Compass 2030, a transposition of the European Union’s Digital Compass, presented in 2021, which calls on member states to submit roadmaps for digitising their respective societies and addressing the core challenges that come with that transition. The Finnish iteration was concluded in autumn 2022.
Hate Speech
According to the European Commission against Racism and Intolerance’s (ECRI) latest report on Finland, online hate speech incidents are dealt with under three types of crimes in Finland: incitement to hatred, criminal defamation and threats.
An Equality Act was adopted in 2014, and measures have been taken to combat hate speech, including setting up hate speech investigation teams in every police department, totalling some 900 officers nationwide. ECRI’s latest conclusions on the implementation of anti-discrimination recommendations, published in March 2022, indicate, however, that reports of discrimination to the nation’s Non-Discrimination and Equality Tribunal, which deals exclusively with complaints on the grounds of gender and gender-identity discrimination, had an average processing time of 454 days as of spring 2021.
A 2021 study commissioned by the Finnish Ministry of Justice reported nearly 300,000 examples of hate speech on Finnish-language media over a two-month period in 2020. Police statistics indicate that, while reports of hate crimes are generally on the decline, accusations of defamation from persons of diverse ethnic backgrounds have increased year-on-year since 2018. These types of crimes were also those most perpetrated against women, according to police statistics.
No obligations yet exist mandating online platforms to take down materials without a prior court order, although the government did signal in 2021 that it supports provisions in the coming DSA, which would implement tougher obligations for platform service providers to combat illegal online content. Under the Act on Measures for Preventing the Distribution of Child Pornography (1068/2006), operators have the right, but not the obligation, to block access to sites posting child sexual abuse imagery. And, according to the Act on the Exercise of Freedom of Expression in Mass Media (460/2003), a court may order the distribution of a published network message to cease, but only if it is evident that disseminating the message in question among the public constitutes a criminal offence.
New legislation and government programmes have come into effect to address online hate speech. In October 2021, the Finnish Parliament amended Chapter 25, Section 9 of the Penal Code to allow for the prosecution of individuals making threats against others because of their profession. In its reasoning, the amendment specifically listed an uptick in online threats of violence, hate speech and harassment against public officials and journalists. Meanwhile, another proposed amendment to Chapter 6, Section 5, Subsection 1, Paragraph 4 of the Penal Code (HE 7/2021 vp), would consider previous public comments of misogyny or sexual discrimination in hate crime lawsuits.
Just as with disinformation, Finland also addresses the issue of hate speech through information campaigns. In June 2022, for example, Finland’s Ministry of Education and Culture made available EUR 9 million in grants for the development of young peoples’ digital skills, the goal being to strengthen media literacy and provide youth with basic knowledge about information and communications technologies.
Online harassment has been identified as such a problem in recent years that it warranted a full government report on the matter in 2021. That report, together with the 2021 Citizens’ Panel on Freedom of Expression and the 2022 Government Action Plan for Combating Racism and Promoting Good Relations between Population Groups, have suggested dozens of legislative initiatives to come into force between 2021 and 2023 to fight online hate speech, harassment and discrimination.
Disinformation
Estonia is a heavily digitised country, which makes democratic processes more vulnerable to interference. Similar to the other Baltic states, domestic concerns centre on Russian interference, either through cyberattacks or disinformation campaigns.
Section 410 of the Estonian Penal Code prohibits radio interference or the transmission of false or misleading messages if they pose a danger to the “life or health of a large number of people or to the environment”. Such a violation is penalised with up to three years in prison. The provision’s application to online disinformation, however, has yet to be tested.
As such, Estonia’s approach to countering disinformation has consistently relied on strengthening media literacy efforts and avoiding the imposition of criminal penalties.
A key element of counterefforts is the Estonian Defence League (EDL, or Kaitseliit), a voluntary security force under the auspices of the Ministry of Defence, with responsibilities ranging from cyber defence to fighting disinformation.
Regarding disinformation, the EDL runs an anti-propaganda blog, propastop.org, which not only focuses on countering harmful narratives, but also on highlighting corporate practices related to social media, outing individuals and posts that spread disinformation (i.e., “naming and shaming”) and advocating for media literacy.
Hate Speech
Article 12 of the Estonian Constitution prohibits incitement to hatred and violence based on race, ethnicity, religion and political ideology.
Estonia also criminalises hate speech, but the provision is narrowly construed. To fall under Section 151 of the Estonian Penal Code, the act in question must involve a risk of danger to a person’s life, health or property.
In February 2019, Estonian authorities publicly rejected the European Commission’s call to more stringently criminalise hate speech across the EU. The Estonian Human Rights Centre observed in its 10th report that the government has failed to adopt measures to counter hate speech in the country and to protect minorities.
In June 2023, the Estonian government approved a draft law that would tighten the country’s hate speech rules. The proposal would criminalise incitement to hatred, violence or discrimination against a group of people, or a member of such a group, on the grounds of nationality, skin colour, racial background, gender identity, health and disability, language, origin, religion, sexual orientation, political opinion, or property or social status.
Disinformation
Denmark’s approaches to tackle disinformation have been shaped by legislative efforts, by the creation of task forces and by responses to foreign interference.
The Danish government has indicated its intention to propose legislation requiring social media platforms to remove illegal content within 24 hours. The proposal, however, could overlap with the DSA.
The Danish government has an active inter-ministerial task force to counter disinformation. One of the strategic benchmarks in the updated Danish Cyber and Information Security Strategy 2021-2023 is to “defend the open, global and free internet and work to ensure that human rights are respected and promoted online and in the development of new technologies.”
Considering the rising prominence of foreign influence campaigns, a 2019 amendment criminalised the dissemination of disinformation that “aids or enables” a foreign state actor to influence public opinion in Denmark. The amendment imposes a maximum 12-year prison sentence for offences carried out in connection with Danish or EU parliamentary elections.
Furthermore, the TechPlomacy initiative, launched in 2017, created the office of Tech Ambassador, with offices in San Francisco, Copenhagen and Beijing. In 2021, the Tech Ambassador introduced the white paper “Towards a better social contract with big tech”. Its aim is to foster a responsible and just society in which Big Tech operates in cohesion with the socio-economic and democratic values of society. The white paper outlines nine principles upon which the relationship between governments, users, citizens and workers (content moderators) should be built. It also states that the framework within which Big Tech operates should be set by governments, and not by such firms themselves.
In 2021, the Danish government also introduced the Tech for Democracy initiative, with the objective of advocating for technology that strives to foster democracy and human rights. The initiative seeks to bring together diverse stakeholders, including government representatives, tech companies, and civil society organisations (CSOs) and activists, to foster safe and democratic technological developments.
Hate Speech
Hate speech is criminalised under Article 266b of the Danish Criminal Code. If the instance of hate speech in question has elements of propaganda activity, this is considered an aggravating circumstance.
There are currently no active legislative measures or administrative regulations in Denmark aimed directly at or imposing obligations on social media platforms, search platforms and/or platform users to remove, restrict or otherwise regulate online content based on hate speech.
In its written submissions to the OHCHR in March 2022, the government noted that platforms addressing hate speech must steer clear of over-blocking and must not threaten freedom of expression.
Disinformation
The Czech Interior Ministry, on its website, stresses that “disinformation”, as such, is not a crime. An online post must fall into one of the following penal categories to constitute an actual crime: damage to foreign rights; defamation; false accusations; defamation of a nation, race, ethnic or other group of persons; incitement to hatred; dissemination of an alarm message; approval of or incitement to a crime; or the expression of sympathy toward a movement suppressing human rights. In 2023, the Ministry of the Interior prepared a draft law against disinformation, which would allow the state to respond to online content that could threaten Czech sovereignty, territorial integrity, the democratic order or national security. In March 2023, the draft was sent to experts for review. As of August 2023, the Ministry had not made any announcements on the future of the law.
The Centre against Terrorism and Hybrid Threats, housed in the Czech Interior Ministry, was established in 2017. It has a mandate to counter online terrorist content and disinformation campaigns. It regularly publishes informational dossiers relating to those topics most vulnerable to disinformation, including the war in Ukraine.
On 2 March 2022, the Czech Government approved Resolution No. 153, which created the position of Commissioner for Media and Disinformation. The commissioner is primarily responsible for coordinating government ministries and officials regarding different state actions related to media and disinformation. In February 2023, however, the Czech government abolished the position and transferred responsibility for disinformation to the national security advisor.
In response to Russian propaganda spreading disinformation about Ukraine in the Czech Republic, the Czech association CZ.NIC, which administers .cz domains, together with the Czech Defence Ministry’s cyber operations department, blocked access to several websites for spreading disinformation that undermined national security.
Hate Speech
As stated above, hate speech is criminalised in several sections of the Czech Criminal Code. Some of these sections provide for harsher penalties when the offence is committed online.
These provisions have been applied to online crimes since 2012, when five young men received suspended sentences of three years for promoting Nazism on their Facebook profiles. In September 2014, former MP Otto Chaloupka was conditionally sentenced for derogatory comments about Roma people posted on his Facebook profile.
More recently, in December 2021, the Czech Supreme Court rejected a challenge to a lower court ruling finding an individual guilty of inciting hate for commenting that a “solution” existed for an ethnically diverse group of first graders, an allusion to the Holocaust. The ruling stated that “hate speech on the internet, as one of the types of hate speech, needs to be combated in a democratic society, even in serious cases through criminal law”.
Periodic data on hate crimes and hate speech is published in the Ministry of Justice’s reports on extremism. As per data shared with the OSCE Office for Democratic Institutions and Human Rights (ODIHR), in 2020, there were 32 instances of hate crime (including hate speech) reported, of which 28 were prosecuted.
There is no legislation mandating online platforms to restrict access to hate speech materials without a prior court order.
Disinformation
In Cyprus, selected service providers and broadcasters have an obligation to take measures to protect the public from certain types of content. Although disinformation is not directly mentioned within the recent framework transposing the EU’s Audiovisual Media Services Directive, it serves as a first step towards regulating the online information environment. Outside of this, certain provisions allow for the criminal prosecution of those who disseminate false information.
The EU Audiovisual Media Services Directive (2018/1808) was incorporated into Cyprus’ legislative framework in January 2022 through the adoption of two bills: the Cyprus Broadcasting Foundation (Amendment) Law of 2021 and the Radio and Television Organisations (Amendment) (No.2) Law of 2021. The Directive, which previously covered only television and radio broadcasters, now brings video-sharing platforms and their service providers, as well as audiovisual on-demand and media services, into its fold.
Article 50 of Cyprus’ Criminal Code criminalises the dissemination of “fake news” or “news that can potentially harm civil order or the public’s trust towards the State or its authorities or cause fear or worry among the public or harm in any way the civil peace and order”. Doing so is punishable with up to two years of imprisonment or a fine. This Article, however, is limited in scope and has not yet been successfully invoked in court.
In March 2023, the Cypriot Justice Ministry prepared a law proposal (the “fake news” bill), which would criminalise personal insults online and aims to protect the public from online attacks. In May 2023, the Cypriot parliament sent the proposal back to the Ministry of Justice, after receiving numerous objections to the bill from the Committee on Journalistic Ethics.
In June 2021, during a joint conference of the Council of Europe and the Cypriot government, member state ministers adopted a Final Declaration that prescribes four resolutions necessary to support media and foster democracy. It explicitly states that addressing those challenges posed by disinformation and hate speech is crucial to this end.
A 2018 EU Commission survey of EU citizens’ awareness of and attitudes towards online disinformation revealed that Cyprus had the highest proportion of respondents who viewed disinformation as a problem (91%). In 2021, the country’s president echoed these views and underlined the need to address widespread disinformation in the wake of the COVID-19 pandemic.
Hate Speech
The recent transposition of the EU Audiovisual Media Services Directive (2018/1808), as well as provisions of the Cypriot Criminal Code, constitute the bulk of legal interventions on hate speech in Cyprus.
One of the primary objectives of the Directive is to combat religious and racially motivated hatred. Video-sharing platforms that allow user-generated content also fall within its purview. Platforms operating within Cypriot jurisdiction are obliged to register with the CRTA (Cyprus Radio-Television Authority) and are supervised accordingly.
The bills transposing the Directive oblige television, radio and audiovisual service providers to take measures to protect the public from racist or xenophobic content. Platforms will also be obliged to establish mechanisms through which users can flag dubious content.
In addition to the prohibition on the publication or dissemination of racist and xenophobic speech, a 2015 amendment to the Cypriot Criminal Code, Article 99A, criminalises homophobic and transphobic speech. The term “speech” has been interpreted broadly to include written materials or pictures through any medium, including computer systems. Prosecutions of such offences, however, are rare. In 2020, 41 hate crimes were reported, with 7 prosecutions, none of which resulted in a conviction.
Disinformation
Legislative measures in Croatia have criminalised specific aspects of disinformation and created new obligations for broadcasters.
Previously, the only legal instrument in Croatia referring to disinformation was Article 16 of the Law on Misdemeanours against Public Order and Peace, which was adopted in 1977, and subsequently amended in 1994. Under this law, spreading disinformation that disrupts public peace was punishable by a small fine or up to 30 days in prison.
Since October 2021, the Law on Electronic Media obliges television and radio broadcasters, as well as audiovisual platform providers, to disseminate only “accurate information” (Article 15). The Electronic Media Council serves as the supervisory regulatory authority under the Law. The Law does not explicitly single out social media platforms, but it does seek to regulate user-generated content (Article 94.3).
Hate Speech
The Croatian Penal Code criminalises hate speech under Articles 325.1 (public incitement to hatred based on protected grounds), 325.4 (approval or incitement to genocide and crimes against humanity), and 87.21 (protected characteristics as a motive of an aggravating circumstance). Meanwhile, Article 3.1 of the Croatian Anti-Discrimination Act classifies harassment based on protected grounds as a misdemeanour punishable by a fine.
The new Law on Electronic Media prohibits the publication on television, radio and audiovisual platforms of hate speech (Article 14), xenophobic or racist content, or content “inciting terrorist offences” (Article 14.1 and Article 94.1) in line with the Croatian Penal Code.
Furthermore, Article 21, which seeks to regulate audiovisual commercial communications, prohibits promoting hatred or discrimination based on certain protected characteristics. The obligations under Article 21 extend to video-sharing platforms that stream user-generated content (Article 96.3), but the legislation does not explicitly state its application to social media platforms.
Croatia’s National Plan for Roma Inclusion 2021-2027 was adopted by the national government to provide a comprehensive framework to enhance the public participation and social and economic integration of members of the Roma community. The plan is based on the Constitutional obligation to protect minorities, and the Anti-Discrimination Act, which prohibits discrimination against minorities.
The National Plan creates a working group to implement its goals and contains specific measures to combat discrimination, hate crimes and hate speech against the Roma community. It envisions training for magistrates, law enforcement officers and agencies, and staff in public administration engaged in the fields of education, public health and employment. It also seeks to aid victims of hate crimes and discrimination-based violence.
Disinformation
The Bulgarian government continues to develop criminal provisions to penalise the spread of disinformation. It also addresses disinformation campaigns through a task force and other special initiatives.
The widespread dissemination of misinformation during the COVID-19 pandemic catalysed government-led efforts to pass disinformation legislation. As of August 2023, however, three such attempts had been unsuccessful, largely owing to the country’s volatile political climate.
The first, called the Emergency Bill, would have made the transmission of false information about the spread of an infectious disease punishable by up to three years in prison (or five years, in cases where severe damage was inflicted). The draft law was ultimately vetoed by the president.
The second attempt was to apply the Radio and Television Act to internet platforms. The move would have granted the Council for Electronic Media, Bulgaria's media regulator, new oversight powers to charge a website with spreading disinformation and request a court order to block access to the site. The Bulgarian parliament’s Culture and Media Commission rejected the proposal.
The third came in the form of a recommendation that “disinformation in the internet environment” should be included within the provisions of Bulgaria’s Personal Data Protection Act. If passed, the owners of websites, online blogs and, in certain cases, social networks would have been held responsible for the dissemination of online disinformation. The bill was criticised by governmental and non-governmental organisations alike for defining “disinformation” too broadly, and for entrusting the Commission for Personal Data Protection with the ability to discontinue access to a website, a power outside the scope of its mandate.
During the DSA discussions in 2022, the Democratic Bulgaria party proposed an anti-disinformation law, which would require social media platforms to delete troll profiles and to focus more strongly on the misuse of platforms’ algorithms. The law was meant to complement the DSA. To date, however, the proposal has not been tabled in parliament.
Aside from these attempted legislative interventions, in 2021, the UNHCR Bulgaria and Refugee Advisory Board launched an online platform to counter disinformation and rumours about refugees in Bulgaria, as part of its “anti-rumour” campaign. In April 2022, the Bulgarian Coalition against Disinformation was established to deal with disinformation and false content in digital media. The task force is comprised of over 60 members, including the European Commission, national institutions, ministries and government departments, as well as Bulgarian media firms. The fact-checking platform Factcheck.bg is one of the coalition’s primary partners.
Hate Speech
Hate speech is criminalised under Article 162 of the Bulgarian Penal Code (incitement to hatred on protected grounds). The law has proven to be difficult to enforce offline, however, and nearly impossible to enforce online. There is no legislation mandating online platforms to restrict access to hate speech materials without a prior court order.
In its 2020 resolution on the rule of law and fundamental rights in Bulgaria, the European Parliament called on the Bulgarian authorities “to amend the current Criminal Code to encompass hate crimes and hate speech on grounds of sexual orientation, gender identity and expression, and sex characteristics.” There have been no concrete efforts in this direction, however.
Sectoral efforts by the Bulgarian government include the Action Plan on combating antisemitism 2021-2025, part of its commitment to the 2018 EU Council Declaration on the fight against antisemitism. The plan includes responding to hate speech and assessing its effects on victims of antisemitism.
Disinformation
The Belgian government is engaged in bolstering media literacy, and is currently establishing a task force to disrupt disinformation campaigns.
Since 2018, the government has spearheaded public diplomacy on the issue. It launched the online portal stopfakenews.be, where citizens can make their own recommendations on how to fight disinformation. They can also voice their approval (by liking) or disapproval (by disliking) of recommendations by others.
In February 2022, the Belgian government introduced the National Security Strategy, in collaboration with the Egmont Institute, a non-profit think tank. Although the strategy primarily addresses cyberattacks and bilateral conflicts between Belgium and third countries, it also identifies disinformation and the spread of extremist content from abroad as pressing threats to democracy and the rule of law in the country. To this end, its policy recommendations draw on the European Commission’s Action Plan for European Democracy, which includes efforts to combat disinformation and strengthen media freedom and pluralism. Regarding disinformation, the strategy specifies that work is underway to establish an interdepartmental mechanism for detection, monitoring, analysis and reporting. At present, however, Belgium does not have a specific law against disinformation.
Hate Speech
There is no specific law addressing hate speech in Belgium, but there are a number of statutes that deal with different specific forms of hate speech.
Both the Anti-racism Law (1981) and the Anti-discrimination Law (2007) penalise racism and xenophobia, while the Law against Negationism (1995) prohibits and punishes public denial of the Holocaust. There are no distinct laws to penalise instances of hate speech involving islamophobia or homophobia, as noted by the joint investigative report released by Unia, formerly the Interfederal Centre for Equal Opportunities, and Myria, the Belgian Federal Migration Centre.
The Belgian Police report a high number of hate crimes prosecuted every year (in 2020, they reported 1,334 prosecutions of 1,750 registered hate crime incidents), but it is not clear how many of those result in convictions.
There is no legislation mandating online platforms to restrict access to hate speech content without a prior court order. Nonetheless, Unia has a mandate to contact social media platforms that tolerate illegal content. It has reached an agreement with Facebook, for example, to remove such content within 24 hours.
In September 2020, Belgium’s state governments adopted the National Action Plan Against Racism, proposed by the Chair of the Interministerial Conference on Combating Racism. It outlines concrete measures to address racial discrimination in areas such as employment, education, the media and health services.
Disinformation
Efforts to combat disinformation in Austria have predominately taken the form of digital media literacy campaigns and criminal prosecutions for election-related misinformation.
Section 264 of Austria’s Criminal Code penalises the dissemination of false information in the period immediately before elections, when it is too late for there to be an effective correction. The false information must have the potential to deter people from voting or to influence them to vote in a specific manner. Those convicted of spreading false information are subject to fines or up to six months of imprisonment. Anyone found guilty of using falsified documents to give more credibility to the false information faces a penalty of up to three years of imprisonment.
The Austrian government also addresses disinformation indirectly through the promotion of media literacy. The Federal Ministry of Education, Science and Research, for example, regularly publishes teaching materials online. It also co-funds Saferinternet.at, a website promoting media literacy to support the safe use of digital media. Similarly, the Federal Ministry of Social Affairs, Health Care and Consumer Protection co-funds Watchlist Internet, an information platform identifying online fraud.
Hate Speech
The Austrian Criminal Code works in tandem with relevant legislation to regulate online hate speech. Sensitisation training programmes have also been developed to complement these laws.
The Communications Platform Act (Kommunikationsplattformen-Gesetz, or KoPI-G), enacted on 1 April 2021, obliges for-profit online communication platforms with more than 100,000 users in Austria, or those generating revenues in Austria of more than EUR 500,000, to establish “effective and transparent procedures” to address illegal content. Notably, video-sharing platforms are not covered by this law but, instead, by the Audiovisual Media Services Act. Online encyclopaedias, learning platforms and online market platforms also do not fall within the KoPI-G's purview.
Illegal content is defined under Section 2(8) of the KoPI-G and refers to any content falling into at least one of the 15 specified offences related to illegal speech in the Austrian Criminal Code. Such offences include hate speech (Section 283 of the Austrian Criminal Code); related offences, such as incitement to hatred and violence; online harassment, stalking or libel; defamation; national-socialist content; and religious vilification. Other offences pertain to the publishing and dissemination of images of child sexual abuse and of terrorist content.
According to the KoPI-G, online platforms must remove expressly illegal content within 24 hours of being notified of its existence. Where the legality of the content is not clear, platforms may take up to seven days to respond. Platforms are further obliged to publish periodic reports on their removal decisions. To prevent “over-blocking”, the KoPI-G also mandates that platforms create a complaint-redressal mechanism by which users can appeal removal decisions.
Legislative interventions have been complemented by various collaborative initiatives. “Dialogue instead of hate”, a sensitisation training programme spearheaded by the non-profit Neustart and the Austrian judiciary, seeks to address the uptick in hate crimes in the country. Specially trained probation officers are tasked with designing modules, such as “media expertise”, “discourse expertise”, and “awareness for discrimination”, and with conducting workshops in individualised and group settings. (See: the 5th Report by the Republic of Austria for the Protection of National Minorities, 2021).
Such initiatives build upon earlier measures, such as the Austrian government’s establishment of the National Committee on No Hate Speech in 2016. The Committee falls under the remit of the Department of Families and Youth, housed in the Austrian Federal Chancellery, and aims to counteract the normalisation and acceptance of hate speech.