Is it time for Europe to reassess internet intermediary liability in light of coronavirus misinformation?
Throughout the coronavirus pandemic, numerous false and misleading stories have thrived on digital platforms. Misleading rumours about imminent “lockdowns” and debunked claims of potential cures have revived concerns about the highly efficient spread of both misinformation and disinformation online. In response, numerous social media platforms have taken steps to prevent the spread of COVID-19 misinformation. The European Commission, too, has noted that “misinformation and disinformation in the health space are thriving, including on COVID-19,” and has urged citizens to refrain from sharing false and “unverified” content. While these are positive steps, the severity of the current crisis has highlighted that misinformation poses a threat to public health. For this reason, the current self-regulatory framework addressing this issue must be reassessed to ensure compatibility with human rights, including the right to public health.
How things stand legally
Under the existing legal framework in the European Union, attempts to curtail the dissemination of false information have correctly recognised this problem as one that is exacerbated by digital platforms. In 2018, the European Commission developed the Codes of Practice on Disinformation, a voluntary self-regulatory commitment whose “signatories” include multiple high-profile technology companies. The Codes exist within a legal framework comprising human rights instruments, namely the Charter of Fundamental Rights of the European Union (CFREU) and the European Convention on Human Rights (ECHR). While the Codes recognise false information primarily as an electoral problem, it has become increasingly clear in recent weeks that misinformation is equally a concern from a public health perspective. This is important to acknowledge in light of the aforementioned human rights framework that guides the Codes of Practice. Under Article 35 of the Charter, the “right of access to preventive health care and the right to benefit from medical treatment” is protected. In addition, numerous provisions of the ECHR, including the right to freedom of expression under Article 10, can be limited on “public health” grounds.
In light of the severe threat that misinformation poses to public health, an important legal question arises from the current pandemic: whether the current intermediary liability regime of self-regulation for misinformation can sufficiently protect the fundamental rights of European citizens. The Council of Europe defines intermediaries as “a wide, diverse and rapidly evolving range of service providers that facilitate interactions on the internet between natural and legal persons.” Beyond this facilitative role, many internet intermediaries now carry the pivotal duty of managing “the flow, content and accessibility of information.” This duty reflects the important role internet intermediaries play in enabling European citizens to exercise their fundamental rights. As many powerful internet intermediaries have become central to communication and information exchange, the legal framework in which they operate must be compatible with human rights standards.
Are legal changes likely in Europe?
Under the current legal framework, both member states, such as Germany, and European institutions, such as the European Commission, have initiated legislative measures to combat the spread of false and harmful content online, including false information that “may cause public harm.” However, from a legal perspective, this problem remains plagued by uncertainty in Europe, in particular with regard to the legal responsibilities of internet intermediaries. In Germany, social media companies can face penalties of up to €5 million for failure to remove unlawful content under the Network Enforcement Act 2017. In Hungary, legislation was passed on March 31st that criminalises the spread of false content online. Under the E-Commerce Directive, intermediaries are subdivided into “mere conduit”, “cache” and “hosting” categories. However, the Directive addresses intermediary liability for “illegal activity”, including third-party copyright infringement, under Article 14. This leaves a gap for other forms of harmful content, as misinformation and disinformation are not necessarily unlawful. Legal developments to address misinformation and disinformation began with the 2017 resolution on “Online Platforms and the Digital Single Market”. This led to the creation of a High-Level Group (HLG) to “advise on policy initiatives to counter fake news and the spread of disinformation online”. In turn, this facilitated the “Action Plan Against Disinformation,” and eventually the Codes of Practice. While the Codes provide helpful principles and guidelines, they are ultimately voluntary rather than legally binding measures. In publishing the Codes, the Commission stated that there was scope for future action “including of a regulatory nature,” depending on the Codes’ effectiveness. In the first annual assessment since the implementation of the Codes, the Commission noted that while there were “comprehensive efforts” to implement commitments, “further serious steps” were required.
In particular, the Commission has noted that signatories’ actions “vary in terms of speed and scope.” This points to disharmony, both in how effectively digital signatories are implementing self-regulatory measures and in how different member states are responding legally to the problem of misinformation spread online. There is growing concern about how governmental responses to the coronavirus have been hindered by both disinformation and misinformation on digital platforms. This is particularly important, as a stated aim of the Commission is to prevent false and misleading information that “may cause public harm.” Given this threat to public health, it is important to anticipate potential legal changes that could arise after COVID-19. Such changes could take place within member states, through instruments like the Network Enforcement Act 2017 (NetzDG) or through the development of an independent statutory electoral commission, currently proposed in Ireland.
Legal changes could also take place through more direct regulation of online platforms, either by member states or under an EU instrument. This is linked to concerns about whether the current European self-regulatory framework for intermediaries is consistent with human rights. Human rights form a core part of primary European legal documents such as the Charter, and comprise the legal framework for the aforementioned Codes of Practice. For this reason, the ability of intermediaries to sufficiently protect human rights under the current legal framework must be reassessed with respect to misinformation, as the current pandemic has highlighted.
Going forward: A rights-based approach to intermediaries?
The proliferation of misinformation online is inextricably linked to broader legal debates surrounding the liability of internet intermediaries. Increasingly, internet intermediaries such as Facebook and Twitter are seen not just as gatekeepers of information, but as gatekeepers that assume human rights duties. In exercising substantial control over information, intermediaries facilitate the imparting of information and ideas. Under Article 11 of the Charter, interferences with free expression by public authorities must be strictly justified. However, states can also bear positive obligations to ensure rights are protected. Given intermediaries’ control over vast amounts of content and their de facto involvement in regulating expression, this is an area where platforms may require more direct regulation to ensure sufficient rights protections. In handling vast amounts of users’ data, intermediaries also play an important part in protecting the right to privacy and “personal data” protected under Article 8 of the Charter.
In light of the need for intermediaries to protect fundamental rights, an inconsistent and uncertain European legal approach can undermine the protection of such rights. This is particularly relevant to the issue of misinformation. In both Germany and Hungary, human rights groups have questioned the human rights compatibility of legal measures to combat harmful and false content online. In Germany, there have been concerns that a notice-and-takedown regime for harmful content could have a chilling effect on free expression. In Hungary, concerns have been raised about the level of power and discretion given to the state in criminalising content deemed to be false. While these approaches have been widely criticised as excessive, the future of the self-regulatory approach of the Codes of Practice on Disinformation is also uncertain. Thus, while self-regulation may prove insufficient to prevent misinformation online, more robust measures must carefully avoid encroaching on fundamental rights such as free expression, protected under Article 11 of the Charter. A delicate regulatory balance will be required to mediate between the public interest and individual rights.
Going forward, it is critical that the effectiveness of self-regulation be reassessed. In recent weeks, COVID-19 has underscored the urgency of this problem from a public health perspective. As such, more robust regulatory measures should be explored in consultation with relevant human rights stakeholders. Future measures need to safeguard fundamental rights such as free expression, privacy and, as the COVID-19 crisis highlights, public health. To ensure the protection of these rights, European legislators must reassess whether a harmonised and more robust European legal approach is needed, one that reconciles legislative measures combating misinformation with the rigorous protection of individual rights.