
The spread of the COVID-19 global pandemic has recently brought to the fore a tech policy issue that has long caused heated debate: online misinformation. In recent weeks, tech companies have been working together to figure out how to prevent the spread of COVID-19 misinformation as the world tackles the pandemic, with social media platforms taking a tougher stance against lies and falsehoods that undermine the public health response. For example, in the last month, Twitter alone has removed over 2,000 misleading COVID-19 posts. It is surprising, then, that simultaneous attempts to improve the only national social media content moderation law in the world have not received more attention.
In February and April of this year, the German government published two amendments to the Network Enforcement Act (NetzDG), a bill that passed in 2017 and went into effect on January 1, 2018. The NetzDG was the first law of its kind to try to rein in the spread of misinformation on social media by requiring the takedown of criminal speech while attempting to protect freedom of expression. The law was controversial from the get-go, with battle lines drawn between those who wanted stricter regulation of social media platforms to curb online misinformation and manipulation, thereby protecting democratic institutions that rely on public discourse rooted in truth, and those who were equally concerned with protecting the freedom of speech that democracies also depend on and are in fact meant to protect.
What does this novel law require? The NetzDG requires that social media platforms take down “manifestly illegal” content within 24 hours and otherwise unlawful content within 7 days. The law does not create new categories of illegal speech; rather, it references 22 existing statutes of Germany’s criminal code that address everything from the distribution of child pornography to broad definitions of defamation and insult. Requestors lodging complaints must be notified that their request was received and what action was taken. Social media platforms must also publish semi-annual transparency reports that inform the public about takedown requests and compliance rates. Companies face fines of up to 50 million euros, and uploaders of illegal content face fines of up to 5 million euros. The NetzDG applies to any social media platform with at least 2 million registered users in Germany, regardless of where the company is headquartered. It does not apply to smaller or niche online platforms or marketplaces.
An April 2019 report on content moderation by the Transatlantic Working Group, a joint US and European group of academics, researchers, and public policy leaders, analyzed the impact of the law since it took effect. The report finds that it is too early to tell what impact the law is having on preventing hate speech and preserving freedom of speech. However, there were a few takeaways worth noting. First, transparency reports do provide additional helpful information on social media platform practices, but they need to be standardized to enable better comparison of requests and takedowns across platforms, and they should be required to include more detailed information on company practices related to online safety. Second, request rates vary considerably depending on the design of the complaint process, and companies need to make this process easier and more accessible to users. Third, enforcement of the NetzDG came secondary to the enforcement of the companies’ existing community standards as outlined in their terms of service. In other words, companies already had their own rules against hate speech that they simply have not been very good at enforcing, and the law, backed by its threat of fines, became a forcing function to finally do so. Finally, once content is removed, the requestor is notified, but the uploader of the removed content is not alerted and has no ability to request a substantiation of the decision or to appeal it.
The recently proposed amendments attempt to address some of these issues. They will force social media platforms to make submitting requests much easier. They will give uploaders a mechanism for contesting decisions, one that requires the social media platform to substantiate its decision, known as a “counter-notification mechanism”. Transparency reports will have to be more robust and include reporting on the results of counter-notification procedures. They will also have to inform the public if any automated processes are used for finding and deleting illegal content. This is an attempt to bring transparency to the current “black box” of AI-driven automated decision-making systems used in content moderation. The reports must also state whether and to what extent the company has provided researchers with access to anonymized data. This is meant to encourage data sharing with research institutions so that the effect of the law can be better analyzed. For example, sociologists would be interested in whether certain groups in society are disproportionately targeted with illegal content.
These are all incremental steps in the right direction, but the debate will continue, as it is still too early to measure the impact of the law. Is it preventing the spread of misinformation, political radicalization, and incitement to violence? Or is it having a chilling effect on freedom of expression and producing other unintended consequences, such as elevating radical voices as controversies arise, further radicalizing existing groups who feel unjustly targeted, or simply pushing radical groups to other platforms? This is very difficult to measure: the universe of data is vast, yet it is also hard for researchers to access and analyze because private companies do not make their data or decision-making processes transparent enough. There have been proposals to set up data clearinghouses or e-courts to take some of the data and decision-making out of exclusively private hands, but not much of that has come to fruition.
Even if all of this information were available to the right researchers, it would still be very difficult to attribute the impact of online hate speech and misinformation to what we actually care about and therefore want to measure: the health of our democracies. Whether or not one agrees with Germany’s law, Germany, in part because of its turbulent history with hate speech and propaganda, is at least a leading model for how action can be taken. Doing nothing is not a viable alternative. Social media platforms, civil society, and democratic lawmakers around the world need to increase collaboration on this issue. Hate speech and misinformation are now a thriving, ugly bug in our online world, but they do not have to be a persistent feature.