
The disinformation challenge: responsibility, liability and moderation of online content

April 2021
By Digitalis

The constant flow of misinformation and disinformation online presents a significant challenge for regulators across the world. With no overarching and agreed worldwide regulation of online content, every government adopts its own approach. The resulting fragmented landscape causes problems when it comes to agreeing who is responsible for disinformation, and creates complexities in delivering fair and consistent outcomes in injunctions and privacy cases that span countries. There are calls for tech giants to do more to ensure disinformation cannot prosper on their platforms, but the balance of moderating content while protecting freedom of speech and expression is a difficult one to strike.

The conundrum: is there a limit to freedom of speech?

Trying to establish the fine line between the right to freedom of speech and the need to halt the spread of inaccurate and harmful information online is an incredibly complex matter, and current geopolitical considerations only heighten the sensitivities of governments to the issue. Alleged use of ‘sharp power’ and foreign interference by authoritarian states in democratic countries’ social media and online spheres has intensified societal concerns to the extent that they have become national security anxieties[1].

It is undeniable that the tech giants’ algorithms have a profound worldwide impact on individuals – the recent storming of the US Capitol is a stark reminder of the power of social media. A report by the US-based non-profit activism group Avaaz[2] suggests that Facebook could have prevented over 10 billion views of misinformation-spreading accounts relating to the US presidential election had it tightened its moderation policies in March 2020 rather than waiting until October 2020. The delay allowed purveyors of misinformation to expand their digital footprint and visibility, and the use of Facebook as a platform for spreading misinformation is hard to dispute – even though the company contests some of the report’s findings.

The US Congress hearing: heated discussion, but no solution

The March 2021 US Congress hearing, chaired by Mike Doyle, highlighted the fact that responsibility for the spread of disinformation and fake news remains a political hot potato. Congress is considering scrapping Section 230, the existing legislation under which website owners are effectively not classed as publishers, enabling them to moderate their sites without being legally liable for user content. The CEOs of Facebook, Twitter and Google were each asked for their views on Section 230 and whether they believe their platforms should bear liability. They largely skirted around the topic, although Facebook’s CEO proposed some amendments. Most suggested that modifying Section 230 could hinder freedom of speech, and pointed instead to their own initiatives for tackling disinformation.

Mark Zuckerberg spoke of Facebook’s efforts to counter disinformation, such as working with 80 fact-checking organisations and removing more than 12 million pieces of false content relating to Covid-19. Twitter CEO Jack Dorsey highlighted two initiatives launched by the platform: Birdwatch, which enables Twitter users to flag and annotate misleading tweets, and Bluesky, a project to develop a decentralised standard for social media. The message from the big tech companies was, in essence: we are doing our best to tackle disinformation, and we should not be made liable for information posted on our platforms.

The complexities of controlling online content

The problem lies in striking a balance between regulating content to eliminate harmful disinformation, and protecting freedom of speech and expression: all while ensuring there are no delays for users between posting information and seeing it appear on the platform they’re using.

The sheer volume of traffic on the most popular platforms illustrates the scale of the problem: Twitter users post around 350,000 tweets per minute, and Facebook has 2.7 billion monthly active users uploading, commenting, messaging, liking and sharing content. While algorithms and AI technologies can go some way towards filtering content, they do not yet provide a perfect solution. On the other hand, manually reviewing every upload would be a colossal task that would erode the instantaneity that makes these platforms attractive to their users, as well as raising censorship concerns. As outsiders, it is difficult to know which position to adopt: are the big tech companies really trying as hard as they can, or would increased moderation impinge on our civil liberties?

The Congress hearing ended on a holding note, with no mention of a follow-up hearing. Whilst it offered lawmakers the chance to quiz the CEOs on important topics, including alleged bias against Republican voices and online hate directed at LGBT and African-American communities, progress was limited. While the questions of responsibility, liability and disinformation remain unresolved, we must all take individual responsibility for staying alert to the fake-news phenomenon, continually questioning the content presented to us and the extent to which we can trust it.
