By April Buxton
What is the Online Safety Bill?
The Online Safety Bill is a draft regulatory framework intended to give an independent regulator legal powers to identify and regulate illegal and harmful content distributed online.
In her speech on 11 May 2021, the Queen referred to the Online Safety Bill, stating that it would allow the UK to “lead the way in ensuring internet safety for all, especially for children”. The following day the government published the first draft, detailing its plans to regulate internet usage in the UK.
Who does the Bill apply to?
The Online Safety Bill applies to all online service providers that offer regulated user-to-user services or search services, including social media platforms, online forums and search engines. In terms of jurisdiction, the Bill is intended to extend to organisations that operate in the UK, as well as to those whose content can be accessed by users in the UK.
How will the Online Safety Bill operate?
The Bill will establish a “duty of care” between organisations and their users – something the law has yet to tackle. This duty will take the form of a legal obligation to prevent users from being harmed by viewing certain types of content. As a result, organisations will be required to self-regulate by preventing the distribution of, as well as removing, illegal and harmful content posted by their users.
They will also be required to take on additional responsibilities, including providing reporting and redress mechanisms for their users (enabling users to challenge harmful content viewed online) and publishing transparency reports detailing the actions they have taken against illegal and/or harmful content posted on their platforms.
What constitutes harmful content?
Within the Bill, guidance has been provided to determine what constitutes both illegal and harmful content. Although illegal content is easily defined, identifying content that is legal but harmful is subjective, and therefore poses a challenge.
However, the proposed Bill has sought to define harmful content, stating that a piece of content will be deemed harmful to adults if the “provider of the service has reasonable grounds to believe that the nature of the content is such that there is a material risk of the content having, or indirectly having, a significant adverse physical or psychological impact on an adult of ordinary sensibilities”.
How far will the Bill go in terms of what it deems as content?
The Bill defines content as “anything communicated by means of an internet service, whether publicly or privately, including written material or messages, oral communications, photographs, videos, visual images, music and data of any description”. Thus, the Bill is intended to apply both to content published publicly – for example, posts on a public social media account – and to content shared privately.
What are the sanctions for non-compliance?
Ofcom, the appointed independent regulator, will be given powers to fine companies up to £18 million or 10% of annual global turnover (whichever is higher) for non-compliance. Ofcom will also be afforded powers to seek court orders to disrupt the activities of non-compliant providers where it is deemed that individuals in the UK are at risk of significant harm. In addition, the draft Bill provides the government with deferred powers to introduce a new criminal offence for senior managers, if deemed necessary, in order to secure their compliance.
Potential issues and challenges
Examination of the first draft of the Bill suggests that its implementation is likely to give rise to a number of issues.
Because the Bill defines content as “anything communicated by means of an internet service, whether publicly or privately”, there may be a conflict with laws concerning individual privacy and freedom of expression. Although the Bill states that certain types of content will be exempt (such as paid adverts, individual live interactions, product reviews, and content from recognised news publishers), determining when it is permissible to access private conversations could be problematic and risks infringing those rights. The line between compliance with the Bill and respect for the right to freedom of expression is difficult to draw, and this is therefore likely to prove challenging.
Ministers have stated that they have added measures to the Bill to prevent a legal grey area with the potential to infringe upon individuals’ human rights, ensuring that “necessary online protections do not lead to unnecessary censorship”. However, the right to freedom of expression is a qualified right and can be restricted in certain circumstances. Restricting free speech to avoid the publication of illegal content is clearly justifiable, but it remains to be seen whether the application of the Bill in respect of “legal but harmful content” will lead to claims of rights being infringed.
Furthermore, criticism has emerged from the Financial Conduct Authority, the City of London Police and the Investment Association concerning the Bill’s omission of measures to govern online fraud perpetrated through organisations that fall within its remit. With the FT Adviser, the Cabinet Office and Detica reporting that investment frauds account for 25% of all financial losses – and that online financial fraud costs the UK approximately £3.1 billion per year – it is difficult to understand why such a prominent issue has been omitted.
The Bill will be subject to pre-legislative scrutiny by a joint committee of MPs prior to a final version being formally introduced to Parliament for debate later this year. The proceedings will be closely observed in the hope that they will provide further clarification on the extent of the application of the Bill and the way in which it will govern interactive tech platforms. Until then, although the objectives are clear, some questions remain on the detail.