New proposed harmful communication offences and personal liability for online safety breaches


On 14 December 2021, the Joint Committee on the draft Online Safety Bill published its landmark 193-page report setting out its recommendations for improving the draft Online Safety Bill. The Committee was appointed to conduct pre-legislative scrutiny of the Government's draft Bill, which was originally published in May 2021 and is intended to establish a new regulatory framework to tackle harmful content online.

In considering the draft Bill, the Committee heard evidence from a range of sources, including victims of online harms, a Nobel Peace Prize-winning journalist, academics, experts, big tech companies, Ofcom, government departments, and Facebook whistle-blower Frances Haugen. It also received hundreds of pages of written evidence.
 
This blog discusses some of the Committee's key recommendations, including the imposition of new criminal offences and personal liability for tech executives who fail to ensure their companies' compliance with the requirements of the Bill.
 
New proposed harmful communication offences

The Committee considers that criminal law should be the starting point for the regulation of potentially harmful online activity, and it criticises clause 11 of the draft Bill, which delegates the ability to define harmful content to online service providers. In light of this, the Committee recommends that the Government adopt the Law Commission's proposed communications and hate crime offences in conjunction with the Online Safety Bill. It states that this will significantly enhance the protections provided by the Bill and ensure greater certainty about what content is targeted. The proposal involves modernising the existing communications offences contained in section 127(1) of the Communications Act 2003 (“CA 2003”) and the Malicious Communications Act 1988 (“MCA 1988”) as follows:

  1. A new “harm-based” communications offence. Currently, section 127(1) of the CA 2003 criminalises the sending of a message which is “grossly offensive or of an indecent, obscene or menacing character”. The new harm-based communications offence aims to shift the focus away from assessing the content of the communication and towards its consequences and potentially harmful effects. This ensures that harmful communications do not escape criminal sanction simply because they do not fit within one of the current proscribed categories, while communications that lack the potential for harm are not criminalised.
  2. A new offence of encouraging or assisting serious self-harm. This offence sets a high threshold of harm to target serious cases of encouraging or assisting self-harm without unduly criminalising vulnerable people who may share such content. Vulnerable people are safeguarded by the requirement for an intent to inflict or encourage grievous bodily harm, and by the need to obtain the consent of the Director of Public Prosecutions to prosecute.
  3. A new offence of cyberflashing. The proposal would amend the Sexual Offences Act 2003 to include a specific offence targeting the unsolicited sending of sexual images using digital technology. The current offence criminalises sexual exposure but is not specific in relation to technology.
  4. New offences of knowingly sending false communications, sending threatening communications, and making hoax calls to the emergency services. These would replace section 127(2) of the CA 2003 with specific offences to ensure clarity.

In light of these proposed offences, the Committee notes that the Government will need to commit to providing the police and courts with adequate resources to tackle illegal content online. It also recommends that Ofcom be required to issue a binding Code of Practice to assist service providers in identifying, reporting, and acting on illegal content.
 
Following the Committee's report, on 4 February 2022 the Government announced an extended list of illegal content that tech firms must remove from their platforms, in order to strengthen the Bill. The updated priority content list includes hate crimes, the online sale of illegal drugs and weapons, people smuggling, sexual exploitation, revenge porn, fraud, and content promoting or facilitating suicide. New criminal offences will also be added to the Bill to tackle domestic violence and threats to rape and kill.
 
Personal liability for safety controllers

The Committee also supports the proposed requirement for big tech companies to appoint "safety controllers", who will be personally liable for an organisation's failure to comply with the Bill where there is clear evidence of repeated systemic failings that result in a significant risk of serious harm to end users. Personal liability is intended to be a proportionate last resort for Ofcom. The proposed sanctions relate to failure to comply with an information notice; deliberately or recklessly providing or publishing false information; and providing or publishing encrypted information with the intention of preventing Ofcom from understanding it.

The Committee notes that sanctions against safety controllers will demonstrate the seriousness with which the Government is taking the task of holding tech executives to account. The role of safety controllers aims to ensure enhanced compliance and cooperation by tech companies with Ofcom's regulatory requests, thus ensuring transparency and oversight. Notably, personal liability does not extend to subsequent harms caused by the design of an online platform; liability for such harms remains with the tech companies themselves.

The seriousness with which the Government is taking online safety is further demonstrated by the shortened timeframe within which criminal liability will come into force. Originally, criminal liability was proposed to come into force after two years; the Committee supports a reduced transition period of three to six months.

Concluding comments

The combination of new criminal offences and personal liability for safety controllers would strengthen the draft Online Safety Bill and improve the ability to identify, mitigate, and prevent online harms. Both recommendations ultimately aim to place the regulation of online service providers in Ofcom's hands rather than allowing big tech companies to operate in an online 'wild west'. The proposed sanctions aim to ensure clarity about the behaviour expected of both tech companies and end users. However, that clarity may be undermined by the Government's addition of further offences, which risks the Bill attempting to cover too much.

Affected businesses will need to consider:

  1. Whether content on their platforms and websites falls within the categories of "illegal" or "harmful" content;
  2. Which tools will be most helpful in identifying such content at scale (including, for example, Artificial Intelligence); and
  3. How those tools will adapt to the nuances of identifying harmful and illegal content in circumstances where it may not be straightforward to do so.

Practically, it remains to be seen whether Ofcom and the police will be adequately resourced to enforce these new criminal offences effectively and hold safety controllers accountable. There are also concerns about whether smaller regulated entities will be able to bear the costs of implementation. Nevertheless, as the Joint Committee states, "a safer internet is possible and this Bill is a major step towards achieving it".

With special thanks to Trainee Solicitor, Ella Thornton, co-author of this article.
 
