
Online Safety Act – what you need to know


Now is the time to help shape how Ofcom will enforce the Act

With the passing of the Online Safety Act 2023, this blog sets out what service providers can expect from Ofcom as the new regulator for online safety in the UK. We review Ofcom's recent report on content moderation in user-to-user services, and set out the timeframes for engaging with Ofcom's upcoming consultations on guidance and Codes of Practice under the new legislation.

Businesses will soon be able to get involved in Ofcom's consultations, helping shape how Ofcom will enforce the Act. Once in force, the Act will impose duties on providers of various online services to prevent harm to users. Service providers may be obliged to step up content moderation efforts and change how they publish content moderation data amidst a push for greater transparency.

With the Online Safety Act (OSA) having recently completed its passage through Parliament and (at the time of writing) about to receive Royal Assent, here's what companies need to know about what Ofcom has been doing, and what to expect now that the long-awaited Act is finally becoming law.

Under the OSA, Ofcom is the regulator for online safety in the UK. Ofcom has a statutory duty to adequately protect citizens from harm presented by content on regulated services.

Secondary legislation will set out the precise conditions for how some regulated services will be categorised. Categorised services will be subject to stricter duties. Category 1 will comprise the largest and highest risk user-to-user services. Categories 2A and 2B will comprise search services of a certain size, and user-to-user services with lower risk and smaller user bases than Category 1, respectively. Ofcom is required to produce a register of services that meet the criteria for each category.

What is a regulated service?

Although there are some exceptions and complexities, generally the following services with a UK userbase are in scope:

  • Internet services where content can be generated, uploaded, or shared by users to the service and is encountered by other users of the same service (user-to-user service)
  • Internet services that are, or include, a search engine enabling people to search more than one website or database (search service)
  • Internet services where pornographic content is published or displayed on the service (regulated provider pornographic content)

Throughout the Bill's progress through Parliament, Ofcom has published guidance on how it intends to approach regulation. More recently, Ofcom has reported on content moderation policies, decision making, and performance measuring for user-to-user services. These publications give some insight into how Ofcom will work with service providers once the Online Safety Act is in force.

Ofcom's first steps towards its new role

Ofcom's (now out of date) Roadmap to Regulation – and its June 2023 update – set out how it plans to implement the OSA. At an early stage, they make it clear that requirements will differ for different types of service, and that the duties imposed on service providers will be limited to what is proportionate and technically feasible. We expect Ofcom will publish a further update to its roadmap in the coming weeks now that the Act has passed into law.

Ofcom anticipates that its powers will come into effect two months from Royal Assent, in November 2023. It will then publish draft guidance and Codes of Practice – consulting on each – in stages. The Codes identify how services can comply with their duties. While they are not mandatory, and alternative approaches can be taken, they will offer the clearest route to compliance with the OSA.

After commencement, Ofcom plans to consult and publish codes and guidance in three phases:

Phase one – illegal harms – shortly after commencement

  • Draft Codes on illegal content harms (including child sexual exploitation and abuse, and terrorist content)
  • Sector risk assessment for illegal content harms, with risk profiles for groups of firms and possible risks associated with services
  • Draft guidance on illegal content risk assessment
  • Draft enforcement guidelines
  • Draft guidance on record keeping and review

Phase two – child safety and pornography – within six months from commencement

  • Draft guidance on children's access assessments
  • Draft guidance on age assurance
  • Draft Codes of Practice relating to protection of children
  • Draft risk assessment guidance on children's harms

Phase three – transparency, user empowerment and other duties on categorised services – various stages

  • Draft guidance on transparency reporting – between Royal Assent and finalisation of Ofcom's register of categorised services

Ofcom will then consult on:

  • User empowerment tools
  • Operating consistently within terms of service
  • Protecting certain types of journalistic content
  • Preventing fraudulent advertising

Additionally, Ofcom will engage with the highest risk services directly. These initial stages provide ample opportunity for firms falling within the scope of the OSA to participate in consultations and ensure guidance and Codes of Practice are realistic, proportionate and technically feasible for service providers.

Regulatory approach to content moderation

While draft Codes and guidelines are yet to be published, Ofcom's report on content moderation in user-to-user online services, published on 14 September 2023, gives an insight into how they may eventually look.

The report outlines a simplified content moderation system, with a focus on social media and similar services, including policy setting and enforcement considerations. It broadly identifies two categories of content policies: standards for content that define what violates a platform's Terms of Service (ToS), and policies directed at non-violating but problematic content.

Content standards are taken to be directed at illegal, harmful or disruptive content. These standards may be more clearly defined, with the typical outcome for violation being removal of that content.

Policies for problematic content are more nebulous, with content subject to these policies including misinformation and conspiracy theories. Breach of these policies may result in interventions that fall short of removal, such as demotion or elements of friction to make content harder to discover.

Ofcom envisages that platforms will enforce their content moderation rules using some or all of the following: detailed moderation guidelines; the training and use of human staff; and the training and use of automated systems. For automated systems, Ofcom notes the important role of shared industry databases in detecting illegal content such as Child Sexual Abuse Material (CSAM) and terrorist content.

Ofcom acknowledges that content moderation is intended to limit rather than entirely prevent harmful content from being accessed. Furthermore, it notes the various balancing acts that services must contend with, such as:

  • Whether to leave content live until a moderation decision is reached, or alternatively to remove content as soon as possible, even while it is pending review
  • Prioritising review of content depending on its likelihood of being harmful, the level of harm it might entail, or the visibility of the content itself
  • How narrowly to define certain types of content as violating ToS

The report also considers that alternative processes will often be necessary. Ofcom notes that services will differ in when automated checks on content take place, the role of communities and volunteers in moderation, the constraints facing smaller services, and how direct messaging requires a different approach from content intended for broad consumption. Employing specialised human reviewers is acknowledged as an effective but expensive tool in content moderation.

Transparency

Ofcom's content moderation report questions how best to measure any reduction in harm achieved by moderation. While firms publish a variety of statistics, each may consider its own particular approach the most insightful. The report acknowledges the value of published data, and of discussion around approaches to measurement, but highlights calls for more granular data to be made available in order to understand the potential harm of certain content. Ofcom is particularly focused on how exposure to content (and the potential harm of that content) differs across demographics, protected characteristics, location, types of violation, and type of service functionality. Ofcom also suggests it may be worthwhile for service providers to measure content that is non-violating but subject to measures that reduce its visibility.

What this means for service providers

Over the coming months, service providers within the scope of the OSA will have the opportunity to engage with Ofcom's consultations on the Codes of Practice and guidance that will shape how Ofcom approaches its new regulatory duties. The enactment of the OSA and the subsequent consultations mean that firms should consider now what they want and need to see from the new Codes of Practice and guidance.

Ofcom's reporting on content moderation suggests it is mindful of the difficulties involved – particularly for small providers and those operating at large scales – and the Codes may well be drafted with these challenges in mind. For firms that fall within scope of the OSA's transparency duties, the regulator may expect more granular data on how different groups are exposed to harmful content, along with data on how content that is restricted – but not removed – is consumed by users.

Drafted with assistance from Jonathan Comfort, Trainee.
