On Monday, the Government released its "Online Harms White Paper," a joint proposal from the Department for Digital, Culture, Media and Sport, and the Home Office. While a good start, it is unclear whether the white paper adequately addresses the problem of harmful content on the Internet that it proposes to tackle.
The white paper's proposals
The white paper sets out proposals to tackle online content and activity that is deemed harmful to individual users. It applies to companies that allow the public to share or discover user-generated content or to interact with each other online (for example, social media platforms, search engines and public discussion forums). The white paper introduces a "duty of care" for such companies, to be policed by an independent regulator. Under the proposals, the regulator will produce codes of practice that these companies must adhere to. It can also require companies to prepare "transparency reports", and it will be responsible for education and awareness activities.
The white paper also proposes that this regulator should have a number of new, much more stringent, enforcement powers. On top of the powers already commonly used by regulators (such as issuing civil fines, serving notices on companies or publishing public notices), the regulator would also be able to: (1) disrupt business activities, by forcing third-party intermediaries such as search engines or social media services to help prevent access to sites that host harmful content (for example, by ordering them to turn off functionality being used to access harmful content on a linked platform); (2) order ISP blocking, which the white paper acknowledges should be an enforcement option of last resort, considered only where a company has committed serious and repeated violations of the outcome requirements for eliminating illegal or harmful content; and (3) impose senior management liability, involving personal liability for civil fines or potentially even criminal liability.
The consultation on the white paper opened on 8 April 2019 and will close in 12 weeks.
There is clearly a need for some form of regulation in this area. Increased dependency on the Internet and online services has brought a number of corresponding challenges, and illegal and harmful, or at the very least deceptive, content is easy to find online. Given people's growing mistrust of online services – particularly social media – additional safeguards are important. The white paper identifies the concerns well, but any affirmative action will need much more consideration than the white paper gives it.
The white paper does get some things right. It seems sensible to impose a duty of care on companies that facilitate user-generated content, particularly if this can increase the effectiveness of individuals' existing legal remedies. Its acknowledgement of the different types of harm that can result from use of online platforms is also a step in the right direction. On closer examination of the detail, however, some of the proposals seem not to go far enough while others likely go too far.
As the law currently stands, there are concerns around the enforcement of the most serious offences online. The current system relies on the platforms' own algorithms to identify bad content and on human beings to report it, and those algorithms are by no means infallible – note the hundreds of thousands of videos of the Christchurch terrorist attack that were uploaded to Facebook and never detected by its algorithms. The white paper suggests that more specific monitoring requirements will be introduced for tightly defined categories of illegal content, but it offers very little evidence to support this statement or to demonstrate the efficacy of such measures.
The paper rightly addresses material which is not illegal but is nonetheless harmful. Addressing this problem will require more thought on its public policy implications than the white paper gives it. The white paper uses the example of anti-vaccination political messaging published online that contains inaccurate information and poses a risk to public health. This concern is valid, and such messaging can cause serious harm, but would it serve the public interest and public discourse for all material of this type to be removed from platforms? There are obvious censorship concerns about how far this practice would go – would it also apply to conspiracy theories, for example? To government critics? It will be important for the government to be clear about the desired outcome and about the levels of precaution companies will be expected to take.
Of course, this is an initial proposal, so there is time to dive into the finer detail at a later date, but the paper also contains some wide gaps in how, practically, this is to be carried out. For instance, we do not yet know who the "independent regulator" will be, and this will be a significant point given the breadth of the job it will be set to do. The paper suggests that Ofcom could be an appropriate fit, given its experience regulating TV and radio, but it also suggests that a new regulatory body could be established. There is also the question of how long it will take to implement the proposed changes: it could take at least two years for these to be made into law. Given the pace at which the Internet develops and the evolving challenges that we face, will the framework still be relevant in 2021?
Ultimately, regulation of harmful content on the Internet seems appropriate, but it is a more complex task than the white paper makes it seem. The lack of certainty around how the proposals will actually be implemented – by which regulator and under which codes of practice – makes it hard to comment on how successful they could be. It will be interesting to see the responses submitted to the consultation and to read the government's response in July.
Thanks to Fieldfisher Trainee Rachel Bowley for authoring this article.