
FIFA World Cup 2022 and the future of online harms on social media


Racist abuse following last week's UEFA Euro 2020 Final, and the targeting of England players Marcus Rashford, Bukayo Saka and Jadon Sancho, has upset and angered the UK public.
 
As the UK Government calls on social media companies to do more to tackle racist online abuse, we consider the future of regulation in the online harms and online safety space.
 
Between now and next year's 2022 FIFA World Cup in Qatar, there should be a number of developments in the regulation of social media in the UK and Europe.
 
If the same racist abuse that has been making headlines in the past week were to follow the 2022 World Cup, what recourse would individuals and regulators have?
 
All being well, there will be progress on three fronts:
 

Online Safety Bill

 
In the UK, the Government intends to 'lead the way in ensuring internet safety for all' by introducing the Online Safety Bill.
 
The UK Government published the draft Bill on 12 May 2021 and the Bill is due to be subject to pre-legislative scrutiny by a joint committee of MPs before a final version is introduced to Parliament in the autumn. The final legislation is unlikely to enter into force until at least 2023. But, by next November, and as matches begin in Qatar, we should have a much clearer idea of the wording and scope of the Bill.
 
In the meantime, the Bill has come under continued scrutiny for its failure to go far enough in relation to racial abuse online.  
 
As currently drafted, the Bill proposes to impose duties on social media companies to carry out risk assessments in relation to illegal and harmful content on their platforms. Social media companies will be required to take proportionate steps to reduce and manage the risk of harm to individuals. They will also need to have regard to the importance of protecting users’ rights to freedom of expression and protecting users from unwarranted infringements of privacy when deciding on and implementing safety policies. There will be legal obligations for social media companies to have systems in place to report illegal content.
 
Where a social media company fails to live up to these duties, Ofcom (as the independent regulator) will be able to impose fines of up to £18 million or 10% of the company's annual global revenue, whichever is higher. Ofcom will also have new powers to conduct investigations and to issue notices seeking information from social media companies about the processes they have in place to protect users from online harm. In turn, Ofcom will be required to produce codes of practice and guidance to assist businesses in complying with their online safety obligations.
 

Digital Services Act

 
On 15 December 2020, the European Commission released its highly anticipated proposal for a Digital Services Act ("DSA"), ticking one more box off its 2020 to-do list for a "Europe fit for the digital age". Whilst the DSA is still at the proposal stage (a process likely to run until the middle of next year), it is hoped that the DSA will improve content moderation on social media platforms and address growing concerns about the posting of illegal content.
 
Under the proposed DSA, all providers of intermediary services would be subject to due diligence obligations regarding illegal content. The scope of these obligations varies depending on the category of intermediary in question; they include requirements to:
 
  • put in place notice and action mechanisms to allow third parties to notify the presence of alleged illegal content;
  • set up an internal complaint-handling system for decisions taken in respect of abusive notices; and
  • in the case of very large online platforms, conduct risk assessments of the systemic risks arising from the use of their services, conduct mandatory external audits on an annual basis, appoint one or more compliance officers, and provide competent authorities with access to certain data, among other measures.
 
The DSA goes further than the e-Commerce Directive (the legislation it aims to build upon) in that it proposes to introduce and define the concept of 'illegal content' as any information that does not comply with Union law or the law of a Member State. This could cover information that is illegal by its nature, such as illegal hate speech or terrorist content, as well as information relating to illegal activities, such as the sharing of images depicting child sexual abuse, the sharing of revenge porn, or the use of content that infringes IP rights.
 
That said, and like the Online Safety Bill, the DSA has faced criticism for failing to go far enough to combat racist abuse and other online trolling. The DSA, for example, does not contain any provisions on "harmful content" (e.g. bullying and fake news), an omission which could make it out of date very quickly, if it is not already.
 

The petition and draft bill on ID verification

 
Finally, following the deluge of racist abuse directed at some footballers' social media accounts after last week's Final, there have been renewed calls for mandatory ID verification as a condition of opening a social media account.
 
Indeed, more than half a million people have signed a petition to make verified ID a requirement for opening a social media account, comfortably exceeding the threshold for the petition to be considered for debate in the UK Parliament. The rationale is to prevent anonymised harmful activity and to provide traceability if an offence occurs. Clearly, if the petition is ever successful and results in new legislation being passed, this would give rise to important data protection considerations.
 
The petition has itself been criticised as short-sighted. Some critics argue that merely publishing the identities of users who post harmful content will not deter certain individuals from doing so, and does not address the fact that harmful views and opinions exist. Other critics have focused on the fact that anonymity remains important in some cases, for example for victims of domestic violence.

Despite criticism of the measures proposed to tackle online harms, one thing is clear: the events following last week's Final serve as yet another reminder that the regulation of technology is an issue of increasing importance, and one that will need constant review and evolution to address the challenges of the society in which we live.
 
