Abuse of sports players, athletes and officials is a widespread and growing issue, affecting all levels of sport from grassroots competitions to major professional events.
Persistent exposure to personally offensive content can damage a sportsperson's mental and physical health and ultimately their performance, which for professional players or athletes can be devastating for their career and livelihood.
While criticism of players' performances comes with the territory of engaging in sport, particularly at professional level, generally the line between criticism and abuse is felt to be crossed when comments on performance spill over into personal insults – including racist and homophobic statements – and/or unrelenting denigration.
A controversial and much criticised aspect of some social media algorithms is that abusive posts tend to get more traction than positive content and are promoted to more users. This then attracts more interaction, boosting their visibility to the target of the abuse and other platform users.
Some professional clubs have tried to protect players by deploying AI technology – dubbed "hate filters" – to sift abusive social media posts, preventing players from seeing offensive content.
However, when deployed in isolation, these hate filters are at a considerable disadvantage against the masses of potentially abusive social media users who can use aliases, set up multiple accounts or even use chatbots to target their abuse and evade detection.
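The article does not describe how these filters work internally, but the evasion problem is easy to illustrate with a minimal, hypothetical sketch: a naive keyword blocklist (real "hate filters" use trained machine-learning classifiers rather than static word lists, so this is an assumption for illustration only), which trivial character substitution defeats.

```python
import re

# Hypothetical blocklist for illustration; not drawn from any real system.
BLOCKLIST = {"idiot", "useless"}

def is_abusive(post: str) -> bool:
    """Flag a post if any blocklisted word appears as a whole word."""
    words = re.findall(r"[a-z]+", post.lower())
    return any(word in BLOCKLIST for word in words)

print(is_abusive("You are a useless player"))   # flagged
print(is_abusive("You are a u5eless player"))   # same insult, evades the list
```

The second call shows why static matching alone is inadequate: swapping a single character defeats it, which is one reason the experts cited below favour combining detection with offline action against the people behind the accounts.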
According to online safety experts, the efficacy of this technology is maximised by using it simultaneously to screen intended targets from online hatred and to take action against those spreading it, both online and offline. It can also be a useful tool for education and deterrence.
Treated as a form of intelligence, online hate detection allows clubs to take action against persistent or egregious abusers, with a force-multiplier effect where groups of clubs or a whole league agree a syndicated approach, both before and after engaging law enforcement.
Using sophisticated technology and investigative techniques, it is possible to unmask the real-world identities of anonymous abusers, enabling clubs and governing bodies to act.
Equally, this technology can be used to identify and support positive voices online, re-balancing toxic one-way abuse with healthier community interactions.
Recognition of the seriousness of the problem of online abuse of professional sports players came in the wake of the relentless social media attacks on the England football team following their loss in the final of the European Championships in July 2021.
In response, the UK's Police, Crime, Sentencing and Courts Bill (now the 2022 Act) was amended during its passage to significantly expand the reach of football banning orders.
Since the legislation was amended, there has been at least one notable, high-profile case of an abuser being identified, tried and sentenced for online abuse of Premier League player Ivan Toney.
However, the legislation has only focused on extending the scope of banning orders in football, leaving other sports and sports players without more dedicated legal protection beyond general statutes, such as the Malicious Communications Act 1988, the Protection from Harassment Act 1997 and the Communications Act 2003.
Another criticism levelled at the criminalisation approach is that it is slow, cumbersome and expensive – and the number of people convicted to date is negligible compared to the scale of the problem.
Whose problem is it?
One of the barriers to effectively tackling online abuse in sport is the failure to agree who should be responsible for regulating this area and taking action when abuse occurs.
Sports clubs, regulators and the police have all been criticised by the public, politicians and the media for not doing enough to protect individual sports players, while players themselves are sometimes urged to "switch off" their social media to reduce their exposure to online abuse.
For professional players, simply shutting down their social media accounts is easier said than done, especially when their clubs have a vested interest in promoting their players' online profiles to boost brand recognition.
If these online accounts become the target of abuse, this creates a tricky situation for clubs who have to balance their welfare obligations to their players against their commercial objectives.
The UK's much-trailed Online Safety Bill – which would put the communications regulator Ofcom in charge of a new regulatory regime addressing illegal and harmful online content, and make social media platforms accountable for the content they host – has been touted as one solution to controlling the problem.
The Online Safety Bill will require platforms to conduct risk assessments to mitigate and manage the risks of harm to individual users, including abusive content directed at sports players.
The Bill will also require all big social platforms to offer the option of identity verification for adult users, to assist in minimising the anonymous trolling environment that currently exists on many platforms.
Finally, the Bill creates a new threatening communications offence, which will allow police to prosecute anyone who sends a message online threatening death or serious harm to another person. The Bill is currently in its final stages and is expected to enter into force before the end of this Parliamentary term.
But even as calls mount for a stiffer regulatory approach to monitoring online abuse, it is generally acknowledged that a tougher stance would require the assistance of law enforcement to be effective.
This raises questions about whether the police are sufficiently resourced to respond to this type of activity, and what measures – which could range from confiscation of devices to arrests and prosecutions – would be appropriate and proportionate policing tactics in situations of suspected abuse.
An added enforcement obstacle arises when abusers are not sports fans, but rather individuals with no connection to the sport or club, who may be located outside the UK and/or directing abuse via chatbots.
Sports bodies that are determined to root out elusive abusers have the option of hiring professional investigation teams, who work with regulatory specialists to identify culprits and devise legal solutions for clubs and organisations seeking to build effective defences against abuse.
This can be particularly effective because it enables sports professionals or bodies to tackle the problem holistically; investigators can assist in removing abusers from sporting environments in the short term, while regulatory professionals can also assess the efficacy of safeguards and suggest governance improvements to better protect players, staff and other supporters from social media abuse.
At present, there are no concrete answers to how to manage the problem of online abuse in general and there is a tacit acceptance that no solution will be completely watertight.
However, with its high profile, significant financial resources and vested interests, the professional sports sector could seize the initiative in tackling the issue and may prove to be a positive test case for new regulatory, law enforcement and practical action.
This article was authored by Dispute Resolution Associate Louis Muncey and Public Regulatory Director Frankie Everitt at Fieldfisher and Jonny Harman, Senior Director, Forensic Investigations and Intelligence at Kroll.