Introduction
The Online Safety Bill received Royal Assent on 26 October 2023, becoming the Online Safety Act (“the Act”). Its aim is to deliver on the government’s commitment to “make the UK the safest place to be online”, by imposing statutory duties on social media companies and search engines[1] (“service providers”) to ensure that they implement measures to protect individuals from illegal and harmful content online.
The Act is significant for sport in that it has the potential to reduce the online abuse that sports participants are subjected to by so-called ‘fans’. The severity of the issue is reflected in recent statistics tracking abuse against footballers on X (formerly known as Twitter), which found that, on average, an abusive message is sent every four minutes[2].
The online abuse received by footballers was brought into global focus following the UEFA European Championship Final in 2021, when Marcus Rashford, Bukayo Saka, and Jadon Sancho were racially abused online for failing to convert their respective penalty kicks. Whilst a number of individuals were convicted (and in many cases imprisoned) for online offences relating to the final[3], studies have shown that online abuse has nevertheless continued to rise. In the football season that followed, Kick It Out[4] recorded a 65% increase in reports of discriminatory behaviour, which included a staggering 279% increase in online abuse.
So will the Act fulfil its aim of making the UK the safest place to be online and, in turn, reduce online hate and discrimination in sport?
This article examines the scope of the Act and its potential impact on reducing online abuse in sport.
Overview and scope of the Act
The Act introduces a new regulatory framework which imposes duties on service providers to mitigate and manage the risks of illegal and harmful content on their platforms. Failure to adhere to those duties can lead to financial penalties and in some cases criminal prosecution.
Whilst existing legislation already enables action to be taken against individuals responsible for posting online abuse, the Act provides additional protection for victims by ensuring that service providers – who provide the platform on which the abuse takes place – are also held accountable.
Under the Act, service providers are required to have systems and processes in place to prevent the widespread publishing of illegal and hateful content. The Act places legal responsibilities on service providers to remove such content from their platforms once it has been reported to them.
Ofcom has been appointed as the regulator to enforce the Act and will have the power to fine service providers up to £18m, or 10% of their global revenue (whichever is greater), for non-compliance; a provider with global revenue of £1bn could therefore face a fine of up to £100m. The remit of Ofcom as regulator is limited to ensuring that service providers have adequate systems in place to protect users; its role is not to determine the nature of any content which might be in breach of the Act, and it does not have the power to instruct service providers to remove content.
The Act has extraterritorial application: its provisions also apply to service providers based outside the UK where the service has a significant number of UK users (or UK users form a target market), or where the service is capable of being used in the UK and there are reasonable grounds to believe that there is a material risk of harm to individuals in the UK.
One of the more punitive aspects of the Act is that both service providers and their senior managers can be held criminally liable for a failure to comply with specific requirements of the Act[5].
The potential impact of the Act on online abuse in sport
The Act is designed to deal with illegal content and activity. ‘Content’ for the purposes of the Act includes abuse which targets race, religion, gender, sexual orientation, disability, or gender reassignment, or which incites hatred against anyone who holds one or more of those protected characteristics. Notably, this definition does not extend to the more widespread, general abuse that many sports participants regularly suffer online.
The Bill had contemplated a new offence of ‘harmful communications’, which would have been made out where (i) there was a ‘real and substantial risk’ that a message would cause harm to an audience; (ii) the sender intended to cause harm; and (iii) the sender had no reasonable excuse for sending the message. However, the offence did not make its way into the Act following concerns that it would have unintended consequences for freedom of expression and legitimate speech, and would create an inconsistency between what was lawful online and what was lawful offline.
The Act does, however, make specific provision for ‘threatening communications’. A person commits this offence where they send a message that conveys a threat of death or serious harm and, at the time of sending, either intended the recipient to fear that the threat would be carried out or was reckless as to whether the recipient would fear the same. This means that the unfortunate experiences of individuals such as Wayne Barnes, the Rugby World Cup final referee who received death threats after officiating the final, should be covered by the Act[6].
However, it is important to note that individual instances of abuse are not the focus of the Act, and therefore the presence of harmful content alone will not be decisive in determining whether a service provider is in breach.
Clean bill of health?
The Act is certainly a step in the right direction in the pursuit of reducing online abuse in sport. Following its introduction, the footballing authorities released a joint statement describing it as “a significant moment for those who participate in the game [of football][7]”.
However, the ever-changing nature of the online space and continuous developments in technology make regulating online activity particularly challenging. It is also questionable whether Ofcom will have sufficient resources to regulate service providers of the size and scale of X and Meta. Moreover, Ofcom’s regulatory role has obvious limitations: it cannot remove content or deal with individual instances of hate. Ultimately, in tackling what is a societal issue, it must be accepted that the sporting and police authorities retain a continuing responsibility to do their utmost to curtail the disturbing rise in hate crime.
It therefore remains to be seen whether this legislative framework will in fact reduce online hate and abuse in sport, and whether it will remain effective and fit for purpose in an ever-evolving online landscape.
[1] The Act regulates user-to-user services and search services which include but are not limited to social media companies and search engines. The companies in scope are defined in section 3 of the Act.
[2] The Alan Turing Institute – Tracking abuse on Twitter against football players in the 2021–22 Premier League Season
[3] Under the Malicious Communications Act 1988 and the Communications Act 2003
[4] Kick It Out is a charitable organisation which promotes equality, diversity and inclusion in English football
[5] Section 109 of the Act
[6] Section 181 of the Act
[7] Premier League News – Joint Statement on the Royal Assent of the Online Safety Act