Crossing the Line – How to protect against online abuse
The RWC Final at the Stade de France on 28 October was perhaps not the final many rugby fans wanted. Depending a little, perhaps, on who you were supporting, it was marred by the yellow and subsequent red card shown to New Zealand captain Sam Cane, leaving the All Blacks to play the remainder of a bruising final with only 14 men. His offence was a shoulder-led tackle to the head of South Africa’s Jesse Kriel in the 29th minute, and he was subsequently suspended for three games.
In the wake of the final, Wayne Barnes, who was the on-pitch referee, has been the target of threats and ‘vile abuse’ from social media ‘trolls’, directed at both him and his family. Barnes told the BBC:
“When people make threats against your wife and kids, they should be held to account and punished. Threats of sexual violence, threats of saying we know where you live. It crosses that line… social media is getting worse and it’s the sad thing about the sport at the moment. It has not been a one-off.”
He called for prosecuting authorities to act and for legislation to be introduced to require action by internet providers.
This kind of abuse is sadly commonplace in sport at all levels and is beyond unacceptable. However, whilst abusing referees and players in stadiums and sports grounds can lead to substantial bans, arrests and prosecutions, the vindictive keyboard warrior, taking advantage of the apparent anonymity of social media, is more difficult to deal with.
There are a number of legal approaches that can be taken to prevent or remove online abuse and, in some cases, punish the offender, although each has its own challenges.
Abuse and threats may well amount to criminal offences. Prosecutions can be brought for threats to kill, under the Protection from Harassment Act 1997, and under the Malicious Communications Act 1988 or the Communications Act 2003. Alongside their sentence, convicted defendants can be made the subject of Criminal Behaviour Orders and Restraining Orders designed to prevent the abuse continuing.
The considerable advantages of reporting the conduct to the police are therefore that the abuser faces the threat of a custodial sentence and of punitive action if the abuse continues, and that the process is all but cost neutral for the complainant.
Against that, the police and the CPS must be prepared to investigate and prosecute, and how any such prosecution is progressed is outside the complainant’s control. Further, as with the self-help remedies discussed below, the offender may be difficult to identify and there may be multiple possible defendants.
Private prosecutions will give the complainant more control, but they can be expensive and time consuming.
Whilst defamation proceedings may initially seem to be an attractive way of countering online abuse, they are complicated, expensive and can take considerable time. A cheaper and often more effective ‘self-help’ remedy is an application for a civil injunction under the Protection from Harassment Act 1997.
Harassment is defined under the 1997 Act as a course of conduct, on two or more occasions, which amounts to harassment and which the person undertaking it knows or ought to know amounts to harassment. An action can be brought under section 3 or section 3A of the Act. Where the harassment is of two or more persons, section 3A permits any ‘person’ whose behaviour the harassment is intended to influence to bring proceedings. This permits clubs, teams and associations to pursue an action under the 1997 Act on behalf of individuals where multiple players or officials are being abused.
Whilst damages can be claimed under the 1997 Act, most claims are issued to obtain injunctive relief only and, hopefully, to bring the abusive behaviour to an end. Applications can be issued and injunctions obtained without notice; return dates are often listed quickly and in many cases are not attended by the defendant(s). Breach of the terms of an injunction can lead to an application for contempt of court, punishable by up to two years’ imprisonment in the most serious cases.
More simply still, a threat of physical violence amounts to an assault, and as such an injunction can be obtained in an action in tort. The behaviour does not need to be repeated: a single threat is enough for an action to be commenced.
The difficulties with self-help remedies are that the complainant needs to be able to identify the online abuser and, where the abuse comes from multiple sources, it can become impractical and financially prohibitive to injunct all those involved. Further, consideration must be given to the fundamental right to freedom of expression protected by Article 10 of the ECHR, and the court will refuse to injunct defendants where to do so would infringe that right.
Take Down Notices
Under regulation 19 of the Electronic Commerce (EC Directive) Regulations 2002, internet providers are protected from liability and criminal sanction where they are unaware of unlawful activity, such as harassment, being undertaken on their platform. However, this protection ceases to apply once the provider becomes, or is made, aware of that activity.
This has led to the use of ‘take down’ notices, whereby the victim of online defamation, threats or abuse serves a notice on the internet/website provider requiring it to act expeditiously to remove the offending material or risk being a defendant in any subsequent proceedings. The relevant regulation reads as follows:
19. Where an information society service is provided which consists of the storage of information provided by a recipient of the service, the service provider (if he otherwise would) shall not be liable for damages or for any other pecuniary remedy or for any criminal sanction as a result of that storage where—
(a) the service provider—
(i) does not have actual knowledge of unlawful activity or information and, where a claim for damages is made, is not aware of facts or circumstances from which it would have been apparent to the service provider that the activity or information was unlawful; or
(ii) upon obtaining such knowledge or awareness, acts expeditiously to remove or to disable access to the information, and
(b) the recipient of the service was not acting under the authority or the control of the service provider.
The notice need not identify the abuser, but rather the post which is asserted to be unlawful; the responsibility then rests with the provider to remove that post or face the possibility of proceedings. That said, the relief given by such action may only be temporary, as the abuser is not prevented from simply repeating the conduct, requiring further notices.
Online Safety Act 2023
Wayne Barnes, in calling for further legislation, could have been referring to the Online Safety Act 2023, which received Royal Assent last month. Part of the purpose of the legislation is not only to place responsibility on internet providers for removing abusive and threatening posts, but to require them to take positive action to prevent such posts appearing in the first place.
The 2023 Act creates new criminal offences, including false communications (section 179) and threatening communications (section 181), but, as indicated, the real change is regulatory: legal requirements are placed on internet service and search engine providers to, amongst other matters, undertake suitable and sufficient risk assessments in respect of ‘priority illegal content’, which includes posts amounting to offences under the 1997 Act and the Public Order Act 1986.
Following such a risk assessment there is an obligation to take or use proportionate measures relating to the design or operation of the service to:
- prevent individuals from encountering priority illegal content by means of the service,
- effectively mitigate and manage the risk of the service being used for the commission or facilitation of a priority offence, as identified in the most recent illegal content risk assessment of the service, and
- effectively mitigate and manage the risks of harm to individuals, as identified in the most recent illegal content risk assessment of the service.
Further, these duties include the provider having processes designed to:
- minimise the length of time for which any priority illegal content is present;
- where the provider is alerted by a person to the presence of any illegal content, or becomes aware of it in any other way, swiftly take down such content.
Therefore the 2023 Act provides a further proactive remedy for the victims of online abuse: they can seek the removal of abusive, harassing and threatening content simply by alerting the provider to its existence.
Providers therefore need to facilitate what is termed a ‘triple shield’: remove all illegal content, remove content that is banned by their own terms and conditions, and allow adult internet users to tailor the type of content they see so that they can avoid potentially harmful content.
Ofcom will regulate the providers and will have powers to enforce compliance, impose fines of up to £18 million or 10% of global annual turnover (whichever is higher), and apply to the courts for ‘business disruption measures’ such as blocking non-compliant services. More worryingly for individuals at a senior level within such providers, Ofcom can bring criminal sanctions against those who obstruct its investigatory powers.
The 2023 Act has come in for significant criticism regarding its potential effect upon the right to freedom of expression enshrined in Article 10. Nevertheless, if applied robustly, it will place the onus on internet providers to proactively safeguard the victims and potential victims of online abuse and enable them to require the removal of the offending material. How successful it can be in preventing or limiting the barrage of abuse and threats, such as that suffered by Wayne Barnes and his family, is yet to be seen.
(The safety duties set out above are contained in section 10 of the 2023 Act.)