
As a teenager, I was a victim of online ‘revenge porn’. Here’s how Britain should protect people like me | Isabel Brooks


Reports of non-consensual intimate image (NCII) abuse, or “revenge porn”, in the UK have increased tenfold over the past few years. But the true scale of the problem is probably larger, as many victims do not come forward.

I didn’t. When I was 17, my Instagram account was hacked by someone I knew, and a graphic image of my body was posted on my profile. Despite being up for only five minutes, it was screenshotted and shared around by boys in my year. The worst part was the powerlessness I felt. I had to seek out these people and beg them to remove the screenshots, and when they didn’t, I went to my school to ask them to intervene. Going to the police felt like it would escalate the situation when I just wanted everyone to forget it. My experience, like those of so many others, never showed up in the official statistics.

Recently there has been a rise in legislation meant to combat NCII in the UK. The Online Safety Act 2023 has made it easier to punish individuals by removing the difficult hurdle of proving “intent to cause distress”. It places some liability on platforms for user safety, and now covers “deepfakes” too. Given my experience, I’m glad the government is taking steps, but they have come too slowly and too piecemeal. It was only in 2015 that NCII abuse became illegal in the UK (in many countries, it still isn’t). Since then, technology, regulation, legislation and reporting systems haven’t kept up with the pace and intricacy of online harms.

By 2016, the ability to share images across large social platforms had been around for years, so I can’t help but think other avenues should have been available to me. I wish the screenshot had been blocked, but instead it was passed around on Facebook, which, five years later in 2021, partnered with StopNCII – a technology that allows victims to create a hash of their content (a kind of digital fingerprint) and share that hash with other platforms so the content can be rapidly blocked.
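For readers curious about the mechanics, here is a minimal sketch, in Python, of how hash-based blocking works in principle. It is not StopNCII’s actual implementation: the function names and in-memory blocklist are invented for illustration, and a plain SHA-256 digest stands in for the perceptual hashing a real system would use so that resized or re-encoded copies still match. The key point is that only the fingerprint leaves the victim’s device, never the image.

```python
import hashlib
from pathlib import Path

# Hypothetical shared blocklist of hashes, contributed by victims and
# consulted by participating platforms. In a real deployment this would be
# a service that platforms query, not an in-memory set.
shared_blocklist: set[str] = set()

def fingerprint(image_path: str) -> str:
    """Compute a hex digest of the image file's bytes (the 'hash')."""
    data = Path(image_path).read_bytes()
    return hashlib.sha256(data).hexdigest()

def register_image(image_path: str) -> str:
    """Victim generates the hash locally; only the hash is shared onwards."""
    digest = fingerprint(image_path)
    shared_blocklist.add(digest)
    return digest

def should_block_upload(image_path: str) -> bool:
    """A participating platform checks each new upload against the shared hashes."""
    return fingerprint(image_path) in shared_blocklist
```

Because an exact digest changes if an image is even slightly altered, real systems rely on perceptual hashes that survive cropping and compression – which is also why partnership across as many platforms as possible matters.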

Back then, I couldn’t do this. I assume the screenshot is still on people’s phones, buried in old chats, perhaps posted on some shady corner of the internet. Years later, I still feel ashamed: if I ever go viral, work in politics or education, or come into the public eye, this image could resurface and cause me continued grief. But I’m also frustrated – not with the individuals so much as with the environment that allowed it to happen.

The Online Safety Act is a good step forward, but it is not enough. As it stands, it is the sharing of NCII that is illegal, not the images themselves. This is blamed on the complexity of establishing consent: an image must first be verified as non-consensual before it can be removed. The powers granted to Ofcom are not sophisticated enough to process this efficiently.

So, for instance, if a perpetrator is convicted of NCII abuse, this does not take the harmful content down. There is no immediate takedown system. And of course, the recourse of seeking a court order to have something taken down is beyond most victims.

Because it is only the sharing of harmful content, not the content itself, that is technically illegal, this can also result in bizarre cases where victims go to the police, a criminal conviction is obtained, and yet the devices containing the harmful content are handed back to the perpetrator. These gaps in procedure go hand in hand with a police force that is poorly educated in online harms, in what NCII abuse is, and in what protection a victim needs. I do not blame my younger self for not wanting to go to the police – there is unfortunately evidence that many victims do not feel supported when they do so.

Also, I was lucky to be able to remove the initial image myself, as it had been posted on my own account. But in the majority of cases, NCII images are shared online without the victim’s knowledge, on websites that are unregulated. Ofcom cannot take down these unregulated platforms immediately; the process for removal takes months, by which time the images or footage will have spread even more widely.

So, what could be the solution? In the UK there is at least a growing awareness of this issue.

A new report from the parliamentary women and equalities committee (WEC) has recommended creating a new commission to flag NCII and support individuals pursuing legal action. And Ofcom recently published its illegal harms codes, which state that “sites and apps must also take down non-consensual intimate images (or ‘revenge porn’) when they become aware of it”, but do little to explain how this would be enforced in practice.

There is a mention of using hash technology to combat NCII, with further consultations planned in 2025. Improved legislation could make it mandatory for platforms to partner with existing, effective systems such as StopNCII. It is currently partnered with 13 regulated platforms, but thousands more need to sign up to create a more hostile environment for non-compliant websites. Google, for example, refuses to partner with StopNCII for reasons that are not entirely clear. In evidence given to the WEC, it claimed to be only a “window” on to the internet rather than a host – hard to believe, given its overwhelming monopoly as a search engine.

It is true that the scale of online content and potentially harmful images is enormous, and it is snowballing out of control: in 2023 there were 36.2m reports made to the CyberTipline of the US National Center for Missing and Exploited Children. But we saw during the pandemic that tech companies were capable of rapid change to prevent the spread of misinformation.

Tech companies have no incentive to partner with StopNCII, because legislation has not yet made it mandatory. Tech platforms will only do as much as they have to, so legislation needs to be more comprehensive and specific, not more severe on the individual.

There are countries that are more advanced in their approach: Canada has online processes that act swiftly – a straightforward and inexpensive court order to get material removed. We need a similarly bespoke process. Australia also has a centralised eSafety commissioner who can take action on behalf of individuals who cannot go through criminal or civil courts.

The impact on victims can be huge: 51% report suicidal thoughts, and there are numerous recorded cases of people acting on them. StopNCII was not available in 2016, when this happened to me. I wish that it had been.



