
The UK’s Online Safety Act Is Not Enough To Address Non-Consensual Deepfake Pornography

Manasa Narayanan / Mar 13, 2024

What if I told you that it takes only as many clicks to reach a site offering tools and tutorials on how to make non-consensual deepfake pornography as it does to reach a page on how to make an omelette? Surely that can’t be true? Unfortunately, it is.

Despite all the recent furor about pornographic deepfakes after Taylor Swift was targeted, search engines continue to actively serve up sites and forums used to create and circulate deepfake pornography. In fact, if you simply search ‘deepfake pornography’ on Google (without even adding ‘watch’ or ‘create’), the top three or even five results are likely to be non-consensual deepfake pornography sites; often, even the Wikipedia entry on the issue does not figure in the list, much less news articles or resources that help victims.

There is a “cottage industry based on violating consent,” said Sophie Compton, the British co-director of Another Body, a documentary film on non-consensual deepfakes, speaking at a recent event on intimate image abuse. Only, I am not sure it is cottage-sized anymore; it is something much larger.

A 2019 study from Deeptrace Labs that trawled through top deepfake porn sites and YouTube channels estimated that 96% of deepfake videos were pornographic and non-consensual, and that they almost entirely depicted women. A more recent 2023 report by Home Security Heroes puts pornographic deepfakes at 98% of all deepfake content, with 99% of it targeting women. From 2019 to 2023, the report found a 550% rise in total deepfake videos online. So not only is deepfake content rising exponentially, but most of it is pornographic, and the abuse is gendered, overwhelmingly targeting women and girls.

“The first news coverage on deepfake technology… was only focusing on at that point this sort of speculative threat to politics. It was completely ignoring the fact that this technology was purpose-made as a tool for violence against women," said Another Body co-director Reuben Hamlyn.

A few years ago, Hamlyn started to notice a rise in deepfake content on platforms like Reddit and 4Chan, mostly pornographic content targeting women. But the conversations happening in the public domain were mostly about political deepfakes, which represent only a small fraction of deepfake content. Even the limited conversations on the issue focused on celebrity women who fell prey to such abuse. This compelled Hamlyn and Compton to look at ordinary victims of deepfake abuse and expose the shortcomings of legal systems that are not equipped to deal with abuses caused by emerging technologies.

Referencing Taylor, one of the main survivors the documentary revolves around, Hamlyn said that when she approached the police, they did not really understand the nature of deepfake abuse. The police then victim-blamed her. She eventually received “the news that the perpetrator had a right to do this act as there were no laws prohibiting it,” which “made her feel very lost and very powerless to do anything about it,” he added.

In many countries around the world, creating and sharing non-consensual deepfake porn is not an explicit offense. In the US, where Taylor is from, there is no federal law that tackles deepfake abuse. This has meant that women and girls not only have a tough time pursuing legal recourse; even getting the content taken down by social media platforms and websites is a battle.

Amendments to the Online Safety Act

In the UK, however, this is changing with the latest amendments to the Online Safety Act 2023.

The Online Safety Act governs regulation of the online world, including social media platforms. Amendments related to intimate image abuse, tabled in June 2023 and passed early this year, mean that sharing non-consensual deepfakes is now an offense in the country. The amendments ensure that:

  • Victims no longer have the burden to prove ‘intent to distress’ for sharing of non-consensual intimate images to be considered a criminal offense;
  • It is a serious offense to share non-consensual intimate images with the intent to cause distress;
  • It is a further serious offense if this is done for purposes of sexual gratification;
  • And the law now explicitly includes deepfakes as a form of intimate image abuse.

While this is a positive development, one that would remove the legal ambiguity around non-consensual deepfakes, experts and activists I have spoken to point out that it is nowhere near enough to tackle the issue.

“In reality, I don't think that there will be many convictions,” Hamlyn said. “The perpetrators of deepfake abuse are smarter than local police forces when it comes to virtual crimes… Most local police forces haven't had adequate training on how to track down the identities of people who commit crimes online. It’s very simple to use a VPN which sort of blocks your IP address. And if a perpetrator does use that, as was the case with Taylor, the police forces are very limited at what they're able to do.”

But alongside the obvious issues around training of police officers and actual implementation of this law, experts have also criticized it for only addressing crimes that have already been committed. They point out that it does not put in place any concrete mechanisms to prevent the creation and circulation of abusive deepfakes in the first place.

Fake Image, Real Consequences

With political deepfakes, fact-checking and establishing that a media artifact is fake is important to correct the record and even try to reverse any political fallout caused by the deepfake. But a pornographic deepfake is a sexual violation in itself. Once it has occurred, labeling it a ‘deepfake’ does little to help the victim, who has already experienced a gross violation.

“We may label the image as fake, but it feels real to them. So the image becomes real because it's on their phone. They're being sent the image, they're seeing it so often,” said Clare McGlynn, a professor of law at Durham University in the UK who studies the legalities around pornography, sexual violence and online abuse, and has worked with victims of image-based abuse.

“It doesn't stop the humiliation of someone. It doesn't stop that first violation that someone has done this to [them],” she added.

This is what has driven experts and activists in the UK to call for a greater focus on preventive and systemic measures to be adopted.

The Online Safety Act’s band-aid approach won’t work

“The law needs to definitely be broad enough... [which] applies to all tech companies [and] would mean that any AI tool which is developed going forward should ensure that you've got safety by design, meaning that it should not even be possible to create a sexually exploitative or harmful deepfake,” said Amanda Manyame, who is a Digital Rights Advisor with Equality Now, and studies the global legal landscape surrounding deepfake abuse.

The Online Safety Act, in contrast, focuses mostly on content takedown measures, which is a half-measure at best. And even that depends on the regulator in charge, Ofcom, putting in place sufficiently strong and binding guidelines, and on social media corporations actually complying with the requirements. But the approach Ofcom has taken so far does not look promising.

In fact, last month, more than forty civil society organizations and individuals working toward the safety of women and children signed a letter stating that the Online Safety Act will do little to tackle intimate image abuse, owing to weak guidelines drawn up by Ofcom.

They wrote that, “the consultation reflects a business-centric approach... reflected in the disproportionate focus on the ‘costs’ and perceived burdens for tech companies, with no equivalent consideration given to the cost and resources associated with the harms to individual women and girls and wider society.”

“The consultation does not adequately reflect a systems-based approach which prioritizes safety by design, and has a disproportionate focus on content takedown,” they added.

Women and girls, always an afterthought

In the Online Safety Act, the one section related to women and girls came in only after much campaigning by the End Violence Against Women Coalition, alongside efforts from various other civil society groups.

Given this precedent, Professor McGlynn is not very optimistic that Ofcom will take intimate image abuse seriously. “All we do know is that at the moment [in] the guidance… violence against women and girls is not prioritized. It wasn't prioritized in the Act itself. And it doesn't seem to be being prioritized now.”

Other than criminalizing deepfake abuse, the Act itself does not place any responsibilities on social media companies and other digital services associated with deepfake abuse. It only asks that Ofcom’s guidelines “contain advice and examples for best practice for assessing risks of harm to women and girls.”

So not only does intimate image abuse appear to be a low priority for Ofcom, it also looks like the regulator may not have the teeth to actually get platforms to take action to prevent abuse or provide redress, even if it decided to focus its energies on the cause.

What would work?

This leads us to consider what would actually be needed to tackle the issue of deepfake abuse. For starters, in the UK we would need a law that not only regulates the distribution of deepfakes but also targets their points of origin.

Describing the law as it currently stands in the UK, Professor McGlynn explained that “having a website that gives you the tools to create a deepfake porn image is not itself unlawful. It’s only unlawful when you're then distributing it. That's what makes it harder to actually get to these particular sites and apps.”

This is a major loophole: along with sites that allow you to create deepfake porn, the law also does not cover the readily available apps that let people digitally undress women to create fake nudes, or swap and superimpose their faces onto others’ bodies. So we need the government to act not just on social media platforms and websites, but also on the app stores that let people distribute and monetize abusive apps.

We also need laws that compel search engines like Google to de-rank harmful and violating sites and make them far less accessible. Google is the single largest driver of traffic to deepfake porn websites, so laws that require the company to tweak its ranking system would have a huge effect on the accessibility of deepfake sites, far more than any content takedown measures would.

On top of this, there are also major payment providers like Visa and Mastercard whose services are used by deepfake creators to monetize their content. The law should bring these services into the fold as well, to further deter deepfake actors.

At this point, none of this is in place under the Online Safety Act. As for the actual protocol around content removal, we will have to wait and see whether the guidelines drawn up by Ofcom are any good, and whether Ofcom is able to work with social media and search platforms to do anything about this content in practice.

As for law enforcement, it remains to be seen how police are trained and sensitized around this issue, and whether they actually prosecute perpetrators under this Act in any effective manner. All in all, the law in the UK currently remains deficient, and many questions still hang over the limited measures that are on the table.

Authors

Manasa Narayanan
Manasa Narayanan works for the news non-profit the Citizens, reporting on data, democracy, and disinformation. She is also a researcher and contributor to the Real Facebook Oversight Board and has written for outlets like VICE World News and Byline Times.
