Deepfakes and altered image abuse: How the misuse of synthetic media poses a new digital threat for women

Ten years ago, when 18-year-old Australian law student Noelle Martin reverse image searched herself out of pure curiosity, nothing could have prepared her for the nauseating results Google returned.

Scores of pornographic images of her stretched across the screen, images she had never consented to being shared online. In fact, she had never consented to them being taken at all. Because they hadn’t been taken: they had been faked. Anonymous individual(s) had stolen Martin’s likeness and superimposed it onto the bodies of pornography actresses before sharing the results online.

Martin had become a victim of image-based abuse, which was not an offence in Australia at the time. The abuse was carried out using synthetic media – colloquially known as a “deepfake”.

A deepfake is a still or moving image in which one person’s face has been superimposed onto another person’s body using machine learning. The subject can then be made to appear to be doing or saying things they never did. In 2019, a report by Sensity, a company that monitors and detects deepfakes, found that 96 per cent of deepfake videos online at the time were pornographic, placing victims’ faces over those of porn performers. The threat is highly gendered: 100 per cent of the victims were women.

“Life-shattering”

Martin, still a teenager when she discovered the abuse, said in a TED Talk that she had simply wanted to feel “pretty and confident” when she posted selfies online, only for them to be stolen by faceless abusers.

“It’s completely life-shattering and career-destroying, and it’s an attack on a person’s human dignity,” Martin told the Kingston Courier.

“You can’t escape it. Every employer – forever – can see this. Every friend, colleague, every potential relationship. Dating is difficult because they see things that could cloud their judgement of you. It’s an inescapable misappropriation of your identity and it robs you of your self-determination in a way people don’t realise.”

Her words reverberate after the BBC’s report in December that 17-year-old Egyptian Basant Khaled took her own life after doctored sexual images of her were shared online.

In another high-profile case, Indian journalist Rana Ayyub said she had to be hospitalised after becoming the victim of deepfake image abuse in 2018 as part of a harassment campaign against her. At the time, she had been invited to speak on the BBC and Al Jazeera after an eight-year-old girl had been raped, she told the Huffington Post, and there was outrage across India. Fake tweets, appearing to come from Ayyub’s account, emerged on social media, and she was flooded with abuse. She was meeting a friend for coffee when a source from the Bharatiya Janata Party sent her a link to the faked pornographic video. It was shared more than 40,000 times on social media.

As a result, she went from being an outspoken journalist to “self-censoring”, and stopped posting on Facebook out of fear. Despite the impact on victims and survivors globally, both within and outside of the news cycle, the scale of the abuse is reflected neither in legislation nor in efforts to curb it.

Fighting back

Martin still does not know the identity of her perpetrator(s). She tried to get the images taken down, but it proved impossible: no legislation covering image-based abuse was in place in Australia at the time, and the nameless perpetrators exploited the anonymity the internet provides.

So, she decided to try to change the law. Collaborating with policymakers, cyber safety experts and researchers, she embarked on a successful campaign to make image-based abuse a crime in Australia; the legislation came into force in 2018. Named Western Australian of the Year in 2019, Martin has spent much of her adulthood fighting for justice and raising awareness for other victims of image-based abuse, deepfakes included.

Martin delivered a TED Talk in 2018 on her experience of image-based abuse and changing the law in Australia.

UK needs “comprehensive criminal law”

Image-based abuse is an offence in England and Wales, but it is classed as a communications offence rather than a sexual offence. This means that victims of image-based abuse are not guaranteed the anonymity granted to victims of sexual offences.

Furthermore, deepfakes are not specifically mentioned in image-based abuse legislation. Maria Miller MP, who secured an adjournment debate on the issue in December, has called for the abuse to be made a sexual offence.

Professor Clare McGlynn, an expert on image-based sexual abuse at Durham University, said: “We need a comprehensive criminal law that includes all forms of image-based sexual abuse, including altered images and deepfakes.

“The experience of victims is the same as those whose intimate images have been taken or shared without consent. Many more victims have now spoken out and shared their experiences of the devastating impacts of altered images being created and shared of them online.

“It is important that image-based sexual abuse is recognised as a sexual offence as this is how many victims experience the abuse.”

Even if the UK did introduce a specific legal instrument for deepfake image abuse, perpetrators are largely anonymous and can abuse women globally. Martin argued there should be a major, coherent international response to deepfake image abuse, with governments and law enforcement agencies collaborating to hold perpetrators accountable.

“It’s not good enough to have Australia introduce laws or the UK introduce laws when the perpetrators could be in another country,” she said.

“Just a bit of fun”

Henry Ajder, head of policy and partnerships at Metaphysic and a leading expert on synthetic media, co-authored the 2019 Sensity (formerly Deeptrace) report, which entailed what he described as painstaking data collection to determine the extent of deepfake image abuse.

The landscape was very different three years ago, he said, with far less content. The technology was not as accessible at the time: it required a powerful PC and was not user-friendly. Today, he points out, popular face-swapping tools have scaled up the number of victims because “it’s as simple as tapping a button and returning an image”.

“Image-based abuse has gone a lot more global. It’s much harder to map unless you have an extensive knowledge of the landscape,” he said, adding that the 2019 report would likely be impossible to replicate today, given the scale of content.

“Deepfakes in the image abuse context first started off targeting celebrities. Now, it’s no longer about just celebrities. Many more private women have been targeted. It fundamentally changes the way that women feel safe on digital spaces.”

Many platforms have banned deepfakes, including Twitter, which prohibited deepfakes that can cause harm in February 2020 ahead of the US election. Despite this, the platform still hosts deepfake image abuse.

“Some people still think of this as a joke or a meme or ‘shitposting’ – that they’re not hurting anyone,” said Ajder.

Legal frameworks, he said, are valuable in changing the public perception of non-consensual deepfake pornography.

“Not that there’s a good public perception now – people aren’t dismissing its significance and the horror of what it is, but I think making it very clear that you’re committing a crime at least removes that excuse for people doing this that they just thought they were doing something that’s just a bit of fun,” said Ajder.

“Having clear legal frameworks in place that make people know that you’re not creating something that is just a bit of fun, a harmless thing that guys share in their little circles.”

Ban or manage?

The tech is not employed solely for malicious purposes, and since Sensity’s 2019 report on deepfakes, accessible apps such as Reface have gone mainstream, with some estimates putting the amount of content generated at more than five billion pieces. The vastly improved and often seamless tech has been used professionally for satire and even film – see Paul Walker’s posthumous appearance in Furious 7.

Despite these non-hostile uses of synthetic media, Martin thinks there is a case for policymakers to consider banning deepfakes altogether.

“It’s inherently deceptive. You are creating something that’s not there. In this world of misinformation and disinformation, I think it’s a real problem,” she said.

Ajder said that banning the tech outright wouldn’t necessarily serve victims of its malicious uses. Synthetic media is used in social media, photo-editing apps, games and music videos, he points out, so a ban would mean “banning the foundations of modern digital media”.

“The kinds of synthetic media that would be easier to ban would actually be the ones that don’t cause any harm. If you ban deepfakes or synthetic media you would end up stopping the positive uses whilst not being able to do much about the malicious uses,” he said.

Instead, he wants to see companies working together on the technology and engaging in good faith on how to develop synthetic media responsibly, with better processes for working with internet service providers and hosting platforms to take down content. Though highly pessimistic about what he describes as “the tip of the iceberg” of deepfake image abuse, he said that the more difficult it becomes for predators seeking to abuse these tools, the better.

Refusing to be silenced

Martin said that after being victimised she received criticism from people telling her to get off social media – that if she had controlled what she posted, it wouldn’t have happened to her.

“Social media is used as an economic opportunity, a way to participate in societal, political and economic discourse. If you put the onus on women who don’t want this to happen to them, women will be further disenfranchised and miss out on those opportunities and other people can dominate that space.”

Ajder added: “I understand that women are increasingly worried about how their images are being abused or misused online [but] it shouldn’t be a woman’s responsibility to avoid completely unacceptable harassment and abuse. 

“If you want to avoid any chance of being targeted, sure – don’t post anything online. But living in fear and giving into these people would be a very sad place to be.”

Martin, who is now working as a legal researcher exploring the Metaverse, says that despite the strides she has made for other survivors of image-based abuse, there are days when she does nothing because it is all too much. She has even seriously considered changing her name to escape the abuse. However, she feels a huge responsibility not to back down.

“I will do anything I can to tackle this issue. I’m not stopping anytime soon,” she asserts.

Of her perpetrators, Martin assures me: “They didn’t win. Nice try – but you didn’t succeed in trying to bring me down.”
