“Deepfakes are being weaponised against women – we need to take action”

By Nell Watson

As AI becomes commonplace and the use of deepfakes runs rampant, we need urgent action to protect women, explains AI expert Nell Watson. 


As AI becomes more sophisticated and accessible, it’s being weaponised for malicious purposes, particularly in online harassment. The rise of deepfakes and synthetic media has unlocked a Pandora’s box of abuse, providing bad actors with the tools to create and disseminate deceptive content with terrifying ease.

Deepfakes use AI to replace one person’s likeness with another’s in a highly realistic manner. While this technology has legitimate applications in film and gaming, it’s also being used to create non-consensual pornography, fake news and other forms of harassment. Victims, often women and marginalised communities, find their faces plastered onto explicit content or their voices mimicked in derogatory ways.

Imagine the sickening horror of waking up to find your face grafted onto a stranger’s body, engaging in explicit acts you never consented to – a violation so intimate and humiliating that it defies comprehension. Even if the video is exposed as a fabrication, the emotional scars and potential for reputational damage remain. Public figures such as Taylor Swift, Scarlett Johansson and Alexandria Ocasio-Cortez have been subjected to this, but it’s not only celebrities who are being targeted. Deepfake porn is a grim reality faced by an increasing number of people, predominantly women.

Fighting deepfake harassment can be exhausting and demoralising. Victims often face scepticism, victim-blaming and even further harassment when they speak out. The onus falls on them to prove the content is fake and to plead for its removal, a Kafkaesque obligation when AI-generated content can be endlessly replicated and redistributed.

The same AI techniques used for deepfakes are also being applied to generate synthetic text, audio and entire personas. Generative language models can churn out convincing fake reviews, comments and messages at an unprecedented scale, overwhelming targets with abuse. AI-generated voices can impersonate individuals in misleading audio clips, while chatbots can engage in relentless, personalised harassment campaigns.

Compounding the problem is the difficulty of attribution and accountability in AI-generated content. When harassment comes from a legion of AI chatbots or synthetic personas, it can be nearly impossible to trace it back to a human culprit. This anonymity emboldens abusers and frustrates attempts at legal recourse. Even when perpetrators are identified, outdated laws and a lack of legal precedent around AI-assisted harassment can leave victims with little protection.

The psychological impact of AI-powered harassment is uniquely insidious. The knowledge that one’s likeness or voice can be so easily co-opted and manipulated can lead to a pervasive sense of vulnerability and mistrust. Victims may feel a loss of control over their identities, constantly wondering if the content they see online is real or synthetic. This uncertainty can breed anxiety, paranoia and a reluctance to engage online at all.

As AI continues to evolve at a breakneck pace, the challenges posed by deepfakes and synthetic media will only intensify. Deepfakes will become ever harder to detect as they grow more sophisticated, while the AI systems designed to identify them often harbour biases that can further marginalise already vulnerable groups.

To combat this rising tide of AI-powered harassment, we must mobilise on multiple fronts. In the realm of technology, we need more advanced tools for detecting and flagging synthetic media, coupled with robust content moderation systems that can keep pace with the ever-evolving tactics of harassers. Legal frameworks must be updated to account for the unique challenges posed by AI, providing clear avenues for holding perpetrators accountable and safeguarding the rights of victims.

Education and public awareness will be critical in empowering individuals to navigate an increasingly treacherous, AI-enhanced digital landscape. By promoting digital literacy skills, such as how to identify deepfakes and synthetic content, and advocating for best practices in online safety and privacy, we can help build resilience against the onslaught of AI-assisted harassment.

Addressing this issue will require tech companies, policymakers, educators and advocacy groups to come together. The misuse of AI for harassment is not just a technological problem but a societal one. It demands a coordinated, proactive response that prioritises the safety and wellbeing of online communities.

In an increasingly AI-driven world, the fight against harassment and abuse must also evolve. By confronting the dark side of AI head-on – through technological innovation, legal reform, education and collaboration – we can strive to create a safer, more equitable digital future for all. For the women and marginalised communities bearing the brunt of AI-assisted harassment, this effort is not just important, but desperately needed. Their voices, their experiences and their right to exist online without fear or humiliation must be at the forefront of our efforts to tame the darker side of our ability to create with magic-like ease.

Nell Watson is an AI expert, ethicist and author of Taming The Machine: Ethically Harness The Power Of AI (Kogan Page).
