From Medium to Real-Life Discussions
Are AI-Generated Nudes a Problem of Culture or Legislation?
AI, The Law, Women

Unless you've been off social media entirely, you must have heard that AI-generated nude photos of global superstar Taylor Swift were released on X in January. And, for the first time, the White House acknowledged the problem AI manipulation poses, stating, "There should be legislation, obviously, to deal with this issue." But there is no strict federal legislation protecting women and girls (mostly the victims) against this right now in the United States, or around the world.
What are AI-generated deepfakes?
Deepfakes use AI to generate completely new video or audio, with the end goal of portraying something that didn’t actually occur in reality. The term “deep fake” comes from the underlying technology — deep learning algorithms — which teach themselves to solve problems with large sets of data and can be used to create fake content of real people. — Business Insider
According to Wikipedia, deepfakes have garnered widespread attention for their potential use in creating child sexual abuse material, celebrity pornographic videos, revenge porn, fake news, hoaxes, bullying, and financial fraud. The spreading of disinformation and hate speech through deepfakes has the potential to undermine core functions and norms of democratic systems by interfering with people's ability to participate in decisions that affect them, determine collective agendas, and express political will through informed decision-making.
Women have been hit the hardest since AI deepfakes went mainstream, and I covered the statistics in my earlier article, No One Listened Until It Happened to Taylor Swift Too.
As expected, that article drew a very interesting comment section, and I went on to have detailed conversations with my readers about who is to blame for the popularization of AI deepfakes. Let's take a brief look at some of the…