A survey of over 7,000 people in Australia, the United Kingdom, and the United States found that 3.2% of respondents reported creating, sharing, and/or threatening to share sexual deepfakes. Men, younger adults, non-white respondents, and individuals with a disability were more likely to report engaging in this behavior. In addition, 18% of respondents reported deliberately viewing such images, most often out of curiosity. The research was published in Computers in Human Behavior.
Sexual deepfakes are synthetic sexual images, videos, or audio recordings created or altered with AI or other digital tools. They are usually created to make it appear that a real person is naked, engaged in sexual activity, or saying sexual things, even though this did not actually happen. A sexual deepfake can use a real person’s face, body, voice, or likeness, and combine it with fabricated sexual content.
Many sexual deepfakes are nonconsensual, meaning the person depicted did not agree to the creation or sharing of the material. Nonconsensual sexual deepfakes can be used for harassment, humiliation, blackmail, revenge, or sexual exploitation. They can harm a person’s reputation, privacy, safety, relationships, and mental health, even when viewers know the content is fake.
Although the sexual content shown in such deepfakes is not real, the harm these images cause can still be very real when the deepfake depicts a real and identifiable person. Because of this, sexual deepfakes are increasingly treated as a serious legal and ethical problem in many jurisdictions. In research or policy writing, they are typically described as synthetic sexual media depicting an identifiable person without that person’s consent.
Study author Rebecca Umbach and her colleagues wanted to examine how often people engage in what they call AI-generated image-based sexual abuse (AI-IBSA). This behavior includes the nonconsensual creation of AI-generated intimate images (i.e., sexual deepfakes), the nonconsensual sharing of such images, and threats to share them. The study authors also examined how many people view such images and how often. More specifically, they were interested in content generated using a range of platforms, from tools that use AI to digitally remove clothing and generate explicit synthetic content to more sophisticated deepfake generators and custom-built models.
They conducted an online survey of 7,231 respondents from Australia, the United Kingdom, and the United States, recruited by Sago, a large market research company with proprietary online panels. Approximately 2,400 respondents came from each of the three countries; the study authors state that they selected these countries based on evidence of high “deepfake porn” traffic. Across the samples, around 50-51% of participants were women, 12-13% identified as LGBTQ+, and 18-20% were people with a disability.
The survey directly asked participants whether they had nonconsensually created, shared, or threatened to share digitally altered sexual images. For example, one question asked participants how many times, since they turned 18, they had “posted, sent, or shown a fake or digitally altered nude/sexual image (photo or video) of someone (who was also over 18) without their permission?”
The survey also collected participants’ demographic data and asked about their relationship to the person targeted in the sexual content (e.g., “former sexual partner”, “family member”, “acquaintance”) and their motivation for the behavior. Participants were also asked whether they had ever deliberately watched or viewed AI-generated nude or sexual photos or videos of celebrities or famous people, influencers, or ordinary people. Those who reported deliberately watching or viewing such images were asked why they watched them, why they believed the images were AI-generated, and how they felt when they viewed them.
Results showed that 3.2% of participants had engaged in at least one of the three behaviors the study authors classified as AI-generated image-based sexual abuse. In other words, 3.2% of respondents reported creating, sharing, or threatening to share sexual deepfakes. This percentage varied by country: it was 6.1% in the U.K., 3.5% in Australia, and 2.6% in the U.S.
In addition, 1.4% of respondents reported creating, sharing, or threatening to share digitally altered sexual images that did not involve AI, and 0.5% were unsure whether AI had been used in the images they manipulated. A further 0.3% of participants reported threatening to share digitally altered images that did not actually exist.
Further analyses showed that men, younger individuals, non-white participants, and those with a disability were more likely to engage in these behaviors. (While less educated individuals initially appeared more likely to engage in AI-IBSA, this relationship disappeared once the researchers adjusted for other demographic factors in their statistical models. Similarly, the gap between white and non-white respondents disappeared among U.K. participants after adjustment.) Most often, participants reported creating sexual deepfakes to experiment with the technology or to show off. Sharing was most often explained as being done “for fun/as a joke”.
Among perpetrators, 26% of those who shared images and 22% of those who created them said they wanted to destroy the target’s reputation, while 12% of creators and 20% of sharers reported doing it for financial gain. Most often, perpetrators targeted current or former sexual partners. Interestingly, participants more often reported sharing sexual deepfake images of men (56%) than of women (41%).
Overall, 18% of participants reported deliberately viewing sexual deepfake images. Men were 3.6 times more likely than women to do so (29% vs. 8%). Younger people, LGBTQ+ individuals, non-white participants, and participants with disabilities were also more likely to deliberately view sexual deepfakes. Curiosity was the leading motivation for viewing such images, followed by sexual gratification and amusement.
However, the study revealed a stark gender divide in emotional reactions to the content: men were significantly more likely to report feeling amused or aroused, while women were far more likely to feel empathy for the depicted person, sadness about the world, and disgust toward the creator.
“These findings suggest that, in addition to working to prevent the creation of nonconsensual AI-generated sexual images, sociotechnical interventions are needed to address the seeming normalization of consuming these images,” the study authors concluded.
The study contributes to the scientific understanding of behaviors involving sexual deepfake images. However, all data used in the study came from self-reports, leaving room for reporting bias to have affected the findings.
The paper, “AI-generated image-based sexual abuse: Perpetration and consumption across three regions,” was authored by Rebecca Umbach, Nicola Henry, Renee Shelby, Gemma Stevens, and Kwynn Gonzalez-Pons.
