Researchers warn of rise in AI-created non-consensual explicit images

UF cybersecurity experts lead study into the risks, ease and lack of regulation around AI-generated “nudification” tools

Jan 20, 2026

4 min

Kevin Butler, Patrick Traynor


A team of researchers, including Kevin Butler, Ph.D., a professor in the Department of Computer and Information Science and Engineering at the University of Florida, is sounding the alarm on a disturbing trend in artificial intelligence: the rapid rise of AI-generated sexually explicit images created without the subject’s consent.


With funding from the National Science Foundation, Butler and colleagues from UF, Georgetown University and the University of Washington investigated a growing class of tools that allow users to generate realistic nude images from uploaded photos — tools that require little skill, cost virtually nothing and are largely unregulated.


“Anybody can do this,” said Butler, director of the Florida Institute for Cybersecurity Research. “It’s done on the web, often anonymously, and there’s no meaningful enforcement of age or consent.”



The team has coined the term SNEACI, short for synthetic non-consensual explicit AI-created imagery, to define this new category of abuse. The acronym, pronounced “sneaky,” highlights the secretive and deceptive nature of the practice.


“SNEACI really typifies the fact that a lot of these are made without the knowledge of the potential victim and often in very sneaky ways,” said Patrick Traynor, a professor and associate chair of research in UF's Department of Computer and Information Science and Engineering and co-author of the paper.


In their study, which will be presented at the upcoming USENIX Security Symposium this summer, the researchers conducted a systematic analysis of 20 AI “nudification” websites. These platforms allow users to upload an image, manipulate clothing, body shape and pose, and generate a sexually explicit photo — usually in seconds.


Unlike traditional tools like Photoshop, these AI services remove nearly all barriers to entry, Butler said.


“Photoshop requires skill, time and money,” he said. “These AI application websites are fast, cheap — from free to as little as six cents per image — and don’t require any expertise.”



According to the team’s review, women are disproportionately targeted, but the technology can be used on anyone, including children. While the researchers did not test tools with images of minors due to legal and ethical constraints, they found “no technical safeguards preventing someone from doing so.”





Only seven of the 20 sites they examined included terms of service requiring image subjects to be over 18, and even fewer enforced any kind of user age verification.


“Even when sites asked users to confirm they were over 18, there was no real validation,” Butler said. “It’s an unregulated environment.”


The platforms operate with little transparency, using cryptocurrency for payments and hosting on mainstream cloud providers. Seven of the sites studied used Amazon Web Services, and 12 were supported by Cloudflare — legitimate services that inadvertently support these operations.


“There’s a misconception that this kind of content lives on the dark web,” Butler said. “In reality, many of these tools are hosted on reputable platforms.”


Butler’s team also found little to no information about how the sites store or use the generated images.


“We couldn’t find out what the generators are doing with the images once they’re created,” he said. “It doesn’t appear that any of this information is deleted.”


High-profile cases have already brought attention to the issue. Celebrities such as Taylor Swift and Melania Trump have reportedly been victims of AI-generated non-consensual explicit images. Earlier this year, Melania Trump voiced support for the Take It Down Act, which targets these types of abuses and was signed into law this week by President Donald Trump.


But the impact extends beyond the famous. Butler cited a case in South Florida where a city councilwoman stepped down after fake explicit images of her — created using AI — were circulated online.


“These images aren’t just created for amusement,” Butler said. “They’re used to embarrass, humiliate and even extort victims. The mental health toll can be devastating.”


The researchers emphasized that the technology enabling these abuses was originally developed for beneficial purposes — such as enhancing computer vision or supporting academic research — and is often shared openly in the AI community.


“There’s an emerging conversation in the machine learning community about whether some of these tools should be restricted,” Butler said. “We need to rethink how open-source technologies are shared and used.”


Butler said the published paper — authored by student Cassidy Gibson, who was advised by Butler and Traynor and received her doctorate this month — is just the first step in their deeper investigation into AI-powered nudification tools. The work is an extension of research at the Center for Privacy and Security for Marginalized Populations, or PRISM, an NSF-funded center housed at the UF Herbert Wertheim College of Engineering.


Butler and Gibson recently met with U.S. Congresswoman Kat Cammack for a roundtable discussion on the growing spread of non-consensual imagery online. In a newsletter to constituents, Cammack, who serves on the House Energy and Commerce Committee, called the issue a major priority. She emphasized the need to understand how these images are created and their impact on the mental health of children, teens and adults, calling it “paramount to putting an end to this dangerous trend.”


"As lawmakers take a closer look at these technologies, we want to give them technical insights that can help shape smarter regulation and push for more accountability from those involved," said Butler. “Our goal is to use our skills as cybersecurity researchers to address real-world problems and help people.”