Researchers warn of rise in AI-created non-consensual explicit images

UF cybersecurity experts lead study into the risks, ease and lack of regulation around AI-generated “nudification” tools

Jan 20, 2026

4 min

Kevin Butler, Patrick Traynor


A team of researchers, including Kevin Butler, Ph.D., a professor in the Department of Computer and Information Science and Engineering at the University of Florida, is sounding the alarm on a disturbing trend in artificial intelligence: the rapid rise of AI-generated sexually explicit images created without the subject’s consent.


With funding from the National Science Foundation, Butler and colleagues from UF, Georgetown University and the University of Washington investigated a growing class of tools that allow users to generate realistic nude images from uploaded photos — tools that require little skill, cost virtually nothing and are largely unregulated.


“Anybody can do this,” said Butler, director of the Florida Institute for Cybersecurity Research. “It’s done on the web, often anonymously, and there’s no meaningful enforcement of age or consent.”



The team has coined the term SNEACI, short for synthetic non-consensual explicit AI-created imagery, to define this new category of abuse. The acronym, pronounced “sneaky,” highlights the secretive and deceptive nature of the practice.


“SNEACI really typifies the fact that a lot of these are made without the knowledge of the potential victim and often in very sneaky ways,” said Patrick Traynor, a professor and associate chair of research in UF's Department of Computer and Information Science and Engineering and co-author of the paper.


In their study, which will be presented at the upcoming USENIX Security Symposium this summer, the researchers conducted a systematic analysis of 20 AI “nudification” websites. These platforms allow users to upload an image, manipulate clothing, body shape and pose, and generate a sexually explicit photo — usually in seconds.


Unlike traditional tools like Photoshop, these AI services remove nearly all barriers to entry, Butler said.


“Photoshop requires skill, time and money,” he said. “These AI application websites are fast, cheap — from free to as little as six cents per image — and don’t require any expertise.”



According to the team’s review, women are disproportionately targeted, but the technology can be used on anyone, including children. While the researchers did not test tools with images of minors due to legal and ethical constraints, they found “no technical safeguards preventing someone from doing so.”





Only seven of the 20 sites they examined included terms of service that require image subjects to be over 18, and even fewer enforced any kind of user age verification.


“Even when sites asked users to confirm they were over 18, there was no real validation,” Butler said. “It’s an unregulated environment.”


The platforms operate with little transparency, using cryptocurrency for payments and hosting on mainstream cloud providers. Seven of the sites studied used Amazon Web Services, and 12 were supported by Cloudflare — legitimate services that inadvertently support these operations.
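That infrastructure finding reflects a kind of measurement common in web studies of this sort: resolving each site’s domain and inspecting its responses for provider fingerprints. The sketch below is illustrative only and is not the methodology from the paper; the domains are placeholders, and the checks shown (reverse-DNS names ending in amazonaws.com, or a “cloudflare” Server header) are simply one plausible way such hosting attribution could be done.

import socket
import urllib.request


def hosting_fingerprint(domain: str) -> dict:
    """Resolve a domain and look for hosting/CDN hints (illustrative only)."""
    info = {"domain": domain, "ip": None, "reverse_dns": None, "server_header": None}
    try:
        info["ip"] = socket.gethostbyname(domain)
        # Reverse DNS often names the provider, e.g. hosts ending in amazonaws.com.
        info["reverse_dns"] = socket.gethostbyaddr(info["ip"])[0]
    except OSError:
        pass
    try:
        # A fronting CDN frequently identifies itself in the Server header,
        # e.g. "cloudflare".
        request = urllib.request.Request(f"https://{domain}", method="HEAD")
        with urllib.request.urlopen(request, timeout=10) as response:
            info["server_header"] = response.headers.get("Server")
    except (OSError, ValueError):
        pass
    return info


if __name__ == "__main__":
    for domain in ["example.com", "example.org"]:  # placeholder domains
        print(hosting_fingerprint(domain))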


“There’s a misconception that this kind of content lives on the dark web,” Butler said. “In reality, many of these tools are hosted on reputable platforms.”


Butler’s team also found little to no information about how the sites store or use the generated images.


“We couldn’t find out what the generators are doing with the images once they’re created,” he said. “It doesn’t appear that any of this information is deleted.”


High-profile cases have already brought attention to the issue. Celebrities such as Taylor Swift and Melania Trump have reportedly been victims of AI-generated non-consensual explicit images. Earlier this year, Melania Trump voiced support for the Take It Down Act, which targets these types of abuses and was signed into law this week by President Donald Trump.


But the impact extends beyond the famous. Butler cited a case in South Florida where a city councilwoman stepped down after fake explicit images of her — created using AI — were circulated online.


“These images aren’t just created for amusement,” Butler said. “They’re used to embarrass, humiliate and even extort victims. The mental health toll can be devastating.”


The researchers emphasized that the technology enabling these abuses was originally developed for beneficial purposes — such as enhancing computer vision or supporting academic research — and is often shared openly in the AI community.


“There’s an emerging conversation in the machine learning community about whether some of these tools should be restricted,” Butler said. “We need to rethink how open-source technologies are shared and used.”


Butler said the published paper — authored by student Cassidy Gibson, who was advised by Butler and Traynor and received her doctorate this month — is just the first step in a deeper investigation into the world of AI-powered nudification tools and an extension of the work they are doing at the Center for Privacy and Security for Marginalized Populations, or PRISM, an NSF-funded center housed at the UF Herbert Wertheim College of Engineering.


Butler and Gibson recently met with U.S. Congresswoman Kat Cammack for a roundtable discussion on the growing spread of non-consensual imagery online. In a newsletter to constituents, Cammack, who serves on the House Energy and Commerce Committee, called the issue a major priority. She emphasized the need to understand how these images are created and their impact on the mental health of children, teens and adults, calling it “paramount to putting an end to this dangerous trend.”


"As lawmakers take a closer look at these technologies, we want to give them technical insights that can help shape smarter regulation and push for more accountability from those involved," said Butler. “Our goal is to use our skills as cybersecurity researchers to address real-world problems and help people.”