Patrick Traynor

Professor, University of Florida

  • Gainesville, FL

Patrick Traynor is an expert in cybersecurity.


Biography

Patrick Traynor is the John H. and Mary Lou Dasburg Preeminent Chair in Engineering and a professor in the Department of Computer and Information Science and Engineering at the Herbert Wertheim College of Engineering. His research focuses on the security of mobile systems, with a concentration on telecommunications infrastructure and mobile devices. That work has uncovered critical vulnerabilities in cellular networks, developed techniques for finding credit card skimmers that have since been adopted by law enforcement, and created robust approaches to detecting and combating Caller-ID scams. He is also interested in internet security and the systems challenges of applied cryptography.

Areas of Expertise

Nonconsensual Image Sharing
Mobile Payments
Computer Networks
Cybersecurity
Cellular Networks
Deepfakes

Media Appearances

Award-winning UF team targets nonconsensual nude photo apps to protect privacy

UF News (online)

2026-01-05

A University of Florida research team with an eye on privacy and real-world impacts is working with government leaders and other universities to combat AI-based platforms that turn personal images into nude photos without consent.


Deepfake audio has a tell – researchers use fluid dynamics to spot artificial imposter voices

The Conversation (online)

2022-09-20

Imagine the following scenario. A phone rings. An office worker answers it and hears his boss, in a panic, tell him that she forgot to transfer money to the new contractor before she left for the day and needs him to do it. She gives him the wire transfer information, and with the money transferred, the crisis has been averted. The worker sits back in his chair, takes a deep breath, and watches as his boss walks in the door. The voice on the other end of the call was not his boss. In fact, it wasn’t even a human.


Traynor awarded $1.7 million from Department of Homeland Security to secure cellular networks

UF Computer & Information Science & Engineering (online)

2021-04-06

Patrick Traynor, Ph.D., the John H. and Mary Lou Dasburg Preeminent Chair in Engineering, was recently awarded a $1.7 million grant from the Department of Homeland Security (DHS) Science and Technology Directorate. The research and development project, titled “Deploying Defenses for Cellular Networks Using the AWARE Testbed,” will focus on securing mobile networks. Dr. Traynor is also the associate chair for research in the Department of Computer & Information Science & Engineering (CISE).


Spotlight

Researchers warn of rise in AI-created non-consensual explicit images

A team of researchers, including Kevin Butler, Ph.D., a professor in the Department of Computer and Information Science and Engineering at the University of Florida, is sounding the alarm on a disturbing trend in artificial intelligence: the rapid rise of AI-generated sexually explicit images created without the subject’s consent.

With funding from the National Science Foundation, Butler and colleagues from UF, Georgetown University and the University of Washington investigated a growing class of tools that allow users to generate realistic nude images from uploaded photos — tools that require little skill, cost virtually nothing and are largely unregulated.

“Anybody can do this,” said Butler, director of the Florida Institute for Cybersecurity Research. “It’s done on the web, often anonymously, and there’s no meaningful enforcement of age or consent.”

The team has coined the term SNEACI, short for synthetic non-consensual explicit AI-created imagery, to define this new category of abuse. The acronym, pronounced “sneaky,” highlights the secretive and deceptive nature of the practice.

“SNEACI really typifies the fact that a lot of these are made without the knowledge of the potential victim and often in very sneaky ways,” said Patrick Traynor, a professor and associate chair of research in UF’s Department of Computer and Information Science and Engineering and co-author of the paper.

In their study, which will be presented at the upcoming USENIX Security Symposium this summer, the researchers conducted a systematic analysis of 20 AI “nudification” websites. These platforms allow users to upload an image, manipulate clothing, body shape and pose, and generate a sexually explicit photo — usually in seconds.

Unlike traditional tools like Photoshop, these AI services remove nearly all barriers to entry, Butler said. “Photoshop requires skill, time and money,” he said. “These AI application websites are fast, cheap — from free to as little as six cents per image — and don’t require any expertise.”

According to the team’s review, women are disproportionately targeted, but the technology can be used on anyone, including children. While the researchers did not test the tools with images of minors due to legal and ethical constraints, they found “no technical safeguards preventing someone from doing so.” Only seven of the 20 sites they examined included terms of service that require image subjects to be over 18, and even fewer enforced any kind of user age verification.

“Even when sites asked users to confirm they were over 18, there was no real validation,” Butler said. “It’s an unregulated environment.”

The platforms operate with little transparency, using cryptocurrency for payments and hosting on mainstream cloud providers. Seven of the sites studied used Amazon Web Services, and 12 were supported by Cloudflare — legitimate services that inadvertently support these operations.

“There’s a misconception that this kind of content lives on the dark web,” Butler said. “In reality, many of these tools are hosted on reputable platforms.”

Butler’s team also found little to no information about how the sites store or use the generated images. “We couldn’t find out what the generators are doing with the images once they’re created,” he said. “It doesn’t appear that any of this information is deleted.”

High-profile cases have already brought attention to the issue. Celebrities such as Taylor Swift and Melania Trump have reportedly been victims of AI-generated non-consensual explicit images. Earlier this year, Melania Trump voiced support for the Take It Down Act, which targets these types of abuses and was signed into law this week by President Donald Trump.

But the impact extends beyond the famous. Butler cited a case in South Florida where a city councilwoman stepped down after fake explicit images of her — created using AI — were circulated online.

“These images aren’t just created for amusement,” Butler said. “They’re used to embarrass, humiliate and even extort victims. The mental health toll can be devastating.”

The researchers emphasized that the technology enabling these abuses was originally developed for beneficial purposes — such as enhancing computer vision or supporting academic research — and is often shared openly in the AI community. “There’s an emerging conversation in the machine learning community about whether some of these tools should be restricted,” Butler said. “We need to rethink how open-source technologies are shared and used.”

Butler said the published paper — authored by student Cassidy Gibson, who was advised by Butler and Traynor and received her doctorate this month — is just the first step in a deeper investigation into the world of AI-powered nudification tools and an extension of the work underway at the Center for Privacy and Security for Marginalized Populations, or PRISM, an NSF-funded center housed at the UF Herbert Wertheim College of Engineering.

Butler and Gibson recently met with U.S. Congresswoman Kat Cammack for a roundtable discussion on the growing spread of non-consensual imagery online. In a newsletter to constituents, Cammack, who serves on the House Energy and Commerce Committee, called the issue a major priority. She emphasized the need to understand how these images are created and their impact on the mental health of children, teens and adults, calling it “paramount to putting an end to this dangerous trend.”

“As lawmakers take a closer look at these technologies, we want to give them technical insights that can help shape smarter regulation and push for more accountability from those involved,” said Butler. “Our goal is to use our skills as cybersecurity researchers to address real-world problems and help people.”

Patrick Traynor, Kevin Butler