Kevin Butler

Professor

  • Gainesville, FL, United States

Kevin Butler directs the Florida Institute for Cybersecurity Research and studies the security of computers and the privacy of tech users.

Biography

Kevin Butler is the director of the Florida Institute for Cybersecurity Research. His research focuses on the security of computing devices, systems and networks. His group has recently worked on securing embedded systems and protocols, mobile device security and privacy, establishing the trustworthiness of data and maintaining its provenance, protecting Internet traffic and the SSL infrastructure, and attacks and defenses involving cloud infrastructure. His other research interests include securing Internet routing, malware propagation, applied cryptosystems, adversarial machine learning, cyber-physical systems and trustworthy computing.

Areas of Expertise

At-Risk Users
Cybersecurity
Embedded Systems Security
Mobile Security
Privacy
Trustworthy Computing

Articles

Side eye: characterizing the limits of POV acoustic eavesdropping from smartphone cameras with rolling shutters and movable lenses

IEEE Computer Society

Yan Long, et al.

2022-12-28

Our research discovers how the rolling shutter and movable lens structures widely found in smartphone cameras modulate structure-borne sounds onto camera images, creating a point-of-view (POV) optical-acoustic side channel for acoustic eavesdropping. The movement of smartphone camera hardware leaks acoustic information because ambient sound is unwittingly modulated onto camera images as imperceptible distortions.

HallMonitor: A framework for identifying network policy violations in software

IEEE

Daniel Olszewski, et al.

2022-11-18

Debloating helps to remove unused and potentially vulnerable code from software. While such techniques are becoming more mature and practical, they focus on the features that are unwanted by users, and not on a wealth of functionality that is disallowed by administrative policy. For instance, while an administrator may use a firewall to block certain types of traffic, hosts readily interact with such traffic when the firewall is bypassed (e.g., via an encrypted tunnel).

Blue's clues: practical discovery of non-discoverable bluetooth devices

IEEE Computer Society

Tyler Tucker, et al.

2022-10-06

Bluetooth is overwhelmingly the protocol of choice for personal area networking, and the Bluetooth Classic standard has been in continuous use for over 20 years. Bluetooth devices make themselves discoverable to communicate, but best practice to protect privacy is to ensure that devices remain in non-discoverable mode. This paper demonstrates the futility of protecting devices by making them non-discoverable.

Spotlight

UF works with Gainesville-based Peaceful Paths to educate the public about domestic abuse and cybersecurity

Domestic abuse affects millions of people every year, often in unseen and deeply personal ways, and online threats toward victims can be particularly harmful. To address this reality locally, the University of Florida’s Center for Privacy and Security for Marginalized and Vulnerable Populations, or PRISM, works with Gainesville-based domestic abuse support center Peaceful Paths to help people stay safe in the digital world.

Kevin Butler, Ph.D., the director of PRISM and the Florida Institute for Cybersecurity Research at UF, has been researching issues related to the security and privacy of technologies that affect survivors of intimate partner violence for years. He and his graduate students connected with Peaceful Paths in 2022, presenting their findings on cybersecurity and demonstrating how their research may help improve online safety for vulnerable populations. They developed a pilot study, a survey and interview protocols that are now helping those in need at the center.

“[We aim to] develop principles of design that will allow for a robust technology design that really mitigates harms and improves benefits for all,” Butler said about PRISM.

Educating abuse survivors has been a key component of the collaboration between UF and Peaceful Paths. For example, PRISM’s team has conducted research on the effects of stalkerware, also known as spyware, which is a type of software or app designed to be installed secretly on people’s devices to monitor their activities without their consent. Abusers may use this tool to track and harass victims, and stalkerware is regularly linked to domestic violence – a fact that is not widely known.

“Even the first presentation [UF] gave enhanced our advocates’ knowledge of security pieces, which helps them safety plan with survivors,” said Peaceful Paths CEO Crystal Sorrow. “It actually increases the safety of everyone in the community we work with when we talk about red flags, digital dating abuse and healthy relationships.”

While PRISM, which is supported by the National Science Foundation, is making an impact on the local community, its overall reach is much broader. PRISM was the first academic partner in the Coalition Against Stalkerware, which includes groups such as the National Network to End Domestic Violence, the Electronic Frontier Foundation, and law enforcement agencies throughout the United States and the world.

Researchers warn of rise in AI-created non-consensual explicit images

A team of researchers, including Kevin Butler, Ph.D., a professor in the Department of Computer and Information Science and Engineering at the University of Florida, is sounding the alarm on a disturbing trend in artificial intelligence: the rapid rise of AI-generated sexually explicit images created without the subject’s consent. With funding from the National Science Foundation, Butler and colleagues from UF, Georgetown University and the University of Washington investigated a growing class of tools that allow users to generate realistic nude images from uploaded photos — tools that require little skill, cost virtually nothing and are largely unregulated.

“Anybody can do this,” said Butler, director of the Florida Institute for Cybersecurity Research. “It’s done on the web, often anonymously, and there’s no meaningful enforcement of age or consent.”

The team has coined the term SNEACI, short for synthetic non-consensual explicit AI-created imagery, to define this new category of abuse. The acronym, pronounced “sneaky,” highlights the secretive and deceptive nature of the practice. “SNEACI really typifies the fact that a lot of these are made without the knowledge of the potential victim and often in very sneaky ways,” said Patrick Traynor, a professor and associate chair of research in UF’s Department of Computer and Information Science and Engineering and co-author of the paper.

In their study, which will be presented at the upcoming USENIX Security Symposium this summer, the researchers conducted a systematic analysis of 20 AI “nudification” websites. These platforms allow users to upload an image, manipulate clothing, body shape and pose, and generate a sexually explicit photo — usually in seconds. Unlike traditional tools like Photoshop, these AI services remove nearly all barriers to entry, Butler said. “Photoshop requires skill, time and money,” he said. “These AI application websites are fast, cheap — from free to as little as six cents per image — and don’t require any expertise.”

According to the team’s review, women are disproportionately targeted, but the technology can be used on anyone, including children. While the researchers did not test the tools with images of minors due to legal and ethical constraints, they found “no technical safeguards preventing someone from doing so.” Only seven of the 20 sites they examined included terms of service requiring image subjects to be over 18, and even fewer enforced any kind of user age verification. “Even when sites asked users to confirm they were over 18, there was no real validation,” Butler said. “It’s an unregulated environment.”

The platforms operate with little transparency, using cryptocurrency for payments and hosting on mainstream cloud providers. Seven of the sites studied used Amazon Web Services, and 12 were supported by Cloudflare — legitimate services that inadvertently support these operations. “There’s a misconception that this kind of content lives on the dark web,” Butler said. “In reality, many of these tools are hosted on reputable platforms.”

Butler’s team also found little to no information about how the sites store or use the generated images. “We couldn’t find out what the generators are doing with the images once they’re created,” he said. “It doesn’t appear that any of this information is deleted.”

High-profile cases have already brought attention to the issue. Celebrities such as Taylor Swift and Melania Trump have reportedly been victims of AI-generated non-consensual explicit images. Earlier this year, Melania Trump voiced support for the Take It Down Act, which targets these types of abuses and was signed into law this week by President Donald Trump. But the impact extends beyond the famous. Butler cited a case in South Florida where a city councilwoman stepped down after fake explicit images of her — created using AI — were circulated online.

“These images aren’t just created for amusement,” Butler said. “They’re used to embarrass, humiliate and even extort victims. The mental health toll can be devastating.”

The researchers emphasized that the technology enabling these abuses was originally developed for beneficial purposes — such as enhancing computer vision or supporting academic research — and is often shared openly in the AI community. “There’s an emerging conversation in the machine learning community about whether some of these tools should be restricted,” Butler said. “We need to rethink how open-source technologies are shared and used.”

Butler said the published paper — authored by student Cassidy Gibson, who was advised by Butler and Traynor and received her doctoral degree this month — is just the first step in their deeper investigation into the world of AI-powered nudification tools and an extension of the work they are doing at the Center for Privacy and Security for Marginalized and Vulnerable Populations, or PRISM, an NSF-funded center housed at the UF Herbert Wertheim College of Engineering.

Butler and Gibson recently met with U.S. Congresswoman Kat Cammack for a roundtable discussion on the growing spread of non-consensual imagery online. In a newsletter to constituents, Cammack, who serves on the House Energy and Commerce Committee, called the issue a major priority. She emphasized the need to understand how these images are created and their impact on the mental health of children, teens and adults, calling it “paramount to putting an end to this dangerous trend.”

“As lawmakers take a closer look at these technologies, we want to give them technical insights that can help shape smarter regulation and push for more accountability from those involved,” said Butler. “Our goal is to use our skills as cybersecurity researchers to address real-world problems and help people.”