Zico Kolter

Professor of Computer Science and Head of the Machine Learning Department | Carnegie Mellon University

Pittsburgh, PA, UNITED STATES

Zico Kolter researches how to make deep learning algorithms safer and more robust, and how data affects the way models function.

Biography

Zico Kolter is a Professor of Computer Science and head of the Machine Learning Department at Carnegie Mellon University, where he has been on the faculty since 2012. He completed his Ph.D. in computer science at Stanford University in 2010, followed by a postdoctoral fellowship at MIT from 2010 to 2012. Throughout his career, he has made significant contributions to the field of machine learning, authoring numerous award-winning papers at conferences such as NeurIPS, ICML, and AISTATS.

Zico's research includes developing the first methods for creating deep learning models with guaranteed robustness. He pioneered techniques for embedding hard constraints into AI models by solving classical optimization problems within neural network layers. More recently, in 2023, his team developed methods for automatically assessing the safety of large language models (LLMs), demonstrating that existing model safeguards can be bypassed through automated optimization techniques. Alongside his academic pursuits, Zico has worked closely with industry throughout his career, formerly as Chief Data Scientist at C3.ai, and currently as Chief Expert at Bosch and Chief Technical Advisor at Gray Swan, a startup specializing in AI safety and security.

Areas of Expertise (6)

AI Models

Machine Learning

Deep Learning

Neural Networks

Large Language Models

Generative AI

Media Appearances (5)

Pittsburgh’s AI-Powered Renaissance

CMU News  online

2024-10-09

"Pittsburgh has positioned itself as a worldwide leader in AI, led of course by Carnegie Mellon's long-time leadership and dedication to the field. Starting with Allen Newell and Herb Simon's inspiration and initiative, to the founding of departments dedicated to AI like the Machine Learning Department and Robotics Institute, and continued with today's influence on Generative AI and creation of AI startups, CMU has been a driving force in AI since the field's inception. With the recent continued expansion and public awareness of AI, in addition to continually welcoming numerous AI focused businesses, startups and research facilities to the city, Pittsburgh itself is well-positioned to capitalize on our lasting contributions."

Zico Kolter Joins OpenAI’s Board of Directors

Bloomberg  online

2024-08-08

“I think part of my value is being deeply involved and integrated in research, and at the forefront of what’s happening in the field of not just the deployment of AI but the academic research into AI,” he said.

Can smart solutions be artificial? They sure can!

Bosch  online

2024-03-11

In our interview series “Thought leaders in AI”, we had the opportunity to talk to Zico Kolter, Chief Scientist for AI at Bosch, about his personal view on various topics in the field of artificial intelligence. An AI system played the moderator and asked him questions on various exciting topics: How does he see the differences in AI development between Europe and the USA? Which celebrities would he like to meet one day? And finally: Which Bosch product does he particularly like?

LLMs Pose Major Security Risks, Serving As ‘Attack Vectors’

C3.ai  online

2023-11-06

Zico Kolter, an associate professor of Computer Science at Carnegie Mellon and author of the report, Universal and Transferable Adversarial Attacks on Aligned Language Models, put it bluntly: “These tools are attack vectors,” he said.

How researchers broke ChatGPT and what it could mean for future AI development

ZDNET  online

2023-07-27

"There is no obvious solution," Zico Kolter, a professor at Carnegie Mellon and author of the report, told the Times. "You can create as many of these attacks as you want in a short amount of time."

Media

Videos:

Balancing Innovation and Regulation in AI with Zico Kolter | Regulating AI Podcast

Thought leaders in AI: Zico Kolter | Presented by Bosch

Dr. Zico Kolter: Energy & Data: The Personal and The Global

Education (2)

Stanford University: Ph.D., Computer Science 2010

Georgetown University: B.S., Computer Science 2005

Event Appearances (2)

Moderator: AI in Financial Services: Transforming the Sector for a Better World

AI Horizons Pittsburgh Summit  Pittsburgh, PA

2024-10-14

Speaker: AI Horizons Keynote: AI for a Better World – Navigating Truth in the AI Era

AI Horizons Pittsburgh Summit  Pittsburgh, PA

2024-10-14

Articles (5)

Rethinking LLM Memorization through the Lens of Adversarial Compression

arXiv preprint

2024 Large language models (LLMs) trained on web-scale datasets raise substantial concerns regarding permissible data usage. One major question is whether these models "memorize" all their training data or whether they integrate many data sources in a way more akin to how a human would learn and synthesize information. The answer hinges, to a large degree, on how we define memorization. In this work, we propose the Adversarial Compression Ratio (ACR) as a metric for assessing memorization in LLMs. A given string from the training data is considered memorized if it can be elicited by a prompt (much) shorter than the string itself -- in other words, if the string can be "compressed" by the model via an adversarial prompt of fewer tokens. The ACR overcomes the limitations of existing notions of memorization by (i) offering an adversarial view of measuring memorization, especially useful for monitoring unlearning and compliance; and (ii) allowing the flexibility to measure memorization for arbitrary strings at reasonably low compute cost.

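To make the metric concrete, here is a minimal Python sketch of the ACR computation. The tokenizer and the example "shortest prompt" are illustrative stand-ins: actually finding that prompt requires the paper's adversarial optimization, which is not shown here.

```python
# Minimal ACR sketch. The GPT-2 tokenizer is a stand-in, and `prompt` is a
# hypothetical result of an adversarial prompt search (not performed here).
from transformers import AutoTokenizer

def adversarial_compression_ratio(target: str, prompt: str, tokenizer) -> float:
    """ACR = (# tokens in the target string) / (# tokens in the eliciting prompt)."""
    target_tokens = tokenizer.encode(target, add_special_tokens=False)
    prompt_tokens = tokenizer.encode(prompt, add_special_tokens=False)
    return len(target_tokens) / len(prompt_tokens)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
target = "It was the best of times, it was the worst of times, it was the age of wisdom"
prompt = "Opening of A Tale of Two Cities:"  # hypothetical shortest eliciting prompt
acr = adversarial_compression_ratio(target, prompt, tokenizer)
print(f"ACR = {acr:.2f} -> 'memorized' under this metric when ACR > 1")
```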

Forcing Diffuse Distributions out of Language Models

arXiv preprint

2024 Despite being trained specifically to follow user instructions, today's instruction-tuned language models perform poorly when instructed to produce random outputs. For example, when prompted to pick a number uniformly between one and ten, Llama-2-13B-chat disproportionately favors the number five, and when tasked with picking a first name at random, Mistral-7B-Instruct chooses Avery 40 times more often than we would expect based on the U.S. population. When these language models are used for real-world tasks where diversity of outputs is crucial, such as language-model-assisted dataset construction, their inability to produce diffuse distributions over valid choices is a major hurdle. In this work, we propose a fine-tuning method that encourages language models to output distributions that are diffuse over valid outcomes. The methods we introduce generalize across a variety of tasks and distributions and make large language models practical for synthetic dataset generation with little human intervention.

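A hedged sketch of the measurement that motivates the paper: sample a model repeatedly on a "pick a number" prompt and compare the empirical distribution to uniform. The model, prompt, and sample count below are illustrative, and the paper's fine-tuning remedy is not shown.

```python
# Measure how non-uniform a model's "random" number picks are.
# GPT-2 is a stand-in for the chat models studied in the paper.
import re
from collections import Counter
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
prompt = "Pick a random number between one and ten. The number is"

counts = Counter()
for _ in range(100):
    text = generator(prompt, max_new_tokens=4, do_sample=True,
                     pad_token_id=50256)[0]["generated_text"]
    match = re.search(r"\b(10|[1-9])\b", text[len(prompt):])
    if match:
        counts[match.group(1)] += 1

total = sum(counts.values())
for number, n in counts.most_common():
    print(f"{number}: {n / total:.0%} (uniform would be 10%)")
```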

Massive Activations in Large Language Models

arXiv preprint

2024 We observe an empirical phenomenon in Large Language Models (LLMs) -- very few activations exhibit values significantly larger than the others (e.g., 100,000 times larger). We call these massive activations. First, we demonstrate the widespread existence of massive activations across various LLMs and characterize their locations. Second, we find that their values stay largely constant regardless of the input, and that they function as indispensable bias terms in LLMs. Third, these massive activations lead to the concentration of attention probabilities on their corresponding tokens and, further, to implicit bias terms in the self-attention output. Last, we also study massive activations in Vision Transformers.

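The phenomenon is easy to probe. Below is a short sketch that scans each layer's hidden states for entries that dwarf the layer's median magnitude; the model and the 1,000x threshold are illustrative choices, not the paper's exact setup.

```python
# Scan hidden states for outlier ("massive") activations, layer by layer.
import torch
from transformers import AutoModel, AutoTokenizer

name = "gpt2"  # stand-in; the paper studies larger LLMs
tok = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)

inputs = tok("Summer is warm. Winter is cold.", return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs, output_hidden_states=True).hidden_states

for layer, h in enumerate(hidden):
    mags = h.abs()
    ratio = (mags.max() / mags.median()).item()
    flag = "  <-- candidate massive activation" if ratio > 1000 else ""
    print(f"layer {layer:2d}: max/median activation = {ratio:10.1f}{flag}")
```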

TOFU: A Task of Fictitious Unlearning for LLMs

arXiv preprint

2024 Large language models trained on massive corpora of data from the web can memorize and reproduce sensitive or private data, raising both legal and ethical concerns. Unlearning, or tuning models to forget information present in their training data, provides us with a way to protect private data after training. Although several methods exist for such unlearning, it is unclear to what extent they result in models equivalent to those where the data to be forgotten was never learned in the first place. To address this challenge, we present TOFU, a Task of Fictitious Unlearning, as a benchmark aimed at helping deepen our understanding of unlearning. We offer a dataset of 200 diverse synthetic author profiles, each consisting of 20 question-answer pairs, and a subset of these profiles, called the forget set, that serves as the target for unlearning.

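A sketch of how one might load and probe a TOFU-style forget set is below. The Hugging Face dataset identifier, config names, and field names are assumptions about how the benchmark is packaged, so treat them as placeholders.

```python
# Probe a forget set vs. a retain set. Dataset id, configs, and fields are
# assumed, not verified API details.
from datasets import load_dataset

forget = load_dataset("locuslab/TOFU", "forget10", split="train")  # assumed id/config
retain = load_dataset("locuslab/TOFU", "retain90", split="train")

print(forget[0]["question"])
print(forget[0]["answer"])

# Evaluation idea: after unlearning, the model should answer `retain`
# questions as well as before, while doing no better on `forget` questions
# than a model that never saw those fictitious authors.
```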

Scaling Laws for Data Filtering – Data Curation cannot be Compute Agnostic

Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024

2024 Vision-language models (VLMs) are trained for thousands of GPU hours on carefully curated web datasets. In recent times, data curation has gained prominence with several works developing strategies to retain 'high-quality' subsets of 'raw' scraped data. For instance, the LAION public dataset retained only 10% of the total crawled data. However, these strategies are typically developed agnostic of the available compute for training. In this paper, we first demonstrate that making filtering decisions independent of training compute is often suboptimal: the limited high-quality data rapidly loses its utility when repeated, eventually requiring the inclusion of 'unseen' but 'lower-quality' data. To address this quality-quantity tradeoff (QQT), we introduce neural scaling laws that account for the non-homogeneous nature of web data, an angle ignored in existing literature.

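A toy calculation makes the quality-quantity tradeoff concrete: if each repeated pass over a curated sample contributes geometrically less utility, fresh lower-quality data eventually wins. The decay factor and quality scores below are made up for illustration and are not the paper's fitted scaling law.

```python
# Toy illustration of the quality-quantity tradeoff (QQT): repeated data
# loses marginal utility, so past some point fresh low-quality data wins.
def marginal_utility(quality: float, repetition: int, decay: float = 0.5) -> float:
    """Utility of the (repetition+1)-th pass over a sample decays geometrically."""
    return quality * decay**repetition

fresh_low_quality = marginal_utility(quality=0.3, repetition=0)
for k in range(4):
    repeated_high = marginal_utility(quality=1.0, repetition=k)
    better = "repeat curated data" if repeated_high > fresh_low_quality \
        else "add lower-quality data"
    print(f"pass {k + 1} over curated pool: utility {repeated_high:.2f} "
          f"vs fresh low-quality {fresh_low_quality:.2f} -> {better}")
```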