Biography
Vincent Conitzer is Professor of Computer Science (with affiliate/courtesy appointments in Machine Learning, Philosophy, and the Tepper School of Business) at Carnegie Mellon University, where he directs the Foundations of Cooperative AI Lab (FOCAL). He is also Head of Technical AI Engagement at the Institute for Ethics in AI and Professor of Computer Science and Philosophy at the University of Oxford.
Prior to joining CMU, Conitzer was the Kimberly J. Jenkins Distinguished University Professor of New Technologies and Professor of Computer Science, Professor of Economics, and Professor of Philosophy at Duke University. He received Ph.D. (2006) and M.S. (2003) degrees in Computer Science from Carnegie Mellon University, and an A.B. (2001) degree in Applied Mathematics from Harvard University.
Conitzer has received the 2021 ACM/SIGAI Autonomous Agents Research Award, the Social Choice and Welfare Prize, a Presidential Early Career Award for Scientists and Engineers (PECASE), the IJCAI Computers and Thought Award, an NSF CAREER award, the inaugural Victor Lesser dissertation award, an honorable mention for the ACM dissertation award, and several awards for papers and service at the AAAI and AAMAS conferences. He has also been named a Guggenheim Fellow, a Sloan Fellow, a Kavli Fellow, a Bass Fellow, an ACM Fellow, a AAAI Fellow, and one of AI's Ten to Watch. He has served as program and/or general chair of the AAAI, AAMAS, AIES, COMSOC, and EC conferences. Conitzer and Preston McAfee were the founding Editors-in-Chief of the ACM Transactions on Economics and Computation (TEAC). With Jana Schaich Borg and Walter Sinnott-Armstrong, he authored "Moral AI: And How We Get There."
Areas of Expertise (4)
Ethics in AI
Machine Learning
Artificial Intelligence
Computer Science
Media Appearances (8)
How Forbes Compiled The 2024 AI 50 List
Forbes online
2024-07-18
Expert Judge: Vincent Conitzer is a professor of computer science at Carnegie Mellon University, where he directs the Foundations of Cooperative AI Lab, which studies foundations of game theory for advanced, autonomous AI agents. He is also a professor of computer science and philosophy at the University of Oxford, where he is the head of technical AI engagement at the Institute for Ethics in AI.
The Excerpt podcast: AI has been unleashed. Should we be concerned?
USA Today online
2024-06-04
The unleashing of powerful Artificial Intelligence into the world, with little to no regulation or guardrails, has put many people on edge. It holds tremendous promise in all sorts of fields from healthcare to law enforcement, but it also poses many risks. How worried should we be? To help us dig into it, we're joined by Vince Conitzer, Head of Technical AI Engagement at the Institute for Ethics in AI at the University of Oxford.
Deepfakes Are Evolving. This Company Wants to Catch Them All
Wired online
2024-06-27
Vincent Conitzer, a computer scientist at Carnegie Mellon University in Pittsburgh and coauthor of the book Moral AI, expects AI fakery to become more pervasive and more pernicious. That means, he says, there will be growing demand for tools designed to counter them. “It is an arms race,” Conitzer says. “Even if you have something that right now is very effective at catching deepfakes, there's no guarantee that it will be effective at catching the next generation. A successful detector might even be used to train the next generation of deepfakes to evade that detector.”
How the University of Michigan Is Selling Student Data to Train AI
MSN online
2024-02-15
“My first reaction is one of skepticism,” Vincent Conitzer, an AI ethics researcher at Carnegie Mellon University, told The Daily Beast. “Also, even taking this message mostly at face value, I suppose it may just all be based on recordings and papers that are anyway in the public domain.”
The Metaverse Flopped, So Mark Zuckerberg Is Pivoting to Empty AI Hype
MSN online
2024-01-21
As for what this hypothetical AGI would look like, Vincent Conitzer, director of the Foundations of Cooperative AI Lab at Carnegie Mellon University and head of technical AI engagement at the University of Oxford's Institute for Ethics in AI, speculates that Meta could start with something like Llama and expand from there. "I imagine that they will focus their attention on large language models, and will probably be going more in the multimodal direction, meaning making these systems capable with images, audio, video," he says, like Google's Gemini, released in December.
AI automated discrimination. Here’s how to spot it.
Vox online
2023-06-14
For many Americans, AI-powered algorithms are already part of their daily routines, from recommendation algorithms driving their online shopping to the posts they see on social media. Vincent Conitzer, a professor of computer science at Carnegie Mellon University, notes that the rise of chatbots like ChatGPT provides more opportunities for these algorithms to produce bias. Meanwhile, companies like Google and Microsoft are looking to generative AI to power the search engines of the future, where users will be able to ask conversational questions and get clear, simple answers.
AI Chat Bots Are Running Amok — And We Have No Clue How to Stop Them
Rolling Stone online
2023-02-14
“One common thread” in these incidents, according to Vincent Conitzer, director of the Foundations of Cooperative AI Lab at Carnegie Mellon University and head of technical AI engagement at the University of Oxford’s Institute for Ethics in AI, “is that our understanding of these systems is still very limited.”
Could AI swamp social media with fake accounts?
BBC News online
2023-02-13
"Something like ChatGPT can scale that spread of fake accounts on a level we haven't seen before," says Vincent Conitzer, a professor of computer science at Carnegie Mellon University, "and it can become harder to distinguish each of those accounts from human beings."
Accomplishments (4)
Honorable Mention for Best Paper Award, HCOMP 2022 (professional)
2022
Oxford University Press’ “Best of Philosophy” (professional)
2021
IFAAMAS Influential Paper Award (professional)
2022
ACM/SIGAI Autonomous Agents Research Award (professional)
2021
Education (2)
Carnegie Mellon University: Ph.D., Computer Science 2006
Harvard University: A.B., Applied Mathematics 2001
Affiliations (1)
- Cooperative AI Foundation: Advisor
Event Appearances (3)
Social choice for AI ethics and safety
July 2024 | 17th Meeting of the Society for Social Choice and Welfare (SSCW-24) Paris, France
Social Choice for AI Alignment
June 2024 | 14th Oxford Workshop on Global Priorities Research Oxford, UK
Social Choice for AI Alignment, Special Session on Alternative Models for Fairness in AI
January 2024 | International Symposium on AI and Mathematics (ISAIM) Fort Lauderdale, FL
Research Grants (3)
Foundations of Cooperative AI Lab at Carnegie Mellon
Center for Emerging Risk Research (CERR) $3,000,000
Started January 2022, for 3-5 years.
Foundations of Cooperative AI Lab at Carnegie Mellon
Cooperative AI Foundation (CAIF) $500,000
Started January 2022, for 3-5 years.
Information Networks: RESUME: Artificial Intelligence, Algorithms, and Optimization for Responsible Reopening
ARO Grant W911NF2110230 $99,821
Started June 2021, for 1 year.
Patents (3)
Items Ratio Based Price/Discount Adjustment in a Combinatorial Auction
US 8195524
June 5, 2012
Overconstraint Detection, Rule Relaxation and Demand Reduction in Combinatorial Exchange
US 8190490
May 29, 2012
Bid Modification Based on Logical Connections between Trigger Groups in a Combinatorial Exchange
US 8190489
May 29, 2012
Articles (5)
Should Responsibility Affect Who Gets a Kidney?
Responsibility and Healthcare, 2024. Most people have two kidneys and can live comfortably with only one, if it functions well enough. If both kidneys completely fail, however, patients will die quickly unless they receive kidney dialysis or transplant. Dialysis has severe costs in money, time and discomfort, and patients face approximately a 40 percent chance of mortality after five years on dialysis (USRDS 2020: chapter 5). For these reasons, many dialysis patients eventually need a kidney transplant, after which they have a survival rate of 90 percent after 3–5 years (Briggs, 2001). Unfortunately, there are about 98,000 people in the USA waiting for a kidney transplant, but only around 20,000 kidneys become available each year (OPTN, n.d.).
Computing optimal equilibria and mechanisms via learning in zero-sum extensive-form games
Advances in Neural Information Processing Systems, 2024. We introduce a new approach for computing optimal equilibria via learning in games. It applies to extensive-form settings with any number of players, including mechanism design, information design, and solution concepts such as correlated, communication, and certification equilibria. We observe that optimal equilibria are minimax equilibrium strategies of a player in an extensive-form zero-sum game. This reformulation allows us to apply techniques for learning in zero-sum games, yielding the first learning dynamics that converge to optimal equilibria, not only in empirical averages, but also in iterates. We demonstrate the practical scalability and flexibility of our approach by attaining state-of-the-art performance in benchmark tabular games, and by computing an optimal mechanism for a sequential auction design problem using deep reinforcement learning.
Similarity-based cooperative equilibrium
Advances in Neural Information Processing Systems, 2024. As machine learning agents act more autonomously in the world, they will increasingly interact with each other. Unfortunately, in many social dilemmas like the one-shot Prisoner's Dilemma, standard game theory predicts that ML agents will fail to cooperate with each other. Prior work has shown that one way to enable cooperative outcomes in the one-shot Prisoner's Dilemma is to make the agents mutually transparent to each other, i.e., to allow them to access one another's source code (Rubinstein, 1998; Tennenholtz, 2004), or weights in the case of ML agents. However, full transparency is often unrealistic, whereas partial transparency is commonplace. Moreover, it is challenging for agents to learn their way to cooperation in the full transparency setting. In this paper, we introduce a more realistic setting in which agents only observe a single number indicating how similar they are to each other. We prove that this allows for the same set of cooperative outcomes as the full transparency setting. We also demonstrate experimentally that cooperation can be learned using simple ML methods.
Pacing equilibrium in first price auction markets
Management Science, 2022. Mature internet advertising platforms offer high-level campaign management tools to help advertisers run their campaigns, often abstracting away the intricacies of how each ad is placed and focusing on aggregate metrics of interest to advertisers. On such platforms, advertisers often participate in auctions through a proxy bidder, so the standard incentive analyses that are common in the literature do not apply directly. In this paper, we take the perspective of a budget management system that surfaces aggregated incentives—instead of individual auctions—and compare first and second price auctions. We show that theory offers surprising endorsement for using a first price auction to sell individual impressions.
Safe Pareto improvements for delegated game playing
Autonomous Agents and Multi-Agent Systems, 2022. A set of players delegate playing a game to a set of representatives, one for each player. We imagine that each player trusts their respective representative’s strategic abilities. Thus, we might imagine that per default, the original players would simply instruct the representatives to play the original game as best as they can. In this paper, we ask: are there safe Pareto improvements on this default way of giving instructions? That is, we imagine that the original players can coordinate to tell their representatives to only consider some subset of the available strategies and to assign utilities to outcomes differently than the original players. Then can the original players do this in such a way that the payoff is guaranteed to be weakly higher than under the default instructions for all the original players?