Vincent Conitzer

Professor, Carnegie Mellon University

  • Pittsburgh, PA

Vincent Conitzer is an expert in ethics and AI.


Biography

Vincent Conitzer is Professor of Computer Science (with affiliate/courtesy appointments in Machine Learning, Philosophy, and the Tepper School of Business) at Carnegie Mellon University, where he directs the Foundations of Cooperative AI Lab (FOCAL). He is also Head of Technical AI Engagement at the Institute for Ethics in AI and Professor of Computer Science and Philosophy at the University of Oxford.

Before joining CMU, Conitzer was the Kimberly J. Jenkins Distinguished University Professor of New Technologies and Professor of Computer Science, Professor of Economics, and Professor of Philosophy at Duke University. He received his Ph.D. (2006) and M.S. (2003) in Computer Science from Carnegie Mellon University and an A.B. (2001) in Applied Mathematics from Harvard University.

Conitzer has received the 2021 ACM/SIGAI Autonomous Agents Research Award, the Social Choice and Welfare Prize, a Presidential Early Career Award for Scientists and Engineers (PECASE), the IJCAI Computers and Thought Award, an NSF CAREER award, the inaugural Victor Lesser dissertation award, an honorable mention for the ACM dissertation award, and several awards for papers and service at the AAAI and AAMAS conferences. He has also been named a Guggenheim Fellow, a Sloan Fellow, a Kavli Fellow, a Bass Fellow, an ACM Fellow, a AAAI Fellow, and one of AI's Ten to Watch. He has served as program and/or general chair of the AAAI, AAMAS, AIES, COMSOC, and EC conferences. Conitzer and Preston McAfee were the founding Editors-in-Chief of the ACM Transactions on Economics and Computation (TEAC). With Jana Schaich Borg and Walter Sinnott-Armstrong, he authored "Moral AI: And How We Get There."

Areas of Expertise

Ethics in AI
Machine Learning
Artificial Intelligence
Computer Science

Media Appearances

Gen AI's Accuracy Problems Aren't Going Away Anytime Soon, Researchers Say

CNET  online

2025-03-24

Vincent Conitzer (School of Computer Science) says the industry is still far from developing reliable and trustworthy models, with many researchers doubting that artificial general intelligence is on the horizon anytime soon. "An AI system, it might just claim to be very confident about something that's completely nonsense," said Conitzer.


DeepMind claims its AI performs better than International Mathematical Olympiad gold medalists

TechCrunch  online

2025-02-07

Google DeepMind’s AI system AlphaGeometry2 has outperformed the average gold medalist in solving geometry problems from the International Mathematical Olympiad. “It is striking to see the contrast between continuing, spectacular progress on these kinds of benchmarks, and meanwhile, language models, including more recent ones with ‘reasoning,’ continuing to struggle with some simple commonsense problems,” said Vince Conitzer (School of Computer Science).


Two misuses of popular AI tools spark the question: When do we blame the tools?

Fortune  online

2025-01-09

Two recent incidents highlight concerns about AI misuse: a man used ChatGPT to plan an attack in Las Vegas, and AI video tools were exploited to create harmful content. These events sparked debate about regulating AI and holding developers accountable for potential harm caused by their technology. Carnegie Mellon University professor Vincent Conitzer explained that “our understanding of generative AI is still limited” and that we can't fully explain its success, predict its outputs, or ensure its safety with current methods.


Accomplishments

Honorable Mention for Best Paper Award, HCOMP 2022

2022

Oxford University Press’ “Best of Philosophy”

2021

IFAAMAS Influential Paper Award

2022


Education

Carnegie Mellon University

Ph.D.

Computer Science

2006

Harvard University

A.B.

Applied Mathematics

2001

Affiliations

  • Cooperative AI Foundation: Advisor

Event Appearances

Social choice for AI ethics and safety

July 2024 | 17th Meeting of the Society for Social Choice and Welfare (SSCW-24), Paris, France

Social Choice for AI Alignment

June 2024 | 14th Oxford Workshop on Global Priorities Research, Oxford, UK

Social Choice for AI Alignment (Special Session on Alternative Models for Fairness in AI)

January 2024 | International Symposium on AI and Mathematics (ISAIM), Fort Lauderdale, FL

Research Grants

Foundations of Cooperative AI Lab at Carnegie Mellon

Center for Emerging Risk Research (CERR)

Started January 2022, for 3-5 years.

Foundations of Cooperative AI Lab at Carnegie Mellon

Cooperative AI Foundation (CAIF)

Started January 2022, for 3-5 years.

Information Networks: RESUME: Artificial Intelligence, Algorithms, and Optimization for Responsible Reopening

ARO Grant W911NF2110230

Start Date: June 2021. Projected Duration: 1 year.

Patents

Items Ratio Based Price/Discount Adjustment in a Combinatorial Auction

US 8195524

June 5, 2012

Overconstraint Detection, Rule Relaxation and Demand Reduction in Combinatorial Exchange

US 8190490

May 29, 2012

Bid Modification Based on Logical Connections between Trigger Groups in a Combinatorial Exchange

US 8190489

May 29, 2012

Articles

Should Responsibility Affect Who Gets a Kidney?

Responsibility and Healthcare

2024

Most people have two kidneys and can live comfortably with only one, if it functions well enough. If both kidneys completely fail, however, patients will die quickly unless they receive kidney dialysis or a transplant. Dialysis has severe costs in money, time, and discomfort, and patients face approximately a 40 percent chance of mortality after five years on dialysis (USRDS 2020, chapter 5). For these reasons, many dialysis patients eventually need a kidney transplant, after which they have a survival rate of 90 percent after 3–5 years (Briggs, 2001). Unfortunately, there are about 98,000 people in the USA waiting for a kidney transplant, but only around 20,000 kidneys become available each year (OPTN, n.d.).


Computing optimal equilibria and mechanisms via learning in zero-sum extensive-form games

Advances in Neural Information Processing Systems

2024

We introduce a new approach for computing optimal equilibria via learning in games. It applies to extensive-form settings with any number of players, including mechanism design, information design, and solution concepts such as correlated, communication, and certification equilibria. We observe that optimal equilibria are minimax equilibrium strategies of a player in an extensive-form zero-sum game. This reformulation allows us to apply techniques for learning in zero-sum games, yielding the first learning dynamics that converge to optimal equilibria, not only in empirical averages, but also in iterates. We demonstrate the practical scalability and flexibility of our approach by attaining state-of-the-art performance in benchmark tabular games, and by computing an optimal mechanism for a sequential auction design problem using deep reinforcement learning.
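
The paper itself develops learning dynamics for extensive-form games; purely as a toy illustration of the solution concept the abstract invokes, the sketch below computes a minimax (maximin) strategy of a small zero-sum matrix game with a standard linear program. The scipy dependency and the helper name maximin_strategy are assumptions made for this sketch, not anything from the paper.

```python
# A toy sketch, NOT the paper's algorithm: for a small zero-sum *matrix* game,
# the row player's minimax (maximin) strategy can be computed with a standard
# linear program. (scipy and the name maximin_strategy are assumptions here.)
import numpy as np
from scipy.optimize import linprog

def maximin_strategy(A):
    """Row player's maximin mixed strategy and game value for payoff matrix A."""
    m, n = A.shape
    # Decision variables: x_1..x_m (mixed strategy) and v (guaranteed value).
    # linprog minimizes, so maximize v by minimizing -v.
    c = np.zeros(m + 1)
    c[-1] = -1.0
    # For every opponent column j:  v <= sum_i A[i, j] * x_i,
    # written as  -A^T x + v <= 0.
    A_ub = np.hstack([-A.T, np.ones((n, 1))])
    b_ub = np.zeros(n)
    # The strategy must be a probability distribution.
    A_eq = np.append(np.ones(m), 0.0).reshape(1, -1)
    b_eq = np.array([1.0])
    bounds = [(0, 1)] * m + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:m], res.x[-1]

# Matching pennies: the maximin strategy is uniform and the value is 0.
strategy, value = maximin_strategy(np.array([[1.0, -1.0], [-1.0, 1.0]]))
print(strategy, value)  # -> approximately [0.5 0.5] 0.0
```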


Similarity-based cooperative equilibrium

Advances in Neural Information Processing Systems

2024

As machine learning agents act more autonomously in the world, they will increasingly interact with each other. Unfortunately, in many social dilemmas like the one-shot Prisoner’s Dilemma, standard game theory predicts that ML agents will fail to cooperate with each other. Prior work has shown that one way to enable cooperative outcomes in the one-shot Prisoner’s Dilemma is to make the agents mutually transparent to each other, i.e., to allow them to access one another’s source code (Rubinstein, 1998; Tennenholtz, 2004), or weights in the case of ML agents. However, full transparency is often unrealistic, whereas partial transparency is commonplace. Moreover, it is challenging for agents to learn their way to cooperation in the full transparency setting. In this paper, we introduce a more realistic setting in which agents only observe a single number indicating how similar they are to each other. We prove that this allows for the same set of cooperative outcomes as the full transparency setting. We also demonstrate experimentally that cooperation can be learned using simple ML methods.
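
Purely as a loose illustration of the setting described above (the payoff numbers and the threshold rule are invented for this sketch, not taken from the paper), the toy code below plays a one-shot Prisoner's Dilemma between two copies of a policy that cooperates only when the observed similarity score is high enough.

```python
# A hypothetical toy, not the paper's construction: payoffs and the threshold
# rule below are invented for illustration. Each agent sees only a similarity
# score in [0, 1] and cooperates iff it clears a threshold.

# One-shot Prisoner's Dilemma payoffs (row, column): C = cooperate, D = defect.
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def threshold_policy(similarity, tau=0.9):
    """Cooperate only when the observed similarity is at least tau."""
    return "C" if similarity >= tau else "D"

def play(similarity, policy_a, policy_b):
    """Both agents observe the same similarity score and act simultaneously."""
    a, b = policy_a(similarity), policy_b(similarity)
    return (a, b), PAYOFFS[(a, b)]

# Two copies of the same policy cooperate under high similarity...
print(play(0.95, threshold_policy, threshold_policy))  # (('C', 'C'), (3, 3))
# ...and fall back to mutual defection when similarity is low.
print(play(0.40, threshold_policy, threshold_policy))  # (('D', 'D'), (1, 1))
```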

