
Hoda Heidari
Assistant Professor, Carnegie Mellon University
Pittsburgh, PA
Biography
Hoda is broadly interested in the ethical and societal aspects of artificial intelligence and machine learning. In particular, her research has addressed issues of fairness and accountability.
Hoda's work has been generously supported by the NSF Program on Fairness in AI in Collaboration with Amazon, as well as PwC, CyLab, Meta, and J.P. Morgan. She serves as senior personnel at AI-SDM, the NSF AI Institute for Societal Decision Making.
Media Appearances
Introduction to AI in Municipal Government
Technically online
2022-04-05
Meanwhile, Hoda Heidari, an assistant professor in the CMU Machine Learning Department and the Institute for Software Research, shared her experience researching machine learning methods to address discrimination and bias. While there have recently been more efforts to make AI system development participatory for all stakeholders, “I would say that these kinds of participatory frameworks are limited in scope,” Heidari said. Often, the system architects are asking for input from communities with which they have not yet established relationships. “So the question should be, how do we build those relationships?”
Responsible AI Initiative launches at Carnegie Mellon University following panel discussion including government, industry leaders
PittsburghInno online
2022-04-04
As artificial intelligence systems become more prevalent in all walks of life, Carnegie Mellon University wants to be at the forefront of ensuring that AI technologies are deployed ethically, limiting the potential for the kinds of negligence and prejudice that have accompanied the adoption of some automated systems.
CMU Launches Responsible AI Initiative To Direct Technology Toward Social Responsibility
Carnegie Mellon University News online
2022-04-01
Housed at the Block Center for Technology and Society, the Responsible AI Initiative is spearheaded by faculty in the School of Computer Science (SCS) and the Heinz College of Information Systems and Public Policy. The initiative's leaders include Jodi Forlizzi, the Herbert A. Simon Professor in Computer Science and Human-Computer Interaction and the associate dean for diversity, equity and inclusion in SCS; Rayid Ghani, a professor in the Machine Learning Department (MLD) and the Heinz College; and Hoda Heidari, an assistant professor in MLD and the Institute for Software Research.
CMU Researchers Win NSF-Amazon Fairness in AI Awards
Carnegie Mellon University News online
2021-02-16
Fair AI in Public Policy — Achieving Fair Societal Outcomes in ML Applications to Education, Criminal Justice, and Health & Human Services. Led by Hoda Heidari, an assistant professor in the Machine Learning Department (MLD) and Institute for Software Research, researchers in MLD and the Heinz College of Information Systems and Public Policy will help translate fairness goals in public policy into computationally tractable measures. They will focus on factors along the development life cycle, from data collection through evaluation of tools, to identify sources of unfair outcomes in systems related to education, child welfare and justice.
Accomplishments
Facebook Research Award
2021
To build “A Tool to Study the Efficacy of Fairness Algorithms on Specific Bias Types”
J.P. Morgan Chase Individual Faculty Award
2021
Best Paper Award
2021
ACM Conference on Fairness, Accountability, and Transparency (FAccT)
Exemplary Track Award
2021
ACM Conference on Economics and Computation (EC)
Education
University of Pennsylvania
Ph.D.
Computer and Information Science
2017
The Wharton School, University of Pennsylvania
M.Sc.
Statistics
2017
Sharif University of Technology
B.Sc.
Computer Engineering
2011
Event Appearances
On human-AI collaboration
IDEAS Summer Program
Roundtable on Data Privacy in Black Communities
Joint Center for Political and Economic Studies
Foundations of Algorithmic Fairness
ELLIS
Research Grants
Robust and Fair AI Systems in Dynamic Environments
PwC Research Grant, $300,000
2022
Fair AI in Public Policy: Achieving Fair Societal Outcomes in ML Applications to Education, Criminal Justice, and Health and Human Services
NSF FAI grant, $600,000
2021
On the Impact of Algorithmic Fairness Metrics and Methods on Trust in Machine Learning Systems
CMU CyLab grant, $50,000
2021
Articles
Perspectives on incorporating expert feedback into model updates
Patterns, 2023
Machine learning (ML) practitioners are increasingly tasked with developing models that are aligned with non-technical experts’ values and goals. However, there has been insufficient consideration of how practitioners should translate domain expertise into ML updates. In this review, we consider how to capture interactions between practitioners and experts systematically. We devise a taxonomy to match expert feedback types with practitioner updates.
Moral Machine or Tyranny of the Majority?
arXiv:2305.17319, 2023
With Artificial Intelligence systems increasingly applied in consequential domains, researchers have begun to ask how these systems ought to act in ethically charged situations where even humans lack consensus. In the Moral Machine project, researchers crowdsourced answers to "Trolley Problems" concerning autonomous vehicles. Subsequently, Noothigattu et al. (2018) proposed inferring linear functions that approximate each individual's preferences and aggregating these linear models by averaging parameters across the population.
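The aggregation approach described above (fit a linear preference model per individual, then average the parameters across the population) is simple enough to sketch. The Python snippet below is a minimal, hypothetical illustration, not the authors' actual pipeline: the respondent data is synthetic and the helper names are invented for this example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_individual_preferences(X_left, X_right, choices):
    """Fit one respondent's linear utility weights from pairwise choices.

    choices[i] = 1 means the left option was preferred in pair i. Modeling
    the choice probability as logistic in the utility difference makes the
    feature difference the natural regression input.
    """
    model = LogisticRegression(fit_intercept=False)
    model.fit(X_left - X_right, choices)
    return model.coef_.ravel()

def aggregate_by_averaging(weight_vectors):
    # The aggregation step from the abstract: average the linear parameters
    # across the population to obtain one collective model.
    return np.mean(np.asarray(weight_vectors), axis=0)

# Hypothetical demo: five respondents, four features per option.
rng = np.random.default_rng(0)
population_weights = []
for _ in range(5):
    true_w = rng.normal(size=4)                 # respondent's latent values
    X_l = rng.normal(size=(50, 4))
    X_r = rng.normal(size=(50, 4))
    y = ((X_l - X_r) @ true_w > 0).astype(int)  # noiseless pairwise choices
    population_weights.append(fit_individual_preferences(X_l, X_r, y))

collective_w = aggregate_by_averaging(population_weights)
```

As the paper's title suggests, parameter averaging is not a neutral choice: a numerical majority's weights can dominate the aggregate, which is the "tyranny of the majority" concern being examined.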
Local Justice and Machine Learning: Modeling and Inferring Dynamic Ethical Preferences toward Allocations
Proceedings of the AAAI Conference on Artificial Intelligence, 2023
We consider a setting in which a social planner has to make a sequence of decisions to allocate scarce resources in a high-stakes domain. Our goal is to understand stakeholders' dynamic moral preferences toward such allocational policies. In particular, we evaluate the sensitivity of moral preferences to the history of allocations and their perceived future impact on various socially salient groups.
A Validity Perspective on Evaluating the Justified Use of Data-driven Decision-making Algorithms
IEEE Conference on Secure and Trustworthy Machine Learning (SaTML), 2023
Recent research increasingly calls into question the appropriateness of using predictive tools in complex, real-world tasks. While a growing body of work has explored ways to improve value alignment in these tools, comparatively less work has centered concerns around the fundamental justifiability of using these tools. This work seeks to center validity considerations in deliberations around whether and how to build data-driven algorithms in high-stakes domains.
Strategic Instrumental Variable Regression: Recovering Causal Relationships From Strategic Responses
Proceedings of the 39th International Conference on Machine Learning, 2022
In settings where Machine Learning (ML) algorithms automate or inform consequential decisions about people, individual decision subjects are often incentivized to strategically modify their observable attributes to receive more favorable predictions. As a result, the distribution the assessment rule is trained on may differ from the one it operates on in deployment.
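For readers unfamiliar with the instrumental-variable machinery the title refers to, below is a minimal two-stage least squares (2SLS) sketch in Python. This is the generic textbook estimator, not the paper's strategic variant; the instrument z and confounder u form a hypothetical data-generating process chosen only to show why instrumenting helps when observed attributes are confounded (e.g., by unobserved gaming effort).

```python
import numpy as np

def two_stage_least_squares(Z, X, y):
    """Textbook 2SLS: regress X on instruments Z, then y on the fitted X.

    Returns the second-stage coefficients [intercept, slope(s)].
    """
    Z1 = np.column_stack([np.ones(len(Z)), Z])          # instruments + intercept
    X_hat = Z1 @ np.linalg.lstsq(Z1, X, rcond=None)[0]  # stage 1: instrumented X
    X1 = np.column_stack([np.ones(len(X_hat)), X_hat])
    return np.linalg.lstsq(X1, y, rcond=None)[0]        # stage 2

# Hypothetical setup: an unobserved confounder u shifts both the observed
# attribute x and the outcome y, so naive regression of y on x is biased;
# the exogenous instrument z recovers the true causal slope (2.0 here).
rng = np.random.default_rng(1)
n = 5_000
z = rng.normal(size=n)                       # exogenous instrument
u = rng.normal(size=n)                       # unobserved confounder
x = 0.8 * z + u + 0.5 * rng.normal(size=n)   # observed, strategically shifted attribute
y = 2.0 * x - 1.5 * u + 0.5 * rng.normal(size=n)

beta_iv = two_stage_least_squares(z[:, None], x[:, None], y)
print(beta_iv[1])  # close to 2.0, unlike the biased naive estimate
```

Roughly speaking, in the setting the abstract describes, variation in the deployed assessment rule can play a role analogous to z, since decision subjects respond to the rule but the rule does not otherwise determine their outcomes.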