Ken Holstein

Assistant Professor, Carnegie Mellon University

  • Pittsburgh, PA

Ken Holstein's research focuses broadly on AI-augmented work and improving how we design and evaluate AI systems for real-world use.

Contact

Carnegie Mellon University


Biography

Ken Holstein is an Assistant Professor in the Human-Computer Interaction Institute at Carnegie Mellon University, where he directs the CMU CoALA Lab. In addition to his position at CMU, Ken is an inaugural member of the Partnership on AI’s Global Task Force for Inclusive AI. He is also part of Northwestern’s Center for Advancing Safety of Machine Intelligence (CASMI) and the Jacobs Foundation’s CERES network.

Ken's research focuses broadly on AI-augmented work and improving how we design and evaluate AI systems for real-world use. Ken draws on approaches from human–computer interaction (HCI), AI, design, cognitive science, learning sciences, statistics, and machine learning, among other areas.

Ken is deeply interested in: (1) understanding the gaps between human and artificial intelligence across a range of contexts, and (2) using this knowledge to design systems that respect human work, elevating human expertise and on-the-ground knowledge rather than diminishing it. To support these goals, Ken's research develops new approaches and tools that support better incorporation of diverse human expertise across the AI development lifecycle.

Ken's work has been generously supported by the National Science Foundation (NSF), CMU’s Block Center for Technology and Society, Northwestern’s CASMI & UL Research Institutes, the Institute of Education Sciences (IES), Cisco Research, the Jacobs Foundation, Amazon Research, CMU’s Metro21 Smart Cities Institute, and Prolific.

Areas of Expertise

Intelligence Augmentation
Applied Machine Learning
Artificial Intelligence
Human-Computer Interaction
Worker-Centered Design

Media Appearances

‘Smart’ glasses for teachers help pupils learn

Tes Magazine  online

2018-06-27

“By alerting teachers in real-time to situations the ITS [intelligent tutoring system] may be ill-suited to handle on its own, Lumilo facilitates a form of mutual support or co-orchestration between the human teacher and the AI tutor,” said Ken Holstein, lead author on the study with Bruce M. McLaren and Vincent Aleven.


These glasses give teachers superpowers

The Hechinger Report  online

2018-10-04

Lumilo is the brainchild of a team at Carnegie Mellon University. Ken Holstein, a doctoral candidate at the university, designed the app with significant input from teachers like Mawhinney who use cognitive tutors in their classrooms. The project treads new ground for the use of artificial intelligence in schools.


Funding New Research to Operationalize Safety in Artificial Intelligence

Northwestern Engineering  online

2023-02-17

Kenneth Holstein, assistant professor in the Human-Computer Interaction Institute at Carnegie Mellon University, will study how to support effective AI-augmented decision-making in the context of social work. In this domain, predictions regarding human behavior are fundamentally uncertain and ground truth labels upon which an AI system is trained — for example, whether an observed behavior is considered socially harmful — often represent imperfect proxies for the outcomes human decision-makers are interested in modeling.




Industry Expertise

Research
Education/Learning
Computer Software

Accomplishments

CMU Teaching Innovation Award

2022

Prototyping Algorithmic Experiences (PAX)

Graduate Student Poster Grand Prize

2022

Grefenstette Tech Ethics Symposium

Best Paper Award

2023

IEEE Conference on Secure and Trustworthy Machine Learning (SaTML’23)


Education

University of Pittsburgh

B.S.

Psychology (Cognitive focus)

2014

Carnegie Mellon University

M.S.

Human–Computer Interaction

2019

Carnegie Mellon University

Ph.D.

Human–Computer Interaction

2019

Affiliations

  • Association for Computing Machinery (ACM): Member
  • Design Justice Network (DJN): Member

Event Appearances

Supporting Effective AI-Augmented Decision-Making in Social Contexts

Toward a Safety Science of AI  Northwestern University, Evanston, IL

Fostering Critical AI Literacy Among Frontline Workers, the Public, & AI Developers

HCI + Design Thought Leaders Lecture  Northwestern University, Evanston, IL

Designing for Complementarity in AI-Augmented Work

UCI Informatics Seminar Series  University of California Irvine (UCI), Irvine, CA

Research Grants

Supporting Effective AI-Augmented Decision-Making in Content Moderation

Block Center

2022 - 2023

Supporting Effective AI-Augmented Decision-Making in Social Contexts

Center for Advancing Safety of Machine Intelligence (CASMI) and UL

2023 - 2025

Bridging Policy Gaps in the Life Cycle of Public Algorithmic Systems

Block Center

2022 - 2023


Articles

A Validity Perspective on Evaluating the Justified Use of Data-driven Decision-making Algorithms

2023 IEEE Conference on Secure and Trustworthy Machine Learning (SaTML)

2023

Recent research increasingly brings to question the appropriateness of using predictive tools in complex, real-world tasks. While a growing body of work has explored ways to improve value alignment in these tools, comparatively less work has centered concerns around the fundamental justifiability of using these tools. This work seeks to center validity considerations in deliberations around whether and how to build data-driven algorithms in high-stakes domains.


Zeno: An Interactive Framework for Behavioral Evaluation of Machine Learning

CHI '23: Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems

2023

Machine learning models with high accuracy on test data can still produce systematic failures, such as harmful biases and safety issues, when deployed in the real world. To detect and mitigate such failures, practitioners run behavioral evaluation of their models, checking model outputs for specific types of inputs. Behavioral evaluation is important but challenging, requiring that practitioners discover real-world patterns and validate systematic failures.


Investigating Practices and Opportunities for Cross-functional Collaboration around AI Fairness in Industry Practice

FAccT '23: Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency

2023

An emerging body of research indicates that ineffective cross-functional collaboration – the interdisciplinary work done by industry practitioners across roles – represents a major barrier to addressing issues of fairness in AI design and development. In this research, we sought to better understand practitioners’ current practices and tactics to enact cross-functional collaboration for AI fairness, in order to identify opportunities to support more effective collaboration.

