
Ken Holstein

Assistant Professor | Carnegie Mellon University

Pittsburgh, PA, UNITED STATES

Ken Holstein's research focuses broadly on AI-augmented work and improving how we design and evaluate AI systems for real-world use.

Biography

Ken Holstein is an Assistant Professor in the Human-Computer Interaction Institute at Carnegie Mellon University, where he directs the CMU CoALA Lab. In addition to his position at CMU, Ken is an inaugural member of the Partnership on AI’s Global Task Force for Inclusive AI. He is also part of Northwestern’s Center for Advancing Safety of Machine Intelligence (CASMI) and the Jacobs Foundation’s CERES network.

Ken's research focuses broadly on AI-augmented work and improving how we design and evaluate AI systems for real-world use. Ken draws on approaches from human–computer interaction (HCI), AI, design, cognitive science, learning sciences, statistics, and machine learning, among other areas.

Ken is deeply interested in: (1) understanding the gaps between human and artificial intelligence across a range of contexts, and (2) using this knowledge to design systems that respect human work, elevating human expertise and on-the-ground knowledge rather than diminishing it. To support these goals, Ken's research develops new approaches and tools that support better incorporation of diverse human expertise across the AI development lifecycle.

Ken's work has been generously supported by the National Science Foundation (NSF), CMU’s Block Center for Technology and Society, Northwestern’s CASMI & UL Research Institutes, Institute of Education Sciences (IES), Cisco Research, Jacobs Foundation, Amazon Research, CMU’s Metro21 Smart Cities Institute, and Prolific.

Areas of Expertise (6)

Elections

Intelligence Augmentation

Applied Machine Learning

Artificial Intelligence

Human-Computer Interaction

Worker-Centered Design

Media Appearances (4)

In Sudden Alarm, Tech Doyens Call for a Pause on ChatGPT

WIRED (online)

2023-03-29

Others working in tech also expressed misgivings about the letter's focus on long-term risks, as systems available today including ChatGPT already pose threats. “I find recent developments very exciting,” says Ken Holstein, an assistant professor of human-computer interaction at Carnegie Mellon University, who asked his name be removed from the letter a day after signing it as debate emerged among scientists about the best demands to make at this moment.

Funding New Research to Operationalize Safety in Artificial Intelligence

Northwestern Engineering (online)

2023-02-17

Kenneth Holstein, assistant professor in the Human-Computer Interaction Institute at Carnegie Mellon University, will study how to support effective AI-augmented decision-making in the context of social work. In this domain, predictions regarding human behavior are fundamentally uncertain and ground truth labels upon which an AI system is trained — for example, whether an observed behavior is considered socially harmful — often represent imperfect proxies for the outcomes human decision-makers are interested in modeling.

These glasses give teachers superpowers

The Hechinger Report (online)

2018-10-04

Lumilo is the brainchild of a team at Carnegie Mellon University. Ken Holstein, a doctoral candidate at the university, designed the app with significant input from teachers like Mawhinney who use cognitive tutors in their classrooms. The project treads new ground for the use of artificial intelligence in schools.

‘Smart’ glasses for teachers help pupils learn

Tes Magazine (online)

2018-06-27

“By alerting teachers in real time to situations the ITS [intelligent tutoring system] may be ill-suited to handle on its own, Lumilo facilitates a form of mutual support or co-orchestration between the human teacher and the AI tutor,” said Ken Holstein, lead author of the study with Bruce M. McLaren and Vincent Aleven.

Media


Videos:

  • New Faculty Introduction Webinar: Ken Holstein and Sarah Fox
  • New Faculty Meet & Greet: Ken Holstein
  • Stanford Seminar - Designing for Human-AI Complementarity
  • [LAK'18] March 7 - Session 2A1 - Kenneth Holstein


Industry Expertise (3)

Research

Education/Learning

Computer Software

Accomplishments (5)

Best Paper Award (professional)

2023 ACM Conference on Fairness, Accountability, and Transparency (FAccT’23)

Best Paper Award (professional)

2023 ACM CHI Conference on Human Factors in Computing Systems (CHI’23)

Best Paper Award (professional)

2023 IEEE Conference on Secure and Trustworthy Machine Learning (SaTML’23)

Graduate Student Poster Grand Prize (professional)

2022 Grefenstette Tech Ethics Symposium

CMU Teaching Innovation Award (professional)

2022 Prototyping Algorithmic Experiences (PAX)

Education (3)

Carnegie Mellon University: Ph.D., Human–Computer Interaction 2019

Carnegie Mellon University: M.S., Human–Computer Interaction 2019

University of Pittsburgh: B.S., Psychology (Cognitive focus) 2014

Affiliations (2)

  • Association for Computing Machinery (ACM): Member
  • Design Justice Network (DJN): Member

Event Appearances (3)

Designing for Complementarity in AI-Augmented Work

UCI Informatics Seminar Series, University of California Irvine (UCI), Irvine, CA

Fostering Critical AI Literacy Among Frontline Workers, the Public, & AI Developers

HCI + Design Thought Leaders Lecture, Northwestern University, Evanston, IL

Supporting Effective AI-Augmented Decision-Making in Social Contexts

Toward a Safety Science of AI, Northwestern University, Evanston, IL

Research Grants (5)

AI-Augmented Illustration through Conversational Interaction

Prolific $10,000

2023

Scaffolding Responsible AI Practice at the Earliest Stages of Ideation, Problem Formulation and Project Selection

PwC $350,193

2023 - 2024

Bridging Policy Gaps in the Life Cycle of Public Algorithmic Systems

Block Center $80,000

2022 - 2023

Supporting Effective AI-Augmented Decision-Making in Social Contexts

Center for Advancing Safety of Machine Intelligence (CASMI) and UL $275,000

2023 - 2025

Supporting Effective AI-Augmented Decision-Making in Content Moderation

Block Center $80,000

2022 - 2023

Articles (5)

Ground(less) Truth: A Causal Framework for Proxy Labels in Human-Algorithm Decision-Making

FAccT '23: Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency

2023

A growing literature on human-AI decision-making investigates strategies for combining human judgment with statistical models to improve decision-making. Research in this area often evaluates proposed improvements to models, interfaces, or workflows by demonstrating improved predictive performance on “ground truth” labels. However, this practice overlooks a key difference between human judgments and model predictions.

Understanding Frontline Workers’ and Unhoused Individuals’ Perspectives on AI Used in Homeless Services

CHI '23: Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems

2023

Recent years have seen growing adoption of AI-based decision-support systems (ADS) in homeless services, yet we know little about stakeholder desires and concerns surrounding their use. In this work, we aim to understand impacted stakeholders’ perspectives on a deployed ADS that prioritizes scarce housing resources. We employed AI lifecycle comicboarding, an adapted version of the comicboarding method, to elicit stakeholder feedback and design ideas across various components of an AI system’s design.

Investigating Practices and Opportunities for Cross-functional Collaboration around AI Fairness in Industry Practice

FAccT '23: Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency

2023

An emerging body of research indicates that ineffective cross-functional collaboration – the interdisciplinary work done by industry practitioners across roles – represents a major barrier to addressing issues of fairness in AI design and development. In this research, we sought to better understand practitioners’ current practices and tactics to enact cross-functional collaboration for AI fairness, in order to identify opportunities to support more effective collaboration.

Zeno: An Interactive Framework for Behavioral Evaluation of Machine Learning

CHI '23: Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems

2023

Machine learning models with high accuracy on test data can still produce systematic failures, such as harmful biases and safety issues, when deployed in the real world. To detect and mitigate such failures, practitioners run behavioral evaluation of their models, checking model outputs for specific types of inputs. Behavioral evaluation is important but challenging, requiring that practitioners discover real-world patterns and validate systematic failures.

A Validity Perspective on Evaluating the Justified Use of Data-driven Decision-making Algorithms

2023 IEEE Conference on Secure and Trustworthy Machine Learning (SaTML)

2023

Recent research increasingly brings to question the appropriateness of using predictive tools in complex, real-world tasks. While a growing body of work has explored ways to improve value alignment in these tools, comparatively less work has centered concerns around the fundamental justifiability of using these tools. This work seeks to center validity considerations in deliberations around whether and how to build data-driven algorithms in high-stakes domains.
