Emma Strubell

Assistant Professor, Language Technologies Institute, Carnegie Mellon University

  • Pittsburgh, PA

Emma Strubell's research focuses on efficient and equitable natural language processing (NLP).

Contact

Carnegie Mellon University


Biography

Emma Strubell is an Assistant Professor at Carnegie Mellon University's Language Technologies Institute (LTI). Their research lies at the intersection of natural language processing (NLP) and machine learning, and their broad research objective is bridging the gap between state-of-the-art NLP methods and the wide variety of users who stand to benefit from that technology but for whom it does not yet work in practice.

Their work has been recognized with a Madrona AI Impact Award, best paper awards at ACL and EMNLP, and in 2024 they were named one of the most powerful people in AI by Business Insider.

Areas of Expertise

Green AI
Natural Language Processing (NLP)
Machine Learning
Artificial Intelligence

Media Appearances

Emma Strubell | AI Power List

Business Insider  online

2024-10-24

In front of a whiteboard in a classroom at Carnegie Mellon University, Strubell explains the Jevons effect, in which the gains from increased efficiency of a technological tool can be negated as use increases. The concept has come into focus as the conversation shifts to efficiency, AI, and the environment. Though they are a proponent of advancing AI, Strubell told Business Insider, "GenAI training is a nightmare for energy providers." In their work as an assistant professor at Carnegie Mellon's Language Technologies Institute, Strubell asks students and researchers to examine the systems that power AI in search of more efficient and environmentally friendly ways to build it.
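The Jevons effect described above can be made concrete with a small sketch. The numbers here are purely illustrative assumptions, not figures from Strubell's research: a model becomes twice as efficient per query, but cheaper queries spur adoption and usage triples, so total energy use still rises.

```python
# Illustrative numbers only: the Jevons effect means per-unit efficiency
# gains can be outpaced by growth in overall usage.

def total_energy(queries_per_day: float, joules_per_query: float) -> float:
    """Total daily energy consumed by a service, in joules."""
    return queries_per_day * joules_per_query

# Baseline: 1 million queries/day at 1000 J each.
before = total_energy(1e6, 1000.0)

# A 2x efficiency gain halves per-query energy, but cheaper queries
# drive adoption: usage triples.
after = total_energy(3e6, 500.0)

# Despite the efficiency gain, total energy use rises by half.
print(after / before)  # 1.5
```

With these assumed numbers, doubling efficiency while tripling usage yields a net 50% increase in energy consumed, which is exactly the rebound dynamic the Jevons effect names.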


Greater, newer AI models come with environmental impacts

Marketplace  online

2024-06-13

Emma Strubell of Carnegie Mellon University explains why carbon emissions increase with more AI data centers and more powerful AI features.


An AI's Carbon Footprint Is 5 Times Bigger Than a Car's

Popular Mechanics  online

2019-06-06

The act of training a neural network, according to the study led by Emma Strubell of the University of Massachusetts Amherst, creates a carbon dioxide footprint of 284 tonnes—five times the lifetime emissions of an average car.
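A quick back-of-envelope check of the figures quoted above. The study reported emissions in pounds of CO2; 626,155 lbs is the widely cited number for its largest experiment (training a large transformer with neural architecture search), which converts to roughly the 284 tonnes in the article, and the "five times a car" comparison implies a car lifetime of about 57 tonnes, including manufacturing.

```python
# Sanity-check the quoted figures. 626,155 lbs CO2 is the widely cited
# number from the study's largest training experiment.
LBS_PER_TONNE = 2204.62

training_lbs = 626_155
training_tonnes = training_lbs / LBS_PER_TONNE
print(round(training_tonnes))      # 284, matching the article

# "Five times the lifetime emissions of an average car" implies:
car_lifetime_tonnes = training_tonnes / 5
print(round(car_lifetime_tonnes))  # ~57 tonnes per car, incl. manufacturing
```

The two printed values confirm that the article's 284-tonne figure and its five-cars comparison are mutually consistent.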



Industry Expertise

Education/Learning

Education

UMass Amherst: Ph.D.

University of Maine: B.S., Computer Science

Event Appearances

AI and the Environment: Sustaining the Common Good

2024 | Markkula Center for Applied Ethics and Next 10, Santa Clara University

Articles

A view of the sustainable computing landscape

Patterns

2025

This article presents a holistic research agenda to address the significant environmental impact of information and communication technology (ICT), which accounts for 2.1%–3.9% of global greenhouse gas emissions. It proposes several research thrusts to achieve sustainable computing: accurate carbon accounting models, life cycle design strategies for hardware, efficient use of renewable energy, and integrated design and management strategies for next-generation hardware and software systems. If successful, the research would flatten and reverse growth trajectories for computing power and carbon, especially for rapidly growing applications like artificial intelligence. The research takes a holistic approach because strategies that reduce operational carbon may increase embodied carbon, and vice versa.
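The abstract's closing point, that cutting operational carbon can raise embodied carbon and vice versa, can be illustrated with a toy model. All numbers below are hypothetical assumptions, not values from the article: a new, efficient server has lower operational emissions but its embodied (manufacturing) carbon is amortized over fewer years, while an older server amortizes embodied carbon longer but draws more power.

```python
# Hypothetical numbers illustrating the operational/embodied carbon
# tradeoff described in the abstract; no values come from the article.

def annual_carbon(embodied_kg: float, annual_operational_kg: float,
                  service_years: float) -> float:
    """Carbon per year of service: amortized embodied + operational."""
    return embodied_kg / service_years + annual_operational_kg

# New, efficient server: low operational carbon, but embodied carbon
# spread over a short 4-year lifetime.
new_server = annual_carbon(embodied_kg=1500, annual_operational_kg=300,
                           service_years=4)

# Old server kept in service longer: embodied carbon amortized over
# 8 years, but it emits more per year of operation.
old_server = annual_carbon(embodied_kg=1500, annual_operational_kg=500,
                           service_years=8)

print(new_server, old_server)  # 675.0 687.5: neither strategy dominates
```

Under these assumed numbers the two strategies land within a few percent of each other, which is why the article argues for a holistic approach rather than optimizing operational or embodied carbon in isolation.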


Light bulbs have energy ratings—so why can’t AI chatbots?

Nature

2024

The rising energy and environmental cost of the artificial-intelligence boom is fuelling concern. Green policy mechanisms that already exist offer a path towards a solution.


Making scalable meta learning practical

Advances in Neural Information Processing Systems

2023

Despite its flexibility to learn diverse inductive biases in machine learning programs, meta learning (i.e., learning to learn) has long been recognized to suffer from poor scalability due to its tremendous compute/memory costs, training instability, and a lack of efficient distributed training support. In this work, we focus on making scalable meta learning practical by introducing SAMA, which combines advances in both implicit differentiation algorithms and systems. Specifically, SAMA is designed to flexibly support a broad range of adaptive optimizers in the base level of meta learning programs, while reducing computational burden by avoiding explicit computation of second-order gradient information, and exploiting efficient distributed training techniques implemented for first-order gradients.

