Artificial Intelligence Makes Energy Demand More Complex — And More Achievable

Artificial intelligence, a field known for its expanding uses across society, is also increasingly notorious for the massive amount of energy it needs to function.

Jul 2, 2025


Costa Samaras



In a 2024 paper, researchers from Carnegie Mellon University and the AI company Hugging Face found that generative AI systems could use as much as 33 times more energy to complete a task than task-specific software would.


“The climate and sustainability challenge can be overwhelming in the amount of new clean technology that we have to deploy and develop, and the ways that the energy system has to evolve,” said Costa Samaras, head of the university-wide Wilton E. Scott Institute for Energy Innovation. “The scale of the challenge alone can be overwhelming to folks.”


However, Carnegie Mellon University’s standing commitment to the United Nations' Sustainable Development Goals and its position as a nationally recognized leader in technologies like artificial intelligence mean that it is uniquely positioned to address growing concerns around energy demand, climate resilience and social good.



Costa Samaras

Trustee Professor of Civil and Environmental Engineering

Costa Samaras's research focuses on the pathways to clean, climate-safe, equitable, and secure energy and infrastructure systems.

Transportation Systems, Autonomous Driving, Climate and Energy Decision Making, Engineering and Public Policy, Environmental Engineering

You might also like...

Check out some other posts from Carnegie Mellon University


Can we separate our work and home memories, 'Severance' style?

The hit Apple TV show 'Severance' offers a tempting alternative to balancing work and home life: neural implants that split the two sets of memories entirely. But according to Carnegie Mellon University neuroscientist Alison Barth, some degree of work-life separation is possible even without an implant. "We all experience some compartmentalization between our private and our work lives," Barth explains. "Having a different location where you work and play makes that easier, but the cues for 'life' and 'work' can be as simple as time of day, or what your computer screen looks like." Humans, she adds, can "easily move in and out" of their work and personal worlds, and there are many examples of people whose work and private lives are completely "severed."

As for the feasibility of technology that controls our memories for us, Barth is skeptical: "I don't think that it is possible to program people so that they simply cannot access memories outside of a particular space and time." She also warns of the dangers of such a separation: "The potential for abuse and lack of accountability are horrifying. In Severance, the office workers have little notion of what their work is. It would be hard to hold them accountable in a court of law. Severance is perfectly suited to corporate malfeasance."

Watch Alison Barth's CMU Experts video to learn more about her research into how experience transforms the properties of neurons to encode memory.


Secretary Buttigieg makes one of his final DOT stops at CMU's Safety21

U.S. Secretary of Transportation Pete Buttigieg visited Carnegie Mellon University on one of his final stops as Transportation Secretary. Raj Rajkumar, director of Safety21 and George Westinghouse Professor in the Department of Electrical and Computer Engineering, along with Ph.D. candidates Nishad Sahu and Gregory Su, demonstrated research on the safe navigation of autonomous driving systems through designated work zones, leveraging high-definition mapping, computer perception and vehicle connectivity. "The sophistication of the safety work that's going on, which goes well beyond any commercially available automated or advanced driver assistance system, is really inspiring," Buttigieg said. "We've got to make sure it develops the right way, we've got to be cautious about how it's deployed, but you can tell a lot of thought and, of course, a lot of incredibly sophisticated research is going into that."


When do we blame the tools?

Two recent incidents highlight concerns about AI misuse: a man used ChatGPT to plan an attack in Las Vegas, and AI video tools were exploited to create harmful content. These events sparked debate about regulating AI and holding developers accountable for harm caused by their technology. Carnegie Mellon University professor Vincent Conitzer explained that "our understanding of generative AI is still limited" and that we cannot fully explain its success, predict its outputs, or ensure its safety with current methods.
