Why generative AI 'hallucinates' and makes up stuff

University of Rochester’s Christopher Kanan says current iterations of AI lack human-like self-awareness and reasoning abilities.

Apr 10, 2025

Christopher Kanan

Generative artificial intelligence tools, like OpenAI’s GPT-4, are sometimes full of bunk.


Yes, they excel at tasks involving human language, like translating, writing essays, and acting as a personalized writing tutor. They even ace standardized tests. And they’re rapidly improving.


But they also “hallucinate,” which is the term scientists use to describe when AI tools produce information that sounds plausible but is incorrect. Worse, they do so with such confidence that their errors are sometimes difficult to spot.


Christopher Kanan, an associate professor of computer science with an appointment at the Goergen Institute for Data Science and Artificial Intelligence at the University of Rochester, explains that the reasoning and planning capabilities of AI tools are still limited compared with those of humans, who excel at continual learning.


“They don’t continually learn from experience,” Kanan says of AI tools. “Their knowledge is effectively frozen after training, meaning they lack awareness of recent developments or ongoing changes in the world.”


Current generative AI systems also lack what’s known as metacognition, the ability to reflect on and monitor one’s own thinking.


“That means they typically don’t know what they don’t know, and they rarely ask clarifying questions when faced with uncertainty or ambiguous prompts,” Kanan says. “This absence of self-awareness limits their effectiveness in real-world interactions.”


Kanan is an expert in artificial intelligence, continual learning, and brain-inspired algorithms who welcomes inquiries from journalists and knowledge seekers. He recently shared his thoughts on AI with WAMC Northeast Public Radio and with the University of Rochester News Center.


Reach out to Kanan by clicking on his profile.


You might also like...

Check out some other posts from University of Rochester

Research Matters: 'Unsinkable' Metal Is Here

What if boats, buoys, and other items designed to float could never be sunk, even when they’re cracked, punctured, or tossed by an angry sea? If you think unsinkable metal sounds like science fiction, think again.

A team of researchers at the University of Rochester led by professor Chunlei Guo has devised a way to make ordinary metal tubes stay afloat no matter how much damage they sustain. The team chemically etches tiny pits into the tubes that trap air, keeping the tubes from getting waterlogged or sinking. Even when these superhydrophobic tubes are submerged, dented, or punctured, the trapped air keeps them buoyant and, in a very literal sense, unsinkable.

“We tested them in some really rough environments for weeks at a time and found no degradation to their buoyancy,” says Guo, a professor of physics and optics and a senior scientist at the University of Rochester’s Laboratory for Laser Energetics. “You can poke big holes in them, and we showed that even if you severely damage the tubes with as many holes as you can punch, they still float.”

The work by Guo and his team could usher in a new generation of marine tech, from resilient floating platforms and wave-powered generators to ships and offshore structures that can withstand damage that would sink traditional steel. Their research highlights the University of Rochester’s knack for translating physics into practical wonder.

For reporters covering materials science, sustainable engineering, ocean tech, or innovative design, Guo is the ideal expert to explain why “unsinkable metal” might be closer to everyday use than you think. To connect with Guo, contact Luke Auburn, director of communications for the Hajim School of Engineering and Applied Sciences, at luke.auburn@rochester.edu.

How Higher Ed Should Tackle AI

Higher learning in the age of artificial intelligence isn’t about policing AI, but rather reinventing education around the new technology, says Chris Kanan, an associate professor of computer science at the University of Rochester and an expert in artificial intelligence and deep learning.

“The cost of misusing AI is not students cheating, it’s knowledge loss,” says Kanan. “My core worry is that students can deprive themselves of knowledge while still producing ‘acceptable work.’”

Kanan, who writes about and studies artificial intelligence, is helping to shape one of the most urgent debates in academia today: how universities should respond to the disruptive force of AI. In his latest essay on the topic, Kanan laments that many universities consider AI “a writing problem,” noting that student writing is where faculty first felt the force of artificial intelligence. But, he argues, treating student use of AI as something to be detected or banned misunderstands the technological shift at hand.

“Treating AI as ‘writing-tech’ is like treating electricity as ‘better candles,’” he writes. “The deeper issue is not prose quality or plagiarism detection,” he continues. “The deeper issue is that AI has become a general-purpose interface to knowledge work: coding, data analysis, tutoring, research synthesis, design, simulation, persuasion, workflow automation, and (increasingly) agent-like delegation.”

That, he says, forces a change in pedagogy.

What Higher Ed Needs to Do

His essay points to universities that are “doing AI right,” including hiring distinguished artificial intelligence experts in key administrative leadership roles and making AI competency a graduation requirement. Kanan outlines structural changes he believes need to take place in institutions of higher learning:

• Rework assessment so it measures understanding in an AI-rich environment.
• Teach verification habits.
• Build explicit norms for attribution, privacy, and appropriate use.
• Create top-down leadership so AI strategy is coherent and not fractured among departments.
• Deliver AI literacy across the entire curriculum.
• Offer deep AI degrees for students who will build the systems everyone else will use.

For journalists covering AI’s impact on education, technology, workforce development, or institutional change, Kanan offers a research-based, forward-looking perspective grounded in both technical expertise and a deep commitment to the mission of learning. Connect with him by clicking on his profile.

Venezuela: Why Regime Change Is Harder Than Removing A Leader

With global attention on Venezuela following the U.S. removal of Nicolás Maduro, one of the central questions is whether taking out a leader actually changes the political system that put him in power. Two University of Rochester political scientists, Hein Goemans and Gretchen Helmke, study different sides of this issue and can shed light on why authoritarian regimes often survive even when leaders fall and what the U.S. intervention means for Venezuela and the world order.

Goemans specializes in how wars begin and end, regime survival, and why so-called “decapitation strategies” (removing a leader without dismantling the broader power structure) so often fail to produce stable outcomes. His research draws on cases ranging from Iraq and Afghanistan to authoritarian regimes in Latin America. In a recent interview with WXXI Public Media, Goemans warned that removing Maduro does not resolve the underlying system of military and economic control that sustained his rule. Without changes to those institutions, he said, power is likely to remain concentrated among the same elite networks.

“The problem isn’t just the leader,” Goemans explained. “It’s the structure that rewards loyalty and punishes defection. If that remains intact, the politics don’t fundamentally change.”

Helmke, a leading scholar of democracy and authoritarianism in Latin America, emphasizes that legitimacy, not just force, determines whether democratic transitions take hold. Her research helps explain why democratic breakthroughs so often stall after moments of dramatic change, and why outside interventions can unintentionally weaken domestic opposition movements by shifting power toward regime insiders.

“When the institutions and elites remain in place, uncertainty, not democratic transition, often becomes the dominant political reality,” she said.

For journalists covering the fast-moving situation, Goemans and Helmke are available to discuss why removing leaders rarely brings the political transformation policymakers expect and what history suggests comes next. They can address:

• Why regime-change operations so often backfire, even when dictators are deeply unpopular
• What sidelining democratic opposition means for legitimacy
• Whether U.S. claims that Maduro is illegitimate hold up under international and U.S. law
• How prosecuting a foreign leader in U.S. courts could reshape norms of sovereignty
• The risks the U.S. intervention poses to the rules-based international order and NATO
• How interventions affect international norms, including sovereignty and the rule of law, and why short-term tactical successes can create long-term strategic risks
• Why treating global politics as a series of “one-off” power plays misunderstands how states actually enforce norms over time
• How competing factions inside the U.S. administration may be driving incoherent foreign policy

Goemans also brings rare insight into the internal dynamics of U.S. policymaking, having taught and observed Stephen Miller, one of President Donald Trump’s closest aides, who is helping shape the administration’s worldview. (Goemans taught Miller at Duke University in 2003.)

Click on the profiles for Goemans and Helmke to connect with them.
