Why generative AI 'hallucinates' and makes up stuff
Generative artificial intelligence tools, like OpenAI’s GPT-4, are sometimes full of bunk. Yes, they excel at tasks involving human language, like translating, writing essays, and acting as a personalized writing tutor. They even ace standardized tests. And they’re rapidly improving. But they also “hallucinate,” the term scientists use to describe when AI tools produce information that sounds plausible but is incorrect. Worse, they do so with such confidence that their errors are sometimes difficult to spot.

Christopher Kanan, an associate professor of computer science with an appointment at the Goergen Institute for Data Science and Artificial Intelligence at the University of Rochester, explains that the reasoning and planning capabilities of AI tools are still limited compared with those of humans, who excel at continual learning.

“They don’t continually learn from experience,” Kanan says of AI tools. “Their knowledge is effectively frozen after training, meaning they lack awareness of recent developments or ongoing changes in the world.”

Current generative AI systems also lack what’s known as metacognition. “That means they typically don’t know what they don’t know, and they rarely ask clarifying questions when faced with uncertainty or ambiguous prompts,” Kanan says. “This absence of self-awareness limits their effectiveness in real-world interactions.”

Kanan is an expert in artificial intelligence, continual learning, and brain-inspired algorithms who welcomes inquiries from journalists and knowledge seekers. He recently shared his thoughts on AI with WAMC Northeast Public Radio and with the University of Rochester News Center. Reach out to Kanan by clicking on his profile.
