Christopher Kanan

Associate Professor of Computer Science

  • Rochester, NY, United States

Christopher Kanan's research focuses on deep learning and Artificial Intelligence (AI)


Spotlight


How Higher Ed Should Tackle AI

Higher learning in the age of artificial intelligence isn’t about policing AI, but rather reinventing education around the new technology, says Chris Kanan, an associate professor of computer science at the University of Rochester and an expert in artificial intelligence and deep learning. “The cost of misusing AI is not students cheating, it’s knowledge loss,” says Kanan. “My core worry is that students can deprive themselves of knowledge while still producing ‘acceptable work.’”

Kanan, who writes about and studies artificial intelligence, is helping to shape one of the most urgent debates in academia today: how universities should respond to the disruptive force of AI. In his latest essay on the topic, Kanan laments that many universities consider AI “a writing problem,” noting that student writing is where faculty first felt the force of artificial intelligence. But, he argues, treating student use of AI as something to be detected or banned misunderstands the technological shift at hand.

“Treating AI as ‘writing-tech’ is like treating electricity as ‘better candles,’” he writes. “The deeper issue is not prose quality or plagiarism detection. The deeper issue is that AI has become a general-purpose interface to knowledge work: coding, data analysis, tutoring, research synthesis, design, simulation, persuasion, workflow automation, and (increasingly) agent-like delegation.” That, he says, forces a change in pedagogy.

What Higher Ed Needs to Do

His essay points to universities that are “doing AI right,” including hiring distinguished artificial intelligence experts in key administrative leadership roles and making AI competency a graduation requirement. Kanan outlines structural changes he believes need to take place in institutions of higher learning:

  • Rework assessment so it measures understanding in an AI-rich environment.
  • Teach verification habits.
  • Build explicit norms for attribution, privacy, and appropriate use.
  • Create top-down leadership so AI strategy is coherent and not fractured among departments.
  • Deliver AI literacy across the entire curriculum.
  • Offer deep AI degrees for students who will build the systems everyone else will use.

For journalists covering AI’s impact on education, technology, workforce development, or institutional change, Kanan offers a research-based, forward-looking perspective grounded in both technical expertise and a deep commitment to the mission of learning. Connect with him by clicking on his profile.



Why generative AI 'hallucinates' and makes up stuff

Generative artificial intelligence tools, like OpenAI’s GPT-4, are sometimes full of bunk. Yes, they excel at tasks involving human language, like translating, writing essays, and acting as a personalized writing tutor. They even ace standardized tests. And they’re rapidly improving. But they also “hallucinate,” which is the term scientists use to describe when AI tools produce information that sounds plausible but is incorrect. Worse, they do so with such confidence that their errors are sometimes difficult to spot.

Christopher Kanan, an associate professor of computer science with an appointment at the Goergen Institute for Data Science and Artificial Intelligence at the University of Rochester, explains that the reasoning and planning capabilities of AI tools are still limited compared with those of humans, who excel at continual learning. “They don’t continually learn from experience,” Kanan says of AI tools. “Their knowledge is effectively frozen after training, meaning they lack awareness of recent developments or ongoing changes in the world.”

Current generative AI systems also lack what’s known as metacognition. “That means they typically don’t know what they don’t know, and they rarely ask clarifying questions when faced with uncertainty or ambiguous prompts,” Kanan says. “This absence of self-awareness limits their effectiveness in real-world interactions.”

Kanan is an expert in artificial intelligence, continual learning, and brain-inspired algorithms who welcomes inquiries from journalists and knowledge seekers. He recently shared his thoughts on AI with WAMC Northeast Public Radio and with the University of Rochester News Center. Reach out to Kanan by clicking on his profile.


Areas of Expertise

AI and Machine Learning
Applied Machine Learning (e.g., Medical Computer Vision)
Language-guided Scene Understanding
Artificial Intelligence
Deep Learning
Medical Computer Vision
AI


Biography

Christopher Kanan is a tenured associate professor of computer science. His main research focus is deep learning, with an emphasis on lifelong (continual) machine learning, bias-robust artificial intelligence, medical computer vision, and language-guided scene understanding. He has worked on online continual learning, visual question answering, computational pathology, self-supervised learning, semantic segmentation, object recognition, object detection, active vision, object tracking, and more. Beyond machine learning, he has a background in eye tracking, primate vision, and theoretical neuroscience.

He is on the Scientific Advisory Board (SAB) of Paige.AI, Inc., whose goal is to revolutionize pathology and oncology by creating clinical-grade AI systems that help pathologists and predict treatment-relevant computational biomarkers.

Prof. Kanan was previously affiliated with NASA Jet Propulsion Laboratory (JPL).

Education

Oklahoma State University

BS

Philosophy and Computer Science

2004

University of Southern California

MS

Computer Science

2006

University of California, San Diego

PhD

Computer Science

2013

Selected Media Appearances

Can we teach AI to learn like humans?

Academic Minute, WAMC  radio

2025-02-19

On University of Rochester Week: Human intelligence and artificial intelligence learn differently, but can that change?

Chris Kanan, associate professor of computer science at the Hajim School of Engineering and Applied Sciences, looks at the possibilities.

Christopher Kanan’s main research focus is deep learning, with an emphasis on lifelong (continual) machine learning, bias-robust artificial intelligence, medical computer vision, and language-guided scene understanding.

Can we teach AI to learn like humans?
Artificial intelligence algorithms don’t learn like people. Instead of continuously updating their knowledge base with new information over time as humans do, algorithms learn only during their training phase. After that, their knowledge remains frozen; they perform the task they were trained for without being able to keep learning as they do it. Learning new information often requires training the system again from scratch; otherwise, systems can suffer from catastrophic forgetting, where they incorporate new knowledge at the cost of forgetting nearly everything they’ve already learned. This situation arises because of the way that today’s most powerful AI algorithms, called neural networks, learn new things.

To help counter this, I have helped establish a new field of AI research known as continual learning. The goal is to keep AI learning new things from continuous streams of data, and to do so without forgetting everything that came before. This is a fundamental problem we need to solve if we are ever to build artificial general intelligence.
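The forgetting Kanan describes is easy to reproduce in a toy experiment. Below is a minimal hypothetical sketch in PyTorch: the two-task setup, model, and training loop are illustrative assumptions for this profile, not code from Kanan's research. Task B is Task A with its input features permuted, so one network could in principle solve both, yet naive sequential training on B overwrites what was learned on A.

```python
# Toy illustration of catastrophic forgetting (assumed setup, not Kanan's code).
import torch
import torch.nn as nn

torch.manual_seed(0)
dim = 20
w_true = torch.randn(dim)       # hidden linear rule that defines the labels
perm = torch.randperm(dim)      # Task B sees the same data with features shuffled

def make_data(n, permute=False):
    x = torch.randn(n, dim)
    y = (x @ w_true > 0).long()                 # same labeling rule for both tasks
    return (x[:, perm] if permute else x), y

def accuracy(model, data):
    x, y = data
    with torch.no_grad():
        return (model(x).argmax(1) == y).float().mean().item()

model = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

task_a, task_b = make_data(2000), make_data(2000, permute=True)
for name, (x, y) in [("A", task_a), ("B", task_b)]:   # sequential training: A, then B
    for _ in range(300):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    print(f"after training on {name}: "
          f"acc on A = {accuracy(model, task_a):.2f}, "
          f"acc on B = {accuracy(model, task_b):.2f}")
```

After the first phase, accuracy on Task A is high; after training only on Task B, accuracy on Task A falls to roughly chance. Continual-learning methods, such as replay buffers, regularization of important weights, and task-specific architecture components, all aim to keep the second phase from erasing the first.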


Leveraging AI in the workplace and education brings both pros and cons

Rochester Business Journal  print

2024-05-01

“It doesn’t know what’s false, what’s right, it just knows given what you’ve asked it to do or what you’ve written, what’s the next word it should say,” said Chris Kanan, associate professor of computer science at the University of Rochester’s Center for Visual Science, Brain & Cognitive Sciences.
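The behavior Kanan describes can be sketched as a simple next-token loop: the model repeatedly picks a likely continuation, with no notion of truth anywhere in the process. Below is a minimal hypothetical sketch assuming the Hugging Face transformers library and the small, publicly available gpt2 checkpoint; it is illustrative and not the specific system discussed in the article.

```python
# Minimal sketch of next-word prediction with a small causal language model
# (assumed example using Hugging Face transformers, not from the article).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

ids = tok("The capital of France is", return_tensors="pt").input_ids
for _ in range(5):
    with torch.no_grad():
        logits = model(ids).logits           # a score for every vocabulary token
    next_id = logits[0, -1].argmax()         # greedily take the most likely next token
    ids = torch.cat([ids, next_id.view(1, 1)], dim=1)
print(tok.decode(ids[0]))                    # the text extended one token at a time
```

Nothing in this loop consults a source of facts; the continuation is simply whatever the training distribution makes most likely, which is why fluent, plausible-sounding errors can emerge.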


AI's Rising Comes with No Easy Answers

Rochester Beacon  online

2023-05-23

The May 23 event featured Pengcheng Shi, associate dean at RIT’s Golisano College of Computing and Information Sciences; Chris Kanan, associate professor of computer science at the University of Rochester; and Tim Madigan, professor and chair of philosophy at St. John Fisher University.
“(AI) has been everywhere over the past few years (but) has been less obvious. Unlocking your phone with your face, that’s AI,” Kanan said.



Selected Articles

Independent real-world application of a clinical-grade automated prostate cancer detection system

Journal of Pathology

Christopher Kanan (and 20 others)

2021-04-27

Artificial intelligence (AI)-based systems applied to histopathology whole-slide images have the potential to improve patient care through mitigation of challenges posed by diagnostic variability, histopathology caseload, and shortage of pathologists. We sought to define the performance of an AI-based automated prostate cancer detection system, Paige Prostate, when applied to independent real-world data.


Quality control of radiomic features using 3D printed CT phantoms

Journal of Medical Imaging

Christopher Kanan, Usman Mahmood, Aditya Apte, David D. B. Bates, Giuseppe Corrias, Lorenzo Mannelli, Jung Hun Oh, Yusuf Emre Erdi, John Nguyen, Joseph O. Deasy, and Amita Shukla-Dave

2021-06-29

The lack of standardization in quantitative radiomic measures of tumors seen on computed tomography (CT) scans is generally recognized as an unresolved issue. To develop reliable clinical applications, radiomics must be robust across different CT scan modes, protocols, software, and systems. We demonstrate how custom-designed phantoms, imprinted with human-derived patterns, can provide a straightforward approach to validating longitudinally stable radiomic signature values in a clinical setting.


Detecting Spurious Correlations with Sanity Tests for Artificial Intelligence Guided Radiology Systems

Frontiers in Digital Health

Christopher Kanan, Usman Mahmood, Robik Shrestha, David D. B. Bates, Lorenzo Mannelli, Giuseppe Corrias, and Yusuf Emre Erdi

2021-08-03

Artificial intelligence (AI) has been successful at solving numerous problems in machine perception. In radiology, AI systems are rapidly evolving and show progress in guiding treatment decisions, diagnosing, localizing disease on medical images, and improving radiologists' efficiency. A critical component to deploying AI in radiology is to gain confidence in a developed system's efficacy and safety. The current gold standard approach is to conduct an analytical validation of performance on a generalization dataset from one or more institutions, followed by a clinical validation study of the system's efficacy during deployment. Clinical validation studies are time-consuming, and best practices dictate limited re-use of analytical validation data, so it is ideal to know ahead of time if a system is likely to fail analytical or clinical validation. In this paper, we describe a series of sanity tests to identify when a system performs well on development data for the wrong reasons. We illustrate the sanity tests' value by designing a deep learning system to classify pancreatic cancer seen in computed tomography scans.

