M. Ehsan Hoque

Asaro Biggar Family Fellow in Data Science, Assistant Professor of Computer Science, and faculty member in the Goergen Institute for Data Science and Artificial Intelligence

  • Rochester, NY, United States

M. Ehsan Hoque is designing and implementing new algorithms to sense subtle human nonverbal behavior

Areas of Expertise

Data Science
Human Nonverbal Behavior
Interactive Machine Learning
Human-Computer Interaction
Computer Vision
Experimental Psychology
Social Skills Training

Biography

M. Ehsan Hoque directs the Rochester Human-Computer Interaction Lab.

His research focuses on designing and implementing new algorithms to sense subtle human nonverbal behavior; enabling new behavior sensing and modeling for human-computer interaction; inventing new applications of emotion technology in high-impact social domains such as social skills training and public speaking; and assisting individuals who experience difficulties with social interactions.

Education

Penn State University

B.S.

Computer Engineering

2004

The University of Memphis

M.Eng.

Electrical and Computer Engineering

2007

Massachusetts Institute of Technology

Ph.D.

Media Arts and Sciences (Media Lab)

2013

Affiliations

  • ACM Future of Computing Academy (ACM FCA)
  • Association for the Advancement of Artificial Intelligence (AAAI)
  • Association for Computing Machinery (ACM)
  • Institute of Electrical and Electronics Engineers (IEEE)
  • American Association for the Advancement of Science (AAAS)

Selected Media Appearances

Smart Speakers Like Alexa and Google Assistant Could Tell if You Have Parkinson's

Newsweek (online)

2025-07-25

A new AI-powered, speech-based screening tool could help people assess whether they are showing signs of Parkinson's disease at home.

Developed as part of a study by University of Rochester computer scientists, the web-based test asks users to recite two pangrams, short sentences using every letter of the alphabet.

"There are huge swaths of the U.S. and across the globe where access to specialized neurological care is limited," said Rochester computer science professor Ehsan Hoque in a statement. "With users' consent, widely used speech-based interfaces like Amazon Alexa or Google Home could potentially help people identify if they need to seek further care."

Connections: Can AI help us become more fair as a society?

WXXI (radio)

2020-01-09

Can artificial intelligence (AI) help us become more fair as a society?

How can we make sure the technology we create does not simply serve the most powerful in society? Our guests explore the question:

Matt Kelly, independent journalist

Ehsan Hoque, Asaro-Biggar Family Assistant Professor of Computer Science at the University of Rochester

Jonathan Herington, lecturer in the Department of Philosophy, and assistant director of graduate education in the College of Arts, Sciences, and Engineering at the University of Rochester

Federal award establishes Parkinson’s research center at URMC

WXXI (online)

2018-10-04

The University of Rochester Medical Center has received a multimillion-dollar federal grant to study Parkinson’s disease, the university announced Wednesday. The $9.2 million award will fund the creation of a new research center, officials said.

Because of the volume of data expected to be generated from these experiments, Dorsey said, the center will involve URMC faculty disciplines beyond neurology, including biostatistics and computer science. The new research center “represents the novel convergence of medicine and data science,” said Ehsan Hoque, an assistant professor with the university’s institute for data science.

Selected Event Appearances

How Emotional Trajectories Affect Audience Perception in Public Speaking

Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI 2018)

Say CHEESE: Common Human Emotional Expression Set Encoder Analysis of Smiles in Honest and Deceptive Communication

IEEE International Conference on Automated Face and Gesture Recognition (FG 2018)

The What, When, and Why of Facial Expressions: An Objective Analysis of Conversational Skills in Speed-Dating Videos

IEEE International Conference on Automated Face and Gesture Recognition (FG 2018)

Selected Articles

Say CHEESE: Common Human Emotional Expression Set Encoder Analysis of Smiles in Honest and Deceptive Communication

IEEE International Conference on Automated Face and Gesture Recognition (FG)

T. K. Sen, K. Hasan, M. Tran, Y. Yang, M. E. Hoque

2018

In this paper, we introduce the Common Human Emotional Expression Set Encoder (CHEESE) framework for objectively determining which, if any, subsets of the facial action units associated with smiling are well represented by a small finite set of clusters according to an information-theoretic metric. Smile-related AUs (6, 7, 10, 12, 14) in over 1.3M frames of facial expressions from 151 pairs of individuals playing a communication game involving deception were analyzed with CHEESE. The combination of AU6 (cheek raiser) and AU12 (lip corner puller) is shown to cluster well into five different types of expression. Liars showed high-intensity AU6 and AU12 more often than honest speakers. Additionally, interrogators were found to express a higher frequency of low-intensity AU6 with high-intensity AU12 (i.e., polite smiles) when they were being lied to, suggesting that deception analysis should consider both the message sender's and the receiver's facial expressions.
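
The abstract does not spell out CHEESE's clustering procedure, but the general recipe it describes is to fit candidate cluster models to per-frame AU intensity vectors and pick the cluster count favored by an information criterion. Below is a minimal sketch of that idea, assuming OpenFace-style AU6/AU12 intensities on a 0-5 scale, random placeholder data, and BIC over Gaussian mixtures as a stand-in for the paper's own metric:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Placeholder data: per-frame intensities for AU6 and AU12 on a 0-5 scale,
# standing in for real per-frame facial action unit outputs.
rng = np.random.default_rng(0)
au_frames = rng.uniform(0.0, 5.0, size=(10_000, 2))  # columns: AU6, AU12

# Fit mixtures with 1..8 components and keep the count minimizing BIC,
# an information criterion used here in place of the paper's metric.
best_k, best_bic = None, float("inf")
for k in range(1, 9):
    gmm = GaussianMixture(n_components=k, random_state=0).fit(au_frames)
    bic = gmm.bic(au_frames)
    if bic < best_bic:
        best_k, best_bic = k, bic

print(f"selected {best_k} expression clusters (BIC = {best_bic:.0f})")
```

On real AU data the selected component count would identify the small set of recurring smile types; with uniform placeholder data the result is not meaningful.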

The What, When, and Why of Facial Expressions: An Objective Analysis of Conversational Skills in Speed-Dating Videos

IEEE International Conference on Automated Face and Gesture Recognition (FG)

M. R. Ali, T. K. Sen, D. Crasta, V-D. Nguyen, R. Rogge, M. E. Hoque

2018

In this paper, we demonstrate the importance of combinations of facial expressions and their timing in explaining a person's conversational skills in a series of brief non-romantic conversations. Video recordings of 365 four-minute conversations, captured before and after a randomized intervention, were analyzed, with facial action units (AUs) examined over different time segments. Male subjects (N=47) were evaluated on their conversational skills using the Conversational Skills Rating Scale (CSRS). A linear regression model was used to compare the importance of AU features from different time segments in predicting CSRS ratings. In the first minute of conversation, CSRS ratings were best predicted by activity levels in action units associated with speaking (lips part, AU25). In the last minute of conversation, affective indicators associated with expressions of laughter (jaw drop, AU26) and warmth (happy faces) emerged as the most important. These findings suggest that feedback on nonverbal skills must dynamically account for the shifting goals of conversation.
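
As a rough illustration of the modeling step, the sketch below fits an ordinary least-squares model to per-segment AU features and compares the fitted coefficients. The feature names, value ranges, and data are invented for the example and are not taken from the paper:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical per-subject features: mean AU activation in the first and
# last minute of each conversation (placeholder random data, N=47).
rng = np.random.default_rng(0)
features = ["AU25_first_min", "AU26_first_min", "happy_first_min",
            "AU25_last_min", "AU26_last_min", "happy_last_min"]
X = rng.uniform(0.0, 5.0, size=(47, len(features)))
y = rng.uniform(1.0, 7.0, size=47)  # placeholder CSRS ratings

model = LinearRegression().fit(X, y)

# Larger coefficient magnitude suggests that segment's feature carries
# more weight in predicting the CSRS rating.
for name, coef in sorted(zip(features, model.coef_), key=lambda p: -abs(p[1])):
    print(f"{name:>16}: {coef:+.3f}")
```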

CoCo: Collaboration Coach for Understanding Team Dynamics during Video Conferencing

PACM on Interactive, Mobile, Wearable, and Ubiquitous Computing (IMWUT)

S. Samrose, R. Zhao, J. White, V. Li, L. Nova, Y. Lu, M. R. Ali, M. E. Hoque

2018

We present and discuss a fully automated collaboration system, CoCo, that allows multiple participants to video chat and receive feedback through custom video conferencing software. After a conferencing session, a virtual feedback assistant provides insights on the conversation to participants. CoCo automatically captures audio and visual data during conversations and analyzes the extracted streams for affective features, including smiles, engagement, and attention, as well as speech overlap and turn-taking. We validated CoCo with 39 participants split into 10 groups. Participants played two back-to-back team-building games, Lost at Sea and Survival on the Moon, with the system providing feedback between the two. With feedback, we found a statistically significant change in balanced participation (that is, everyone spoke for an equal amount of time). There was also a statistically significant improvement in participants' self-evaluations of conversational skills awareness, including how often they let others speak, as well as of teammates' conversational skills.
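
The abstract does not define how balanced participation is scored. One simple way to express such a score, sketched below under assumptions of our own (diarized (speaker, start, end) turns as input and normalized entropy of speaking-time shares as the metric, neither of which is specified in the abstract), is:

```python
import math
from collections import defaultdict

# Hypothetical diarized turns: (speaker, start_sec, end_sec).
turns = [
    ("p1", 0.0, 12.5), ("p2", 12.5, 15.0),
    ("p3", 15.0, 24.0), ("p1", 24.0, 30.0),
]

# Total speaking time per participant.
totals = defaultdict(float)
for speaker, start, end in turns:
    totals[speaker] += end - start

total_time = sum(totals.values())
shares = [t / total_time for t in totals.values()]

# Normalized entropy of speaking-time shares: 1.0 means perfectly
# balanced participation; values near 0.0 mean one person dominated.
balance = -sum(p * math.log(p) for p in shares if p > 0) / math.log(len(shares))
print(f"participation balance: {balance:.2f}")
```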
