
Richard J. Radke

Professor, Electrical, Computer, and Systems Engineering | Rensselaer Polytechnic Institute

Troy, NY, UNITED STATES

Studies computer vision problems related to human-scale, occupant-aware environments

Areas of Expertise (8)

Social Signal Processing

Video Processing

Human-Scale, Occupant-Aware Environments

Radiotherapy

Smart Lighting

Computer Vision

Video Analytics

Visual Effects

Biography

Richard J. Radke joined the Electrical, Computer, and Systems Engineering Department at Rensselaer in 2001, where he is now a full professor.
Radke’s current research interests involve computer vision problems related to human-scale, occupant-aware environments, such as person tracking and re-identification with cameras and range sensors.
Radke is affiliated with the NSF Engineering Research Center for Lighting Enabled Services and Applications (LESA), the DHS Centers of Excellence on Explosives Detection, Mitigation and Response (ALERT) and Soft Target Engineering to Neutralize the Threat Reality (SENTRY), and Rensselaer’s Experimental Media and Performing Arts Center (EMPAC) and Cognitive and Immersive Systems Laboratory (CISL).

He has bachelor’s and master’s degrees in computational and applied mathematics from Rice University, and master’s and doctoral degrees in electrical engineering from Princeton University.
He received an NSF CAREER award in March 2003 and was a member of the 2007 DARPA Computer Science Study Group. Dr. Radke is a senior member of the IEEE and was a senior area editor of IEEE Transactions on Image Processing. His textbook Computer Vision for Visual Effects was published by Cambridge University Press in 2012. His YouTube channel contains many annotated lectures on signal processing, engineering probability, image processing, and computer vision.

Media

Videos:

The Radke Lab @ RPI

PB 0: Introduction

Computer Vision for Visual Effects 2021

Rensselaer Augmented and Virtual Environment (RAVE) Lab


Education (3)

Princeton University: Ph.D., Electrical Engineering

Rice University: M.A., Computational and Applied Mathematics

Rice University: B.A., Computational and Applied Mathematics

Media Appearances (2)

New Lab Helps Manufacturers Make the Most of Augmented Reality Technology

Assembly Magazine (online)

2019-07-19

Augmented reality (AR) and virtual reality (VR) are cutting-edge tools that are becoming increasingly important to engineers for applications ranging from product design to assembly line layout.


RPI Kicks Off First Full Semester For Virtual Reality Lab

WAMC (online)

2019-02-06

Rensselaer Polytechnic Institute is celebrating its first semester with a new virtual and augmented reality lab, where students can use modern technology to explore new environments and learning methods.


Articles (3)

A Systematic Evaluation and Benchmark for Person Re-Identification: Features, Metrics, and Datasets

IEEE Transactions on Pattern Analysis and Machine Intelligence

Mengran Gou, Ziyan Wu, Angels Rates-Borras, Octavia Camps, Richard J. Radke

2019 Person re-identification (re-id) is a critical problem in video analytics applications such as security and surveillance. The public release of several datasets and code for vision algorithms has facilitated rapid progress in this area over the last few years. However, directly comparing re-id algorithms reported in the literature has become difficult since a wide variety of features, experimental protocols, and evaluation metrics are employed. To address this need, we present an extensive review and performance evaluation of single- and multi-shot re-id algorithms. The experimental protocol incorporates the most recent advances in both feature extraction and metric learning. To ensure a fair comparison, all of the approaches were implemented using a unified code library that includes 11 feature extraction algorithms and 22 metric learning and ranking techniques. All approaches were evaluated using a new large-scale dataset that closely mimics a real-world problem setting, in addition to 16 other publicly available datasets: VIPeR, GRID, CAVIAR, DukeMTMC4ReID, 3DPeS, PRID, V47, WARD, SAIVT-SoftBio, CUHK01, CUHK02, CUHK03, RAiD, iLIDS-VID, HDA+, and Market1501. The evaluation codebase and results will be made publicly available for community use.
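For context, benchmarks like this typically report Cumulative Matching Characteristic (CMC) curves: the fraction of queries whose correct identity appears within the top-k ranked gallery matches. Below is a minimal sketch of that scoring step in NumPy; the function and the toy data are illustrative assumptions, not the paper's released codebase.

    # Illustrative sketch of CMC scoring for single-shot re-id.
    # Feature extraction and metric learning are abstracted into a
    # precomputed query-by-gallery distance matrix.
    import numpy as np

    def cmc(dist, query_ids, gallery_ids, max_rank=20):
        """Fraction of queries whose correct identity appears in the top k."""
        hits = np.zeros(max_rank)
        for q in range(dist.shape[0]):
            order = np.argsort(dist[q])                  # nearest gallery items first
            matches = gallery_ids[order] == query_ids[q]
            if matches.any():
                first_hit = np.argmax(matches)           # rank of first correct match
                if first_hit < max_rank:
                    hits[first_hit:] += 1                # counts for every k >= first hit
        return hits / dist.shape[0]

    # Toy usage with random features (fabricated data, illustration only):
    rng = np.random.default_rng(0)
    query, gallery = rng.normal(size=(5, 16)), rng.normal(size=(50, 16))
    dist = np.linalg.norm(query[:, None] - gallery[None, :], axis=2)
    print(cmc(dist, np.arange(5), rng.integers(0, 5, size=50))[:5])

Rank-1 accuracy, the figure most often quoted in re-id papers, is simply the first entry of this curve.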


Re-Identification with Consistent Attentive Siamese Networks

arXiv:1811.07487

Meng Zheng, Srikrishna Karanam, Ziyan Wu, Richard J. Radke

2019 We propose a new deep architecture for person re-identification (re-id). While re-id has seen much recent progress, spatial localization and view-invariant representation learning for robust cross-view matching remain key, unsolved problems. We address these questions by means of a new attention-driven Siamese learning architecture, called the Consistent Attentive Siamese Network. Our key innovations compared to existing, competing methods include (a) a flexible framework design that produces attention with only identity labels as supervision, (b) explicit mechanisms to enforce attention consistency among images of the same person, and (c) a new Siamese framework that integrates attention and attention consistency, producing principled supervisory signals as well as the first mechanism that can explain the reasoning behind the Siamese framework's predictions. We conduct extensive evaluations on the CUHK03-NP, DukeMTMC-ReID, and Market-1501 datasets and report competitive performance.
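The paper's architecture is not reproduced here, but the loss structure the abstract describes can be sketched: identity supervision on both Siamese branches plus a term pushing the attention maps for two images of the same person to agree. The PyTorch-style sketch below is a hypothetical illustration; the (logits, attention) interface and all names are assumptions, not the authors' code.

    # Hypothetical sketch: identity loss on both Siamese branches plus an
    # attention-consistency penalty for same-identity image pairs.
    import torch.nn as nn
    import torch.nn.functional as F

    class AttentionConsistentLoss(nn.Module):
        def __init__(self, consistency_weight=1.0):
            super().__init__()
            self.consistency_weight = consistency_weight
            self.id_loss = nn.CrossEntropyLoss()

        def forward(self, logits_a, attn_a, logits_b, attn_b, labels):
            # Both branches are supervised with identity labels only.
            loss_id = self.id_loss(logits_a, labels) + self.id_loss(logits_b, labels)
            # Images of the same person should attend to similar regions.
            loss_consistency = F.mse_loss(attn_a, attn_b)
            return loss_id + self.consistency_weight * loss_consistency

Per the abstract, the actual mechanism is more structured than a single penalty term (attention is produced with only identity labels as supervision, and attention and consistency are integrated into the Siamese framework itself); the sketch conveys only the general idea of an attention-consistency supervisory signal.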


A Multimodal-Sensor-Enabled Room for Unobtrusive Group Meeting Analysis

Proceedings of the 20th ACM International Conference on Multimodal Interaction

Indrani Bhattacharya, Michael Foley, Ni Zhang, Tongtao Zhang, Christine Ku, Cameron Mine, Heng Ji, Christoph Riedl, Brooke Foucault Welles, Richard J. Radke

2018 Group meetings can suffer from serious problems that undermine performance, including bias, "groupthink", fear of speaking, and unfocused discussion. To better understand these issues, propose interventions, and thus improve team performance, we need to study human dynamics in group meetings. However, this process currently depends heavily on manual coding and video cameras. Manual coding is tedious, inaccurate, and subjective, while active video cameras can affect the natural behavior of meeting participants. Here, we present a smart meeting room that combines microphones and unobtrusive ceiling-mounted Time-of-Flight (ToF) sensors to understand group dynamics in team meetings. We automatically process the multimodal sensor outputs with signal, image, and natural language processing algorithms to estimate participant head pose, visual focus of attention (VFOA), non-verbal speech patterns, and discussion content. We derive metrics from these automatic estimates and correlate them with user-reported rankings of emergent group leaders and major contributors to produce accurate predictors. We validate our algorithms and report results on a new dataset of lunar survival task meetings, involving 36 individuals across 10 groups, collected in the multimodal-sensor-enabled smart room.
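As a rough illustration of the validation step described above (correlating automatically derived metrics with user-reported rankings), the sketch below computes a Spearman rank correlation between a speaking-time metric and reported leadership ranks. The metric choice and every value are fabricated for illustration and do not come from the paper's dataset.

    # Hypothetical illustration: rank-correlating one derived metric with
    # reported leadership rankings. All numbers are made up.
    from scipy.stats import spearmanr

    speaking_time = [412.0, 305.5, 120.0, 88.2]  # seconds per participant (fabricated)
    leader_rank = [1, 2, 4, 3]                   # 1 = ranked most leader-like (fabricated)

    rho, p_value = spearmanr(speaking_time, leader_rank)
    print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
    # A strongly negative rho would mean longer speaking time tracks a
    # better (numerically lower) leadership rank.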
