Charlie is a world expert on robots for intelligent physical assistance and is passionate about enabling robots to help people. He has over a decade of experience directing applied robotics research at Georgia Tech. He is also an award-winning teacher.
He has received a 3M Non-tenured Faculty Award, the Georgia Tech Research Corporation Robotics Award, a Google Faculty Research Award, and an NSF CAREER award. He was a Hesburgh Award Teaching Fellow in 2017 and received the Class of 1940 Course Survey Teaching Effectiveness Award. He has over 80 peer-reviewed publications. His research has been covered extensively by the popular media, including the New York Times, Technology Review, ABC, and CNN.
Massachusetts Institute of Technology: Ph.D., Electrical Engineering and Computer Science 2005
Massachusetts Institute of Technology: M.Eng., Electrical Engineering and Computer Science 1998
Massachusetts Institute of Technology: B.S., Computer Science and Engineering 1997
Selected Media Appearances (3)
Seeing through a Robot’s Eyes Helps Those with Profound Motor Impairments
Horizons - Georgia Tech Research online
Grice and Professor Charlie Kemp from the Wallace H. Coulter Department of Biomedical Engineering at Georgia Tech and Emory University used a PR2 mobile manipulator manufactured by Willow Garage for the two studies. The wheeled robot has 20 degrees of freedom, with two arms and a “head,” giving it the ability to manipulate objects such as water bottles, washcloths, hairbrushes and even an electric shaver.
“Our goal is to give people with limited use of their own bodies access to robotic bodies so they can interact with the world in new ways,” said Kemp.
In their first study, Grice and Kemp made the PR2 available across the internet to a group of 15 participants with severe motor impairments. The participants learned to control the robot remotely, using their own assistive equipment to operate a mouse cursor to perform a personal care task. Eighty percent of the participants were able to manipulate the robot to pick up a water bottle and bring it to the mouth of a mannequin...
Robot "Eyes" Aid People with Profound Motor Impairments
Grice and Charlie Kemp, professor in the biomedical engineering department at Georgia Tech and Emory University, used a PR2 mobile manipulator for the two studies. The wheeled robot has 20 degrees of freedom, with two arms and a “head,” giving it the ability to manipulate objects such as water bottles, washcloths, hairbrushes, and even an electric shaver.
“Our goal is to give people with limited use of their own bodies access to robotic bodies so they can interact with the world in new ways,” Kemp says.
In the first study, Grice and Kemp made the PR2 available across the internet to a group of 15 participants with severe motor impairments. The participants learned to control the robot remotely, using their own assistive equipment to operate a mouse cursor to perform a personal care task. Eighty percent of the participants could manipulate the robot to pick up a water bottle and bring it to the mouth of a mannequin.
Body surrogate robot helps people with motor impairments care for themselves
Digital Trends online
“Our goal is to give people with limited use of their own bodies access to robotic bodies so they can interact with the world in new ways,” Professor Charlie Kemp from the Wallace H. Coulter Department of Biomedical Engineering at Georgia Tech said in a statement.
Selected Articles (5)
Grasping and manipulating objects is an important human skill. Since hand-object contact is fundamental to grasping, capturing it can lead to important insights. However, observing contact through external sensors is challenging because of occlusion and the complexity of the human hand. We present ContactDB, a novel dataset of contact maps for household objects that captures the rich hand-object contact that occurs during grasping, enabled by the use of a thermal camera. Participants in our study grasped 3D printed objects with a post-grasp functional intent. ContactDB includes 3,750 3D meshes of 50 household objects textured with contact maps and 375K frames of synchronized RGB-D and thermal images. To the best of our knowledge, this is the first large-scale dataset that records detailed contact maps for human grasps. Analysis of this data shows the influence of functional intent and object size on grasping, the tendency to touch or avoid "active areas," and the high frequency of palm and proximal finger contact. Finally, we train state-of-the-art image translation and 3D convolution algorithms to predict diverse contact patterns from object shape.
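The thermal-camera idea above can be illustrated with a minimal sketch: after a warm hand releases an object, the touched regions remain hotter than the rest of the surface, so thresholding a thermal image segments the contact map. The temperatures, grid, and threshold below are made-up illustrative values, not data or parameters from ContactDB.

```python
# Sketch: recover a binary contact map from a post-grasp thermal image
# by thresholding the temperature rise above ambient. All values here
# are hypothetical, chosen only to illustrate the idea.
AMBIENT = 22.0       # assumed resting object temperature (degrees C)
CONTACT_DELTA = 2.0  # assumed minimum warming left by hand contact

thermal_image = [
    [22.1, 22.0, 25.3, 25.9],
    [22.0, 24.8, 26.0, 25.1],
    [22.2, 22.1, 24.9, 22.0],
]

# 1 = contact (warmed by the hand), 0 = no contact.
contact_map = [[1 if t - AMBIENT >= CONTACT_DELTA else 0 for t in row]
               for row in thermal_image]
for row in contact_map:
    print(row)
# → [0, 0, 1, 1]
#   [0, 1, 1, 1]
#   [0, 0, 1, 0]
```

In the actual dataset these maps are textures on 3D meshes rather than 2D grids, but the thresholding intuition is the same.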
Robots can provide assistance to a human by moving objects to locations around the person’s body. With a well-chosen initial configuration, a robot can better reach locations important to an assistive task despite model error, pose uncertainty, and other sources of variation. However, finding effective configurations can be challenging due to complex geometry, a large number of degrees of freedom, task complexity, and other factors. We present task-centric optimization of robot configurations (TOC), an algorithm that finds configurations from which the robot can better reach task-relevant locations and handle task variation. Notably, TOC can return one or two configurations to be used sequentially while assisting with a task.
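The core idea behind TOC can be sketched in 2D: score candidate base configurations by how many task-relevant points stay reachable under pose variation, and pick the best pair to use sequentially. This is a hypothetical toy version; the reach model, noise magnitudes, and candidate poses are assumptions for illustration, not the paper's actual formulation.

```python
# Toy sketch of task-centric configuration selection: choose the pair
# of base poses that maximizes expected task-point reachability under
# random task-pose variation. All geometry and constants are made up.
import itertools
import math
import random

random.seed(0)
REACH = 1.0          # assumed arm reach radius
N_PERTURB = 50       # samples of task-pose variation

task_points = [(0.0, 0.0), (0.5, 0.4), (1.6, 0.0), (2.0, 0.5)]
candidates = [(x * 0.5, -0.8) for x in range(7)]  # candidate base poses

def reachable(base, point):
    return math.dist(base, point) <= REACH

def score(bases):
    """Expected fraction of task points reachable from at least one of
    the chosen base configurations, under random task-pose noise."""
    total = 0.0
    for _ in range(N_PERTURB):
        dx, dy = random.gauss(0, 0.1), random.gauss(0, 0.1)
        hit = sum(any(reachable(b, (px + dx, py + dy)) for b in bases)
                  for px, py in task_points)
        total += hit / len(task_points)
    return total / N_PERTURB

# Evaluate every pair of candidates; the best pair is used sequentially.
best = max(itertools.combinations(candidates, 2), key=score)
print("best pair of base configurations:", best)
```

The real algorithm optimizes over far richer configuration spaces (including joint angles and bed or chair geometry), but the pattern of scoring configurations against perturbed task models is the same.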
Recognizing an object's material can inform a robot of the object's fragility or appropriate use. To estimate an object's material during manipulation, many prior works have explored the use of haptic sensing. In this letter, we explore a technique for robots to estimate the materials of objects using spectroscopy. We demonstrate that spectrometers provide several benefits for material recognition, including fast response times and accurate measurements with low noise. Furthermore, spectrometers do not require direct contact with an object. To explore this, we collected a dataset of spectral measurements from two commercially available spectrometers while a robotic platform interacted with 50 flat material objects, and we show that a neural network model can accurately analyze these measurements. Due to the similarity between consecutive spectral measurements, our model achieved a material classification accuracy of 94.6% when given only one spectral sample per object. Similar to prior works with haptic sensors, we found that generalizing material recognition to new objects posed a greater challenge, for which we achieved an accuracy of 79.1% via leave-one-object-out cross-validation. Finally, we demonstrate how a PR2 robot can leverage spectrometers to estimate the materials of everyday objects found in the home. From this letter, we find that spectroscopy is a promising approach for material classification during robotic manipulation.
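The leave-one-object-out evaluation mentioned above can be sketched as follows: hold out all spectral samples of one object at a time, so the classifier is always tested on an object it never saw during training. The synthetic spectra and the nearest-centroid classifier below are illustrative stand-ins, not the authors' data or neural network model.

```python
# Sketch of leave-one-object-out cross-validation for spectral material
# classification. Spectra are synthetic; a nearest-centroid classifier
# stands in for the neural network used in the letter.
import math
import random

random.seed(0)
MATERIALS = ["wood", "metal", "plastic"]
BANDS = 16

def fake_spectrum(material_idx):
    # Fake spectrum: a material-specific slope plus small sensor noise.
    return [(material_idx + 1) * b / BANDS + random.gauss(0, 0.05)
            for b in range(BANDS)]

# 6 objects (2 per material), 10 spectral samples per object.
samples = []  # (object_id, material, spectrum)
for obj_id in range(6):
    mat_idx = obj_id % 3
    for _ in range(10):
        samples.append((obj_id, MATERIALS[mat_idx], fake_spectrum(mat_idx)))

def centroid(spectra):
    return [sum(vals) / len(vals) for vals in zip(*spectra)]

def loocv_accuracy(samples):
    """Hold out one object at a time: the harder generalization-to-new-
    objects setting reported in the abstract."""
    correct = 0
    for held in {s[0] for s in samples}:
        train = [s for s in samples if s[0] != held]
        test = [s for s in samples if s[0] == held]
        cents = {m: centroid([sp for _, mm, sp in train if mm == m])
                 for m in MATERIALS}
        for _, mat, sp in test:
            pred = min(cents, key=lambda m: math.dist(sp, cents[m]))
            correct += (pred == mat)
    return correct / len(samples)

acc = loocv_accuracy(samples)
print(f"leave-one-object-out accuracy: {acc:.2f}")
```

Grouping the folds by object rather than by sample is what distinguishes the 79.1% result from the 94.6% per-sample result: consecutive samples of the same object are highly similar, so sample-level splits leak information.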
Detecting when something unusual has happened could help assistive robots operate more safely and effectively around people. However, the variability associated with people and objects in human environments can make anomaly detection difficult. We previously introduced an algorithm that uses a hidden Markov model (HMM) with a log-likelihood detection threshold that varies based on execution progress. We now present an improved version of our previous algorithm (HMM-D) and introduce a new algorithm based on Gaussian process regression (HMM-GP). We also present a new and more thorough evaluation of 8 anomaly detection algorithms with force, sound, and kinematic signals collected from a robot closing microwave doors, latching a toolbox, scooping yogurt, and feeding yogurt to able-bodied participants.
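The progress-varying threshold idea can be sketched simply: model nominal executions, then flag an anomaly whenever the log-likelihood of the current observation drops below a threshold computed separately at each step of execution progress. In this hypothetical sketch a per-timestep Gaussian over a 1-D force signal stands in for the HMM, and the threshold constant k is an assumption.

```python
# Sketch of anomaly detection with an execution-progress-varying
# log-likelihood threshold. A per-timestep Gaussian stands in for the
# hidden Markov model; signals and constants are made up.
import math
import statistics

def gaussian_loglik(x, mean, std):
    var = std ** 2
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

def fit_nominal(trainings):
    """Per-timestep mean/std of a 1-D signal over nominal executions."""
    return [(statistics.mean(col), statistics.stdev(col))
            for col in zip(*trainings)]

def fit_thresholds(trainings, model, k=3.0):
    """Threshold at each progress step: mean log-likelihood of the
    nominal runs minus k standard deviations (k is an assumed constant)."""
    thresholds = []
    for t, (mean, std) in enumerate(model):
        lls = [gaussian_loglik(run[t], mean, std) for run in trainings]
        thresholds.append(statistics.mean(lls) - k * statistics.stdev(lls))
    return thresholds

def detect_anomaly(execution, model, thresholds):
    """Return the first timestep whose log-likelihood falls below the
    progress-dependent threshold, or None if the run looks nominal."""
    for t, ((mean, std), thr) in enumerate(zip(model, thresholds)):
        if gaussian_loglik(execution[t], mean, std) < thr:
            return t
    return None

# Nominal runs: a slowly rising force signal with small variation.
nominal = [[0.1 * t + 0.01 * i for t in range(20)] for i in range(5)]
model = fit_nominal(nominal)
thresholds = fit_thresholds(nominal, model)

anomalous = [0.1 * t for t in range(20)]
anomalous[12] += 5.0  # e.g., a sudden force spike mid-task
print(detect_anomaly(nominal[2], model, thresholds))  # → None
print(detect_anomaly(anomalous, model, thresholds))   # → 12
```

Letting the threshold vary with execution progress is what allows a tight bound during predictable phases of a task (e.g., the approach) without false alarms during naturally variable phases (e.g., contact with yogurt).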
Robots could be a valuable tool for helping with dressing, but determining how a robot and a person with disabilities can collaborate to complete the task is challenging. We present task optimization of robot-assisted dressing (TOORAD), a method for generating a plan that consists of actions for both the robot and the person. TOORAD uses a multilevel optimization framework with heterogeneous simulations. The simulations model the physical interactions between the garment and the person being dressed, as well as the geometry and kinematics of the robot, human, and environment. Notably, the models of the human are personalized to an individual’s geometry and physical capabilities. TOORAD searches over a constrained action space that interleaves the motions of the person and the robot, with the person remaining still when the robot moves and vice versa. In order to adapt to real-world variation, TOORAD incorporates a measure of robot dexterity in its optimization, and the robot senses the person’s body with a capacitive sensor to adapt its planned end-effector trajectories. To evaluate TOORAD and gain insight into robot-assisted dressing, we conducted a study with six participants with physical disabilities who have difficulty dressing themselves. In the first session, we created models of the participants and surveyed their needs, capabilities, and views on robot-assisted dressing. TOORAD then found personalized plans and generated instructional visualizations for four of the participants, who returned for a second session during which they successfully put on both sleeves of a hospital gown with assistance from the robot. Overall, our work demonstrates the feasibility of generating personalized plans for robot-assisted dressing via optimization and physics-based simulation.