
Matthew Johnson-Roberson

Professor | Carnegie Mellon University

Pittsburgh, PA, UNITED STATES

Matthew Johnson-Roberson's research goal is to develop robotic systems capable of operating in complex dynamic environments.

Biography

Matthew Johnson-Roberson is the director of the Robotics Institute at Carnegie Mellon University. He has worked on a NASA project on satellite swarms to demonstrate how satellites might track and communicate with one another, and has conducted research with NASA's Innovative Advanced Concepts (NIAC) program on folding space structures. His research goal is to develop robotic systems capable of operating in complex, dynamic environments. To this end, he seeks to expand and improve the perceptual capabilities of autonomous systems. His work has focused on the processing and interpretation of three-dimensional data, pushing the bounds of scale and resolution in 3D reconstruction, segmentation, machine learning, and robotic vision. Johnson-Roberson is the co-founder of Refraction AI, a robotics startup for last-mile delivery.

Areas of Expertise (8)

Folding Space Structures

Robotic Vision

3D Reconstruction

Artificial Intelligence

Robotics/Autonomous Vehicles

Machine Learning

Satellite Swarms

Robotic Systems

Media Appearances (6)

Robot Week Teaser

WTAE-PIT (ABC)  tv

2024-11-08

The outlet previewed features it will air next week for Robot Week, including work at CMU and an interview with Matthew Johnson-Roberson (Robotics Institute) on how robots may help us in the future.


Will The Dark Warehouse Ever Become Reality? Perhaps Not In Our Lifetime

Forbes  online

2023-01-31

All of this sounds attractive as warehouses grapple with worker shortages and rising operating costs, but how realistic is it? According to Matthew Johnson-Roberson, director of Carnegie Mellon University’s Robotics Institute, "there isn’t one single robot that’s so intelligent and so versatile that it’s like a human worker."


With the construction industry in crisis, a robot may build your home

The Washington Post  online

2023-01-30

“Construction robots are a great example of how robotic technology is going to touch people’s lives,” said Matthew Johnson-Roberson, the director of the robotics institute at Carnegie Mellon University. “Many [construction] jobs … that exist today are now going to be alongside robots.”


What does Argo AI’s shutdown mean for the future of Pittsburgh tech?

Technical.ly  online

2022-10-27

Matthew Johnson-Roberson, director of the Carnegie Mellon University Robotics Institute, noted that the news reflects the realities of startups: Sometimes they fail. Indeed: Often they fail.


Why CMU is turning a former Barnes & Noble into new Robotics Institute space

Technical.ly  online

2022-10-10

“I think that robots are going to become an increasingly larger and larger part of the day-to-day lives of people everywhere,” Johnson-Roberson said. “So that means you’re going to be working next to a robot, you’re going to be using a robot to do your job better. You’re going to have products made by robots or delivered by robots, or you’re going to purchase a robot to help you in your home.”


When will robots take our jobs?

Fast Company  online

2022-04-21

“Moving pallets around, moving forklifts around, moving boxes around in fulfillment centers—that’s an area where we’ve seen just massive robotic explosion,” says Matthew Johnson-Roberson, director of the Robotics Institute at Carnegie Mellon University. Amazon operates its own in-house robotics company to push the tech forward. And a growing cadre of startups, such as Berkshire Grey, Covariant, Dexterity, and Plus One Robotics, are offering automation services to the rest of the industry.


Media


Photos:

Photo by Brandon Dexter

Videos:

When Drones and Robots Knock on Your Door

Finding the Deep Truth About Deep Fakes

[CVPR'22 WAD] Keynote - Matthew Johnson-Roberson, CMU

Lessons from the Field: Deep Learning for Field Robotics | Matthew Johnson-Roberson | RoboLaunch


Industry Expertise (2)

Education/Learning

Construction - Residential

Accomplishments (1)

NSF CAREER Award (professional)

2015

Education (2)

University of Sydney: Ph.D., Robotics

Carnegie Mellon University: B.S., Computer Science

Event Appearances (1)

Robotics

(2022) TC Sessions, Boston, Massachusetts

Articles (5)

Learning Cross-Scale Visual Representations for Real-Time Image Geo-Localization

IEEE Robotics and Automation Letters

2022 Robot localization remains a challenging task in GPS-denied environments. State estimation approaches based on local sensors, e.g., cameras or IMUs, are prone to drift on long-range missions as error accumulates. In this study, we aim to address this problem by localizing image observations in a 2D multi-modal geospatial map. We introduce the cross-scale dataset and a methodology to produce additional data from cross-modality sources. We propose a framework that learns cross-scale visual representations without supervision. Experiments are conducted on data from two different domains, underwater and aerial.


CLONeR: Camera-Lidar Fusion for Occupancy Grid-Aided Neural Representations

IEEE Robotics and Automation Letters

2023 Recent advances in neural radiance fields (NeRFs) achieve state-of-the-art novel view synthesis and facilitate dense estimation of scene properties. However, NeRFs often fail for outdoor, unbounded scenes that are captured under very sparse views with the scene content concentrated far away from the camera, as is typical for field robotics applications. In particular, NeRF-style algorithms perform poorly: 1) when there are insufficient views with little pose diversity, 2) when scenes contain saturation and shadows, and 3) when finely sampling large unbounded scenes with fine structures becomes computationally intensive.


A kinematic model for trajectory prediction in general highway scenarios

IEEE Robotics and Automation Letters

2021 Highway driving invariably combines high speeds with the need to interact closely with other drivers. Prediction methods enable autonomous vehicles (AVs) to anticipate drivers’ future trajectories and plan accordingly. Kinematic methods for prediction have traditionally ignored the presence of other drivers, or made predictions only for a limited set of scenarios. Data-driven approaches fill this gap by learning from large datasets to predict trajectories in general scenarios. While they achieve high accuracy, they also lose the interpretability and tools for model validation enjoyed by kinematic methods.
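The kinematic baseline that the abstract contrasts with data-driven approaches can be illustrated with a minimal constant-velocity rollout, which predicts a vehicle's future positions by propagating its current state forward in time. This sketch is illustrative only; the function name and parameters are assumptions, not the paper's model:

```python
import numpy as np

def predict_constant_velocity(x, y, vx, vy, horizon=3.0, dt=0.1):
    """Predict a vehicle's future 2D trajectory over `horizon` seconds
    by propagating a constant-velocity kinematic model at step `dt`."""
    steps = int(horizon / dt)
    t = dt * np.arange(1, steps + 1)          # future time offsets
    # Position advances linearly with velocity; no interaction terms.
    return np.column_stack([x + vx * t, y + vy * t])
```

Such a model is interpretable and easy to validate, but as the abstract notes, it ignores interactions with other drivers, which is the gap data-driven predictors aim to fill.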


Hybrid visual SLAM for underwater vehicle manipulator systems

IEEE Robotics and Automation Letters

2022 This letter presents a novel visual-feature-based scene mapping method for underwater vehicle manipulator systems (UVMSs), with specific emphasis on robust mapping in natural seafloor environments. Our method uses GPU-accelerated SIFT features in a graph optimization framework to build a feature map. The map scale is constrained by features from a vehicle-mounted stereo camera, and we exploit the dynamic positioning capability of the manipulator system by fusing features from a wrist-mounted fisheye camera into the map to extend it beyond the limited viewpoint of the vehicle-mounted cameras.


Computer-vision object tracking for monitoring bottlenose dolphin habitat use and kinematics

PLOS ONE

2022 This research presents a framework to enable computer-automated observation and monitoring of bottlenose dolphins (Tursiops truncatus) in a zoo environment. The resulting approach enables detailed persistent monitoring of the animals that is not possible using manual annotation methods. Fixed overhead cameras were used to opportunistically collect ∼100 hours of observations, recorded over multiple days, including time both during and outside of formal training sessions, to demonstrate the viability of the framework. Animal locations were estimated using convolutional neural network (CNN) object detectors and Kalman filter post-processing. The resulting animal tracks were used to quantify habitat use and animal kinematics.
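The post-processing step described above, smoothing per-frame CNN detections with a Kalman filter, can be sketched with a minimal constant-velocity Kalman filter over 2D positions. This is an illustrative reimplementation under stated assumptions (the function name and noise parameters are hypothetical, not the paper's code):

```python
import numpy as np

def kalman_smooth(detections, dt=1.0, q=1e-2, r=1.0):
    """Filter noisy 2D position detections with a constant-velocity
    Kalman filter. State vector: [x, y, vx, vy]."""
    F = np.eye(4)
    F[0, 2] = F[1, 3] = dt                        # position += velocity * dt
    H = np.zeros((2, 4))
    H[0, 0] = H[1, 1] = 1.0                       # we observe position only
    Q = q * np.eye(4)                             # process noise covariance
    R = r * np.eye(2)                             # measurement noise covariance

    x = np.array([detections[0][0], detections[0][1], 0.0, 0.0])
    P = np.eye(4)
    track = []
    for z in detections:
        # Predict step: propagate state and uncertainty.
        x = F @ x
        P = F @ P @ F.T + Q
        # Update step: correct with the new detection.
        innovation = np.asarray(z, float) - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ innovation
        P = (np.eye(4) - K @ H) @ P
        track.append(x[:2].copy())
    return np.array(track)
```

Running the filter over a sequence of detector outputs yields the kind of smoothed animal tracks from which habitat use and kinematics can then be quantified.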
