Zachary Manchester

Assistant Professor, Carnegie Mellon University

  • Pittsburgh, PA

Zachary Manchester is a researcher and aerospace engineer with broad interests in dynamics, control, estimation and optimization.

Contact

Carnegie Mellon University

Biography

Zachary Manchester is a researcher and aerospace engineer with broad interests in dynamics, control, estimation and optimization. He is especially interested in taking advantage of advancements in embedded electronics and computation to build robotic systems that are smaller, smarter and more agile. He founded the KickSat project in 2011 and has worked on unmanned aerial vehicles, legged robots, commercial aerospace simulation software and several small spacecraft missions.

Areas of Expertise

Legged Robots
Robotics
Commercial Aerospace Simulation Software
Unmanned Aerial Vehicles
Aerospace Engineering

Media Appearances

This robot dog learned a new trick—balancing like a cat

Popular Science  online

2023-04-19

But in robot dogs, their legs aren’t exactly coordinated. If three feet can touch the ground, generally they are fine, but reduce that to one or two robot feet and you’re in trouble. “With current control methods, a quadruped robot’s body and legs are decoupled and don’t speak to one another to coordinate their movements,” Zachary Manchester, an assistant professor in the Robotics Institute and head of the Robotic Exploration Lab, said in a statement. “So how can we improve their balance?”

Quadruped robot uses satellite tools to walk along a balance beam

New Atlas  online

2023-04-17

"You basically have a big flywheel with a motor attached," said Manchester. "If you spin the heavy flywheel one way, it makes the satellite spin the other way. Now take that and put it on the body of a quadruped robot."
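The flywheel analogy here is conservation of angular momentum: with no external torque, spinning the wheel one way forces the body it is mounted on to counter-rotate. A minimal numeric sketch of that principle, with made-up inertias and rates:

```python
# Reaction-wheel principle: with no external torques, total angular
# momentum is conserved, so spinning the flywheel one way makes the
# body rotate the other way. All numbers below are invented.

def body_rate_from_wheel(I_wheel, omega_wheel, I_body):
    """Body angular rate that keeps total angular momentum at zero."""
    return -(I_wheel * omega_wheel) / I_body

# Hypothetical values: a 0.01 kg*m^2 wheel spun to 100 rad/s
# on a 1.0 kg*m^2 body.
rate = body_rate_from_wheel(0.01, 100.0, 1.0)
print(rate)  # approximately -1.0 rad/s: the body counter-rotates
```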

CMU taught a robot dog to walk a balance beam

TechCrunch  online

2023-04-14

“This experiment was huge,” says assistant professor Zachary Manchester. “I don’t think anyone has ever successfully done balance beam walking with a robot before.”

Industry Expertise

Aerospace

Education

Cornell University

Ph.D.

Aerospace, Aeronautical and Astronautical/Space Engineering

2015

Cornell University

B.S.

Applied Engineering Physics

2009

Affiliations

  • American Institute of Aeronautics and Astronautics (AIAA)

Languages

  • English
  • Spanish

Articles

Propulsion-Free Cross-Track Control of a LEO Small-Satellite Constellation with Differential Drag

2023 62nd IEEE Conference on Decision and Control (CDC)

2023

In this work, we achieve propellantless control of both cross-track and along-track separation of a satellite formation by manipulating atmospheric drag. Increasing the differential drag of one satellite with respect to another directly introduces along-track separation, while cross-track separation can be achieved by taking advantage of higher-order terms in the Earth's gravitational field that are functions of altitude. We present an algorithm for solving an n-satellite formation flying problem based on linear programming. We demonstrate this algorithm in a receding-horizon control scheme in the presence of disturbances and modeling errors in a high-fidelity closed-loop orbital dynamics simulation. Our results show that separation distances of hundreds of kilometers can be achieved by a small-satellite formation in low-Earth orbit over a few months.
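The linear-programming structure the abstract describes can be illustrated with a toy problem. The gain, targets, and three-satellite setup below are invented for illustration (the paper uses a high-fidelity orbital dynamics model); this sketch, assuming SciPy is available, only shows the shape of such an LP:

```python
# Toy sketch of a differential-drag formation LP: each satellite picks a
# drag command u_i in [0, 1], and the along-track drift of satellite i
# relative to satellite 0 is crudely modeled as k * (u_i - u_0).
# We minimize total commanded drag (extra drag costs altitude) while
# hitting target drifts. k and the targets are made up for illustration.
import numpy as np
from scipy.optimize import linprog

n = 3                    # satellites
k = 50.0                 # km of drift per unit differential drag (made up)
targets = [25.0, 40.0]   # desired drifts of sats 1, 2 relative to sat 0 (km)

c = np.ones(n)                    # minimize total drag commanded
A_eq = np.array([[-k, k, 0.0],    # k * (u_1 - u_0) = targets[0]
                 [-k, 0.0, k]])   # k * (u_2 - u_0) = targets[1]
b_eq = np.array(targets)

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0.0, 1.0)] * n)
print(res.x)  # approximately [0.0, 0.5, 0.8]: the reference satellite flies clean
```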

Deep Off-Policy Iterative Learning Control

Learning for Dynamics and Control Conference

2023

Reinforcement learning has emerged as a powerful paradigm to learn control policies while making few assumptions about the environment. However, this lack of assumptions in popular RL algorithms also leads to sample inefficiency. Furthermore, we often have access to a simulator that can provide approximate gradients for the rewards and dynamics of the environment. Iterative learning control (ILC) approaches have been shown to be very efficient at learning policies by using approximate simulator gradients to speed up optimization. However, they lack the generality of reinforcement learning approaches. In this paper, we take inspiration from ILC and propose an update equation for the value-function gradients (computed using the dynamics Jacobians and reward gradient obtained from an approximate simulator) to speed up value-function and policy optimization.
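The value-gradient propagation the abstract sketches resembles the standard adjoint-style backward recursion used in ILC and trajectory optimization. Below is a minimal sketch of such a backward pass, assuming dynamics Jacobians A_t = df/dx and reward gradients r_x supplied by an approximate simulator; the paper's exact update equation may differ, and all numbers are invented.

```python
# Backward recursion for value-function gradients along a trajectory:
# g_t = r_x(t) + A_t^T g_{t+1}, with A_t the simulator's dynamics
# Jacobian df/dx and r_x the reward gradient. This is the generic
# adjoint/ILC-style pass, not necessarily the paper's exact update.
import numpy as np

def value_gradients(A_list, rx_list):
    """Propagate value-function gradients backward through time."""
    g = np.zeros_like(rx_list[-1])
    grads = []
    for A, rx in zip(reversed(A_list), reversed(rx_list)):
        g = rx + A.T @ g
        grads.append(g)
    return list(reversed(grads))

# Toy 2-state system over 3 steps with made-up Jacobians and rewards.
A = [np.eye(2) * 0.9] * 3
rx = [np.array([1.0, 0.0])] * 3
g = value_gradients(A, rx)
print(g[0])  # approximately [2.71, 0.0]: 1 + 0.9 + 0.81 accumulated
```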

Practical Critic Gradient based Actor Critic for On-Policy Reinforcement Learning

Learning for Dynamics and Control Conference

2023

On-policy reinforcement learning algorithms have been shown to be remarkably efficient at learning policies for continuous control robotics tasks. They are highly parallelizable and hence have benefited tremendously from the recent rise in GPU-based parallel simulators. The most widely used on-policy reinforcement learning algorithm is proximal policy optimization (PPO), which was introduced in 2017 and was designed for a somewhat different setting with CPU-based serial or less parallelizable simulators. However, surprisingly, it has maintained dominance even on tasks based on the highly parallelizable simulators of today. In this paper, we show that a different class of on-policy algorithms, based on estimating the policy gradient using critic-action gradients, is better suited when using highly parallelizable simulators.
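The critic-gradient estimator the abstract contrasts with PPO can be sketched in a few lines: differentiate the critic with respect to the action and chain through the policy, as in deterministic policy-gradient methods. The linear policy, quadratic critic, and numbers below are all invented for illustration and are not the paper's implementation.

```python
# Sketch of a critic-action-gradient policy update: instead of the
# likelihood-ratio estimator PPO uses, take dQ/da from the critic and
# chain it through the policy. Policy, critic, and values are made up.
import numpy as np

rng = np.random.default_rng(0)
theta = rng.normal(size=3)        # linear policy: a = theta @ s

def policy(theta, s):
    return float(theta @ s)

def dQ_da(s, a):
    # Pretend critic: Q(s, a) = -(a - s.sum())**2, so dQ/da = -2*(a - s.sum()).
    return -2.0 * (a - s.sum())

s = np.array([0.5, -0.2, 1.0])
a_old = policy(theta, s)

# Chain rule: dJ/dtheta = dQ/da * da/dtheta = dQ/da * s for a linear policy.
grad = dQ_da(s, a_old) * s
theta_new = theta + 0.1 * grad    # one gradient-ascent step on the critic's value
a_new = policy(theta_new, s)
```

A single step moves the action toward the critic's optimum, without sampling-based gradient estimates.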
