
Aggressive Quadrotors Conquer Gaps With Ultimate Autonomy

Just a few weeks ago, we posted about some incredible research from Vijay Kumar’s lab at the University of Pennsylvania getting quadrotors to zip through narrow gaps using only onboard localization.

Let’s be clear about this: to autonomously fly through these gaps (which are only 1.5 times the size of the robot, with a mere 10 centimeters of clearance on each side), the quadrotor is using a 752 x 480-pixel monochrome camera with a 180-degree field-of-view lens, a PX4FMU autopilot with an IMU, and a smartphone-class, single-board Odroid XU4 computer running ROS.
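
For reference, that sensing and compute stack can be summarized as a small Python dictionary; the field names and structure below are purely illustrative assumptions, not taken from the researchers' code.

```python
# Illustrative summary of the onboard hardware described above.
# Field names and structure are assumptions made for this sketch.
platform = {
    "camera": {
        "resolution_px": (752, 480),
        "color": "monochrome",
        "lens_fov_deg": 180,            # wide fisheye lens used to spot the gap
    },
    "autopilot": "PX4FMU with onboard IMU",
    "computer": "Odroid XU4 (smartphone-class, single-board, running ROS)",
    "gap_clearance_m": 0.10,            # roughly 10 cm of clearance per side
}

for name, value in platform.items():
    print(f"{name}: {value}")
```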

It then computes a trajectory to pass through the gap that keeps the quadrotor as far as possible from the edges, which is what you want to do to not run into stuff, while also trying to keep the gap itself in view of the quadrotor’s camera as much as possible.
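
To make that trade-off concrete, here is a toy cost function in the same spirit: among candidate traversal trajectories, it penalizes points that come close to the gap edges and viewing directions that swing away from the gap. The weights, helper names, and geometry are assumptions for illustration, not the actual optimization used in the research.

```python
import numpy as np

def clearance_cost(points, gap_edges):
    """Penalize trajectory points that come close to either gap edge."""
    dists = np.min(np.linalg.norm(points[:, None, :] - gap_edges[None, :, :], axis=2), axis=1)
    return np.sum(1.0 / (dists + 1e-6))

def visibility_cost(points, headings, gap_center):
    """Penalize camera headings that point away from the gap center."""
    to_gap = gap_center - points
    to_gap /= np.linalg.norm(to_gap, axis=1, keepdims=True)
    # 1 - cos(angle between camera heading and direction to the gap)
    return np.sum(1.0 - np.einsum("ij,ij->i", headings, to_gap))

def trajectory_cost(points, headings, gap_edges, gap_center, w_clear=1.0, w_vis=0.5):
    """Combined cost: stay far from the edges, keep the gap in view."""
    return (w_clear * clearance_cost(points, gap_edges)
            + w_vis * visibility_cost(points, headings, gap_center))

# Example: score a straight-line candidate through a 30 cm-wide gap.
points = np.linspace([-1.0, 0.0, 1.5], [1.0, 0.0, 1.5], 20)
headings = np.tile([1.0, 0.0, 0.0], (20, 1))
gap_edges = np.array([[0.0, -0.15, 1.5], [0.0, 0.15, 1.5]])
gap_center = np.array([0.0, 0.0, 1.5])
print(trajectory_cost(points, headings, gap_edges, gap_center))
```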

This trajectory is very focused on gap traversal, so its starting point involves the quadrotor already moving at high speed and possibly oriented a bit sideways; the system therefore also needs to come up with a second trajectory that takes the robot from a stable hover into the gap-traversal trajectory.
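
One common way to connect those two phases is a smooth polynomial that matches position, velocity, and acceleration at both ends. The quintic below is a generic sketch of that idea under assumed boundary conditions, not the specific launch trajectory from this work.

```python
import numpy as np

def quintic_coeffs(p0, v0, a0, p1, v1, a1, T):
    """Coefficients of p(t) = c0 + c1*t + ... + c5*t^5 over [0, T],
    matching position, velocity, and acceleration at both ends."""
    A = np.array([
        [1, 0, 0,    0,       0,        0],
        [0, 1, 0,    0,       0,        0],
        [0, 0, 2,    0,       0,        0],
        [1, T, T**2, T**3,    T**4,     T**5],
        [0, 1, 2*T,  3*T**2,  4*T**3,   5*T**4],
        [0, 0, 2,    6*T,     12*T**2,  20*T**3],
    ], dtype=float)
    b = np.array([p0, v0, a0, p1, v1, a1], dtype=float)
    return np.linalg.solve(A, b)

# Example: accelerate from hover (0 m, 0 m/s) to an assumed traversal entry
# state (3 m away, moving at 3 m/s) in 2 seconds, along a single axis.
coeffs = quintic_coeffs(p0=0.0, v0=0.0, a0=0.0, p1=3.0, v1=3.0, a1=0.0, T=2.0)
print(coeffs)
```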

Once the robot starts heading for the gap, it does its best to keep its camera pointed at the frame of the gap to continually update its state estimation relative to the space it’s trying to squeeze through and dynamically replan its trajectory as necessary.
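
Schematically, that approach phase behaves like the loop below; every function here is a placeholder standing in for what the real system does across its ROS nodes, not an actual API.

```python
# Schematic sketch of the approach phase: re-estimate the pose relative to
# the gap from the camera, replan, and repeat until the traversal maneuver
# is triggered. All function arguments are hypothetical placeholders.
def approach_loop(detect_gap, estimate_state, replan, send_commands, close_enough):
    while True:
        gap = detect_gap()              # gap frame detected in the fisheye image
        state = estimate_state(gap)     # pose/velocity relative to the gap
        if close_enough(state):
            return state                # hand over to the blind traversal maneuver
        trajectory = replan(state, gap) # keep the gap in view, stay clear of edges
        send_commands(trajectory)
```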

This is a requirement for actually using these skills in a way that’s practical: it’s not just about getting drones to fly through windows, but more about teaching them to be able to reliably maneuver around all kinds of different obstacles, in any environment, from forests to urban areas to your living room.

Indeed, for the robot to localize with respect to the gap, a trajectory should be selected that guarantees the quadrotor always faces the gap, and it should be re-planned multiple times during its execution to cope with the varying uncertainty of the state estimate (the uncertainty increases quadratically with the distance from the gap) while respecting the vehicle dynamics.
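
As a toy illustration of that last point, a quadratic error model shows why replanning as the robot closes in on the gap pays off; the coefficients here are invented for the example and are not from the paper.

```python
# Toy model: localization uncertainty grows roughly quadratically with
# distance from the gap, so estimates tighten as the robot approaches.
# sigma0 and k are made-up illustrative constants.
def position_uncertainty(distance_m, sigma0=0.02, k=0.01):
    """Approximate 1-sigma position error (meters) versus distance to the gap."""
    return sigma0 + k * distance_m**2

for d in (4.0, 2.0, 1.0, 0.5):
    print(f"{d:.1f} m from the gap -> ~{position_uncertainty(d) * 100:.1f} cm uncertainty")
```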

To traverse the gap, we compute a trajectory that can be executed blindly, thanks to its short duration and to the fact that it relies on precomputed constant inputs (namely, a collective thrust of given magnitude and zero angular velocities).
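
The reason that segment can be flown without vision is that a fixed attitude and a constant collective thrust make the motion trivially predictable over such a short window, as the back-of-the-envelope sketch below shows (all numbers are made up for illustration).

```python
import numpy as np

# With zero angular velocity the body z-axis stays fixed, and a constant
# collective thrust plus gravity gives constant acceleration, so the state
# during the short traversal follows in closed form.
g = np.array([0.0, 0.0, -9.81])            # gravity (m/s^2)
thrust_dir = np.array([0.2, 0.0, 0.98])    # assumed fixed body z-axis in the world frame
thrust_dir /= np.linalg.norm(thrust_dir)
thrust_mag = 12.0                          # assumed collective thrust per unit mass (m/s^2)

a = thrust_mag * thrust_dir + g            # constant acceleration during traversal

p0 = np.zeros(3)                           # entry position
v0 = np.array([3.0, 0.0, 0.5])             # assumed entry velocity (m/s)

for t in (0.0, 0.1, 0.2, 0.3):             # the maneuver lasts only a fraction of a second
    p = p0 + v0 * t + 0.5 * a * t**2
    print(f"t = {t:.1f} s, position = {np.round(p, 2)}")
```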

These scenarios are conceptually similar, and our approach can be adapted such that, besides being perception aware (i.e., keeping the pole or tree always visible in the image), the distance from the pole is now minimized (to minimize the overall flight time) while ensuring that there is no collision.
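
A rough sketch of that adaptation: pick the closest fly-by distance that still respects a safety margin around the pole, with the camera assumed to keep the obstacle in view throughout. The candidate radii and margin below are illustrative assumptions, not values from the paper.

```python
# Toy selection rule: a smaller fly-by radius means a shorter, faster path,
# but the radius must stay above a safety margin to avoid collision.
def flyby_radius(r_min_safety=0.5, r_candidates=(0.4, 0.6, 0.8, 1.2, 2.0)):
    """Pick the smallest candidate radius that still respects the safety margin."""
    feasible = [r for r in r_candidates if r >= r_min_safety]
    return min(feasible) if feasible else None

print(f"chosen fly-by radius: {flyby_radius()} m")
```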