Researchers develop virtual-reality testing ground for drones

Training drones to fly fast, around even the simplest obstacles, is a crash-prone exercise that can have engineers repairing or replacing vehicles with frustrating regularity.

Now MIT engineers have developed a new virtual-reality training system for drones that enables a vehicle to “see” a rich, virtual environment while flying in an empty physical space.

Pushing boundaries

Sertac Karaman, associate professor of aeronautics and astronautics at MIT, was initially motivated by a new, extreme robo-sport: competitive drone racing, in which remote-controlled drones, driven by human players, attempt to out-fly each other through an intricate maze of windows, doors, and other obstacles.

Currently, training autonomous drones is a physical task: Researchers fly drones in large, enclosed testing grounds, in which they often hang large nets to catch any careening vehicles.

“If you want to push boundaries on how fast you can go and compute, you need some sort of virtual-reality environment,” Karaman says.

Flight Goggles

The team’s new virtual training system comprises a motion capture system, an image rendering program, and electronics that enable the team to quickly process images and transmit them to the drone.
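To make the closed loop concrete, the sketch below shows how such a render-in-the-loop cycle might be structured. It is illustrative only: the objects mocap, renderer, scene, and link are hypothetical stand-ins, not the actual Flight Goggles interfaces.

    # Illustrative sketch of a VR-in-the-loop training cycle (hypothetical
    # object names; the real Flight Goggles interfaces differ). Each cycle:
    # read the drone's pose from motion capture, render the virtual scene
    # from that viewpoint, and transmit the frame to the drone in flight.
    import time

    FRAME_RATE_HZ = 90                      # rendering rate reported for the system
    FRAME_BUDGET_S = 1.0 / FRAME_RATE_HZ    # time available per frame

    def training_loop(mocap, renderer, scene, link):
        while True:
            start = time.monotonic()
            pose = mocap.get_pose()               # position + orientation from the capture cameras
            frame = renderer.render(scene, pose)  # photorealistic view of the virtual scene
            link.send(frame)                      # beam the image to the drone's vision stack
            # Wait out whatever remains of the frame budget before the next cycle.
            remaining = FRAME_BUDGET_S - (time.monotonic() - start)
            if remaining > 0:
                time.sleep(remaining)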

The actual test space — a hangar-like gymnasium in MIT’s new drone-testing facility in Building 31 — is lined with motion-capture cameras that track the orientation of the drone as it’s flying.

With the image-rendering system, Karaman and his colleagues can draw up photorealistic scenes, such as a loft apartment or a living room, and beam these virtual images to the drone as it’s flying through the empty facility.

The virtual images can be processed by the drone at a rate of about 90 frames per second — around three times as fast as the human eye can see and process images.
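As a quick check on what that rate implies, a 90 Hz image stream leaves roughly 11 milliseconds to render and deliver each frame:

    # Per-frame time budget implied by a 90 Hz image stream.
    frame_rate_hz = 90
    budget_ms = 1000.0 / frame_rate_hz
    print(f"{budget_ms:.1f} ms per frame")   # prints: 11.1 ms per frame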

Over 10 flights, the drone, flying at around 2.3 meters per second (5 miles per hour), successfully flew through the virtual window 361 times, only “crashing” into the window three times, according to positioning information provided by the facility’s motion-capture cameras.

Karaman points out that, even if the drone crashed thousands of times, it wouldn’t make much of an impact on the cost or time of development, as it’s crashing in a virtual environment and not making any physical contact with the real world.

Using the navigation algorithm that the researchers tuned in the virtual system, the drone, over eight flights, was able to fly through the real window 119 times, only crashing or requiring human intervention six times.
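Taken together, the reported counts work out to roughly a 99% success rate in the virtual trials and 95% in the real ones, a simple calculation from the figures above:

    # Success rates implied by the reported trial counts.
    virtual_success, virtual_crash = 361, 3
    real_success, real_failure = 119, 6     # crashes or human interventions

    virtual_rate = virtual_success / (virtual_success + virtual_crash)
    real_rate = real_success / (real_success + real_failure)
    print(f"virtual: {virtual_rate:.1%}")   # prints: virtual: 99.2%
    print(f"real:    {real_rate:.1%}")      # prints: real:    95.2%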

Want Drones to Fly Right? Make Them Hallucinate, MIT Prof Says

An associate professor of aeronautics and astronautics and director of the Foundations of Autonomous Systems Technology Group at MIT, Karaman focuses his research on autonomous vehicles and transportation of all sorts, on the ground and in the air.

Video of falcons in flight revealed how the birds respond to visual cues while diving at speeds of 70 meters per second, turning their heads mid-flight to better track their prey.

Flying through the simulations — what Karaman calls “hallucinations” — the drone learns to navigate around obstacles like those it might encounter on a rescue mission or working in a factory.

Similar to using VR to safely train humans for dangerous work, making Sparrow hallucinate lets Karaman’s team train the drone without risking hardware damage in real-world collisions.

The team additionally has its sights set on developing a next-generation drone that relies on an NVIDIA Jetson TX2 for all of its onboard computation, from real-time image processing to general CPU tasks.

Campus Technology News

Engineers at the Massachusetts Institute of Technology have created a virtual reality environment to train drones to fly fast around obstacles.

Such a physical setup, in which real drones fly in large, netted test spaces, is fine for training slow-flying drones, like those used in mapping, but it is not suitable for training racers, whose flight can vary wildly with small changes in their code.

The training facility includes a motion-capture system to track the drone's position and orientation in flight, an image rendering program that creates photorealistic images of physical environments, and tools for beaming those images into the drone's computer, all to give the drone the illusion of physical obstacles as it flies through an empty room.

Karaman said the system could also be used to test new sensors, by mounting them on the drone and seeing whether performance improves or deteriorates in different virtual environments, or to train drones to fly around humans.

For example, a human could walk around in a motion capture suit and that data could be beamed into the drone's computer from a remote location, thus making the person appear near the drone while at no risk of physical harm.
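A minimal sketch of how that remote human-in-the-loop idea might work, assuming the suit's pose data arrives as a stream and the scene can place an avatar (all names here are hypothetical):

    # Hypothetical sketch: insert a remotely tracked human into the drone's
    # virtual view. Only the person's pose data is streamed in, so the drone
    # "sees" them without any risk of physical contact.
    def render_with_virtual_human(renderer, scene, drone_pose, human_pose_stream):
        human_pose = human_pose_stream.latest()    # skeleton pose from the remote suit
        scene.place_avatar("human", human_pose)    # stand the avatar at that pose
        return renderer.render(scene, drone_pose)  # render the combined scene for the drone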

MIT Researchers Use VR to Train Autonomous Drones

In computer science labs developing autonomous systems, the result is often regular breaks in which expensive drone components have to be pieced back together.

Using a virtual-reality training platform, drones can be fooled into thinking they are seeing (and learning to navigate around) genuine obstacles despite flying in a wide open empty space.

“If anything, the system can make autonomous vehicles more responsive, faster, and more efficient,” Karaman says. “In the next two or three years, we want to enter a drone racing competition with an autonomous drone, and beat the best human player.”

The platform is also useful because training drones to fly quickly through challenging environments requires a system that helps them learn to process visual information at speed.

FlightGoggles: Visual-inertial-odometry flight with photorealistic camera simulation in the loop

Thomas Sayre-McCord, Amado Antonini, Jasper Arneberg, Austin ...

MIT 6.S094: Introduction to Deep Learning and Self-Driving Cars

This is lecture 1 of course 6.S094: Deep Learning for Self-Driving Cars, taught in Winter 2017. Course website: ... Lecture 1 slides: ...

Scalable Cost Function Learning for Path Planning in Urban Environments

In this work, we present an approach to learn cost maps for driving in complex urban environments from a very large number of demonstrations of driving ...

Attention and Anticipation in Fast Visual-Inertial Navigation

Luca Carlone and Sertac Karaman, "Attention and Anticipation in Fast Visual-Inertial Navigation", submitted, arXiv preprint: ...

From Perception to Decision: Deep End-to-end Motion Planning

In this work we present a model that is able to learn the complex mapping from raw 2D- laser range findings and a target position to the required steering ...
