Creating autonomous vehicle systems

Beginning with Facebook in 2004, social networking sites laid the fourth layer of information technology by allowing people to directly connect with each other, effectively moving human society to the World Wide Web.

As the population of Internet-savvy people reached a significant scale, the emergence of Airbnb in 2008, followed by Uber in 2009, and others, laid the fifth layer by providing direct Internet commerce services.

In this post, we'll explore the technologies involved in autonomous driving and discuss how to integrate these technologies into a safe, effective, and efficient autonomous driving system.

For example, if the sensor camera generates data at 60 Hz, the client subsystem needs to ensure that the longest stage of the processing pipeline takes less than 16 milliseconds (ms) to complete.
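
To make that budget concrete, here is a quick sketch of the arithmetic in Python; the stage names and timings are illustrative assumptions, not measurements from a real pipeline.

```python
# At 60 Hz, a new frame arrives every 1/60 s, so in a pipelined design the
# slowest stage bounds throughput and must finish within one frame period.
SENSOR_HZ = 60
frame_budget_ms = 1000.0 / SENSOR_HZ  # ~16.7 ms per frame

stage_times_ms = {"capture": 2.0, "detect": 9.5, "track": 4.0}  # illustrative
slowest = max(stage_times_ms.values())

assert slowest < frame_budget_ms, "pipeline cannot keep up with the sensor"
print(f"budget {frame_budget_ms:.1f} ms; slowest stage {slowest:.1f} ms")
```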

The cloud platform provides offline computing and storage capabilities for autonomous cars. Using the cloud platform, we can test new algorithms, update the HD map, and train better recognition, tracking, and decision models.

Vision-based localization proceeds through the following simplified pipeline: 1) by triangulating stereo image pairs, we first obtain a disparity map that can be used to derive depth information for each point;

2) by matching salient features between successive stereo image frames to establish correlations between feature points in different frames, we can then estimate the motion between the past two frames.
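
As a hedged sketch of steps 1 and 2 (not the article's actual implementation), the following Python code uses OpenCV; the calibration constants, stereo matcher settings, and feature-matching choices are illustrative assumptions.

```python
import cv2
import numpy as np

# Illustrative calibration values; replace with your stereo rig's parameters.
FOCAL = 718.8      # focal length in pixels
BASELINE = 0.54    # stereo baseline in meters
K = np.array([[FOCAL, 0.0, 607.2],
              [0.0, FOCAL, 185.2],
              [0.0, 0.0, 1.0]])

def depth_from_stereo(left, right):
    """Step 1: triangulate a stereo pair into a disparity map, then depth."""
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=96,
                                    blockSize=9)
    disparity = matcher.compute(left, right).astype(np.float32) / 16.0
    with np.errstate(divide="ignore"):
        depth = FOCAL * BASELINE / disparity  # Z = f * b / d (invalid where d <= 0)
    return disparity, depth

def motion_between_frames(prev_img, curr_img):
    """Step 2: match salient features between successive frames and estimate
    the relative camera motion (rotation R and unit-scale translation t)."""
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(prev_img, None)
    kp2, des2 = orb.detectAndCompute(curr_img, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t
```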

In recent years, however, we have seen the rapid development of deep learning technology, which delivers significant gains in object detection and tracking accuracy.

A general CNN evaluation pipeline usually consists of the following layers: 1) the convolution layer uses different filters to extract different features from the input image; 2) the activation layer decides whether to activate a target neuron; 3) the pooling layer reduces the spatial size of the representation, and with it the number of parameters and the computation in the network; and 4) the fully connected layer, where neurons have full connections to all activations in the previous layer.
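
As an illustration of that layer stack, here is a minimal sketch in PyTorch (not the article's actual model); the channel counts, input size, and number of classes are arbitrary assumptions.

```python
import torch
import torch.nn as nn

class TinyBackbone(nn.Module):
    """Convolution -> activation -> pooling stages, then a fully connected head."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # 1) convolution: filters extract features
            nn.ReLU(),                                   # 2) activation
            nn.MaxPool2d(2),                             # 3) pooling: shrink spatial size
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # 4) fully connected layer; 32 * 8 * 8 assumes a 32x32 input image
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):                 # x: (batch, 3, 32, 32)
        return self.classifier(self.features(x).flatten(1))

logits = TinyBackbone()(torch.randn(1, 3, 32, 32))  # -> shape (1, 10)
```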

Object tracking technology can be used to track nearby moving vehicles, as well as people crossing the road, to ensure the current vehicle does not collide with moving objects.

In recent years, deep learning techniques have demonstrated advantages in object tracking compared to conventional computer vision techniques.

By using auxiliary natural images, a stacked autoencoder can be trained offline to learn generic image features that are more robust against variations in viewpoints and vehicle positions.
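
A hedged sketch of that idea in PyTorch follows; the two-layer architecture, the denoising objective, and the random tensor standing in for auxiliary natural images are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class StackedAutoencoder(nn.Module):
    """Two stacked encoder/decoder layers trained to reconstruct their input."""
    def __init__(self, dim=32 * 32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, 256), nn.ReLU(),
                                     nn.Linear(256, 64), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(64, 256), nn.ReLU(),
                                     nn.Linear(256, dim))

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = StackedAutoencoder()
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
images = torch.rand(128, 32 * 32)          # stand-in for auxiliary natural images
noisy = images + 0.1 * torch.randn_like(images)
loss = nn.functional.mse_loss(model(noisy), images)  # denoising reconstruction
loss.backward()
optim.step()
```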

In the decision stage, action prediction, path planning, and obstacle avoidance mechanisms are combined to generate an effective action plan in real time.

One of the main challenges for human drivers when navigating through traffic is to cope with the possible actions of other drivers, which directly influence their own driving strategy.

To make sure that the AV travels safely in these environments, the decision unit generates predictions about the actions of nearby vehicles, and then decides on an action plan based on these predictions.

To predict actions of other vehicles, one can generate a stochastic model of the reachable position sets of the other traffic participants, and associate these reachable sets with probability distributions.
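
A toy sketch of that idea follows; the constant-acceleration motion model, the acceleration hypotheses, and their probabilities are illustrative assumptions, not the article's model.

```python
def reachable_positions(x0, v0, dt=0.5, steps=4,
                        accels=(-2.0, 0.0, 2.0),
                        accel_probs=(0.2, 0.6, 0.2)):
    """Roll a constant-acceleration model forward under each acceleration
    hypothesis and attach that hypothesis's probability to every position
    in the resulting reachable set (1D longitudinal positions, in meters)."""
    reachable = []  # (time, position, probability) triples
    for a, p in zip(accels, accel_probs):
        for k in range(1, steps + 1):
            t = k * dt
            pos = x0 + v0 * t + 0.5 * a * t * t
            reachable.append((t, pos, p))
    return reachable

for t, pos, p in reachable_positions(x0=0.0, v0=10.0):
    print(f"t={t:.1f} s  position={pos:6.1f} m  probability={p:.1f}")
```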

Planning the path of an autonomous, responsive vehicle in a dynamic environment is a complex problem, especially when the vehicle is required to use its full maneuvering capabilities.
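
Search-based planners such as the Hybrid A* demo linked later in this post build on the classic A* algorithm; below is a plain grid-based A* sketch, where the 4-connected grid and unit step costs are simplifying assumptions that ignore vehicle kinematics.

```python
import heapq
import itertools

def astar(grid, start, goal):
    """Plan on a 2D occupancy grid: 0 = free, 1 = obstacle; cells are (row, col)."""
    def h(cell):  # Manhattan-distance heuristic (admissible on a 4-connected grid)
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    tie = itertools.count()  # tiebreaker so the heap never compares cells
    frontier = [(h(start), next(tie), start)]
    came_from = {start: None}
    cost = {start: 0}
    while frontier:
        _, _, current = heapq.heappop(frontier)
        if current == goal:  # reconstruct the path back to the start
            path = []
            while current is not None:
                path.append(current)
                current = came_from[current]
            return path[::-1]
        r, c = current
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                    and grid[nxt[0]][nxt[1]] == 0):
                new_cost = cost[current] + 1
                if new_cost < cost.get(nxt, float("inf")):
                    cost[nxt] = new_cost
                    came_from[nxt] = current
                    heapq.heappush(frontier, (new_cost + h(nxt), next(tie), nxt))
    return None  # no collision-free path exists

grid = [[0, 0, 0, 0],
        [1, 1, 1, 0],
        [0, 0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))
```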

Since safety is of paramount concern in autonomous driving, we should employ at least two levels of obstacle avoidance mechanisms to ensure that the vehicle will not collide with obstacles.
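
One plausible way to layer the two levels is sketched below under assumed interfaces: a proactive check that filters the planned path against predicted obstacles, plus an independent reactive override driven directly by range measurements. The threshold and command format are illustrative, not the article's design.

```python
SAFE_DISTANCE_M = 5.0  # illustrative safety threshold

def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def proactive_level(planned_path, predicted_obstacles):
    """Level 1: drop waypoints that come too close to predicted obstacles."""
    return [p for p in planned_path
            if all(dist(p, o) > SAFE_DISTANCE_M for o in predicted_obstacles)]

def reactive_level(min_range_m, command):
    """Level 2: independent of the planner, brake hard if any sensor return
    is closer than the safety threshold."""
    if min_range_m < SAFE_DISTANCE_M:
        return {"throttle": 0.0, "brake": 1.0}  # emergency stop
    return command
```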

There are three challenges to overcome: 1) the system needs to make sure that the processing pipeline is fast enough to consume the enormous amount of sensor data generated; 2) if a part of the system fails, it needs to be robust enough to recover from the failure; and 3) the system needs to perform all the computations under strict energy and resource constraints.

ROS is a suitable operating system for autonomous driving, except that it suffers from a few problems. Although ROS 2.0 promised to fix these problems, it has not been extensively tested, and many features are not yet available.

Imagine two scenarios: in the first, a ROS node is hijacked and made to allocate memory continuously until the system runs out of memory and starts killing other ROS nodes, at which point the hacker has successfully crashed the system.

To fix the first security problem, we can use Linux containers (LXC) to restrict the amount of resources used by each node and to provide a sandbox mechanism that protects nodes from one another, effectively preventing resource leaks.
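
LXC enforces such limits at the container level; as a simplified stand-in for the same idea (not LXC itself), the sketch below caps a single process's memory with Python's resource module on Linux, so a runaway node fails on its own instead of starving the rest of the system. The 512 MB cap is an illustrative value.

```python
import resource

LIMIT_BYTES = 512 * 1024 * 1024  # illustrative 512 MB address-space cap

# Cap this process's total address space; allocations beyond the cap raise
# MemoryError instead of exhausting system-wide memory.
resource.setrlimit(resource.RLIMIT_AS, (LIMIT_BYTES, LIMIT_BYTES))

try:
    hog = bytearray(1024 * 1024 * 1024)  # a hijacked node attempts 1 GB
except MemoryError:
    print("allocation denied: the runaway node cannot take down the system")
```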

To understand the challenges in designing a hardware platform for autonomous driving, let us examine the computing platform implementation from a leading autonomous driving company.

To explore the edges of the envelope and understand how well an autonomous driving system could perform on an ARM mobile SoC, we can implement a simplified, vision-based autonomous driving system on an ARM-based mobile SoC with peak power consumption of 15 W.

Surprisingly, the performance is not bad at all: the localization pipeline is able to process 25 images per second, almost keeping up with image generation at 30 images per second.

This system has several applications, including simulation (used to verify new algorithms), high-definition (HD) map production, and deep learning model training.

As shown in Figure 9, HD map production is a complex process that involves many stages, including raw data processing, point cloud production, point cloud alignment, 2D reflectance map generation, HD map labeling, as well as the final map generation.

A great advantage is that Spark provides an in-memory computing mechanism, so we do not have to store intermediate data on disk, which greatly improves the performance of the map production process.
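
A minimal PySpark sketch of that pattern follows; the input path and the stage functions are hypothetical stand-ins for the real pipeline stages.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("hd-map-production").getOrCreate()
raw = spark.sparkContext.textFile("hdfs:///maps/raw_scans")  # illustrative path

def parse_scan(line):                    # stand-in for raw data processing
    return line.split(",")

def to_point(fields):                    # stand-in for point cloud production
    return tuple(float(v) for v in fields[:3])

# Chain the stages as RDD transformations; cache() keeps the intermediate
# point cloud in cluster memory rather than writing it back to disk.
points = raw.map(parse_scan).map(to_point).cache()
above_ground = points.filter(lambda p: p[2] > 0.0)  # e.g., drop ground returns
print(above_ground.count())
```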

To approach this problem, we can develop a highly scalable distributed deep learning system using Spark and Paddle (a deep learning platform recently open-sourced by Baidu).

Learn ROS for Self-Driving Cars | ROS Tutorial

Learn the essentials of controlling autonomous cars using ROS. You will be able to apply these tools and your ROS knowledge to your own self-driving car project.

Programming for Robotics (ROS) Course 1

The slides are available here: ..

[ROS Q&A] How to Start Programming Drones using ROS

This video will show you how to start programming drones using ROS (Robot Operating System). We'll walk through the basics of a Parrot AR Drone Gazebo ...

Autoware - Mapping using rosbag

This video demonstrates how to create a pointcloud map using rosbag data. 1. Go to the Simulation tab and select a rosbag which includes /points_raw. 2. Click "Play" ...

Mastering ROS Tutorials 1.0: Getting started with ROS (Using an IDE!!)

This is the first video in the Mastering ROS Tutorial set. The video briefly discusses the major concepts of ROS as well as guides the viewers to my favorite IDE ...

Ten-Minute Guide to rosbridge

In ten minutes, this screencast walks through installing rosbridge and the basic operations (publishing and making service calls) used to access the full power of ...

Hybrid A* Path Planning with Search Visualization

NOTE: for the purpose of visualization, the search is heavily slowed down to 5 ms per node expansion. A hybrid A* algorithm that I am developing for my master ...

How to Make a Simple Tensorflow Speech Recognizer

In this video, we'll make a super simple speech recognizer in ..

[ROS Q&A] 116 - Launching Husarion ROSbot navigation demo in Gazebo simulation

In this video we will learn how to install the ROSbot Gazebo simulation in just 5 minutes and how to launch the mapping and navigation demos that it includes.