AI News, Drive.ai Solves Autonomous Cars' Communication Problem
- On Saturday, February 17, 2018
Understandably, most people working on autonomous vehicles are very focused on things like getting the cars to avoid running into stuff.
And in general, this is something that autonomous cars have gotten very good at—especially on highways and in other areas where they don't have to worry about unpredictable humans running around and complicating their decision making.
Today, Drive.ai is “officially emerging from stealth” (whatever that means), and we've learned a bit more about what the company is working on. Drive.ai is touting a retrofit kit for business fleets that can imbue existing vehicles with full autonomy.
Now, imagine a driverless car in the same situation. With no human in control, how would you know whether the car has: a) detected you at all; b) understood what you want to do; and c) decided that it's going to stop for you?
It also probably happens far less frequently than it actually should. (I consider myself an expert on this subject, since I drove from NYC to Washington, D.C., last weekend.) Not only will autonomous cars actually use their turn signals, but with the ability to communicate more complex concepts, they could even politely ask to merge, provide useful information like “slowing for accident ahead,” or even apologize if they cut you off, which they probably won't ever do.
The fundamental necessity for a focus on HRI in the first generation of commercial autonomous vehicles stems from the fact that there's going to be a significant transitional period between mostly human-driven cars and mostly autonomous cars.
Going back to the crosswalk example, the difference is between making sure your autonomous car doesn't hit humans in crosswalks and actually helping humans safely cross the street.
For more details on applying HRI techniques to driverless cars, as well as more on Drive.ai's full-stack deep learning approach to autonomy, we spoke with co-founder and president Carol Reiley.
IEEE Spectrum: How is Drive.ai's approach to self-driving cars unique?
Sebastian [Thrun, who led Stanford's DARPA Grand Challenge team before developing autonomous cars at Google] had said “computer vision isn't going to work, and I'm betting on HD maps and lidar,” and that's how Google's autonomous car program was built: on the assumption that computer vision won't work.
At that point, Google had already invested years into a non-deep-learning approach, and they're switching it out module by module, but it's hard to fundamentally change the approach.
There are all these other subtle cues that humans look for that help us navigate in the world, and make it seem like [our cars are] more socially intelligent, because you can start anticipating motions before they happen.
I think that the auto industry takes a modular approach to things, but self-driving cars are not a modular problem: they're a software-based, holistic thing, and you have to step back and look at the big picture.
Because the company's vision involves vehicles that will “communicate transparently with us, have personality, and make us feel welcome and safe, even without a human driver,” we recommend that you find creative ways of pestering them just to see what they do, and then tell us about it.
The press release mentions some existing partnerships with major OEMs and automotive suppliers. And given that $12 million in funding, we wouldn't be surprised to see vehicles with big friendly screens politely driving around California delivering things within the next year or two.
To Survive the Streets, Robocars Must Learn to Think Like Humans
Next time you’re driving down the road or walking down the street, pause to consider how you read your surroundings.
To prevent that kind of gridlock, researchers are leaning on artificial intelligence and the ability to teach driving systems, through modeling and repetitive observation, which behaviors mean what, and how the system should react to them.
So their actions are rational when seen from [that perspective], and would appear irrational when seen from the perspective of other intentions.” Say a driver in the right lane of the freeway accelerates.
The computer knows people should slow down as they approach exits, and can infer this person is likely to continue straight ahead instead of taking that upcoming off ramp.
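That kind of intent inference can be sketched as a one-step Bayesian update. The priors and likelihoods below are made-up numbers for illustration, not from any real driving model: the key point is that observing acceleration near an off-ramp sharply favors the "continue straight" hypothesis.

```python
# Hypothetical numbers for a minimal Bayesian intent filter.
# Prior belief about the driver's intent before any observation:
priors = {"exit": 0.3, "straight": 0.7}
# Assumed likelihood of observing "accelerating near the off-ramp"
# under each intent (drivers heading for the exit usually slow down):
likelihood = {"exit": 0.05, "straight": 0.6}

def posterior(priors, likelihood):
    """Bayes' rule: P(intent | obs) ∝ P(obs | intent) * P(intent)."""
    unnorm = {i: priors[i] * likelihood[i] for i in priors}
    z = sum(unnorm.values())
    return {i: p / z for i, p in unnorm.items()}

post = posterior(priors, likelihood)
print(post)  # "straight" now dominates the belief
```

With these illustrative numbers, the posterior probability of "straight" rises from 0.7 to about 0.97, which is the kind of shift that lets a planner commit to a lane-keeping prediction.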
“We’re really good as human beings at recognizing certain kinds of behaviors that look one way to a machine, but in our social lens, it’s something else.” Imagine you’re driving down a city block when you see a man walking toward the curb.
“So now I’m trying to figure out whether or not it’s safe to keep going based on what the rest of the traffic is going to do.” If it seems the world is now headed for some sort of drivers-ed hellscape, don’t worry.
Dariu Gavrila studies intelligent vehicles at Delft University of Technology, training computers for challenges ranging from navigating complex intersections with multiple moving hazards to more specific situations such as road debris, traffic police, and things as unusual as someone pushing a cart down the middle of the street.
That work means factoring in the context around pedestrian traffic—proximity to curbs, the presence of driveways or public building entrances—and the norms of behavior in these environments.
“We showed in real vehicle demonstration that an autonomous system can react up to one second faster than a human, without introducing false alarms.” There are practical limits to what the computers can do, though.
More sophisticated behavior models might give us up to two seconds of predictability.” Still, that second or two of warning might be all a computerized system needs, since it's well within human response times.
“When you’re essentially trying to predict the future, that’s a massive computational task, and of course it just produces a probabilistic guess,” says Jack Weast, Intel’s chief systems architect for autonomous drive systems.
Autonomous cars are learning our unpredictable driving habits
A team led by Katherine Driggs-Campbell has developed an algorithm that can guess with up to 92 per cent accuracy whether a human driver will make a lane change.
Enthusiasts are excited that self-driving vehicles could lead to fewer crashes and less traffic.
When we drive, we watch for little signs from other cars to indicate whether they might turn or change lanes or slow down.
“How do you ensure the autonomous vehicle is clearly communicating with the humans, and how do you know the human is understanding what they’re doing?”
Each time the driver decided to make a lane change, they pushed a button on the steering wheel before doing so.
The researchers could then analyse data from the simulator for patterns at the time of lane changes: Where were all of the cars on the road?
They used some of the data to train the algorithm, then put the computer behind the wheel in re-runs of the simulations.
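That train-then-replay loop can be sketched in miniature. The sketch below substitutes synthetic features and plain logistic regression for the team's actual algorithm and simulator logs (which are not described in detail here), but it shows the same structure: label moments where a lane change is imminent, fit a classifier, and measure how often it predicts the driver's decision.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the simulator logs: each row is a feature
# vector at one moment (e.g. gap to the car ahead, relative speed,
# gap in the target lane); label 1 = driver signaled a lane change.
n = 400
X = rng.normal(size=(n, 3))
true_w = np.array([1.5, -2.0, 1.0])        # hidden "driver habit"
y = (X @ true_w > 0).astype(float)          # deterministic labels

# Logistic regression trained by gradient descent (no sklearn).
w = np.zeros(3)
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w)))
    w -= 0.1 * (X.T @ (p - y)) / n

pred = (1 / (1 + np.exp(-(X @ w))) > 0.5)
acc = (pred == y.astype(bool)).mean()
print(f"training accuracy: {acc:.2f}")
```

On real driving data the signal is far noisier, which is why the reported figure is "up to 92 per cent" rather than the near-perfect score this toy separable dataset allows.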
Humans Will Bully Mild-Mannered Autonomous Cars
Speaking to the Guardian, the company’s senior technical leader, Erik Coelingh, explained that the automaker plans to leave its self-driving cars unmarked during upcoming London trials so that human drivers aren’t tempted to take advantage.
A new study from the University of California, Santa Cruz, has modeled how pedestrians and autonomous vehicles might interact using game theory—in essence applying a little academic thinking to the everyday game of playing chicken with traffic.
“[P]edestrians will be able to behave with impunity, and autonomous vehicles may facilitate a shift toward pedestrian-oriented urban neighborhoods,” writes the author, Adam Millard-Ball.
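The game-theoretic intuition can be illustrated with a toy payoff matrix. The numbers below are invented for illustration and are not taken from Millard-Ball's model; they just encode that crossing in front of a car that yields is convenient, while crossing in front of one that doesn't is catastrophic.

```python
# Hypothetical payoffs for the crosswalk "game of chicken"
# (pedestrian action, car action) -> pedestrian's payoff.
ped_payoff = {
    ("cross", "yield"):   3,    # crosses quickly, car stops
    ("cross", "proceed"): -10,  # collision: worst outcome
    ("wait",  "yield"):   1,    # waits unnecessarily
    ("wait",  "proceed"): 1,    # waits, car passes
}

def best_response(car_action):
    """Pedestrian's payoff-maximizing action given the car's strategy."""
    return max(["cross", "wait"], key=lambda a: ped_payoff[(a, car_action)])

# A robocar programmed to always yield invites assertive pedestrians:
print(best_response("yield"))    # -> cross
# Against a driver who might not stop, waiting is safer:
print(best_response("proceed"))  # -> wait
```

Because a provably cautious autonomous car is effectively committed to "yield," the pedestrian's best response is always to cross, which is exactly the impunity the study describes.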
Google, for instance, has explained in the past that its AI systems are able to detect cyclists, with the cars being “taught to drive conservatively around them.” But one cyclist in Austin reported that a Google vehicle found itself unable to set off because of its overcautious approach around the bicycle.
In fact, researchers will probably be able to overcome some of these problems by simply making their self-driving cars act more like humans—with, say, smoother driving or authentic horn-honking.
- On Friday, January 18, 2019
MIT 6.S094: Introduction to Deep Learning and Self-Driving Cars
This is lecture 1 of course 6.S094: Deep Learning for Self-Driving Cars taught in Winter 2017. Course website: Lecture 1 slides: Contact: email@example.com.
NVIDIA AI Car Demonstration
In contrast to the usual approach to operating self-driving cars, we did not program any explicit object detection, mapping, path planning or control components into this car. Instead, the...
MIT Self-Driving Cars: Sacha Arnoud, Director of Engineering, Waymo
This is a talk by Sacha Arnoud for course 6.S094: Deep Learning for Self-Driving Cars (2018 version). Sacha is the Director of Engineering at Waymo and his talk is titled "The rise of machine...
Toyota Autonomous Driving Car | Toyota Self Driving Car Test (Guardian and Chauffeur Test)
Toyota Research Institute Releases Video Showing First Demonstration of Guardian and Chauffeur Autonomous Vehicle Platform Toyota Research Institute (TRI) has released a video showing the...
Chris Urmson: How a driverless car sees the road
Statistically, the least reliable part of the car is ... the driver. Chris Urmson heads up Google's driverless car program, one of several efforts to remove humans from the driver's seat. He...
Robot Meets Self Driving Car - Sophia by Hanson & Jack by Audi
Sophia by Hanson Robotics takes a ride in a self-driving car named Jack by Audi. I wrote about my trip to Audi's HQ in detail here if you're interested to learn more about deep learning, ai...
What a driverless world could look like | Wanis Kabbaj
What if traffic flowed through our streets as smoothly and efficiently as blood flows through our veins? Transportation geek Wanis Kabbaj thinks we can find inspiration in the genius of our...
How to Simulate a Self-Driving Car
We're going to use Udacity's car simulator app as an environment to create our own autonomous agent! We'll use Keras to train a convolutional neural network on images from the car's cameras...
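As a rough sketch of that supervised pipeline, the example below fits a linear map from tiny synthetic "frames" to steering angles with least squares. The real project uses a Keras convolutional network on actual camera images from the simulator; this stand-in only demonstrates the underlying regression structure (pixels in, steering angle out) without the deep-learning dependencies.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for behavioral cloning: flattened synthetic "frames"
# paired with recorded steering angles. Here the angles really are
# a linear function of the pixels, so least squares can recover it;
# real camera data needs a conv net to learn the mapping.
n_frames, h, w = 200, 8, 8
images = rng.normal(size=(n_frames, h * w))   # flattened 8x8 frames
true_filter = rng.normal(size=h * w)          # hidden pixels->angle map
steering = images @ true_filter               # "human-recorded" angles

# Fit the model: one linear layer trained by least squares.
weights, *_ = np.linalg.lstsq(images, steering, rcond=None)
pred = images @ weights
print(f"max steering error: {np.abs(pred - steering).max():.2e}")
```

Swapping this linear layer for a stack of convolutional and dense Keras layers, and the synthetic frames for the simulator's camera stream, gives the full behavioral-cloning setup the video describes.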
Stanford Seminar: Autonomous Driving, are we there yet? - Technology, Business, Legal Considerations
EE380: Computer Systems Colloquium Seminar Autonomous Driving, are we there yet? - Technology, Business, Legal Considerations Speaker: Sven Beiker, Stanford GSB Autonomous driving is arguably...