AI News
Simons Institute Open Lecture: Safety-Critical Autonomous Systems: What is Possible? What is Required?
The last 20 years have seen enormous progress in autonomous vehicles, from planetary rovers, to unmanned aerial vehicles, to the self-driving cars that we are starting to see on the roads around us.
In this talk, I will discuss some of the approaches used in the aerospace industry, where flight-critical subsystems must achieve failure rates of less than 1 failure in 10⁹ flight hours (i.e., a failure probability below 10⁻⁹ per flight hour).
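To put that certification target in perspective, here is a back-of-the-envelope calculation; the fleet size and annual utilization figures are illustrative assumptions, not numbers from the talk:

```python
# Illustrative expected-failure calculation for a 1-in-10^9 flight-hour requirement.
failure_rate = 1e-9          # failures per flight hour (certification target)
fleet_size = 5_000           # hypothetical fleet of aircraft
hours_per_year = 3_000       # hypothetical annual utilization per aircraft

fleet_hours = fleet_size * hours_per_year        # 15 million flight hours/year
expected_failures = failure_rate * fleet_hours   # expected failures per year

print(f"Fleet flight hours per year: {fleet_hours:,}")
print(f"Expected failures per year: {expected_failures}")  # 0.015
```

Even a large fleet flying around the clock would, at that rate, expect decades between failures of a given subsystem, which is the scale of assurance the talk contrasts with current machine-learning practice.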
The Dark Secret at the Heart of AI
Last year, a strange self-driving car was released onto the quiet roads of Monmouth County, New Jersey.
The experimental vehicle, developed by researchers at the chip maker Nvidia, didn’t look different from other autonomous cars, but it was unlike anything demonstrated by Google, Tesla, or General Motors, and it showed the rising power of artificial intelligence.
Information from the vehicle’s sensors goes straight into a huge network of artificial neurons that process the data and then deliver the commands required to operate the steering wheel, the brakes, and other systems.
The car’s underlying AI technology, known as deep learning, has proved very powerful at solving problems in recent years, and it has been widely deployed for tasks like image captioning, voice recognition, and language translation.
There is now hope that the same techniques will be able to diagnose deadly diseases, make million-dollar trading decisions, and do countless other things to transform whole industries.
But this won’t happen—or shouldn’t happen—unless we find ways of making techniques like deep learning more understandable to their creators and accountable to their users.
But banks, the military, employers, and others are now turning their attention to more complex machine-learning approaches that could make automated decision-making altogether inscrutable.
“Whether it’s an investment decision, a medical decision, or maybe a military decision, you don’t want to just rely on a ‘black box’ method.” There’s already an argument that being able to interrogate an AI system about how it reached its conclusions is a fundamental legal right.
The resulting program, which the researchers named Deep Patient, was trained using data from about 700,000 individuals, and when tested on new records, it proved incredibly good at predicting disease.
Without any expert instruction, Deep Patient had discovered patterns hidden in the hospital data that seemed to indicate when people were on the way to a wide range of ailments, including cancer of the liver.
If something like Deep Patient is actually going to help doctors, it will ideally give them the rationale for its prediction, to reassure them that it is accurate and to justify, say, a change in the drugs someone is being prescribed.
Many thought it made the most sense to build machines that reasoned according to rules and logic, making their inner workings transparent to anyone who cared to examine some code.
But it was not until the start of this decade, after several clever tweaks and refinements, that very large—or “deep”—neural networks demonstrated dramatic improvements in automated perception.
It has given computers extraordinary powers, like the ability to recognize spoken words almost as well as a person could, a skill too complex to code into the machine by hand.
The same approach can be applied, roughly speaking, to other inputs that lead a machine to teach itself: the sounds that make up words in speech, the letters and words that create sentences in text, or the steering-wheel movements required for driving.
The resulting images, produced by a project known as Deep Dream, showed grotesque, alien-like animals emerging from clouds and plants, and hallucinatory pagodas blooming across forests and mountain ranges.
In 2015, Clune’s group showed how certain images could fool such a network into perceiving things that aren’t there, because the images exploit the low-level patterns the system searches for.
The images that turn up are abstract (imagine an impressionistic take on a flamingo or a school bus), highlighting the mysterious nature of the machine’s perceptual abilities.
It is the interplay of calculations inside a deep neural network that is crucial to higher-level pattern recognition and complex decision-making, but those calculations are a quagmire of mathematical functions and variables.
“But once it becomes very large, and it has thousands of units per layer and maybe hundreds of layers, then it becomes quite un-understandable.” In the office next to Jaakkola is Regina Barzilay, an MIT professor who is determined to apply machine learning to medicine.
The diagnosis was shocking in itself, but Barzilay was also dismayed that cutting-edge statistical and machine-learning methods were not being used to help with oncological research or to guide patient treatment.
She envisions using more of the raw data that she says is currently underutilized: “imaging data, pathology data, all this information.”
After she finished cancer treatment last year, Barzilay and her students began working with doctors at Massachusetts General Hospital to develop a system capable of mining pathology reports to identify patients with specific clinical characteristics that researchers might want to study.
Barzilay and her students are also developing a deep-learning algorithm capable of finding early signs of breast cancer in mammogram images, and they aim to give this system some ability to explain its reasoning, too.
The U.S. military is pouring billions into projects that will use machine learning to pilot vehicles and aircraft, identify targets, and help analysts sift through huge piles of intelligence data.
A silver-haired veteran of the agency who previously oversaw the DARPA project that eventually led to the creation of Siri, Gunning says automation is creeping into countless areas of the military.
But soldiers probably won’t feel comfortable in a robotic tank that doesn’t explain itself to them, and analysts will be reluctant to act on information without some reasoning.
A chapter of Dennett’s latest book, From Bacteria to Bach and Back, an encyclopedic treatise on consciousness, suggests that a natural part of the evolution of intelligence itself is the creation of systems capable of performing tasks their creators do not know how to do.
But since there may be no perfect answer, we should be as cautious of AI explanations as we are of each other’s—no matter how clever a machine seems. “If it can’t do better than us at explaining what it’s doing,” he says, “then don’t trust it.”
Self-driving car technology: When will the robots hit the road?
Around the world, the number of ADAS systems (for instance, those for night vision and blind-spot vehicle detection) rose from 90 million units in 2014 to about 140 million in 2016—an increase of more than 50 percent in just two years.
The adoption rate of surround-view parking systems, for example, increased by more than 150 percent from 2014 to 2016, while the number of adaptive front-lighting systems rose by around 20 percent in the same time frame (Exhibit 1).
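A quick arithmetic check of the headline ADAS growth figure, using the unit counts quoted above:

```python
# Growth in ADAS units from the figures cited in the text.
units_2014 = 90e6    # ADAS units worldwide, 2014
units_2016 = 140e6   # ADAS units worldwide, 2016

growth = (units_2016 - units_2014) / units_2014
print(f"{growth:.0%}")  # 56% — consistent with "more than 50 percent" over two years
```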
Many higher-end vehicles not only autonomously steer, accelerate, and brake in highway conditions but also act to avoid vehicle crashes and reduce the impact of imminent collisions.
But while headway has been made, the industry hasn’t yet determined the optimum technology archetype for semiautonomous vehicles (for example, those at SAE level 3) and consequently remains in the test-and-refine mode.
The radar-over-camera approach, for example, can work well in highway settings, where the flow of traffic is relatively predictable and the granularity levels required to map the environment are less strict.
The combined approach, on the other hand, works better in heavily populated urban areas, where accurate measurements and granularity can help vehicles navigate narrow streets and identify smaller objects of interest.
In 2015, accidents involving distracted drivers in the United States killed nearly 3,500 people and injured 391,000 more in conventional cars, with drivers actively controlling their vehicles.
Unfortunately, experts expect that the number of vehicle crashes initially will not decline dramatically after the introduction of AVs that offer significant levels of autonomous control but nonetheless require drivers to remain fully engaged in a backup, fail-safe role.
Safety experts worry that drivers in semiautonomous vehicles could pursue activities such as reading or texting and thus lack the required situational awareness when asked to take control.
At 65 miles an hour, cars take less than four seconds to travel the length of a football field, and the longer a driver remains disengaged from driving, the longer the reengagement process could take.
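The arithmetic behind that football-field figure is easy to verify; taking the field as 100 yards is an assumption for this sketch:

```python
# Sanity check: how long does a car at 65 mph take to cover a football field?
mph_to_mps = 1609.344 / 3600          # meters per second per 1 mph
speed = 65 * mph_to_mps               # ~29.06 m/s
field_length = 100 * 0.9144           # 100 yards in meters (91.44 m)

t = field_length / speed              # time to traverse the field
print(f"{t:.2f} s")                   # 3.15 s — under four seconds, as stated
```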
We’ve seen similar problems in other contexts: in 2009, a commercial airliner overshot its destination airport by 150 miles because the pilots weren’t engaged while their plane was flying on autopilot.
While the technology is ready for testing at a working level in limited situations, validating it might take years because the systems must be exposed to a significant number of uncommon situations.
Another prerequisite is tuning the systems to operate successfully in given situations and conducting additional tuning as the geofenced region expands to encompass broader use cases and geographies.
The challenge at SAE’s levels 4 and 5 centers on operating vehicles without restrictions in any environment—for instance, unmapped areas or places that lack lane markings or significant infrastructure and environmental features.
Building a system that can operate in (mostly) unrestricted environments will therefore require dramatically more effort, given the exponentially increased number of use cases that engineers must cover and test.
While hardware innovations will deliver the required computational power, and prices (especially for sensors) appear likely to go on falling, software will remain a critical bottleneck (infographic).
Several high-tech players claim to have reduced the cost of lidar to under $500, and another company has debuted a system that’s potentially capable of enabling full autonomy (with roughly a dozen sensors) for approximately $10,000.
The system, for example, should treat a stationary motorcycle and a bicyclist riding on the side of the street in different ways and must therefore capture the critical differences during the object-analysis phase.
Also, the sensor fusion required to validate the existence and type of an object is technically challenging to achieve given the differences among the types of data such systems must compare—the point cloud (from lidar), the object list (from radar), and images (from cameras).
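To make the fusion challenge concrete, here is a minimal sketch of cross-sensor validation by spatial association. The data shapes, class names, and gate distance are all hypothetical simplifications, not an actual AV stack: real systems fuse in 3-D with tracking, uncertainty models, and calibrated sensor frames.

```python
# Minimal sensor-fusion sketch (hypothetical data shapes, not a production design):
# each sensor reports candidate objects in a shared vehicle coordinate frame, and
# a detection is "validated" when two or more distinct sensors agree within a gate.
from dataclasses import dataclass
from math import dist

@dataclass
class Detection:
    sensor: str          # "lidar" | "radar" | "camera"
    position: tuple      # (x, y) in meters, vehicle frame
    label: str           # classifier output, e.g. "bicycle"

def fuse(detections, gate=1.5):
    """Group detections within `gate` meters; keep multi-sensor groups."""
    groups = []
    for d in detections:
        for g in groups:
            if dist(d.position, g[0].position) < gate:
                g.append(d)
                break
        else:
            groups.append([d])
    # keep only objects corroborated by two or more distinct sensors
    return [g for g in groups if len({d.sensor for d in g}) >= 2]

obs = [
    Detection("lidar",  (12.1, 3.0),  "unknown"),
    Detection("radar",  (12.4, 3.2),  "unknown"),
    Detection("camera", (12.2, 3.1),  "bicycle"),
    Detection("radar",  (40.0, -2.0), "unknown"),  # uncorroborated: dropped
]
validated = fuse(obs)
print(len(validated))  # 1 — the bicycle, seen by all three sensors
```

The hard part the text alludes to is hidden inside the inputs: the lidar point cloud must first be clustered into candidate objects, the radar object list re-projected into the common frame, and the camera image run through a classifier before any such association is even possible.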
However, developers can build a database of if-then rules and supplement it with an artificial-intelligence (AI) engine that makes smart inferences and takes action in scenarios not covered by if-then rules.
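The rules-plus-inference pattern described here can be sketched in a few lines; the scenario names and the fallback policy below are hypothetical stand-ins, with the fallback reduced to a conservative default where a real system would invoke a learned model:

```python
# Sketch of the rules-plus-inference pattern (all names hypothetical).
# An explicit if-then rule table handles known scenarios; anything not
# covered falls through to an inference engine (here, a stub policy).

RULES = {
    "pedestrian_in_crosswalk": "stop",
    "stationary_motorcycle_ahead": "change_lane",
    "cyclist_on_shoulder": "slow_and_give_room",
}

def fallback_policy(scenario):
    # Stand-in for the AI engine: default to the most conservative action.
    return "slow_and_assess"

def decide(scenario):
    """Prefer an explicit rule; otherwise defer to the inference engine."""
    return RULES.get(scenario, fallback_policy(scenario))

print(decide("pedestrian_in_crosswalk"))  # stop
print(decide("deer_crossing_at_night"))   # slow_and_assess (no rule: fallback)
```

The design choice mirrors the text: explicit rules keep behavior auditable for the cases engineers anticipated, while the inference layer covers the long tail that no rule database can enumerate.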
Given the differences in accuracy between the two, designers can use the second approach in areas (for example, rural and less populated roads) where precise information on the location of vehicles isn’t critical for navigation.
Instead of the current overwhelming focus on components with specific uses, the industry needs to pay more attention to developing actual systems of systems, especially given the huge safety issues surrounding AVs.
Incumbents looking to shape—and perhaps control—strategic elements of this industry face a legion of resourceful, highly competitive players with the wherewithal to give even the best-positioned insider a run for its money.
Given the frenetic pace of the AV industry, companies seeking a piece of this pie need to position themselves strategically to capture it now, and regulators need to play catch-up to ensure the safety of the public without hampering the race for innovation.
The Big Problem With Self-Driving Cars Is People
The engineers who built routers for the fledgling ARPANET in 1969 never dreamed that networking technology would upend journalism.
I am confident we will eventually get to fully self-driving cars, but my concern is that during trial deployments we will run into many unexpected consequences that will delay mass deployment for many years.
If I was walking on a moonless night along a country road and heard a car approaching, I’d get off the road, climbing into bushes if necessary, until the car had passed.
Two questions arise: If self-driving cars can’t handle such examples of human caprice, how will people feel about sharing space with these new aliens?
And how much will the performance of self-driving cars need to be reduced, or otherwise modified, to enable them to share the roads smoothly with cars that are driven entirely or primarily by humans?
People expect to be able to cross a street at any point, but they know that there is give-and-take between drivers and pedestrians, often mediated by subtle cues of eye contact or body language.
First, on the longer main roads the cars mostly travel without interruption, but there are stop signs mediating access to these through roads from the smaller streets that cross them.
People walking along these main roads assume that they, too, have the right-of-way, expecting that drivers who have stopped on a side street will let them walk in front if they are about to step off the curb.
And third, the sidewalks here are narrow, and when snow has made them hard or impossible to traverse, people often choose to walk along the roads instead, trying to provide room for the cars to pass but nevertheless expecting the cars to be respectful of them.
People step out tentatively into the marked crosswalks and visually check whether oncoming drivers are slowing down or indicate in some way that they have seen the pedestrians.
Pedestrians and drivers mostly engage in this kind of brief social interaction, and any lack of interaction is usually an indicator to pedestrians that the driver has not seen them.
In yet more hostile areas, such as parts of New York City, pedestrians and drivers often play even more contentious games, such as purposefully avoiding eye contact so as to force the other party to yield.
And won’t some bored jerks try to spoof such cars by standing at the side of the road and gesticulating as though they’re about to jump off the curb?
Otherwise, without social interactions, it would be like the case of the dark country road, in which the driverless car has to be granted the right-of-way over pedestrians and cars with human drivers.
The self-driving cars will themselves range from semiautonomous ones, with level-2 or -3 autonomy, to fully autonomous ones, at levels 4 and 5 [see sidebar, “5 Levels of Autonomy”].
In private conversations with me, at least one manufacturer is afraid that human drivers will bully self-driving cars operating with level-2 autonomy, so the engineers are taking care that their level-3 test cars look the same as conventional models.
One day I needed to stop at the UPS store there to ship a heavy package, and as there were no free parking spots I found myself cruising up and down a 100-meter stretch as I waited for a spot to open up.
I’m sure the owners will be more creative than I can be, but here are three additional examples: Early on in the transition to driverless cars, the rich will have a whole new way to alienate the rest of society.
True, there are many driverless train systems in the world, but they mostly operate in very restricted environments—in the United States most are found in airports and span just a few kilometers of track, all of it completely segregated from the vehicles and people that are outside the system.
And, just like trains in airports, we’ll see level-4 cars driving themselves in limited, pedestrian-free domains—in garages, for instance, where drivers can drop off cars and let them park themselves with only inches to spare on each side.
Somewhat later we might see level-4 autonomy for ride-hailing services in limited areas of major cities, with the ride beginning and ending within a well-defined area where well-observed “walk” signals keep pedestrians and cars apart.
As Roy Amara’s eponymous law famously states, “we tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.” That is where we are today.
- On 14 April 2021
MIT 6.S094: Introduction to Deep Learning and Self-Driving Cars
This is lecture 1 of course 6.S094: Deep Learning for Self-Driving Cars, taught in Winter 2017. Contact: email@example.com.
Russia's Geography Problem
“What if Russia Never Existed” by AlternateHistoryHub.
The future we're building -- and boring | Elon Musk
Elon Musk discusses his new project digging tunnels under LA, the latest from Tesla and SpaceX and his motivation for building a future on Mars in conversation with TED's Head Curator, Chris...
HondaJet F3: World’s Most Advanced Business Jet Commercial (CARJAM TV)
How to Level a Motorhome
Leveling an RV on a sloped campsite is pretty simple when you know the right steps to follow. Here's how we do it.
Less Car, Less Cars – The Uniti Vision | Siemens Digital Cities Forum
Uniti was invited by our partner Siemens to participate in Digital Cities Forum 2017 in London. Our CEO, Lewis, delivered a keynote that explains the vision for Uniti. Digital Cities Forum...
Explanation of gyro precession: More: Less Than: Equal To: Huge thanks to A/Prof Emeritus Rod Cross
How to Be as Productive as Elon Musk - 5 Essential Practices
Why Russia Did Not Put a Man on the Moon - The Secret Soviet Moon Rocket
It's probably the most well-known peacetime battle of the 20th century between the USA and the Soviet Union, in both technological and ideological terms.
Day in the Life: Mechanical Engineer
Nivay Anandarajah is a mechanical engineer with Alloy Product Development. As part of ConnectEd's "Day in the Life" series, Nivay shows us the process of designing headphones for Beats by Dre,...