
Nvidia Wants to Build the Robocar's Brain

It packs a pair of the company's Tegra X1 processors, each capable of a bit more than a teraflop (a trillion floating-point operations per second). Together they can manage up to 12 cameras, including units that monitor the driver for things like drowsiness or distractedness.

If you’ve played Grand Theft Auto, you’ll have a good idea of what a professional driving simulator is like, and if you’ve played with simulators, you’ll have a passing familiarity with self-driving cars.

Learning as you go would be the ideal experimental method, and such a skill would come in handy whenever the high-detail maps on which autonomous cars rely fail, for instance when a truck jackknifes, closing a lane.

Auto companies that work with Nvidia (which, by the way, already has processors of one kind or another in some 8 million cars) and are presumed to be lining up for the development kit include Tesla, Audi and BMW, as well as top-tier suppliers, such as Delphi.

Take the failure case involving driver assistance using radar: anything that’s highly reflective, metal confetti or a Mylar balloon, for example, will produce a large radar signature. “We actually had an engineer driving when an empty potato chip bag blew in front and the car slammed on the brakes.” Lidar has its own weaknesses, which simulators can model and help to correct.

Self-driving car technology: When will the robots hit the road?

Around the world, the number of ADAS systems (for instance, those for night vision and blind-spot vehicle detection) rose from 90 million units in 2014 to about 140 million in 2016, an increase of more than 50 percent in just two years.

The adoption rate of surround-view parking systems, for example, increased by more than 150 percent from 2014 to 2016, while the number of adaptive front-lighting systems rose by around 20 percent in the same time frame (Exhibit 1).

Many higher-end vehicles not only autonomously steer, accelerate, and brake in highway conditions but also act to avoid vehicle crashes and reduce the impact of imminent collisions.

But while headway has been made, the industry hasn’t yet determined the optimum technology archetype for semiautonomous vehicles (for example, those at SAE level 3) and consequently remains in the test-and-refine mode.

The radar-over-camera approach, for example, can work well in highway settings, where the flow of traffic is relatively predictable and the granularity levels required to map the environment are less strict.

The combined approach, on the other hand, works better in heavily populated urban areas, where accurate measurements and granularity can help vehicles navigate narrow streets and identify smaller objects of interest.
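
As a rough illustration of that trade-off, the sketch below (hypothetical names, not drawn from any vendor's stack) picks a perception configuration by driving context, using lighter radar-plus-camera sensing on highways and a fuller fused stack in urban areas:

```python
# Hypothetical sketch: pick a perception configuration by driving context,
# mirroring the radar-over-camera vs. combined-approach trade-off above.
from dataclasses import dataclass
from enum import Enum, auto


class Context(Enum):
    HIGHWAY = auto()
    URBAN = auto()


@dataclass
class PerceptionConfig:
    sensors: str
    fuse_lidar: bool
    grid_resolution_m: float  # coarser environment maps suffice on highways


def select_config(context: Context) -> PerceptionConfig:
    if context is Context.HIGHWAY:
        # Radar-over-camera: traffic flow is predictable, coarse granularity is enough.
        return PerceptionConfig("radar+camera", fuse_lidar=False, grid_resolution_m=0.5)
    # Combined approach: narrow streets and small objects need finer granularity.
    return PerceptionConfig("radar+camera+lidar", fuse_lidar=True, grid_resolution_m=0.1)


print(select_config(Context.URBAN))
```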

In 2015, accidents involving distracted drivers in the United States killed nearly 3,500 people and injured 391,000 more in conventional cars, with drivers actively controlling their vehicles.

Unfortunately, experts expect that the number of vehicle crashes initially will not decline dramatically after the introduction of AVs that offer significant levels of autonomous control but nonetheless require drivers to remain fully engaged in a backup, fail-safe role.

Safety experts worry that drivers in semiautonomous vehicles could pursue activities such as reading or texting and thus lack the required situational awareness when asked to take control.

At 65 miles an hour, cars take less than four seconds to travel the length of a football field, and the longer a driver remains disengaged from driving, the longer the reengagement process could take.
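
As a quick back-of-the-envelope check of that figure (assuming a 100-yard, 300-foot playing field):

```python
# Quick check of the "length of a football field" figure (assumes a 300-foot field).
speed_mph = 65
feet_per_mile = 5280
seconds_per_hour = 3600

speed_fps = speed_mph * feet_per_mile / seconds_per_hour   # ~95.3 ft/s
field_length_ft = 300
print(field_length_ft / speed_fps)                          # ~3.1 s, i.e. under four seconds
```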

We’ve seen similar problems in other contexts: in 2009, a commercial airliner overshot its destination airport by 150 miles because the pilots weren’t engaged while their plane was flying on autopilot.

While the technology is ready for testing at a working level in limited situations, validating it might take years because the systems must be exposed to a significant number of uncommon situations.

Another prerequisite is tuning the systems to operate successfully in given situations and conducting additional tuning as the geofenced region expands to encompass broader use cases and geographies.

The challenge at SAE’s levels 4 and 5 centers on operating vehicles without restrictions in any environment—for instance, unmapped areas or places that don’t have lanes or include significant infrastructure and environmental features.

Building a system that can operate in (mostly) unrestricted environments will therefore require dramatically more effort, given the exponentially increased number of use cases that engineers must cover and test.

While hardware innovations will deliver the required computational power, and prices (especially for sensors) appear likely to go on falling, software will remain a critical bottleneck (infographic).

Several high-tech players claim to have reduced the cost of lidar to under $500, and another company has debuted a system that’s potentially capable of enabling full autonomy (with roughly a dozen sensors) for approximately $10,000.

The system, for example, should treat a stationary motorcycle and a bicyclist riding on the side of the street in different ways and must therefore capture the critical differences during the object-analysis phase.

Also, the sensor fusion required to validate the existence and type of an object is technically challenging to achieve given the differences among the types of data such systems must compare—the point cloud (from lidar), the object list (from radar), and images (from cameras).
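
The sketch below is a minimal, illustrative rendering of those three representations and a naive association step; the type names are invented for this example, and real fusion pipelines are far more involved:

```python
# Illustrative (invented) types for the three representations named above, plus a
# naive nearest-neighbor association step used to confirm a radar object with lidar.
import math
from dataclasses import dataclass
from typing import List, Optional, Tuple


@dataclass
class RadarObject:                 # entry in the radar "object list"
    x: float                       # metres, vehicle frame
    y: float
    range_rate: float              # closing speed, m/s


@dataclass
class LidarCluster:                # cluster of returns from the lidar point cloud
    centroid: Tuple[float, float]
    num_points: int


@dataclass
class CameraDetection:             # classified box from a camera image
    label: str                     # e.g. "bicyclist" vs. "motorcycle"
    bearing_deg: float


def associate(radar_obj: RadarObject,
              clusters: List[LidarCluster],
              max_dist_m: float = 2.0) -> Optional[LidarCluster]:
    """Return the nearest lidar cluster within max_dist_m, or None if unconfirmed."""
    best, best_d = None, max_dist_m
    for c in clusters:
        d = math.hypot(radar_obj.x - c.centroid[0], radar_obj.y - c.centroid[1])
        if d < best_d:
            best, best_d = c, d
    return best
```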

However, developers can build a database of if-then rules and supplement it with an artificial-intelligence (AI) engine that makes smart inferences and takes action in scenarios not covered by if-then rules.
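
A minimal sketch of that hybrid pattern might look like the following, where the rule list and the model interface are placeholders rather than any production API:

```python
# Minimal sketch of the hybrid approach: explicit if-then rules first, a learned
# model as a fallback for uncovered scenarios. Rule contents and the model
# interface are placeholders, not a real product's API.
from typing import Callable, Dict, List, Tuple

Scenario = Dict[str, object]   # e.g. {"object": "pedestrian", "distance_m": 12}
Action = str                   # e.g. "brake", "yield", "proceed"

RULES: List[Tuple[Callable[[Scenario], bool], Action]] = [
    (lambda s: s.get("object") == "pedestrian" and s.get("distance_m", 1e9) < 15, "brake"),
    (lambda s: s.get("object") == "stopped_vehicle" and s.get("lane") == "ego", "change_lane"),
]


def decide(scenario: Scenario, model: Callable[[Scenario], Action]) -> Action:
    # 1) The if-then rule database covers the known, enumerated cases.
    for condition, action in RULES:
        if condition(scenario):
            return action
    # 2) Anything not covered falls through to the AI engine's inference.
    return model(scenario)


print(decide({"object": "pedestrian", "distance_m": 10}, model=lambda s: "proceed"))
```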

Given the differences in accuracy between the two, designers can use the second approach in areas (for example, rural and less populated roads) where precise information on the location of vehicles isn’t critical for navigation.

Instead of the current overwhelming focus on components with specific uses, the industry needs to pay more attention to developing actual systems of systems, especially given the huge safety issues surrounding AVs.

Incumbents looking to shape—and perhaps control—strategic elements of this industry face a legion of resourceful, highly competitive players with the wherewithal to give even the best-positioned insider a run for its money.

Given the frenetic pace of the AV industry, companies seeking a piece of this pie need to position themselves strategically to capture it now, and regulators need to play catch-up to ensure the safety of the public without hampering the race for innovation.

Nvidia vs NXP—Whose Robocar Brain Will Win?

NXP and Nvidia have announced computing platforms for smart cars, and each company claims that its platform is the best by far.

“All the parts that come out will have the advantage of being compatible with this platform—this vision processor, this radar processor, this torque management device,” said Matt Johnson, who’s in charge of NXP’s product lines, software, and automotive processors.

NXP says that eight of the top 15 car makers have adopted the S32x platform for use in future models, and this isn’t too surprising. The company, based in Eindhoven, the Netherlands, instantly became the biggest supplier of automotive chips with its 2015 acquisition of the semiconductor manufacturer Freescale.

“And none of the other companies are stating what their compute horsepower is. No way anyone else out there comes close to what we have.” Pegasus can do the equivalent of 320 trillion operations per second, the work of roughly 100 servers, Shapiro says.

But an entire integrated self-driving system is fed by data from lidar, radar, ultrasound, GPS, inertial guidance, and on and on, and it must process it all and produce a decision in a fraction of a second.

That’s the trick behind programs that recognize stop signs, respect the edges of the lane or highway, and distinguish a flickering shadow from a child running into the street.
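
For a sense of what such programs look like under the hood, here is a minimal, illustrative convolutional classifier in PyTorch for a stop-sign-style recognition task; it is a generic sketch, not Nvidia's actual software:

```python
# Illustrative PyTorch sketch of the kind of small convolutional classifier trained
# for tasks like stop-sign recognition. Generic example, not Nvidia's software.
import torch
import torch.nn as nn


class TinySignClassifier(nn.Module):
    def __init__(self, num_classes: int = 2):        # "stop sign" vs. "not a stop sign"
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                          # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                          # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))


# One forward pass on a dummy batch of 64x64 RGB crops.
logits = TinySignClassifier()(torch.randn(1, 3, 64, 64))
print(logits.shape)   # torch.Size([1, 2])
```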

Companies that want to test NXP’s product must content themselves now with software simulations. Nvidia’s Pegasus won’t incorporate the latest GPUs until next year, so for the time being, its customers must work with Drive PX, a forerunner to Pegasus that was unveiled last year.

The life cycle phases of a product such as a naval ship include design, production, deployment, operations, maintenance, and upgrades.

The Lockheed Martin Surface Navy Innovation Center (LM-SNIC) is investigating the best practices for applying VR and AR to the life cycle phases of complex products such as naval ships, aircraft, and ground radars.

PlayStation Presents - PSX 2017 Opening Celebration | English CC

Join us as we kick off PSX 2017 with PlayStation Presents, starting at 8 PM Pacific on Friday December 8th. Listen in on candid discussions with some of PlayStation's top developers, get...