Developing for autonomous driving is hard – really hard

January 8, 2018 Rich Nass

Lots of people are talking about autonomous driving, but the number of people actually developing systems around it is relatively small. There are many reasons for this; some have to do with liability issues and some have to do with the super-long design cycles. But frankly, the biggest reason many shy away from developing the technology is because it’s really hard to do.

Autonomous driving is defined in terms of levels. They range from Level 0, where there’s no automation, to Level 5, which is full automation. Today, at the high end of what’s actually on the road, we’re around Level 2 or 3, with things like Tesla’s Autopilot capabilities and the automatic parallel parking that’s becoming more common.
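The level taxonomy (from the SAE J3016 standard) maps naturally onto a small enumeration. The sketch below is illustrative only; the level names are paraphrased from SAE's definitions, and the `driver_must_supervise` helper is a hypothetical convenience function, not part of any standard API.

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """SAE J3016 driving-automation levels, 0 through 5 (names paraphrased)."""
    NO_AUTOMATION = 0           # human does all the driving
    DRIVER_ASSISTANCE = 1       # steering OR speed assist (e.g., adaptive cruise)
    PARTIAL_AUTOMATION = 2      # steering AND speed assist; driver supervises
    CONDITIONAL_AUTOMATION = 3  # system drives; driver must take over on request
    HIGH_AUTOMATION = 4         # no driver needed within a defined operating domain
    FULL_AUTOMATION = 5         # no driver needed anywhere

def driver_must_supervise(level: SAELevel) -> bool:
    """At Level 2 and below, a human must monitor the road continuously."""
    return level <= SAELevel.PARTIAL_AUTOMATION
```

The `IntEnum` base makes the levels comparable, which matches how the article (and the industry) talks about them: capabilities accumulate as the number rises.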

Some say it starts with the sensors. That group includes Jada Tapley, Delphi’s Vice President of Advanced Engineering. “But,” she says, “if you don’t have the right architecture in place to enable those sensors and support all the additional software and computing power that’s necessary for a level four or five application, then it’s just fundamentally not going to work.”

Think of a car like you think of a human. As we get into the car, its sensors do the job of our eyes and ears. Those sensors could be in the form of cameras, radar, light detection and ranging (LIDAR), and so on. They gather data about what’s happening around the driver. The “nervous system” takes that data and sends it to the “brain,” which makes decisions based on that data.
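The senses-to-brain analogy can be sketched as a minimal pipeline. Everything here is a toy illustration assumed for this article, not any vendor's architecture: `SensorReading`, `brain_decide`, and the distance-only "fusion" are all hypothetical stand-ins for what is, in reality, millions of lines of code.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class SensorReading:
    """One observation from a 'sense organ' (camera, radar, LIDAR)."""
    source: str        # e.g., "front_camera", "radar", "lidar"
    obstacle_m: float  # distance to nearest detected obstacle, in meters

def brain_decide(readings: List[SensorReading],
                 brake_threshold_m: float = 10.0) -> str:
    """The 'brain': fuse sensor data and pick an action.

    Fusion here is just taking the closest reported obstacle; a real
    system runs probabilistic fusion and planning in real time.
    """
    nearest = min(r.obstacle_m for r in readings)
    return "BRAKE" if nearest < brake_threshold_m else "CRUISE"
```

The point of the sketch is the data flow, not the logic: sensors produce observations, the nervous system carries them, and a central decision function acts on the fused picture.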

What comprises the brain? Like the evolution of man, it’s gotten way smarter than it was in the past. It’s now at super-computer level, running millions of lines of code and making decisions in real time.

Every subsystem within the car is generating data, and we have to decide which data is important, what needs to be sent to the car’s central brain, what needs to go up to the Cloud, and what should be discarded. That’s where Edge computing comes into play, especially as we’re trying to minimize what goes to the Cloud. This is critical for two reasons. First, it’s expensive to send data to the Cloud, which, in the car, has to be done over cellular. Second, there’s a time delay to send information to the Cloud, have it processed, and return the result. Hence, anything that requires a real-time response needs to be processed locally.
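That three-way triage (keep local, send to the Cloud, discard) can be sketched as a simple policy function. The thresholds and the `value_score` input are assumptions invented for illustration; real deployments would tune such a policy against actual bandwidth costs and latency requirements.

```python
from enum import Enum

class Route(Enum):
    LOCAL = "process at the edge (real-time)"
    CLOUD = "upload over cellular (latency-tolerant)"
    DISCARD = "drop (not worth the bandwidth)"

def triage(latency_budget_ms: float, value_score: float) -> Route:
    """Hypothetical triage policy for one piece of in-vehicle data.

    Anything needing a sub-100 ms response stays on the car's edge
    computer; low-value data is dropped rather than paying cellular
    costs; the remainder goes to the Cloud. Thresholds are made up.
    """
    if latency_budget_ms < 100:
        return Route.LOCAL       # real-time: the round trip is too slow
    if value_score < 0.2:
        return Route.DISCARD     # not worth the cellular bill
    return Route.CLOUD
```

For example, an emergency-braking trigger (tight latency budget) routes `LOCAL`, while aggregated traffic statistics (no latency pressure, high analytic value) route `CLOUD`.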

There are some benefits that most people are less aware of. For example, knowing the real-time traffic patterns could mean that the city changes the traffic-light patterns on the fly. Also, it could make the roads more efficient for emergency vehicles.

What is the right pipe to handle data going to the Cloud? Most developers agree that it should be 5G. Current 4G is an option, but it can be cost-prohibitive. WiFi could be an option, but you’re limited to certain areas. The best answer is something that’s probably not here yet. But the critical part is making sure you can identify the right data, because having the right data to transmit is step one.

There are a lot of expectations around 5G. The bar is being set quite high, and it’ll be fantastic if 5G can achieve everything that its pundits claim it will. But it’s a wait-and-see situation right now.

According to Tapley, “We need a supercomputer to handle Level 4 and 5 applications. We’re already seeing a shift towards centralized intelligence put into domains in the vehicle, like safety, the cockpit, and propulsion. The car is kind of transforming into something that’s more like today’s conventional computer, where you pick the applications that run on it. It’s got a set number of processing power and memory. Then we can leverage that to run various applications based on what consumers want.”

As we sit on the eve of CES 2018, and a host of announcements, it’ll be interesting to see how many vendors attempt to tackle the difficult autonomous-driving problem.



About the Author

Rich Nass

Richard Nass is the Executive Vice-President of OpenSystems Media. His key responsibilities include setting the direction for all aspects of OpenSystems Media’s Embedded and IoT product portfolios, including web sites, e-newsletters, print and digital magazines, and various other digital and print activities. He was instrumental in developing the company's on-line educational portal, Embedded University. Previously, Nass was the Brand Director for UBM’s award-winning Design News property. Prior to that, he led the content team for UBM Canon’s Medical Devices Group, as well all custom properties and events in the U.S., Europe, and Asia. Nass has been in the engineering OEM industry for more than 25 years. In prior stints, he led the Content Team at EE Times, handling the Embedded and Custom groups and the TechOnline DesignLine network of design engineering web sites. Nass holds a BSEE degree from the New Jersey Institute of Technology.
