AI Challenges & Opportunities in the Automotive Industry

August 31, 2018

The automotive industry has seen significant advances in driver-assist capabilities. In May, wireless technology company Metawave announced an additional $10 million seed investment from strategic investors including Hyundai, Toyota, Denso, and Infineon. The announcement also introduced an all-in-one radar sensor with integrated AI edge processing, designed to operate seamlessly with the self-driving vehicle's sensor fusion module, which combines camera, lidar, and radar technologies. Since then, Metawave has announced that it is incorporating AI to further advance autonomous driving. I caught up with Dr. Maha Achour, CEO of Metawave, and Dr. Matt Harrison, Metawave’s first AI engineer, to discuss AI in general and applying AI to autonomous driving.

What kinds of signs should stakeholders look for when their environment would benefit from AI?

“It breaks down to how you define AI,” Dr. Harrison started. “A lot of it is data collection and monitoring. But that’s not complete machine learning. Neural networks and advanced machine learning require a very large amount of high-quality data. If, as a stakeholder, you’re hearing that analysts are being flooded with more data than they can analyze, you’re ready for AI.”

Harrison also mentioned the value of envisioning the use case up front to ensure the AI algorithm can be deployed in a meaningful way. You need to be cognizant of both the challenges and the benefits of a deployment. Before applying AI, sit down and map the problem to be solved onto an established machine learning algorithm as a starting point. Harrison said that while people are still coming up the learning curve, starting with completely new approaches is dangerous.

“The barrier to entry is lower than you’d expect,” Harrison said. “For example, TensorFlow is an open source machine learning framework comprised of a C library accessed using Python. A deep learning application typically boils down to designing the right computational graph: defining the operations that map input to output, then using that series of operations as the forward graph. The machine learning framework can then be used for the backward computation. This results in output that can be used for training the AI.”
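To make that workflow concrete, here is a minimal sketch using TensorFlow 2.x (my own illustration, not code from the interview; the data shapes and layer sizes are placeholders). The forward graph is defined as a series of operations, and the framework performs the backward computation for training:

```python
import tensorflow as tf

# Placeholder data: 32 samples with 10 features each, plus binary labels.
x = tf.random.normal([32, 10])
y = tf.cast(tf.random.uniform([32, 1], maxval=2, dtype=tf.int32), tf.float32)

# Forward graph: a small series of operations mapping input to output.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

loss_fn = tf.keras.losses.BinaryCrossentropy()
optimizer = tf.keras.optimizers.Adam()

# One training step: the framework handles the backward computation.
with tf.GradientTape() as tape:
    predictions = model(x, training=True)
    loss = loss_fn(y, predictions)

gradients = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(gradients, model.trainable_variables))
print(f"training loss after one step: {loss.numpy():.4f}")
```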

Once you determine you need AI, how do you get started? Resources, foundational things you should know, etc.

I was surprised when Dr. Harrison mentioned how many state-of-the-art AI algorithms are published and how many tools are readily available. If you’re an experienced programmer, you can sit down with the tutorials for training. Online courses are also available. You should have a broad general knowledge of neural networks and an understanding of the general architecture you’re looking to implement.

The trick is having experienced AI engineers who can develop the algorithms for the use case and provide the right input to the learning engine so the AI is trained properly. Harrison mentioned it’s invaluable to have someone who has done these things before in order to avoid the “garbage in, garbage out” scenarios that are common with less experienced AI engineers. This is why starting with an established solution and someone experienced in AI can be valuable until more depth and experience is gained.

Does AI need to be designed in from the start or is it something that, when you know where the AI comes in, you can design in later?

Harrison posited that you can take an established system and design an AI add-on. Most applications don’t need to incorporate AI from the beginning. However, if the data streams you need for AI aren’t easily accessible, refactoring may be needed.

“The engine needs to see the information required for the AI to operate,” Harrison said. “If you have long records or full, accurate data, you can use it post facto for training. There is an element of trying to replace something that required a human. Image object recognition, for example. There are other tasks that are not easy for humans, but can be easy for a machine learning system to do. A good example of this is the use of principal component analysis. The basic idea is that if your data exists in a high dimensional space, I’m going to condense that large set of numbers into a smaller set of numbers that is still representative of the data set. This is commonly done as a pre-processing step to machine learning algorithms. These boiled-down things mean something to the AI, but wouldn’t be interpretable by a human.”
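As a concrete illustration of that pre-processing step, here is a small sketch (my own, assuming NumPy and scikit-learn are available) that condenses samples from a 100-dimensional space down to 10 numbers before handing them to a learning algorithm:

```python
import numpy as np
from sklearn.decomposition import PCA

# Placeholder data: 500 samples living in a 100-dimensional space.
rng = np.random.default_rng(seed=0)
high_dim = rng.normal(size=(500, 100))

# Condense each sample to 10 numbers chosen to retain as much of the
# original variance as possible.
pca = PCA(n_components=10)
low_dim = pca.fit_transform(high_dim)

print(low_dim.shape)                        # (500, 10)
print(pca.explained_variance_ratio_.sum())  # fraction of variance retained
```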

How has Metawave applied these concepts to Autonomous Driving?

Dr. Achour had some fascinating insights. “The auto industry is trending from driver assistance to semi-autonomous to fully autonomous driving. This is in response to the next generation’s general disinterest in driving. This trend impacts car companies and their business model. They won’t be selling cars, but will be selling miles. Of course, safety becomes a major focus with this model. The winners will be the high-quality, high-safety environments. High safety requires driving focus, great vision, clear hearing, and fast reaction. Right now, everything is driven by the camera. It’s the best sensor with the highest resolution. But cameras are limited to anywhere between 70 and 100 meters. Image capture is important, but you must also be able to classify the objects for autonomous driving in order to understand those objects’ behavior.”

Dr. Achour went on to describe environmental challenges. “Being able to operate in low light, bright light, and bad weather is also important. Lidar is not capable of operating in bad weather conditions. We also found that operating in Florida, where you have a lot of mosquitoes, can even be a problem. Dirty roads are also a challenge. This is where radar excels. With its 3.8mm wavelength, sub-one-degree resolution can be reached at long ranges with specific power allocated. However, the problem with today’s radar is that it can only detect an object’s presence, distance, and speed. That isn’t sufficient for an autonomous driver. You need to know what the object is – a bike? A person walking? An animal? Object classification becomes critical.”
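As a rough sanity check on that resolution figure (my own back-of-the-envelope arithmetic, using the common rule of thumb that beamwidth in radians is approximately wavelength divided by aperture; the exact factor depends on antenna taper and processing):

```python
import math

wavelength_m = 3.8e-3   # ~79 GHz automotive radar band
beamwidth_deg = 1.0     # target angular resolution

# Rule of thumb: beamwidth (radians) ~ wavelength / aperture,
# so the aperture needed is roughly wavelength / beamwidth.
aperture_m = wavelength_m / math.radians(beamwidth_deg)

print(f"Approximate aperture for a {beamwidth_deg:.0f}-degree beam: "
      f"{aperture_m * 100:.1f} cm")
# ~22 cm with this rule of thumb; real designs trade aperture, antenna
# taper, and signal processing against each other.
```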

Dr. Achour went on to describe the challenges of training the radar. A number of aspects fall under the AI engine for autonomous driving, including melding computer vision with sequences of radar scans so that a list of objects, their categories, and their velocities can be passed on to the driving AI.
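A hypothetical sketch of the kind of object list such a pipeline might hand to the driving AI follows; the class names and fields are my own illustration, not Metawave's interface:

```python
from dataclasses import dataclass
from enum import Enum


class ObjectClass(Enum):
    VEHICLE = "vehicle"
    PEDESTRIAN = "pedestrian"
    CYCLIST = "cyclist"
    ANIMAL = "animal"
    UNKNOWN = "unknown"


@dataclass
class TrackedObject:
    object_class: ObjectClass    # what the object is
    range_m: float               # distance from the sensor, in meters
    azimuth_deg: float           # bearing relative to the sensor boresight
    radial_velocity_mps: float   # velocity toward or away from the sensor
    confidence: float            # classifier confidence, 0 to 1


# A single fused radar/vision frame, reduced to a short list of classified,
# located, and moving objects for the downstream driving AI.
frame = [
    TrackedObject(ObjectClass.PEDESTRIAN, 42.0, -3.5, 1.2, 0.91),
    TrackedObject(ObjectClass.VEHICLE, 110.0, 0.8, -27.5, 0.98),
]
```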

Radar provides a stream of data that adds critical information for solving the autonomous-driving AI challenge. One example is radial velocity: you get reflections from the road, signs, and everything else, each moving at a different velocity relative to the ground. This calls for a whole new class of AI operations on data that lidar cannot provide.
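To illustrate why radar carries velocity information natively (a sketch of the standard Doppler relationship, not Metawave's processing), each reflection's radial velocity follows directly from its measured Doppler frequency shift:

```python
WAVELENGTH_M = 3.8e-3  # ~79 GHz automotive radar


def radial_velocity(doppler_shift_hz: float) -> float:
    """Radial velocity in m/s from a measured Doppler shift.

    Under the usual convention, a positive Doppler shift corresponds
    to a reflection that is closing on the sensor.
    """
    return doppler_shift_hz * WAVELENGTH_M / 2.0


# Example: a 10 kHz Doppler shift corresponds to about 19 m/s of
# radial (closing) velocity.
print(radial_velocity(10_000.0))  # 19.0
```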

Another key is building into the AI the ability to shape and steer the beam, much as a driver’s eyes continually scan the road while also using peripheral vision. This is another essential element of the Metawave combined sensor module.

Final Thoughts

The key to effective AI integration is pairing established, available algorithms and tools with enough core AI experience to design and train the AI effectively.

Other factors include physical space, power consumption, and AI decision latency; all of these need to be considered when building the AI system.

The machine learning training phase is typically best done in parallel, leveraging high-powered cloud environments, because of the sheer volume of data required to train the AI.

Finally, understand the IoT application from a deployment standpoint; given the available computational resources, you can then determine what capability can actually be delivered.

AI is evolving and advancing. But Harrison emphasized this is not Hollywood where some sentient computer will be taking over the world soon. “AI is different than what many people think – it’s a machine that’s crunching computation for the purposes of feeding a decision making engine. These environments are not going to take over the world anytime soon.”