Why the U.S. is Falling Behind in AI and Autonomous Drive Tech, Part 1

January 26, 2019 Brandon Lewis

Embedded Computing Design recently conducted targeted industry research on the prevalence of machine learning and artificial intelligence in the development community. Much to our surprise, more than 75 percent of the respondents are already researching or using AI in one capacity or another.

Now, AI and machine learning are not new, and the questions didn’t specify whether respondents were referring to time-honored techniques like principal component analysis, decision trees, and so on. On the other hand, modern AI/ML is becoming synonymous with neural networking, so I took the results at face value.

One important caveat, however: the respondents were evenly distributed geographically. At CES this year, it became clear to me which developers were working with the more advanced AI.

And they weren’t American. Or, for that matter, even “Western.”

AI: Play by the Rules, You Might Lose

MINIEYE is an autonomous vehicle sensor company headquartered in Shenzhen, China. The company’s core competency, however, is not the sensor hardware itself but the development of precision convolutional and recurrent neural networks (CNNs/RNNs) that run on mono front cameras and interior cabin sensors to enable Level 1-3 autonomous driving features.

MINIEYE licenses its neural net IP to Tier 1 automotive suppliers looking to achieve the functionality of Mobileye products at a fraction of the cost and power consumption. The company’s neural network technology is based on a three-pronged architecture that enables small, fast, and accurate algorithms.

  • ThiNet, a homegrown compression technology, shrinks the computational and storage requirements of neural networks to make them suitable for small, resource-constrained edge processors.
  • FastNet, a neural network acceleration library, enhances the computational throughput of neural networks to 1.8x that of Caffe or TensorFlow Lite.
  • HardNet, the company’s neural network architecture IP, further reduces the size and improves the accuracy of neural networks running on programmable logic.
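
MINIEYE hasn’t published ThiNet’s internals, but the family of techniques it belongs to, filter-level pruning, can be sketched in a few lines. The snippet below is an illustrative magnitude-based pruning pass, not MINIEYE’s actual method; the `prune_conv_filters` helper, tensor shapes, and keep ratio are all assumptions for demonstration:

```python
import numpy as np

def prune_conv_filters(weights, keep_ratio=0.6):
    """Drop the weakest filters from a conv layer's weight tensor.

    weights: array of shape (num_filters, channels, kh, kw).
    Returns the pruned tensor and the indices of the filters kept.
    """
    # Rank filters by the L1 norm of their weights; small-norm
    # filters contribute least to the layer's output.
    norms = np.abs(weights).reshape(weights.shape[0], -1).sum(axis=1)
    n_keep = max(1, int(round(weights.shape[0] * keep_ratio)))
    keep = np.sort(np.argsort(norms)[-n_keep:])
    return weights[keep], keep

# A toy 3x3 conv layer with 8 filters over 3 input channels.
rng = np.random.default_rng(0)
layer = rng.normal(size=(8, 3, 3, 3))
pruned, kept = prune_conv_filters(layer, keep_ratio=0.5)
print(pruned.shape)  # (4, 3, 3, 3): half the filters, half the multiply-accumulates
```

Pruning at the filter level (rather than zeroing individual weights) is what makes the savings real on small edge processors, because entire rows of computation disappear instead of leaving a sparse matrix the hardware still has to walk.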

The MINIEYE neural net IP is optimized to run on automotive-grade Xilinx Zynq-7000 SoCs, which provide dual Arm Cortex-A9 CPU cores, two floating point units, and, of course, FPGA fabric to ensure that neural network workloads run on the most efficient compute resource. The current architecture consumes only 60 percent of the Zynq platform’s performance and memory resources, leaving ample headroom for end user applications and value add.

Automotive-grade Xilinx Zynq-7000 SoCs combine programmable FPGA fabric, two Arm Cortex-A9 CPU cores, and two NEON floating point units (FPUs) to support a range of artificial intelligence (AI) workloads.

MINIEYE technology has been designed into nine 2019 Chinese car models, including vehicles from BYD, Zotye, Chery, and Dongfeng. The neural networks are used in forward collision warning, lane departure warning, highway front vehicle monitoring and warning, city front collision warning, pedestrian collision warning, traffic sign recognition, and automatic emergency braking. Because the solution is both hardware- and software-programmable, it can also be upgraded retroactively to meet future requirements, such as Euro NCAP.

BYD’s new Song SUV is just one of the many BYD vehicles with ADAS systems powered by Xilinx Zynq chips. Photo courtesy of BYD.

AI Teaching AI

But you’re working in the U.S. or Western Europe. So what?

Well, a few months ago, in the October timeframe, MINIEYE began marketing its IP to American automakers, admittedly without any U.S. traffic data behind its algorithms. By the time CES 2019 rolled around, the company had retrained its AI models and had fully functional, highly accurate algorithms based on U.S. traffic to demonstrate to potential customers.

The way MINIEYE achieved this was through another homegrown technology called Mini-Annotation, essentially an AI tool that automates the labeling of training data for AI models. In other words, MINIEYE is using AI to train its AI.
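
The article doesn’t detail Mini-Annotation’s internals, but the general pattern, often called pseudo-labeling or auto-annotation, is easy to sketch: an existing model labels raw data, confident labels are kept, and uncertain frames are routed to human review. Everything below (function names, the toy stand-in model, the 0.9 threshold) is an illustrative assumption, not MINIEYE’s implementation:

```python
def auto_annotate(unlabeled_frames, teacher_model, confidence_threshold=0.9):
    """Label raw frames with an existing model; keep only confident labels."""
    auto_labeled, needs_human_review = [], []
    for frame in unlabeled_frames:
        label, confidence = teacher_model(frame)
        if confidence >= confidence_threshold:
            auto_labeled.append((frame, label))
        else:
            # Low-confidence frames go to a human annotator instead.
            needs_human_review.append(frame)
    return auto_labeled, needs_human_review

# Toy stand-in "teacher": classifies a number with a fake confidence score.
def toy_model(frame):
    return ("vehicle" if frame > 0.5 else "background", abs(frame - 0.5) * 2)

frames = [1.0, 0.55, 0.0, 0.40]
labeled, review = auto_annotate(frames, toy_model)
print(len(labeled), len(review))  # 2 2
```

The speed advantage MINIEYE describes comes from the first branch: the bulk of a new region’s data gets labeled automatically, and scarce human attention is spent only on the frames the model finds ambiguous.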

While you may think that’s not a prudent or safe way of developing technologies for safety-critical applications, Dr. Guoqing Liu, Founder and CEO of MINIEYE, says that this process removes 80 percent of the human errors associated with annotating and labeling large, raw, and noisy input data sets. Dr. Liu acknowledges the need for some sort of benchmarking mechanism to validate accuracy, but asserts that the ability of MINIEYE solutions to identify the presence of an object with 100 percent accuracy has been validated by the auto manufacturers designing his technology into vehicles.

And being first to market is a big, big advantage.

Rethinking Artificial Intelligence

My travels have resulted in many discussions about AI in mission- and safety-critical applications, with the general consensus being that meaningful implementations of the technology are still some way off. Rightfully so, as engineers are understandably wary of the unpredictable behavior of unsupervised learning systems. Supervised learning, currently the leading field of research, requires monumental data sets and the time to continually refine them. Even after these models have been trained, we hesitate to use AI algorithms as more than just a single input among numerous other values.
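
That conservative stance, treating a neural network’s output as one vote among other, more deterministic inputs, can be made concrete with a toy fusion rule. The function name, sensor inputs, and threshold below are hypothetical, not any shipping system:

```python
def fused_brake_decision(nn_detection_conf, radar_object, lidar_object):
    """Treat the neural net's detection as one vote among deterministic sensors.

    Brake only when the network is confident AND at least one independent
    sensor (radar or lidar) corroborates the obstacle.
    """
    nn_vote = nn_detection_conf >= 0.8
    return nn_vote and (radar_object or lidar_object)

print(fused_brake_decision(0.95, radar_object=True, lidar_object=False))   # True
print(fused_brake_decision(0.95, radar_object=False, lidar_object=False))  # False
```

In this framing the neural network can veto an action or support one, but it can never trigger braking on its own, which is exactly the hesitation the paragraph above describes.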

Everything we believe about developing safety-critical systems is based on determinism, repeatability, and our ability to validate this performance over time in various environments. Currently, this is counter to most AI.

Then, there is the school of thought adopted by MINIEYE and a number of other foreign companies, not just in China. Because the most effective method of training an AI model is to constantly feed it data, it follows that the best way to train a model is to deploy it into the real world as quickly as possible. To harness these benefits you must be able to label data flows with extreme speed and precision, which, in theory, is a tailor-made scenario for AI training AI.

While that approach may not be widely accepted, traditional development principles could be delaying your deployment while the competition soaks up market share.

Read "Why the U.S. is Falling Behind in AI and Autonomous Drive Tech, Part 2" here.

About the Author

Brandon Lewis

Brandon Lewis, Editor-in-Chief of Embedded Computing Design, is responsible for guiding the property's content strategy, editorial direction, and engineering community engagement, which includes IoT Design, Automotive Embedded Systems, the Power Page, Industrial AI & Machine Learning, and other publications. As an experienced technical journalist, editor, and reporter with an aptitude for identifying key technologies, products, and market trends in the embedded technology sector, he enjoys covering topics that range from development kits and tools to cyber security and technology business models. Brandon received a BA in English Literature from Arizona State University, where he graduated cum laude. He can be reached by email at brandon.lewis@opensysmedia.com.
