Minimizing Algorithm Footprint and Training at the AI Network Edge

September 04, 2020

Data processing is certainly not a new concept, nor are algorithms. However, where algorithms are trained and run is rapidly evolving. In recent years, machine learning (ML) algorithms have, by and large, been trained in cloud environments, where temporary compute resources can be provisioned for these data-intensive tasks.

Today, there is a big push to process data as close to its source as possible, driven by the advent of the Internet of Things (IoT) and other technologies that now generate massive amounts of data. All of this data has organizations scrambling to make the best use of it in a cost-effective manner. Organizations must factor in the cost of transmitting data from its originating source to where it is processed, along with the cost of storing and processing it, typically in resource-heavy servers or cloud environments.

Artificial intelligence (AI) technologies are starting to emerge that enable ML-model training and execution on low-power devices such as ESP32 and Cortex-M4-based microcontroller units (MCUs) rather than on larger microprocessor units (MPUs). This allows data to stay local, with processed data transmitted to the cloud only when necessary.

By bringing the overall footprint required to train and run an ML model down to less than 100 KB, AI in embedded computing is entering a new realm. Memory constraints shape algorithm choice here: for instance, an embedded algorithm engineer may prefer the bubble sort algorithm over merge sort because the former sorts in place, reusing existing memory rather than allocating a scratch buffer. Although many such algorithms already exist, new AI-based time series prediction algorithms are being developed and optimized for the embedded environment. With this new approach, AI/ML models are trained on the embedded boards themselves and then used to perform multivariate statistical inference at run time.
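To make the in-place point concrete, here is a minimal C sketch (an illustration added for this discussion, not code from the article). Bubble sort reorders the caller's buffer directly with O(1) extra memory, whereas a typical merge sort needs an O(n) scratch buffer:

    #include <stddef.h>

    /* In-place bubble sort: reorders the caller's buffer directly,
     * using O(1) extra memory -- attractive on RAM-constrained MCUs.
     * A typical merge sort would need an O(n) scratch buffer instead. */
    void bubble_sort(int *a, size_t n)
    {
        for (size_t i = 0; i + 1 < n; i++) {
            for (size_t j = 0; j + 1 < n - i; j++) {
                if (a[j] > a[j + 1]) {
                    int tmp = a[j];      /* swap in place */
                    a[j] = a[j + 1];
                    a[j + 1] = tmp;
                }
            }
        }
    }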

There are three advantages to these new AI-based time series prediction algorithms:

  1. The solution is unaffected by network latency because computation is performed on the local board, which improves performance.
  2. The raw data's safety and privacy are protected because the raw signal never leaves the device.
  3. A new ML/AI model is trained for each embedded board. This may be the core strength of the approach: in typical industrial settings, environmental variation, sensor imperfections, and machine-to-machine differences make it impossible for a single ML/AI model to cover an entire cluster of machines' features, and training a separate model for each board on cloud servers would not be affordable. (A minimal sketch of this per-board idea follows the list.)
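The article does not detail the training algorithms involved, so the following is only a hypothetical C sketch of the per-board idea: each device "trains" by learning the running mean and variance of its own sensor stream with Welford's online algorithm, then performs inference by flagging readings that stray too far from that learned baseline. All names are illustrative.

    #include <math.h>
    #include <stdbool.h>

    /* Hypothetical per-device "training": each board learns the mean and
     * variance of its own sensor stream, so raw data never leaves the
     * device and every board ends up with its own model. */
    typedef struct {
        unsigned long n;
        double mean;
        double m2;          /* running sum of squared deviations */
    } baseline_t;

    void baseline_update(baseline_t *b, double x)      /* training step */
    {
        b->n++;
        double delta = x - b->mean;
        b->mean += delta / (double)b->n;
        b->m2 += delta * (x - b->mean);
    }

    bool baseline_is_anomaly(const baseline_t *b, double x, double z_max)
    {
        if (b->n < 2)
            return false;                              /* still learning */
        double sd = sqrt(b->m2 / (double)(b->n - 1));
        return sd > 0.0 && fabs(x - b->mean) / sd > z_max;  /* inference */
    }

Because the baseline is fitted to each board's own data, machine-to-machine and sensor-to-sensor differences are absorbed automatically, which is the property the third advantage describes.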

Technology Breakthroughs

Algorithms play an important role in embedded computing. Algorithmic tasks typically performed by embedded devices include sensor data cleaning/filtering, data encoding/decoding, and control signal generation. Because of limited memory capacity, limited CPU power, and different architectures, the definition of the "best algorithm" in an embedded environment can be quite different from that on PCs and cloud servers.
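As a concrete example of the sensor data cleaning/filtering task mentioned above (an illustrative sketch, not code from the article), an exponential moving average can smooth noisy ADC readings using only integer arithmetic, which suits MCUs without a floating-point unit:

    #include <stdint.h>

    /* Exponential moving average: state += (sample - state) / 8.
     * A power-of-two divisor keeps the update cheap on small MCUs,
     * and no floating-point hardware is required. */
    #define EMA_DIV 8

    static int32_t ema_state;

    int32_t ema_filter(int32_t raw_sample)
    {
        ema_state += (raw_sample - ema_state) / EMA_DIV;
        return ema_state;
    }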

In the last several years, AI/ML algorithms have seen breakthroughs and very rapid progress. Much of the effort has focused on adapting AI/ML models that were trained elsewhere to the embedded context. In other words, to deploy AI/ML models successfully, the memory usage, CPU usage, and power consumption of the algorithms need to be optimized.
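One common optimization of this kind (a generic sketch, not a technique the article attributes to any specific toolchain) is to quantize model weights and activations to 8-bit integers, which cuts memory roughly fourfold versus 32-bit floats and keeps the inner loops in cheap integer arithmetic:

    #include <stddef.h>
    #include <stdint.h>

    /* Quantized dot product, the core operation of a small neural-network
     * layer: weights and activations are stored as int8, accumulated in
     * int32, and rescaled to a real value at the end. */
    float dot_q8(const int8_t *w, const int8_t *x, size_t n, float scale)
    {
        int32_t acc = 0;
        for (size_t i = 0; i < n; i++)
            acc += (int32_t)w[i] * (int32_t)x[i];
        return scale * (float)acc;
    }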

AI is shrinking to the point where constrained devices can run these advanced algorithms. Technology advances now allow AI and predictive maintenance to move from MPU-based devices to MCU-based devices with a small footprint and a significantly lower price point. MCU-based devices can now perform tasks at the network edge, such as predictive maintenance, that were previously available only on MPUs. This new functionality enables silicon manufacturers, original equipment manufacturers (OEMs), and smart device manufacturers to reduce costs and deliver differentiated product offerings.

About the Author

Yasser Khan is CEO of One Tech, Inc., a global organization focused on redefining artificial intelligence at the network edge. He is a veteran business executive and serial entrepreneur in the digital transformation space, with a focus on business process automation, the Industrial Internet of Things (IIoT), and artificial intelligence/machine learning. He has more than 25 years of experience launching smart technology solutions and new business models for mid-size to enterprise-level organizations, and has implemented innovative technology ecosystems within multiple global Fortune 100 organizations, including AT&T, Nutrien, and Cricket Wireless. In 2016, Khan was nominated for "Entrepreneur of the Year" by Ernst & Young LLP.