AI for the embedded IoT

October 11, 2016 OpenSystems Media

The Internet of Things (IoT) has been touted as the next Industrial Revolution, with pervasive connectivity and the insights it can generate offering a new digital lens for viewing and managing the physical world. But in addition to the tangible process efficiencies and quality of life improvements expected from the IoT, it’s also a stepping stone to perhaps the greatest achievement in human history: artificial intelligence (AI).

In many ways the technological progressions of AI and the IoT are intertwined. The IoT will provide the information that fuels our data-driven economy, while AI is the engine that will consume it. Though both paradigms are still in their infancy, the success of each is contingent on the other's: the IoT can never reach its potential without a mechanism for autonomously processing large, heterogeneous data sets, just as AI cannot advance without being fed massive amounts of data.

Like many other IoT-enabling technologies, however, AI research and development has largely been restricted to the IT sector, as the complexity of convolutional neural networks (CNNs), hidden Markov models (HMMs), natural language processing, and other disciplines used in the creation of machine learning algorithms and deep neural networks (DNNs) requires storage and computing resources usually only accessible on a data center scale. Likewise, programming methodologies have been tuned to IT developers, with tools such as R, Python, SQL, Excel, RapidMiner, Hadoop, Spark, and Tableau being the most widely employed by data analysts and computer scientists working in the AI field (Figure 1).

[Figure 1 | A 2016 poll of data analysts and scientists shows that R, Python, and SQL continue to gain traction as software tools and libraries for machine learning. Graph courtesy KDnuggets.]

This gap between AI and data collection at the physical/digital interchange is a common complication for the IoT, which is just beginning to drive the integration of IT and operational technology (OT). Nonetheless, it’s a gap that must be bridged.

AI for the embedded IoT

One of AI’s early excursions into the OT space came with the release of the NVIDIA Jetson TK1 platform in 2014. Based on the Tegra K1 system on chip (SoC) with its 192-core Kepler GPU and quad-core ARM Cortex-A15, the Jetson TK1 not only brought data center-level compute performance to computer vision, robotics, and automotive applications, but also provided embedded engineers with a development platform for the CUDA Deep Neural Network (cuDNN) library. The cuDNN primitives enabled operations such as activation functions, forward and backward convolution, normalization, and tensor transformations required for DNN training and inferencing, and the combination of this technology with the Jetson TK1’s 10 W power envelope meant that deep learning frameworks such as Caffe and Torch could be accessed and executed on smaller OT devices.
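To make the primitives concrete: cuDNN implements operations like forward convolution and activation as optimized GPU kernels. The pure-Python sketch below (no cuDNN, no GPU) shows only the underlying math of two such primitives, a 2D "valid" forward convolution (implemented as cross-correlation, as is conventional in DNN frameworks) followed by a ReLU activation. The image and kernel values are illustrative only.

```python
# Sketch of two cuDNN-style primitives in pure Python: a 2D "valid"
# forward convolution (cross-correlation) and an element-wise ReLU
# activation. cuDNN provides these as optimized GPU kernels; this
# only illustrates what they compute.

def conv2d_forward(image, kernel):
    """Valid cross-correlation of a 2D image with a 2D kernel."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for y in range(ih - kh + 1):
        row = []
        for x in range(iw - kw + 1):
            acc = 0.0
            for dy in range(kh):
                for dx in range(kw):
                    acc += image[y + dy][x + dx] * kernel[dy][dx]
            row.append(acc)
        out.append(row)
    return out

def relu(feature_map):
    """Element-wise ReLU activation: max(0, x)."""
    return [[max(0.0, v) for v in row] for row in feature_map]

if __name__ == "__main__":
    # A bright vertical stripe and a crude vertical-edge kernel.
    image = [[3, 0, 0],
             [3, 0, 0],
             [3, 0, 0]]
    edge = [[1, -1],
            [1, -1]]
    fmap = relu(conv2d_forward(image, edge))
    print(fmap)  # the edge column lights up: [[6.0, 0.0], [6.0, 0.0]]
```

A real cuDNN call chain adds descriptors for tensor layout, strides, and padding, and batches many such convolutions at once; the arithmetic per output element is the same.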

Today that groundwork has been extended, as the Jetson TK1’s successor, the Jetson TX1 system on module (SoM), contains 256 CUDA cores and an ARM Cortex-A57 CPU, and is capable of 1 TFLOPS of performance. Machine learning tools and libraries are also more widely available through NVIDIA JetPack 2.3, an evolution of the original set of cuDNN libraries that better serves OT developers by packaging the CUDA Toolkit 8 development environment for building GPU-based applications in C and C++; camera and Video4Linux2 (V4L2) APIs; the TensorRT inferencing engine; and cuDNN 5.1, which now supports recurrent neural networks (RNNs) and long short-term memory (LSTM) networks. As seen in Figure 2, an NVIDIA benchmark shows that optimizations in the Jetson TX1 and JetPack 2.3 can deliver up to 20 times better energy efficiency than CPUs running comparable deep learning workloads, while the TX1 maintains an 8-10 W power draw under typical loads.
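The LSTM support newly added in cuDNN 5.1 accelerates a specific recurrence. The pure-Python sketch below shows a single LSTM cell step with scalar weights standing in for the weight matrices of a real network; the weight values are arbitrary and illustrative only, and cuDNN's job is to run this recurrence over long sequences and large hidden states on the GPU.

```python
import math

# One LSTM time step in pure Python, using scalar weights in place
# of the weight matrices of a real network. cuDNN 5.1 accelerates
# this recurrence (batched, with full matrices) on the GPU.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w):
    """Advance one time step; w = (wi, wf, wo, wg) scalar gate weights."""
    wi, wf, wo, wg = w
    i = sigmoid(wi * (x + h_prev))    # input gate: admit new information
    f = sigmoid(wf * (x + h_prev))    # forget gate: decay old cell state
    o = sigmoid(wo * (x + h_prev))    # output gate: expose cell state
    g = math.tanh(wg * (x + h_prev))  # candidate cell update
    c = f * c_prev + i * g            # new cell (long-term) state
    h = o * math.tanh(c)              # new hidden (short-term) state
    return h, c

if __name__ == "__main__":
    h, c = 0.0, 0.0
    for x in [1.0, 0.5, -0.25]:       # a short input sequence
        h, c = lstm_step(x, h, c, (0.8, 0.9, 1.0, 0.7))
        print("h =", h, "c =", c)
```

Because the gates are sigmoids and the outputs pass through tanh, the hidden state is always bounded in (-1, 1), which is part of what makes LSTMs stable to train over long sequences.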

[Figure 2 | The NVIDIA Jetson TX1 can provide up to 20 times the power efficiency of an Intel Core i7 processor when running comparable GoogLeNet deep learning inference loads.]

Never stop learning

As the IoT produces data for the AI revolution, the need to monitor the progression of machine learning technologies has also become apparent. This not only ensures that intelligent systems endowed with learning capabilities properly pursue the objectives of their training, but also that human developers properly refine the underlying frameworks and libraries upon which machine learning is based to meet the desired goals.

For this purpose, Cornell Computer Science PhD Jason Yosinski created the Deep Visualization Toolbox, an open-source project that allows users to observe the various layers of a DNN to infer how machine learning platforms compute answers to complex problems. A demonstration of the Deep Visualization Toolbox running on the Jetson TX1 developer kit can be seen in the video below, and for those of you fortunate enough to be visiting CES in 2017, NVIDIA typically showcases deep learning technologies at its automotive booth in North Hall.
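The core idea behind tools like the Deep Visualization Toolbox is simple: run a forward pass and capture every layer's activations for inspection, rather than only the final output. The sketch below shows that idea on a tiny fully connected net in pure Python; the layer shapes and weight values are invented for illustration, whereas the real toolbox taps the layers of trained CNNs and renders the activations as images.

```python
# Sketch of activation "tapping": run a forward pass through a tiny
# fully connected ReLU network and record every layer's output so it
# can be inspected. The weights below are arbitrary illustrative
# values, not a trained model.

def forward_with_taps(x, layers):
    """Apply each (weights, bias) dense layer with ReLU; return all activations."""
    taps = []
    for weights, bias in layers:
        # Dense layer: y[j] = max(0, bias[j] + sum_i x[i] * weights[i][j])
        y = []
        for j in range(len(bias)):
            s = bias[j] + sum(x[i] * weights[i][j] for i in range(len(x)))
            y.append(max(0.0, s))
        taps.append(y)
        x = y  # feed this layer's output to the next layer
    return taps

if __name__ == "__main__":
    layers = [
        ([[0.5, -1.0], [1.0, 0.5]], [0.0, 0.1]),  # layer 1: 2 inputs -> 2 units
        ([[1.0], [-0.5]], [0.2]),                 # layer 2: 2 inputs -> 1 unit
    ]
    for depth, act in enumerate(forward_with_taps([1.0, 2.0], layers), 1):
        print("layer", depth, "activations:", act)
```

Visualizing these intermediate values is what lets developers see which features each layer has learned to respond to, and hence whether a network is computing answers for the right reasons.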

It’s just the beginning, but an IoT inflection point is occurring at the intersection of IT and embedded. That inflection point is AI.


Brandon Lewis, Technology Editor

