High-performing deep learning is possible on embedded systems

December 26, 2017 Christoph Wagner, MVTec

The Industrial Internet of Things (IIoT) is characterized by highly automated and universally networked production flows. In this area, machine vision is becoming increasingly important as a key technology. As the "eye of production," machine vision processes digital image information generated by cameras and is therefore able to identify a wide range of objects that can then be reliably allocated and handled throughout the entire value chain.

To make the identification process even more precise and to adapt it to the requirements of flexible, networked IIoT processes, AI-based methods such as deep learning and convolutional neural networks (CNNs) are becoming more prevalent in machine vision. The challenge is that, compared with stationary desktop systems, embedded systems are more limited in terms of their processors, memory, and storage capacity, and therefore offer less computing power.

One technological proof point that brings deep learning to the NVIDIA Pascal architecture comes from MVTec Software. The deep learning inference in version 17.12 of its Halcon machine vision software was successfully tested on NVIDIA Jetson TX2 boards based on 64-bit ARM processors.

The deep learning inference, i.e., applying the trained CNN, almost reached the speed of a conventional laptop GPU (about 5 ms). Hence, users can enjoy all the benefits of deep learning on the popular NVIDIA Jetson TX2 embedded board, thanks to two pretrained networks that ship with the software. One of them, the so-called "compact" network, is optimized for speed and therefore ideally suited for use on embedded boards.
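To picture what this workflow looks like in code, here is a minimal sketch (not MVTec sample code) of the Halcon 17.12 classification sequence via the HALCON/C++ interface: the pretrained "compact" network is loaded, an image is scaled to the network's input size, and the inference is applied. The image file name is a placeholder, and the full preprocessing (e.g., gray-value scaling to the range the network expects) is left out for brevity.

```cpp
// Sketch only: classify one image with the pretrained "compact" network.
#include "halconcpp/HalconCpp.h"

using namespace HalconCpp;

int main()
{
  // Load the speed-optimized pretrained network shipped with Halcon 17.12.
  HTuple dlClassifier;
  ReadDlClassifier("pretrained_dl_classifier_compact.hdl", &dlClassifier);

  // Query the input geometry the network expects.
  HTuple width, height;
  GetDlClassifierParam(dlClassifier, "image_width", &width);
  GetDlClassifierParam(dlClassifier, "image_height", &height);

  // Read the image (placeholder file name) and fit it to the network input.
  // Further preprocessing steps (e.g., gray-value scaling) are omitted here.
  HObject image, imageResized, imageReal;
  ReadImage(&image, "part_to_inspect.png");
  ZoomImageSize(image, &imageResized, width, height, "constant");
  ConvertImageType(imageResized, &imageReal, "real");

  // Run the inference and read out the predicted class.
  HTuple resultHandle, predictedClass;
  ApplyDlClassifier(imageReal, dlClassifier, &resultHandle);
  GetDlClassifierResult(resultHandle, "all", "predicted_classes", &predictedClass);

  return 0;
}
```

In HDevelop, the same sequence maps onto the operators read_dl_classifier, apply_dl_classifier, and get_dl_classifier_result.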

In addition to deep learning, the full functionality of the standard Halcon machine vision library is available on these embedded devices. Applications can be developed on a standard PC. With the help of HDevEngine, the trained network as well as the application can then be transferred to the embedded device. Plus, users can utilize the more powerful GPUs available for the PC to train their CNN and then execute the inference on the embedded system, as sketched below. This shortens time to market.
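The PC-to-device hand-off can be illustrated with another hedged sketch: a hypothetical HDevelop procedure named "classify_part," exported from the development PC together with the trained network file, is executed on the embedded board through HDevEngine's C++ API. The procedure name, paths, parameter names, and file names here are illustrative assumptions, not part of MVTec's samples.

```cpp
// Sketch only: run an HDevelop procedure developed on a PC via HDevEngine.
#include "halconcpp/HalconCpp.h"
#include "hdevengine/HDevEngineCpp.h"

using namespace HalconCpp;
using namespace HDevEngineCpp;

int main()
{
  HDevEngine engine;
  // Directory on the embedded device holding the exported procedures and
  // the trained classifier file copied over from the development PC.
  engine.SetProcedurePath("/opt/vision_app/procedures");

  // Load and call the (hypothetical) procedure exactly as written in HDevelop.
  HDevProcedure procedure("classify_part");
  HDevProcedureCall call(procedure);

  HObject image;
  ReadImage(&image, "part_to_inspect.png");  // placeholder file name
  call.SetInputIconicParamObject("Image", image);
  call.SetInputCtrlParamTuple("ClassifierFile", "trained_classifier.hdl");
  call.Execute();

  HTuple predictedClass = call.GetOutputCtrlParamTuple("PredictedClass");
  return 0;
}
```

Because HDevEngine executes the exported procedure directly, the vision logic written and trained on the PC does not have to be re-implemented for the ARM target.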

Christoph Wagner is the Embedded Vision Product Manager at MVTec.
