The Recipe for Designing in Deeply Embedded AI

September 16, 2019 Rich Nass

There are two key aspects of artificial intelligence (AI) that you should be aware of. First, it's being designed into an increasing percentage of embedded systems at the deep edge of the network, from industrial controls to automotive applications to consumer/mass-market devices. So there's a good chance you'll need a primer on how to work with these AI-related components.

The second aspect is that designing around AI can be a complex endeavor. And that's where we come in: we'll lay out the recipe for success in the AI landscape and point you in the right direction so your probability of success is high.

The recipe for an AI design can begin just like any other embedded system's, though the choice of microprocessor or microcontroller should take into account the availability of an "AI-friendly" ecosystem. In this case, we'll start with an STM32. The ecosystem includes STM32Cube.AI, a package within the ST toolkit that interoperates with deep-learning libraries to automatically convert pre-trained artificial neural networks and map the converted network onto just about any STM32 microcontroller (MCU).

The next ingredient in your AI recipe is open-source deep-learning software. Various frameworks are available, the most popular being TensorFlow, Keras, PyTorch, and Caffe. Within your framework, you build your neural-network model, a task simplified by the pre-trained models ST offers in its AI application packs.

Using Keras or TensorFlow, for example, you essentially create a topological model to represent your neural network, or network of nodes. Each node is an operation over tensors, ranging in complexity from a simple math function (e.g., add) up to a complex multi-variable non-linear equation.
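For example, a minimal sketch of such a topological model in Keras might look like the following; the motion-sensing input shape, layer sizes, and three-class output are hypothetical, chosen only to illustrate a graph of tensor operations:

```python
import tensorflow as tf

# A minimal topological model: a linear graph of nodes, where each node
# is an operation over tensors. All shapes and sizes here are illustrative.
model = tf.keras.Sequential([
    # Input/reshape node: e.g., 128 accelerometer samples x 3 axes,
    # flattened into a single vector for the dense layers below
    tf.keras.layers.Flatten(input_shape=(128, 3)),
    # A dense node: a multi-variable non-linear operation over its input tensor
    tf.keras.layers.Dense(32, activation="relu"),
    # Output node: probabilities over three hypothetical motion classes
    tf.keras.layers.Dense(3, activation="softmax"),
])

model.summary()  # prints the node-by-node structure of the network graph
```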

The operations return data that flow along the edges of the network graph. Where it gets a little tricky is that an operation can consume and produce data with more than two dimensions; such multi-dimensional arrays are the tensors mentioned above. This conversation gets deep quickly and is beyond the scope of this article, but there are some good references available.
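As a quick illustration of dimensionality, a batch of RGB images is naturally a four-dimensional tensor:

```python
import numpy as np

# A rank-4 tensor with shape (batch, height, width, channels):
# here, eight 32x32 RGB images -- well beyond a two-dimensional matrix.
images = np.zeros((8, 32, 32, 3), dtype=np.float32)
print(images.ndim)   # -> 4
print(images.shape)  # -> (8, 32, 32, 3)
```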

The conversion itself is performed by a tool that generates a library you can integrate into your project; STM32Cube.AI and its output libraries run on any STM32 MCU. To further ease integration for its customers, ST has created end-to-end application examples in individual function packs for motion, audio, and image analysis.
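The tool's input is simply a saved model file. Continuing the Keras sketch above, here is a hedged example of exporting a model for conversion; it assumes your STM32Cube.AI version accepts Keras HDF5 (.h5) and TensorFlow Lite (.tflite) files, so check your tool's documentation for the formats it currently supports:

```python
import tensorflow as tf

# Save the Keras model to disk; the resulting .h5 file is what you hand
# to the STM32Cube.AI tool for conversion (format support is an
# assumption -- verify against your tool version).
model.save("network.h5")

# A TensorFlow Lite flatbuffer is an alternative interchange format:
converter = tf.lite.TFLiteConverter.from_keras_model(model)
with open("network.tflite", "wb") as f:
    f.write(converter.convert())
```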

Now that you have your hardware and software in place, the next step is to acquire some training data using your nascent embedded system or from another source. That data trains the neural network using tools such as Keras or TensorFlow. As you'd expect, this is an ongoing, iterative process: the models are continually refined, updated, and improved until you achieve the level of accuracy you need. The training process produces a model that the STM32Cube.AI tool can automatically convert into optimized run-time libraries for the STM32 MCU.
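Concretely, one training round in Keras is only a few lines. In this sketch, the data arrays, epoch count, and batch size are placeholders standing in for your own acquired data and tuning choices:

```python
import numpy as np

# Placeholder arrays standing in for acquired data: 1,000 windows of
# 128 samples x 3 axes, each labeled with one of three classes.
x_train = np.random.rand(1000, 128, 3).astype(np.float32)
y_train = np.random.randint(0, 3, size=(1000,))

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Train, evaluate, refine the model or the data, and repeat until the
# accuracy meets your application's needs.
model.fit(x_train, y_train, epochs=10, batch_size=32, validation_split=0.2)

# Re-export the refined model (as in the earlier sketch) for the
# STM32Cube.AI tool to convert into an optimized run-time library.
model.save("network.h5")
```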

The STM32L476 family of MCUs provides the main ingredient for the AI recipe.

Ready to start your AI design? If so, you can use any of a wide range of MCUs, depending on your application. ST has posted numerous videos demonstrating its MCUs running a range of applications. While your performance requirements may differ and lead to a different MCU choice, you could run object classification on a high-performance STM32H7 or wearable/wellness applications on an 80-MHz STM32L476JGY or similar microcontroller.

The bottom line is that AI is very likely in your future, if it's not already in your present. So if you aren't already familiar with how to incorporate it into your designs, it's time to learn. One important note: AI ecosystems are advancing rapidly, so it is wise to choose vendors whose roadmaps show they understand the pace of change and whose investments demonstrate their willingness to keep up.

About the Author

Rich Nass

Richard Nass is the Executive Vice-President of OpenSystems Media. His key responsibilities include setting the direction for all aspects of OpenSystems Media's Embedded and IoT product portfolios, including web sites, e-newsletters, print and digital magazines, and various other digital and print activities. He was instrumental in developing the company's on-line educational portal, Embedded University. Previously, Nass was the Brand Director for UBM's award-winning Design News property. Prior to that, he led the content team for UBM Canon's Medical Devices Group, as well as all custom properties and events in the U.S., Europe, and Asia. Nass has been in the engineering OEM industry for more than 25 years. In prior stints, he led the Content Team at EE Times, handling the Embedded and Custom groups and the TechOnline DesignLine network of design engineering web sites. Nass holds a BSEE degree from the New Jersey Institute of Technology.
