Exploring Embedded Machine Learning

January 23, 2019 Curt Schwaderer, Technology Editor

In 1943, neurophysiologist Warren McCulloch and mathematician Walter Pitts wrote a paper on neurons and how they work. A model was built using an electrical circuit, and the neural network came into being. More than seventy years later, these beginnings have evolved into a number of large-scale projects by some of the top technology companies and technology communities around the globe: Google Brain, AlexNet, OpenAI, and the Amazon Machine Learning Platform are among the best-known initiatives in AI and machine learning.

Enter IoT, with its embedded emphasis and its monetization's dependence on (near) real-time analysis of sensor data and on acting on that information. The leading initiatives above assume that massive amounts of data can be fed seamlessly into the cloud, where analysis can be performed, directions distributed, and actions taken, all within the deadlines required by every application.

Qeexo (pronounced “Keek-so”) CTO Chris Harrison believes machine learning belongs at the edge and Qeexo is developing solutions to do just that.

Mobile Sensors and AI

Like many paradigm-shifting initiatives, this one started with a challenge: how can a mobile device support more sophisticated touch interaction? This led to exploring the fusion of touchscreen data with accelerometer data to measure taps against the screen. The result was the ability to distinguish among finger, knuckle, nail, and stylus tip and eraser, which broadens the interaction between the user and the device.

“If we’re going to put in sophisticated multi-touch, we need to do some smart things in order to resolve ambiguous user inputs,” Chris mentioned. “The way to do this is with machine learning. The machine learning software behind our FingerSense product differentiates between finger, knuckle, and nail touches. These new methods of input allow for access to contextual menus. This brings a right-click functionality as opposed to touch and hold.”
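The idea of classifying touch type from fused touchscreen and accelerometer features can be sketched in C. The feature names, thresholds, and decision rules below are purely illustrative stand-ins for a trained model; they are not Qeexo's FingerSense implementation.

```c
typedef enum { TOUCH_FINGER, TOUCH_KNUCKLE, TOUCH_NAIL } touch_type_t;

/* Hypothetical fused features: contact area from the touchscreen (mm^2)
 * and peak acceleration magnitude from the accelerometer (g). */
typedef struct {
    float contact_area_mm2;
    float impact_peak_g;
} touch_features_t;

/* Toy decision rules standing in for a trained classifier: a knuckle tap
 * hits hard over a broad area, a nail tap is sharp but tiny, and a finger
 * pad is soft and broad. Thresholds are invented for illustration. */
touch_type_t classify_touch(touch_features_t f)
{
    if (f.impact_peak_g > 1.5f && f.contact_area_mm2 > 20.0f)
        return TOUCH_KNUCKLE;
    if (f.contact_area_mm2 < 10.0f)
        return TOUCH_NAIL;
    return TOUCH_FINGER;
}
```

In a real product the decision logic would be a learned model rather than hand-written thresholds, but the shape of the problem, a small feature vector in, a discrete touch type out, is the same.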

Mobile Device Machine Learning Challenges

The power and latency budget for machine learning on a mobile device was tiny. It took almost three years before the requirements were met.

“As a mobile application developer, you have two choices on a mobile device: you can do things fast at higher power, or slower at lower power. This led to a key capability we call Hybrid Fusion. The machine learning software needs to be very clever about access to and processing of the sensor data in order to fit within the power and latency budget,” Chris said.

FingerSense became very good at doing edge and device optimized machine learning – something that traditional machine learning cloud environments don’t have to consider.

“Most companies are thinking about deep learning from a perspective of gigantic servers and expensive CPUs. We took the opposite path. The IoT goal is a ‘tiny’ machine learning that can operate effectively with limited resources while meeting the application's near real-time deadlines. Cutting our teeth in the mobile industry gave us the skills and technologies to apply machine learning to edge IoT and embedded devices.”

One of the most exciting frontiers is bringing what Chris calls “a sprinkle of machine learning” to IoT and small devices. For example, your light bulb doesn’t have to be able to do a web search for the weekly weather but adding a touch of machine learning that allows it to sense movement and temperature to make on/off decisions has real-world value.

Embedded Machine Learning Architecture

The machine learning environment is written in C/C++ and ARM assembly to optimize efficiency and operating system portability, with most of the operation inside a kernel driver component. The software must also handle power management for battery-powered devices. Using the device's main CPU for embedded machine learning can consume considerable power, so instead of hooking the accelerometer and motion sensors directly to the main CPU, a low-power microcontroller sits between the sensors and the main CPU, acting as a “sensor hub.” The sensor hub is more power efficient and specialized for the heavy lifting of sensor communication, and it can also execute a little bit of logic so the main CPU can stay off for a much longer period of time. This tiered design optimizes the power and latency budgets, making the embedded machine learning environment possible on mobile devices and IoT sensors.

“Accelerometer data is a constant stream with no logic applied, so it needs to be continually sampled,” Chris said. “This is where the machine learning logic starts (and perhaps ends). Additional machine learning logic may run on the main CPU. You may decide that the sensor hub can filter out or pre-choose the data, so less data goes to the main CPU.”
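The filtering Chris describes could be as simple as forwarding a sample only when it differs meaningfully from the last value sent, so a steady signal generates almost no traffic to the main CPU. The delta threshold and function name here are illustrative assumptions, not Qeexo's API.

```c
#include <stddef.h>
#include <math.h>

/* Copies into 'out' only those samples that differ from the previously
 * forwarded value by more than 'delta', and returns how many were
 * forwarded. A flat signal produces a single forwarded sample. */
size_t forward_on_change(const float in[], size_t n, float out[], float delta)
{
    size_t forwarded = 0;
    for (size_t i = 0; i < n; i++) {
        if (forwarded == 0 || fabsf(in[i] - out[forwarded - 1]) > delta) {
            out[forwarded++] = in[i];
        }
    }
    return forwarded;
}
```

With a delta of 0.1, a stream like {1.00, 1.01, 1.02, 2.00, 2.01} collapses to just {1.00, 2.00}: the main CPU sees the shape of the signal without paying for every sample.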

One example is bursty traffic. If a sensor sits idle and then generates a burst of information, and that burst floods into main memory or ties up the bus, things can go badly. Alternatively, if the coprocessor hands the main processor a compact vector representation of the information, efficiency improves while the information can still be interpreted.

Moving Away from the Cloud

One must be careful not to assume perfect, high-bandwidth network connectivity and infinite machine learning resources on the way to a successful IoT system. Chris warns against using the cloud environment as a crutch.

“If you take the time to properly analyze, gather requirements, and design the IoT system, you can absolutely perform machine learning at the edge. This minimizes network requirements and provides a high level of near real-time interaction.”

And of course, security considerations are also at the forefront. Whenever possible, you want to reduce the attack surface. Some applications may be able to do machine learning and actions exclusively at the edge, eliminating the internet connection altogether.

“At CMU [Carnegie Mellon University] we would occasionally get calls from law enforcement telling us our cameras were being used to send emails,” Chris said. “And these attacks were happening with security experts running the network! When possible, don’t connect your system to the internet. If we can get away from that trend [leveraging cloud processing for everything], we should be able to achieve a far more secure, private, and efficient system. There is a time and place for cloud connections, but engineers need to stop jumping immediately to that resource.”

Given how fast these processors are improving, it certainly seems achievable. There is also a cost benefit. Today, most smart devices are priced out of the mass market. If we can sprinkle intelligence into these devices, bring down their costs, and provide real value, adoption will accelerate.
