Machine learning starts with the algorithms

December 5, 2017 Rich Nass

There are lots of different ways to look at machine learning, which is the ability of a computing device to make decisions based on actions and conditions. Some look at it from the very starting point: the initial software and algorithms that run on the hardware to make the whole process work.

Some areas currently taking advantage of machine learning include big data, such as SEO and other analytics. There’s also a lot of talk (with less action) in the industrial-automation space around predictive maintenance. For example, systems are placed at the Edge to learn what normal behavior looks like, then monitor performance and raise a flag if something abnormal is observed.
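
In code, that kind of Edge-side check can be as simple as the sketch below (the baseline statistics, threshold, and simulated sensor data are invented for illustration and aren’t tied to any particular product):

```python
import numpy as np

def learn_baseline(samples):
    """Learn what 'normal' looks like from readings taken while the machine is healthy."""
    return samples.mean(), samples.std()

def is_abnormal(reading, mean, std, n_sigma=3.0):
    """Raise a flag when a reading falls outside the learned normal band."""
    return abs(reading - mean) > n_sigma * std

# Simulated vibration readings captured during known-good operation.
healthy = np.random.normal(loc=1.0, scale=0.05, size=10_000)
mean, std = learn_baseline(healthy)

# At the Edge, each new reading is checked against the learned baseline.
print(is_abnormal(1.02, mean, std))  # False: within normal variation
print(is_abnormal(1.60, mean, std))  # True: flag it for maintenance
```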

It’s fair to say that the real key to accurate and useful machine learning consists of assembling the right combination of algorithms, compilers, and hardware architecture. If you don’t have the right components in any of those three areas, machine learning won’t work as it should. For example, if you don’t start with an algorithm that can be parallelized, you won’t get very far. Similarly, if your hardware doesn’t support parallelism to handle the intense computations, that’s a non-starter. And the compiler, which sits in the middle, must provide the right bridge.
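
As a rough illustration of the parallelism point (generic NumPy code, not tied to any particular toolchain), compare a computation whose elements are independent with one where every step depends on the previous result:

```python
import numpy as np

x = np.random.rand(100_000)

# Parallelizable: every element is independent of the others, so the work
# maps naturally onto SIMD lanes, multiple cores, or a GPU.
y = np.tanh(2.0 * x + 1.0)

# Hard to parallelize as written: each iteration needs the previous result,
# so the chain runs serially no matter how much parallel hardware is available.
acc = 0.0
for value in x:
    acc = np.tanh(acc + value)
```

The first form is what parallel hardware, and the compilers that target it, can exploit; the second leaves that hardware idle.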

A lot of the do’s and don’ts are still being worked out, as machine learning can be a very inexact science, so lots of people are trying to develop tools that address these issues. The real-time nature of most machine-learning applications makes the problem significantly more difficult.

According to Randy Allen, Director of Advanced Research for Mentor Graphics, “Machine learning problems are going to boil down to a matrix multiplication. This consists of two phases, training and using. In the training, you generate a sequence of large matrix multiplications that are continuously repeated.”
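
To make that concrete, here’s a minimal, illustrative sketch of one training step for a single dense layer (the shapes, loss, and learning rate are arbitrary placeholders): both the forward and backward passes reduce to large matrix multiplications, repeated step after step.

```python
import numpy as np

# One training step of a single dense layer, reduced to its matrix multiplications.
# The shapes, loss, and learning rate are illustrative placeholders.
batch, d_in, d_out = 64, 512, 256
X = np.random.randn(batch, d_in)          # input activations
W = np.random.randn(d_in, d_out) * 0.01   # weights being trained
target = np.random.randn(batch, d_out)    # what the layer should produce

for step in range(100):                   # "continuously repeated" during training
    Y = X @ W                             # forward pass: one large matrix multiply
    grad_Y = 2.0 * (Y - target) / batch   # gradient of a mean squared-error loss
    grad_W = X.T @ grad_Y                 # backward pass: another large matrix multiply
    W -= 0.01 * grad_W                    # weight update
```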

That’s why the combination of the three aspects outlined earlier is so significant. If there’s even a slight error somewhere in the sequence, it will be magnified over time, resulting in a large error, which is unacceptable in machine-learning applications.

To ensure that information is returned in real time, your choices may be to reduce the required precision or to increase the amount of processing power thrown at the problem. In general, neither option is a good one.
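
As a rough, hand-rolled illustration of the precision half of that trade-off (the matrix size, scaling, step count, and NumPy data types are arbitrary choices, not drawn from any real workload), you can run the same chain of multiplications in full and reduced precision and measure how far the cheaper chain drifts:

```python
import numpy as np

# Illustrative only: run the same chain of matrix multiplications in float64
# and in float16 and measure how far the reduced-precision result drifts.
rng = np.random.default_rng(0)
A64 = rng.standard_normal((64, 64)) / 16.0   # scaled down so the chain stays bounded
b64 = rng.standard_normal(64)
A16, b16 = A64.astype(np.float16), b64.astype(np.float16)

v64 = np.zeros(64)
v16 = np.zeros(64, dtype=np.float16)
for _ in range(50):                          # the "continuously repeated" multiplications
    v64 = A64 @ v64 + b64
    v16 = A16 @ v16 + b16

rel_drift = np.linalg.norm(v16 - v64) / np.linalg.norm(v64)
print(f"relative drift of the float16 chain after 50 steps: {rel_drift:.1e}")
```

The rounding error at each step is tiny, but, as noted above, it compounds across the repeated multiplications.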

Going forward, we’ll see more application-specific, rather than general-purpose, models. Vision is a good example, where the hardware-software combination can be tuned to the vision algorithms. We’ll also see changes in which computations are handled at the Edge rather than in the Cloud.

“It’s always the software that’s the big issue here,” says Allen. “Lots of people are coming up with hardware that takes lots of different approaches. That hardware is only useful if the programmer can get at it. And that’s where the compilers and algorithms come in. If you don’t have the right set of tools to go and utilize it, it doesn’t matter how good the hardware is.”

Mentor’s formula is to optimize performance at the Edge so applications can run outside the Cloud. This can be achieved with what it calls “data-driven hardware.” And that doesn’t mean just throwing more processing power at the problem.

Note that Mentor will be hosting a webinar on how to optimize machine-learning applications for parallel hardware. The company also provides a fair amount of information on the topic on its site.

Allen adds, “We use an entirely different set of algorithms to optimize machine learning. And that’s not something the hardware guys typically consider when they’re developing an interface to the software. That’s where we can assist.”

About the Author

Rich Nass

Richard Nass is the Executive Vice-President of OpenSystems Media. His key responsibilities include setting the direction for all aspects of OpenSystems Media’s Embedded and IoT product portfolios, including web sites, e-newsletters, print and digital magazines, and various other digital and print activities. He was instrumental in developing the company's on-line educational portal, Embedded University. Previously, Nass was the Brand Director for UBM’s award-winning Design News property. Prior to that, he led the content team for UBM Canon’s Medical Devices Group, as well as all custom properties and events in the U.S., Europe, and Asia. Nass has been in the engineering OEM industry for more than 25 years. In prior stints, he led the Content Team at EE Times, handling the Embedded and Custom groups and the TechOnline DesignLine network of design engineering web sites. Nass holds a BSEE degree from the New Jersey Institute of Technology.
