Analytics-driven embedded systems, part 2 - Developing analytics and prescriptive controls

March 14, 2016

In Part 1, we motivated the use of analytics within embedded systems using examples such as the BuildingIQ intelligent climate control system and the Scania emergency truck braking system. We covered data access, data pre-processing, and identification of the most predictive features. Now let’s turn to developing the predictive analytics algorithms themselves.

Developing analytic algorithms

It’s important to consider whether an analytic algorithm is your best approach. In cases where system behavior can be well characterized by known scientific equations, proven mathematical modeling can be a simple and efficient way to meet design objectives. This approach uses techniques like data fitting, statistical modeling, ordinary and partial differential equation (ODE/PDE) solving, and parameter estimation. Models built this way have the advantage of being predetermined from historical data or based on first principles, can be memory- and computationally efficient to implement on embedded systems, and can be simpler to develop and maintain. So, before turning to data-centric machine learning techniques, it’s prudent to first check whether these “workhorse” modeling methods can meet your design objectives. However, for an increasing number of design challenges, such as dynamically setting climate control points in the BuildingIQ example or object identification in the Scania braking application, machine learning is the best approach.
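
For example, when the governing equation is known, parameter estimation reduces to fitting that equation to measured data. The sketch below is a minimal Python/SciPy illustration; the first-order step-response model and the synthetic measurements are assumptions made for this example, not data from the BuildingIQ or Scania applications.

    # A minimal "workhorse" modeling sketch: estimate the parameters of an assumed
    # first-order model from (synthetic) measured data using scipy.optimize.curve_fit.
    import numpy as np
    from scipy.optimize import curve_fit

    def first_order_step(t, gain, tau):
        """Step response of a first-order system: y(t) = gain * (1 - exp(-t / tau))."""
        return gain * (1.0 - np.exp(-t / tau))

    # Synthetic "measurements": true gain = 2.0, true time constant = 5.0 s, plus noise
    t = np.linspace(0.0, 30.0, 100)
    y = first_order_step(t, 2.0, 5.0) + 0.05 * np.random.randn(t.size)

    # Estimate the two physical parameters directly from the data
    (gain_est, tau_est), _ = curve_fit(first_order_step, t, y, p0=[1.0, 1.0])
    print(f"estimated gain = {gain_est:.2f}, time constant = {tau_est:.2f} s")

A model like this is cheap to run on an embedded target, since prediction is just the fitted equation with two stored parameters.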

Machine learning

Machine learning algorithms use computational methods to “learn” information directly from data without relying on a predetermined equation as a model. It turns out this ability to train models using the data itself opens up a broad spectrum of use cases for predictive modeling – such as financial credit scoring and online recommendations for movies, songs, and retail purchases. In embedded systems, machine learning is used for a rapidly growing range of applications, including face recognition, tumor detection, electricity load forecasting, and the BuildingIQ and Scania applications mentioned earlier. The increased availability of “big data,” compute power, and software tools makes it easier than ever to use machine learning in engineering applications.

Machine learning is broadly divided into two types of learning methods, supervised and unsupervised learning, each containing several algorithms tailored for different problems.

[Figure 1 | An overview of different types of machine learning methods and categories of algorithms.]

Supervised learning is a type of machine learning that uses a known dataset (called the training dataset) to make predictions. The training dataset includes input data and labeled response values. From it, the supervised learning algorithm seeks to build a model that can make predictions of the response values for a new dataset. A test dataset is often used to validate the model. Using larger training datasets often yields models with higher predictive power that can generalize well for new datasets.

Supervised learning includes two categories of algorithms (a short sketch of each follows this list):

  • Classification: For categorical response values where the data can be separated into specific “classes.” Common classification algorithms include support vector machines (SVM), neural networks, Naïve Bayes classifiers, decision trees, discriminant analysis, and k-nearest neighbors (kNN).
  • Regression: For prediction when continuous response values are desired. Common regression algorithms include linear regression, nonlinear regression, generalized linear models, decision trees, and neural networks.
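
To make these two categories concrete, the sketch below trains and validates one classifier and one regression model in Python with scikit-learn. The library, datasets, and algorithm choices are illustrative assumptions for this article; the same workflow applies in other tools.

    # Minimal supervised-learning sketch: a labeled training set is used to fit each
    # model, and a held-out test set validates its predictions on new data.
    from sklearn.datasets import load_iris, load_diabetes
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import accuracy_score, mean_squared_error

    # Classification: predict a categorical class label
    X, y = load_iris(return_X_y=True)                     # inputs X, labeled classes y
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = SVC(kernel="rbf").fit(X_train, y_train)         # support vector machine
    print("classification accuracy:", accuracy_score(y_test, clf.predict(X_test)))

    # Regression: predict a continuous response value
    X, y = load_diabetes(return_X_y=True)                 # inputs X, continuous responses y
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
    reg = LinearRegression().fit(X_train, y_train)        # linear regression
    print("regression mean squared error:", mean_squared_error(y_test, reg.predict(X_test)))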

Choosing an algorithm depends on a number of design factors, such as memory usage, prediction speed, and interpretability of the model. Other considerations include whether a single- or multi-class response is needed and whether predictors are continuous or categorical. Since models are only as good as the labeled data they are trained on, it’s important to use representative training datasets. The machine learning workflow starts with selecting features, then specifying training and validation sets, training with multiple algorithms, and finally assessing results. An interactive app like the one shown in Figure 2 makes this workflow easy to learn and use.

[Figure 2 | An example of the MATLAB app (Classification Learner app) used to train models for classification. You can explore data, select features, specify cross-validation schemes, train models, and assess results. The resulting models can be exported for use in applications such as computer vision and signal processing.]
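
As a rough illustration of that workflow in code, the sketch below specifies a cross-validation scheme, trains several candidate algorithms on the same features, and compares the results. It again uses Python with scikit-learn and an illustrative dataset; the three candidate models are arbitrary choices, not a recommendation.

    # Sketch of the train-and-assess step: same features, same validation scheme,
    # several algorithm types, compared on cross-validated accuracy.
    from sklearn.datasets import load_iris
    from sklearn.model_selection import StratifiedKFold, cross_val_score
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.svm import SVC
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_iris(return_X_y=True)                               # features and labels
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)  # validation scheme

    candidates = {
        "decision tree": DecisionTreeClassifier(random_state=0),
        "k-nearest neighbors": KNeighborsClassifier(n_neighbors=5),
        "support vector machine": SVC(kernel="rbf"),
    }

    # Train each candidate under the same scheme and report mean accuracy
    for name, model in candidates.items():
        scores = cross_val_score(model, X, y, cv=cv)
        print(f"{name}: mean accuracy {scores.mean():.2f} (std {scores.std():.2f})")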

Unsupervised learning is a type of machine learning used to draw inferences from datasets consisting of input data without labeled responses.

  • Cluster analysis is the most common unsupervised learning method and is used for exploratory data analysis to find hidden patterns or groupings in data. k-means is a popular clustering algorithm that partitions data into k distinct clusters based on each point’s distance to a cluster’s centroid (see the sketch after this list).
  • Hierarchical clustering takes a different approach, building a multi-level cluster tree that lends itself to visual interpretation but has higher computational requirements, making it less suitable for large amounts of data.
  • Other algorithms include Gaussian mixture models, hidden Markov models, and self-organizing neural network maps.
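
The sketch below illustrates the first two methods, k-means and a Gaussian mixture model, fit to unlabeled data in Python with scikit-learn. The synthetic two-feature data and the choice of three clusters are assumptions for illustration, not the BuildingIQ dataset.

    # Minimal unsupervised-learning sketch: cluster unlabeled data two ways.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.mixture import GaussianMixture

    # Unlabeled data: two features, generated around three loose groupings
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(loc=c, scale=0.4, size=(100, 2))
                   for c in ((0, 0), (3, 1), (1, 4))])

    # k-means assigns each sample to the nearest of k centroids
    kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
    print("k-means cluster sizes:", np.bincount(kmeans.labels_))

    # A Gaussian mixture model fits k Gaussian components and gives soft assignments
    gmm = GaussianMixture(n_components=3, random_state=0).fit(X)
    print("GMM component weights:", np.round(gmm.weights_, 2))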

The BuildingIQ team used cluster analysis as part of their model creation process. They used k-means clustering and Gaussian mixture models to segment the data and determine the relative contributions of gas, electric, steam, and solar power to the heating and cooling processes.

Deep learning

For classification problems involving images, text, and signals, deep learning has emerged as a new category of advanced analytics. Training on large labeled datasets is computationally intensive, often requiring hardware acceleration with graphics processing units (GPUs), but the resulting deep learning models can achieve state-of-the-art accuracy, sometimes exceeding human-level performance in object classification. For image classification, convolutional neural networks (CNNs) have become popular because they eliminate the need for manual feature extraction by learning features directly from raw images. This automated feature extraction makes CNN models highly accurate for computer vision tasks such as object classification.
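
As a rough illustration, the sketch below defines a small CNN image classifier with the Keras API. The framework choice, layer sizes, and the assumed 32 x 32 single-channel input with 10 output classes are illustrative only, not a model from either application mentioned in this series.

    # Minimal CNN sketch for image classification using tf.keras (illustrative).
    import tensorflow as tf
    from tensorflow.keras import layers

    model = tf.keras.Sequential([
        tf.keras.Input(shape=(32, 32, 1)),              # assumed 32x32 grayscale images
        # Convolutional/pooling layers learn feature detectors directly from raw
        # pixels, replacing manual feature extraction
        layers.Conv2D(16, kernel_size=3, activation="relu"),
        layers.MaxPooling2D(pool_size=2),
        layers.Conv2D(32, kernel_size=3, activation="relu"),
        layers.MaxPooling2D(pool_size=2),
        # Fully connected layers map the learned features to class scores
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(10, activation="softmax"),         # assumed 10 output classes
    ])

    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.summary()

    # Training would call model.fit(train_images, train_labels, ...) on a large
    # labeled dataset, typically accelerated with a GPU.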

The approaches and algorithms described above are becoming more accessible, making it easier for systems engineers to build effective analytics into their embedded systems. In the final article of this three-part series, we’ll cover performing analytics and predictive controls in real time and integrating these into an overall solution, including sensors and embedded systems as well as enterprise IT systems and cloud infrastructure.

Paul Pilotte is a Technical Marketing Manager at MathWorks.

MathWorks

www.mathworks.com

@MATLAB

LinkedIn: www.linkedin.com/company/the-mathworks_2

Facebook: www.facebook.com/MATLAB

Google+: plus.google.com/+matlab
