Are you ready to handle the ever-bigger IoT picture?

July 30, 2014 OpenSystems Media

The Internet of Things (IoT) implies that there is no boundary – each embedded system will have to handle other embedded systems, data centers at HQ, and cloud-based apps. This new universe is exciting and frightening for developers and their management teams.

A “disconnected” embedded system can be complex, but at least the software is operating within a defined domain of memory and processors, together with the I/O registers that connect to real-world sensors, displays, timers, and actuators. Development engineers create architecture and design documents to specify every piece of the system, and define the response to every external stimulus.

While a potential boon for users, the IoT has made the environment for embedded software a lot more complex. Architects and designers are finding ways of making products more functional, more competitive, and more convenient by creating systems-of-systems to implement and deliver new capabilities. There are examples in every industry, from aerospace and industrial machinery to healthcare and consumer electronics.

If you’re building controllers for agricultural machinery today, you have to think about GPS capabilities to enable the connected controller to determine the optimum amount of fertilizer to apply to each square yard in the field. The old objective of better control at lower cost still exists, but there’s a new expectation. Developers of smart products know that combining multiple systems will enable new and disruptive change. The challenge is to understand the full scope of what’s possible, then beat the competition with new approaches and concepts.

Some of the skills needed are normal extensions of the old closed view of embedded software. For example, imagine development of the networked version of a previously standalone piece of handheld equipment. One of the design studies may be to decide if signal processing and data-archiving functions could be implemented via a dedicated wireless link to an external server. This might reduce the size, weight, and cost of the handheld part of the equipment. Investigating and optimizing workload shares between the handheld and the external server – considering the various cost issues – is not simple, but it’s moderately routine.

Remember that Dijkstra was lecturing about Co-operating Sequential Processes in the 1960s, and it’s more than 10 years since Jim Gray wrote about the factors that must be considered to find an optimum distribution of workload between local and remote systems. Gray characterized the breakeven point at that time as “a minute of computation per megabyte of network traffic,” and pointed out that this result depends critically on the relative cost of processing versus communication between nodes, which is a parameter that’s changed dramatically in the intervening years.
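Gray’s rule of thumb reduces to a simple ratio test, which a design study like the handheld example above could apply directly. Here is a minimal Python sketch; the cost figures in the usage lines are illustrative assumptions, not measurements, and the 60-second default simply encodes Gray’s “one minute per megabyte” figure from the period he was writing about:

```python
# Sketch of Jim Gray's break-even reasoning for placing work locally
# or on a remote server. The example workloads below are invented
# for illustration.

def should_offload(cpu_seconds: float,
                   data_mb: float,
                   breakeven_sec_per_mb: float = 60.0) -> bool:
    """Return True if a task is compute-heavy enough, relative to the
    data it must ship, that remote execution pays off.

    Gray's rule of thumb put the break-even point at roughly one
    minute (60 s) of computation per megabyte of network traffic:
    below that ratio, moving the data costs more than it saves.
    """
    if data_mb == 0:
        return True  # nothing to transfer, so remote compute carries no traffic cost
    return (cpu_seconds / data_mb) >= breakeven_sec_per_mb

# A 10 MB sensor capture needing 2 s of filtering stays on the handheld...
print(should_offload(cpu_seconds=2, data_mb=10))      # False
# ...but 30 minutes of archival processing on the same 10 MB is worth shipping.
print(should_offload(cpu_seconds=1800, data_mb=10))   # True
```

The interesting engineering work is not the arithmetic but choosing `breakeven_sec_per_mb` for your own hardware and tariffs – the parameter Gray noted has shifted dramatically over the years.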

But distributing a previously defined task across multiple nodes is still the inward-looking approach, even, or perhaps especially, when it’s the updated version of multicores accessing shared memory. This is not enough. What conditions will trigger the “Aha!” moment in which the potential for a completely new capability crystallizes out of a designer’s thought process, or perhaps occurs to the marketing person writing out a product roadmap?

Perhaps an example might help. Imagine a new version of a controller for a production machine. The controller handles complex, real-time response to temperature and pressure sensors. Over many years, each new version of the software has improved the operator display information, reduced the machine’s energy consumption, and so on. This year, the controller has a network connection. What will it be used for?

Of course, there’s a non-answer that’s based on making this someone else’s problem – the role of the network connection will be spelled out in the requirements. The problem with this non-answer is that it simply moves the problem to the requirements engineer, who in turn may have to extract answers from marketing people. Organizations that have implemented an agile development methodology can say, “That’s why we don’t define all these things up front. We work through iterations designed to expose this kind of issue/opportunity to the entire community of stakeholders, who make decisions for the next iteration.” That’s a good answer.

However, I believe that it’s only a partial answer. The well-known Henry Ford story comes to mind here: “If we had asked the stakeholders what they wanted, they’d have asked for faster horses.” The core issue is finding the right scope of the problem, the technology, and the development and operational systems that will deliver and sustain the new product.

Think of the installation steps for the production machine example. At some point, often during installation, someone defines how the machine will be treated as an asset. This includes, for example, registering the machine on the company’s asset register, creating capital value depreciation schedules, making decisions about maintenance planning, and defining the qualifications, training, and experience that operators and maintenance people must have.

It’s a long list and at first glance it has almost nothing to do with the machine controller’s capabilities. But let’s challenge that initial reaction. Now that the machine controller has a network connection, surely there are plenty of steps in this asset-management-setup sequence that could be automated. When the machine is connected to a new network, it could go through a discovery phase to find its new owner’s accounting and maintenance systems. It could self-register as an asset, and negotiate with the maintenance system about regular and as-needed servicing.
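To make the self-registration idea concrete, here is a minimal Python sketch of the record such a controller might assemble during its discovery phase. Every field name, the `MachineIdentity` type, and the idea of a negotiable maintenance interval are hypothetical; a real deployment would conform to whatever registration interface the owner’s asset-management system actually exposes:

```python
# Hypothetical sketch: a newly connected controller assembling its own
# asset-registration record. Field names and types are assumptions for
# illustration, not a real asset-management API.

from dataclasses import dataclass


@dataclass
class MachineIdentity:
    serial: str
    model: str
    firmware: str
    maintenance_interval_hours: int  # proposed regular-service interval


def build_registration(machine: MachineIdentity, site_id: str) -> dict:
    """Assemble the record the controller would submit to the owner's
    asset register after discovering it on the new network."""
    return {
        "asset_type": "production_machine",
        "serial": machine.serial,
        "model": machine.model,
        "firmware": machine.firmware,
        "site": site_id,
        # Opening position for negotiation with the maintenance system.
        "maintenance": {"interval_hours": machine.maintenance_interval_hours},
    }


record = build_registration(
    MachineIdentity("PM-00417", "FormPress 9", "4.2.1", 500),
    site_id="plant-3",
)
print(record["serial"], record["maintenance"]["interval_hours"])
```

The point of the sketch is scope, not code: once the controller can describe itself in terms the accounting and maintenance systems understand, most of the manual asset-setup sequence becomes a candidate for automation.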

Of course, there’s an existing machine controller development process, which for years has specified and delivered better control algorithms for sensor handling, energy management, and operator information. This is a good process, and it has delivered great results so far, but will it ever consider the installation process?

My point is that the Internet of Things world is going to be full of these kinds of situations, and the vast majority of opportunities will be small, incremental steps. They won’t justify a “cars-rather-than-horses” initiative, and will be invisible unless someone is looking at the project with the right scope, and with the right knowledge to start asking questions.

The agile development approach is a start, but the technical opportunities, risks, and gotchas will be a big factor. Therefore, it’s necessary to enable and encourage an outward-looking mindset across all the engineers – hardware, software, requirements, test, installation, and service. When the stakeholder reviews come around, the engineers must be willing and able to point out things that are both relevant and have a chance of being achievable.

I’d like to conclude by pointing to tools and techniques that make this possible, but I don’t think this is the most important factor. Certainly, a systems engineering approach can force consideration across the domains of product, development system, and operational environment. Life-cycle analysis can also trigger the right thinking across aspects of installation, operation, service, and recycling. But it’s going to be the engineer-in-the-loop that will be the unpredictable source of important “Aha!” moments.

The particular talent pool that needs to be mobilized is the group of embedded software developers who have been developing software for standalone products. In the production machine example, it would be one of this group who would realize that the new network connection could provide visibility of the parameters of the next batch, and this would allow a more efficient cool-down, warm-up changeover sequence. The management team looking after these people must push them not only for inward-looking, control-system-algorithm type improvements, but also for outward-looking, change-the-game type insights. And the engineers must remember that being years ahead of your time is relatively easy. The ideas your company needs are the ones that can be implemented within its planning horizons.

Peter Thorne is a managing director for Cambashi. He’s responsible for consulting projects related to the new product introduction process, e-business, and other industrial applications of information and communication technologies. Peter holds a Master of Arts degree in Natural Sciences and Computer Science from Cambridge University, is a Chartered Engineer, and a member of the British Computer Society.

Peter Thorne, Cambashi Ltd.