Edge computing bridges cloud and embedded worlds

June 5, 2018 Joseph Byrne, NXP

Some say that if you aren’t living on the edge, you’re taking up too much room. Likewise, if you aren’t computing on the edge, you’re waiting too long for server responses and consuming too much WAN bandwidth by performing analysis in the cloud. Situated near the sources and sinks of data, edge-computing nodes enable real-time processing and eliminate an expensive access-network connection to the cloud as a bottleneck to analysis. Edge computing can also increase privacy compared with cloud computing by keeping data onsite, and improve resilience by allowing processing to continue even when a WAN link flickers out.

In some cases, these nodes will be servers in the usual sense. In most cases, however, they will be compact devices tucked out of the way, toiling to keep the IoT humming. In this way they’re like good old-fashioned embedded systems, though more so with respect to hardware than software. One definition of an embedded system is one that does not run user-installed software. Operating under that assumption, these systems run custom software bundled in a monolithic image. It is difficult, and therefore rare, for developers to add features or patch flaws. It’s likewise difficult for these systems’ users to apply updates or upgrades, at least compared with the process of updating a PC or smartphone. The software, moreover, is tied to the underlying hardware and its specifics, whereas server-based software runs in containers or virtual machines that decouple software from the actual hardware.

This decoupling is one of the key advantages underlying cloud computing: spawning virtual copies of a machine scales out capacity. Adding network connections and provisioning additional storage is similarly a software function. In cloud computing, software frameworks and new programming languages further abstract applications from underlying hardware and improve developer productivity. Tools like Hadoop and TensorFlow enable programmers to analyze big data and implement artificial intelligence with relative ease. Software images are easy to develop and deploy. Serverless functions are even easier, enabling a services-oriented architecture that can replace monolithic software builds.

Edge nodes, therefore, will share hardware elements with embedded systems but run software inherited from cloud computing, including software for orchestrating containers and allocating storage and for application middleware functions, such as databases, analytics, and artificial intelligence. This technology legacy endows the edge with the cloud’s developer productivity, application management, and scalability. Most major cloud companies offer edge-computing frameworks and APIs, created to pave an onramp from the IoT to their services. Joining these companies are major industrial OEMs, which offer comparable software for edge computing in a factory or other industrial setting. Edge computing, in summary, brings cloud technology to premises away from the data center, transforming embedded processing much as cloud computing transformed IT.

Security and management challenges

Edge computing presents new challenges to security and device management. Computing nodes are potentially widely distributed and physically inaccessible. Even if they are accessible, they may have little to no physical user interface. At the same time, they are well networked, connecting to local embedded systems and IoT endpoints and possibly to the cloud. Moreover, a business could have many edge-computing nodes. Meeting these challenges requires hardening the nodes and developing software to remotely manage the nodes and complement the application-management software of major cloud providers.

Edge-computing nodes are a lot like sophisticated IoT endpoints, such as the ones that connect to the internet and a local network and are capable enough to run a high-level operating system. These endpoints have featured prominently in news stories over the past several years for having been hacked and turned into a thingbot army or a springboard into IT systems. Security firm Darktrace, for example, in its Global Threat Report 2017 described a casino that had its high-roller database compromised by hackers who broke in via a high-tech fish tank. At least as capable as IoT devices and likely tied to both operations technology (OT) and information technology (IT) networks, edge devices are desirable targets for hackers.

Cloud systems have not suffered comparable hacks, but fault lines in their security are starting to appear. Side-channel attacks, such as Spectre and Meltdown, could lead to malicious tenants exfiltrating data from their cloud neighbors. Edge nodes, like houses in the countryside, have no neighbors. Data, therefore, should be inherently more secure if kept on premises. But without adequate defenses, edge nodes could still be vulnerable to information burglars.

One of the big selling points for a cloud-computing developer is that the cloud service provider handles the physical stuff. The developer deals with abstract virtual versions of computing, storage, and networking resources. Ordering more resources takes at most the click of a mouse, or perhaps no action at all if the cloud provider scales them with the load. Edge-computing nodes, however, are corporeal for the developer, who is likely to own them rather than rent their resources. Simply commissioning a node securely could be time consuming if a serial number must be manually typed into a cloud-based device registry. These nodes must then be monitored, and applications (programs, serverless functions, virtual machines, or containers) loaded onto them and updated.

Platform trust and cloud management

The first order of business is to secure edge-computing nodes. Network security is one aspect of this and mostly focuses on the confidentiality of data in transit. An effective technique in IoT security is to segregate IoT nodes, placing them on their own physical or virtual LAN, or on a virtual private network. In the case of the casino aquarium, this was not enough, but a network-security system’s analysis of network traffic spotted the breach.

Securing the integrity of a system, be it for edge computing or another use, requires a platform trust architecture. The best of these provide secure enclaves for important data such as identifiers and cryptographic keys, and offer trusted execution environments: modes in which critical software runs in isolation from the rest of the system. For maximum security, these features must be rooted in hardware. Once implemented, they help an edge node boot securely by checking the software’s cryptographic fingerprint using keys stored on chip. Likewise, any software update can be checked, and previous versions can be invalidated by nixing keys in the secure enclave. Other features secure debug capabilities, a favorite backdoor for hackers with physical access to a device. Platform trust architectures can also support run-time integrity checking: a separate process continually inspects the executing software, and if something unapproved is injected, the system restarts, going through the secure boot process again.
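The fingerprint check at the heart of secure boot can be sketched in a few lines. This is a hypothetical illustration, not NXP's implementation: real platforms verify an asymmetric signature (RSA or ECDSA) in boot ROM, but an HMAC keyed with a stand-in for the on-chip secret shows the same accept/reject logic using only the Python standard library.

```python
# Sketch of a secure-boot fingerprint check. ENCLAVE_KEY stands in for key
# material held in a hardware secure enclave; real systems use asymmetric
# signatures verified by boot ROM rather than a shared-secret HMAC.
import hashlib
import hmac

ENCLAVE_KEY = b"key-provisioned-at-manufacture"  # hypothetical on-chip secret

def sign_image(image: bytes, key: bytes = ENCLAVE_KEY) -> bytes:
    """Vendor side: sign the image's SHA-256 fingerprint."""
    digest = hashlib.sha256(image).digest()
    return hmac.new(key, digest, hashlib.sha256).digest()

def verify_image(image: bytes, signature: bytes, key: bytes = ENCLAVE_KEY) -> bool:
    """Boot side: recompute the fingerprint and compare in constant time."""
    expected = hmac.new(key, hashlib.sha256(image).digest(), hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

firmware = b"edge-node firmware v2"
sig = sign_image(firmware)
print(verify_image(firmware, sig))         # untampered image passes
print(verify_image(firmware + b"!", sig))  # modified image is rejected
```

Invalidating an old software version, as described above, amounts to retiring the key that signed it so its images no longer verify.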

These security features also facilitate management of edge-computing nodes. For example, provisioning is made easier by a cryptographic key and unique identifier securely stored on chip. A person deploying a new node need not type in a serial number. Instead, the system itself can send its identifier to a registry via a cryptographically secure channel, automating the registration process. The registry can then send the node a signed software image, which the node runs after verifying its authenticity. New code can be sent and old code invalidated, as noted above.
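The automated registration handshake might look like the following sketch. All names and the message format are illustrative assumptions: the node proves possession of its factory-provisioned secret by computing an HMAC over its unique identifier and a random nonce, and the registry, which holds the same secret, verifies the proof before enrolling the device.

```python
# Hypothetical zero-touch provisioning handshake. DEVICE_SECRET stands in for
# a per-device key held in the secure enclave; a production system would use
# certificates and a TLS channel rather than a bare shared-secret proof.
import hashlib
import hmac
import os

DEVICE_ID = "edge-node-0042"                    # unique ID set at manufacture
DEVICE_SECRET = b"per-device-provisioning-key"  # stored on chip

def build_registration(device_id: str, secret: bytes) -> dict:
    """Node side: bind the identifier to a fresh nonce and sign both."""
    nonce = os.urandom(16)
    proof = hmac.new(secret, device_id.encode() + nonce, hashlib.sha256).hexdigest()
    return {"id": device_id, "nonce": nonce, "proof": proof}

def register(message: dict, registry: dict, secret: bytes) -> bool:
    """Registry side: verify the proof, then enroll the device."""
    expected = hmac.new(secret, message["id"].encode() + message["nonce"],
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, message["proof"]):
        return False
    registry[message["id"]] = "enrolled"
    return True

registry = {}
msg = build_registration(DEVICE_ID, DEVICE_SECRET)
print(register(msg, registry, DEVICE_SECRET))  # device enrolls without manual entry
```

The nonce keeps a captured registration message from being replayed; a real deployment would also have the registry track nonces or use a challenge-response exchange.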

Such management of hardware and software images requires code on the edge device and in the cloud or on a local server, with the latter providing a management console. Like the hardware trust features, this software can come from a technology supplier. The company selling, installing, or using the edge nodes can use it as is or build on it, but does not need to start from scratch. At the same time, these companies benefit from software to manage the applications running on edge-computing nodes. This may come as part of the edge-computing framework supplied by a cloud provider or an industrial OEM. Ideally, the companies providing device-management and application-management tools have coordinated, streamlining developers’ efforts to design edge-computing systems and their customers’ efforts to deploy and manage them.

Only in its infancy now, edge computing looks set to transform automation of homes, commercial buildings, and factories. Data-analysis and artificial intelligence technologies that today reside in the cloud will be able to move to the premises owing to the commonality between cloud and edge frameworks. Doing so increases their availability, makes it feasible to process more data than could be uploaded to a distant data center, keeps sensitive data on site, and reduces the turnaround time between data generation, analysis, and reaction. This transformation challenges security and manageability, but technology suppliers have prepared solutions to overcome these hurdles.

Joseph Byrne is a senior strategic marketing manager for NXP's Digital Networking Group. Prior to joining NXP, Byrne was a senior analyst at The Linley Group, where he focused on communications and semiconductors, providing strategic guidance on product decisions to senior semiconductor executives. Prior to working at The Linley Group, he was a principal analyst at Gartner, leading the firm's coverage of wired communications semiconductors. There, he advised semiconductor suppliers on strategy, marketing and investing. Byrne started his career at SMOS Systems after graduating with a bachelor of science in engineering from Duke University. He spent three years at SMOS as part of the R&D engineering team working on 32-bit RISC microcontrollers. He then returned to school for an MBA, which he received with high distinction from the University of Michigan. He worked with Deloitte & Touche Consulting Group for a year before going on to work at Gartner, where he spent the next nine years until going to work for The Linley Group in 2005.
