Embedded Hypervisors Aren't New, But…

By Colin Walls

Embedded Software Technologist

December 17, 2018



Some technologies stand the test of time. Hypervisors have been around for a long time, but have recently come into their own.

Some technologies, it seems to me, should not really exist. They do, however, because they address a specific need. Typically, such technologies stretch something to make it perform in a way that was not originally intended. An example would be the fax machine. In a paper-based office environment, there was a frequent need to move documents from A to B. Initially, this meant putting documents in the mail. But fax was an ingenious way to use phone lines to deliver a similar result, albeit in a very inefficient way. As soon as email became widespread, fax disappeared almost overnight, except for the NHS here in the UK, who are still big users. I have heard it called “Tyrannosaurus Fax,” which says a lot.

The technology that I have in mind today is the hypervisor: a software layer that enables multiple operating systems (OSs) to run simultaneously on one hardware platform. Hypervisors aren’t really a new technology; the first recognizable products were introduced on mainframe computers nearly 50 years ago. The incentive at that time was to make the best use of costly resources: the expensive hardware needed to be used efficiently to be economic, and downtime was expensive. Software investments needed to be protected, so facilitating seamless execution on new hardware was attractive. An interesting irony is that IBM's early virtualization software was distributed in source form (initially with no support) and modified/enhanced by users; this was many years before the open-source concept was conceived. Now, hypervisors are increasingly relevant to embedded developers.

The first question to ask when looking at the capabilities of any technology is: why? What’s the benefit of running multiple OSs on one piece of hardware, bearing in mind that this introduces significant complexity? The most important answer is security. A hypervisor provides a strong layer of insulation and protection between the guest OSs, ensuring that an application running under one OS cannot interfere with applications running under another.

A secondary, but still significant, motivation to run multiple OSs is IP reuse. Imagine that there is some important software IP available for Linux that you want to use in your design. However, your device has real-time requirements, so an RTOS makes better sense. If multicore is not an option (that being another way to run multiple OSs on one device), a hypervisor is the way forward, enabling you to run Linux alongside your RTOS.
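
To make that a little more concrete, below is a minimal sketch of the kind of static partition table a bare-metal embedded hypervisor might consume at boot, giving each guest its own cores and memory window. Every name here (guest_config, cpu_mask, the image file names, and so on) is hypothetical; this is not the configuration format of any particular product.

    /* Illustrative sketch only: a hypothetical static partition table that a
     * bare-metal embedded hypervisor might consume at boot. The names
     * (guest_config, its fields, the image names) are invented for this
     * example and do not correspond to any particular product's API. */
    #include <stdint.h>

    typedef struct {
        const char *name;      /* human-readable guest identifier            */
        uint32_t    cpu_mask;  /* cores on which this guest may be scheduled */
        uint64_t    ram_base;  /* guest RAM window: base physical address    */
        uint64_t    ram_size;  /* ...and size in bytes                       */
        const char *image;     /* kernel/RTOS image loaded into the guest    */
    } guest_config;

    /* Two isolated guests: Linux for networking/UI, an RTOS for real-time work. */
    static const guest_config guests[] = {
        { "linux", 0x3u, 0x80000000ULL, 512ULL * 1024 * 1024, "linux.img" },
        { "rtos",  0x4u, 0xA0000000ULL,  64ULL * 1024 * 1024, "rtos.bin"  },
    };

The point is simply that each guest is given disjoint cores and memory up front, and the hypervisor's job at run time is to enforce those boundaries.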

Hypervisors broadly come in two flavors, which are imaginatively named Type 1 and Type 2. Type 1 hypervisors run on bare metal; Type 2 hypervisors require an underlying host OS. Type 1 makes the most sense for the majority of embedded applications. I attended a conference session recently where the speaker referred to Types 0, 1 and 2. Type 0 seemed to equate to what I would call Type 1, and I could not figure out the difference between his Types 1 and 2. Clearly, care is needed with the interpretation of terminology in this space.

There are broadly three application areas in which an embedded hypervisor finds its place:

  • Automotive: In this context, there’s the possibility for infotainment software, instrument cluster control, and telematics to all run on one multicore chip. As a mixture of OSs is likely to be needed, such as an RTOS for instrumentation and GPS, and Linux for audio, a hypervisor makes sense.
  • Industrial: For industrial applications (factories, mines, power plants, etc.), there’s typically a need for real-time control alongside sophisticated networking (of the kind available in Linux). In addition, in recent years there has been increasing concern about cyber-attacks on control systems and other ways that malware might be introduced into them. A hypervisor is a good way to separate these systems and maintain security.
  • Medical: Medical systems introduce some new challenges. Typically, there’s a mixture of real-time (patient monitoring and treatment control) and non-real-time (data storage, networking, and user interface) functionality, so a hypervisor initially looks attractive. Patient data confidentiality is critical, so the security side of a hypervisor becomes significant. Lastly, the ability to completely separate the parts of the system that require certification (normally the real-time parts) makes a hypervisor compelling.

I said that a hypervisor enables multiple OSs to run on one hardware platform, implying that this meant a single processor. In fact, many hypervisor products support the use of multiple CPUs, with the hypervisor providing overall supervision and inter-OS communication. This is becoming the most important context in which hypervisors contribute to the design of complex, yet reliable, embedded software.
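
As an illustration of what inter-OS communication can look like at the lowest level, here is a minimal sketch of a single-producer/single-consumer ring buffer placed in memory shared between two guests. It is an assumption-laden example, not any product's API: a real deployment would add cache maintenance, barriers suited to the architecture, and a doorbell interrupt so the receiving guest does not have to poll.

    /* Illustrative sketch only: a single-producer/single-consumer ring buffer
     * living in a memory region shared between two guest OSs. All names are
     * hypothetical; a real implementation would also need cache maintenance
     * and a doorbell interrupt so the consumer does not have to poll. */
    #include <stdatomic.h>
    #include <stdint.h>
    #include <string.h>

    #define RING_SLOTS 16u            /* power of two, so index masking works */
    #define SLOT_SIZE  64u            /* fixed-size message payload in bytes  */

    typedef struct {
        _Atomic uint32_t head;                 /* advanced by the sending guest   */
        _Atomic uint32_t tail;                 /* advanced by the receiving guest */
        uint8_t slots[RING_SLOTS][SLOT_SIZE];  /* message payloads                */
    } shm_ring;

    /* Sender side (e.g. the RTOS guest): returns 0 on success, -1 if full. */
    static int ring_send(shm_ring *r, const void *msg, uint32_t len)
    {
        uint32_t head = atomic_load_explicit(&r->head, memory_order_relaxed);
        uint32_t tail = atomic_load_explicit(&r->tail, memory_order_acquire);

        if (len > SLOT_SIZE || head - tail >= RING_SLOTS)
            return -1;
        memcpy(r->slots[head & (RING_SLOTS - 1u)], msg, len);
        atomic_store_explicit(&r->head, head + 1u, memory_order_release);
        return 0;
    }

    /* Receiver side (e.g. the Linux guest): returns 0 on success, -1 if empty. */
    static int ring_recv(shm_ring *r, void *msg, uint32_t len)
    {
        uint32_t tail = atomic_load_explicit(&r->tail, memory_order_relaxed);
        uint32_t head = atomic_load_explicit(&r->head, memory_order_acquire);

        if (head == tail)
            return -1;
        memcpy(msg, r->slots[tail & (RING_SLOTS - 1u)], len < SLOT_SIZE ? len : SLOT_SIZE);
        atomic_store_explicit(&r->tail, tail + 1u, memory_order_release);
        return 0;
    }

Whatever the exact mechanism, the value of having the hypervisor own it is that the two guests can exchange data without either one gaining access to the other's private memory.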

Colin Walls is an Embedded Software Technologist at Mentor Graphics’ Embedded Software Division.

My work in the electronics industry spans nearly 40 years, almost exclusively with embedded software. I began by developing software and managing teams of developers. Then, I moved to customer-facing roles, including pre- and post-sales technical support, sales management, and marketing. I have presented at numerous conferences, including Design West, Design East, Embedded World, and ARM TechCon, and my work frequently appears on Embedded.com.
