Using multicore and virtualization for efficient and flexible development - Q&A with Rob Oshana, Freescale Semiconductor

By Rich Nass

Executive Vice President

Embedded Computing Design

July 09, 2015

Use of multicore processors in embedded systems has dramatically increased over the past several years, and virtualization is an important component for developers to get the most out of multicore. Rich Nass interviewed Robert Oshana, Director of Global Software R&D and Enablement, Digital Networking at Freescale Semiconductor, about the usefulness of virtualization when developing with multicore processors, strategies to make these technologies most effective, and the future of multicore. Edited excerpts follow.

Multicore has gone from a buzzword about seven years ago to something that’s mainstream. What will we be saying about multicore seven years from now?

Not much. It's going to be so mainstream that people will stop talking about it at conferences. It's like how the RTOS in embedded systems was the talk of the town a couple of decades ago. RTOSs became ubiquitous and are no longer a big discussion topic; it's just assumed that people use them. The same will be true of multicore in seven years.

Different people have different definitions of virtualization. What’s yours?

Efficiency of resource management and utilization. Virtualization spans many areas, like "server" or core-based virtualization with things like KVM, containers, and hypervisors. But now, virtualizing entire "networks" using technology like network function virtualization (NFV) is common. In either case, you're making it easier for developers to consolidate applications and/or control onto a single multicore device without having to go through expensive porting to get there. If I can easily move multiple applications using multiple operating systems (OSs) onto a single multicore processor without having to port to a single OS, that's a time-to-market advantage.

What’s the connection between multicore and virtualization?

Virtualization enables me to take two or more applications running on disparate processors and OSs and move them to a single multicore device, where each OS talks to a virtual machine (VM) instead of the HW. So I can have multiple OSs running on the same device (or even the same core on the device) without having to port from one OS to another. This gives me increased flexibility, increased utilization of multicore resources, and faster time to market. Multicore and virtualization fit hand in hand.

How much of the burden of virtualization should fall on the OS vendors versus the processor vendors, and why?

It needs to be both. Ultimately, the OS vendors will produce a product, but the processor vendors need to design for this technology. For example, there are optimizations that can make virtual I/O faster, enable more efficient communication between VMs, and so on. There's an added overhead to using virtualization, so HW/SW co-design should be brought to bear to make this "system" more efficient and optimized.

We've seen a few examples of multicore processors with hundreds of cores, but the majority have eight or fewer. Will that change going forward? What determines the point of diminishing returns?

The economics say that more cores will be added; they're so cheap these days, so why not? The utilization of these many cheap cores will be application dependent. For "embarrassingly parallel" applications and applications that fit more into Gustafson's Law as opposed to Amdahl's Law, the more cores the better. For more "bound" applications, it will depend on how effectively the developer can achieve the right algorithmic transformation and how well the tools guys can spread the load!
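
As a rough numerical illustration of the gap between the two laws (not something from the interview), here is a minimal Python sketch; the 10 percent serial fraction is an assumed value chosen only for the example.

```python
# Minimal sketch: fixed-workload speedup (Amdahl) vs. scaled-workload
# speedup (Gustafson) for a hypothetical 10 percent serial fraction.

def amdahl_speedup(serial_fraction, cores):
    """Fixed problem size: speedup is capped by the serial portion."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

def gustafson_speedup(serial_fraction, cores):
    """Problem size grows with core count: speedup scales almost linearly."""
    return serial_fraction + (1.0 - serial_fraction) * cores

if __name__ == "__main__":
    s = 0.10  # assumed serial fraction
    for cores in (2, 8, 64, 256):
        print(f"{cores:4d} cores: Amdahl {amdahl_speedup(s, cores):6.2f}x, "
              f"Gustafson {gustafson_speedup(s, cores):7.2f}x")
```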

How much of the “usefulness” of multicore is determined by the application, what applications are best suited for lots of cores, and why?

There are two interesting lemmas in the multicore industry that allow applications to scale faster than Amdahl's Law would predict. One says, "There exist workloads that are gaseous in nature: when provided with more compute power, they expand to consume the newly provided power." An example of this is graphics. If I get more compute power, I will just run my frames at a higher resolution or with more detail. Another example is weather prediction. If I get more compute power, I'll just run my software longer to get more accurate predictions.

The other lemma is, "When the problem size is increased, the parallel portion expands faster than the serial portion." Matrix-Matrix-Multiply (MMM) is an example of this. The setup of MMM, i.e., initializing the matrices, grows only with the number of matrix elements, whereas the actual compute is O(n^3). For a system where the problem size is not fixed, performance increases can continue to grow by adding more processors. But both of these lemmas have to be true for this to work.
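
A minimal sketch of that second lemma, using assumed operation counts for an n x n MMM (initialization touches n^2 elements per matrix, the multiply performs roughly 2n^3 operations), shows the parallelizable share of the work approaching 1 as n grows:

```python
# Minimal sketch: as the MMM problem size grows, the O(n^3) compute
# (parallel portion) dwarfs the O(n^2) setup (serial portion).

def mmm_work(n):
    setup_ops = 3 * n * n      # initialize A, B, and the result C
    compute_ops = 2 * n ** 3   # roughly n^3 multiply-add pairs
    return setup_ops, compute_ops

if __name__ == "__main__":
    for n in (100, 1_000, 10_000):
        setup, compute = mmm_work(n)
        parallel_fraction = compute / (setup + compute)
        print(f"n={n:6d}: parallel fraction ~ {parallel_fraction:.4f}")
```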

Using virtualization (VMs) in conjunction with multicore processors is a technology that helps designers get the most out of limited resources. Especially as we seek to push more and more resources to the edge of the Internet of Things (IoT), when does it make sense to leverage virtualization technology?

There are a few key use cases for virtualization, many of which apply to the IoT space. They are:

  1. Consolidation
  2. Utilization
  3. Dynamic resource management
  4. Security and sandboxing
  5. Failover

Virtualization offers a new level of flexibility that was not present before. Besides better usage of limited resources, a virtualized infrastructure can scale much better than a traditional one. With the constant increase in IoT devices, the infrastructure needs to be more elastic to use the resources present at the customer location and in the cloud. One example is vCPE (virtual customer premises equipment), where network functions can be processed on the CPE, in the cloud, or split between the two.

Given the benefits of virtualization, what types of virtualization technology are available to embedded developers now, what are their pros and cons, and how would you prescribe investigating the right solution for a particular application and/or target processor?

There are type 1 and type 2 virtualization approaches. This mainly just has to do with whether an underlying OS is present. Type 1 hypervisors run directly on the hardware and offer the advantages of efficiency and low overhead, but many of these are vendor-centric, which may or may not be an issue for the developer. KVM (Kernel-based Virtual Machine) is a type 2 hypervisor, integrated into Linux. This is an open-source technology, which has benefits, but it is generally slower than a type 1 given its integration into Linux. We invest in both types.
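
As a hedged illustration of how a developer might drive a KVM/QEMU guest from user space, here is a minimal sketch using the libvirt Python bindings; the connection URI and the guest name "router-vm" are placeholders, and the guest is assumed to have been defined already.

```python
# Minimal sketch: managing a KVM/QEMU guest through libvirt's Python API.
# Assumes libvirt-python is installed and a guest named "router-vm" exists.
import libvirt

conn = libvirt.open("qemu:///system")        # connect to the local hypervisor
try:
    for dom in conn.listAllDomains():        # list every defined guest
        state = "running" if dom.isActive() else "shut off"
        print(f"{dom.name():24s} {state}")

    guest = conn.lookupByName("router-vm")   # hypothetical, pre-defined guest
    if not guest.isActive():
        guest.create()                       # boot the virtual machine
finally:
    conn.close()
```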

Also, containers (LXC) are a “lightweight” form of partitioning that offers the advantages of isolation and excellent performance without a full para-virtualized system underneath. All three of these technologies are used in the embedded space. Again, the choice comes down to what type of application and use case is being developed.

Current processing virtualization technologies are:

Hypervisor-based, offering virtual machines that boot various OSs (guests). The user will see an OS and resources like a physical machine, but will not know what HW the resources are actually running on. There are open-source hypervisors (KVM/QEMU, Xen, etc.) and proprietary hypervisors. To accelerate guest OSs and offer the required isolation between guests, various HW extensions have been added by the core (PPC, ARM, x86) providers.

Container-based, where containers share the same kernel, so only one OS (kernel version) is supported. The user will have an experience similar to a VM, but the isolation is at the process level, and resource control is done in the kernel at the SW level. Containers don't use HW extensions, as they are intended to be a lightweight alternative to hypervisors (rapid and flexible), but with a trade-off on isolation. There are many open-source solutions targeting different use cases, like LXC and libvirt_lxc. Their goal is mainly to deliver an application that's self-contained.
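
For a concrete feel for that process-level model, here is a minimal, hedged sketch using the Docker SDK for Python (Docker comes up again below as part of the SDK offering); the image name is illustrative, and LXC tooling provides a similar workflow.

```python
# Minimal sketch: running a self-contained application in a container via
# the Docker SDK for Python ("docker" package). Because containers share
# the host kernel, "uname -r" inside the container prints the host's
# kernel version rather than a guest OS's.
import docker

client = docker.from_env()    # talk to the local container runtime

output = client.containers.run("alpine:3.19", ["uname", "-r"], remove=True)
print(output.decode().strip())
```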

Choosing the best virtualization solution depends a lot on the use case, the isolation required, performance, flexibility, legacy SW, and of course the virtualization support in HW. Processing virtualization is the foundation for NFV, where the network functions run in VMs.

From a network virtualization perspective, the focus is similar. Just as VMs are decoupled from the HW and managed (created, migrated, and/or destroyed) programmatically, networks are decoupled from the HW infrastructure (switches and routers), with network slices (virtual networks) created programmatically. The networks are created, reconfigured, and destroyed in a flexible manner, following VM migration. This trend is called software-defined networking (SDN).

From a silicon perspective, are there certain strategies you put to use when architecting multicore processors that help facilitate or ease virtualized system design? Further, as chip vendors are increasingly being asked to provide development tools and software stacks to embedded engineers, what are you offering or partnering on in terms of hypervisor/container solutions that can help designers make the right decisions when starting virtualization projects?

Yes, we invest a lot in architecting SoCs with virtualization in mind. We design our I/O subsystems to support virtualized I/O for example. We design mechanisms that allow VMs to more easily communicate with each other, to help route interrupts to the right VM, and so forth. Even though we use a community-based virtualization technology (KVM), we can differentiate at the SoC level. We focus on optimizing our SoCs for the use cases we are interested in, e.g., networking, wireless access, etc. Then we build optimized enablement software and SoC software drivers to allow the developer to take advantage of these SoC features.

The strategy is to offer HW isolation between VMs at all levels, such as the core, I/O subsystem, and accelerators, and to use HW assistance to accelerate all the SW parts that add overhead, like hypervisors and virtual switches.

From a SW perspective, Freescale offers its SDK, which contains plenty of technologies optimized for Freescale platforms (KVM, a proprietary hypervisor, LXC, libvirt_lxc, Docker, Libvirt, and Open vSwitch) along with many reference applications. Cloud reference applications are built on top of these technologies using the orchestration support offered by OpenStack to manage VMs, networks, storage, and so on.
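
As a hedged sketch of what that OpenStack-driven orchestration looks like from a developer's seat, the following uses the OpenStack SDK for Python; the cloud name "mycloud" refers to a hypothetical entry in a local clouds.yaml.

```python
# Minimal sketch: querying the VMs and virtual networks that OpenStack
# orchestrates, via the "openstacksdk" package. Credentials are assumed
# to come from a clouds.yaml entry named "mycloud" (hypothetical).
import openstack

conn = openstack.connect(cloud="mycloud")

print("Virtual machines:")
for server in conn.compute.servers():
    print(f"  {server.name:24s} {server.status}")

print("Virtual networks:")
for network in conn.network.networks():
    print(f"  {network.name}")
```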

Rich Nass, Embedded Computing Brand Director

Richard Nass’ key responsibilities include setting the direction for all aspects of OSM’s ECD portfolio, including digital, print, and live events. Previously, Nass was the Brand Director for Design News. Prior, he led the content team for UBM’s Medical Devices Group, and all custom properties and events. Nass has been in the engineering OEM industry for more than 30 years. In prior stints, he led the Content Team at EE Times, Embedded.com, and TechOnLine. Nass holds a BSEE degree from NJIT.
