Multicore processors: Providing opportunities for embedded systems designers

October 1, 2008 OpenSystems Media

The big news in processor development is that major CPU manufacturers are now standardizing on multicore processor technology. While most of the software community has focused on server applications, developers across the wide spectrum of embedded computing applications can also benefit from the latest advances in multicore processors.

Multicore processors offer a solution to the need to mix new features with legacy code and to combine multiple operating environments on the same system. Instead of a traditional embedded system composed of multiple subsystems, a highly integrated system can be constructed in which real-time software components and human-directed elements run on separate cores of a single processor, decreasing manufacturing and maintenance costs by eliminating redundant hardware.

The challenge is to implement software that efficiently utilizes the new processor silicon. Today, systems dedicate processor cores to separate, distinct operating environments running both Real-Time Operating Systems (RTOSs) and General-Purpose Operating Systems (GPOSs).
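
As a minimal sketch of what such a core partition might look like, the hypothetical table below dedicates each core of a quad-core processor to either an RTOS or a GPOS image. The structure, field names, and image names are invented for illustration and are not drawn from any particular VMM product.

```c
/*
 * Hypothetical core-partition table: a sketch of how a VMM or boot loader
 * might statically dedicate each core of a quad-core processor to its own
 * operating environment.  All names and fields are illustrative.
 */
#include <stdio.h>

typedef enum { ENV_RTOS, ENV_GPOS } env_type_t;

typedef struct {
    int         core_id;    /* physical core the environment runs on */
    env_type_t  type;       /* real-time or general-purpose guest    */
    const char *image;      /* OS image loaded onto that core        */
} core_partition_t;

static const core_partition_t partitions[] = {
    { 0, ENV_GPOS, "gpos_hmi.img"      },  /* operator interface        */
    { 1, ENV_RTOS, "rtos_control.img"  },  /* deterministic control     */
    { 2, ENV_RTOS, "rtos_legacy.img"   },  /* migrated legacy code      */
    { 3, ENV_GPOS, "gpos_services.img" },  /* networking, storage, etc. */
};

int main(void)
{
    for (size_t i = 0; i < sizeof partitions / sizeof partitions[0]; i++)
        printf("core %d -> %s (%s)\n",
               partitions[i].core_id,
               partitions[i].image,
               partitions[i].type == ENV_RTOS ? "RTOS" : "GPOS");
    return 0;
}
```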

Sharing I/O at the expense of performance

Software that hosts multiple operating environments must support virtualization of the processor's hardware interfaces so that multiple software applications can share the multicore processor's I/O without conflict. In this context, virtualization means using software to let a single piece of hardware service multiple OSs at the same time.
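
The sketch below illustrates the idea in miniature: every guest issues requests against what it believes is its own device, but a single arbitration point in the VMM forwards them to one physical device. The guest structure, function names, and stubbed device are hypothetical.

```c
/*
 * Illustrative device multiplexing: one physical device serviced on behalf
 * of several guest OSs.  The "device" is a stub that only logs; a real VMM
 * would hand the request to actual driver code and arbitrate access.
 */
#include <stdio.h>

#define MAX_GUESTS 4

typedef struct {
    int         id;     /* which virtual machine issued the request */
    const char *name;   /* guest OS label, used here for logging    */
} guest_t;

/* Stand-in for the single physical device shared by all guests. */
static void physical_device_write(int guest_id, int value)
{
    printf("device: wrote %d on behalf of guest %d\n", value, guest_id);
}

/* VMM entry point: each guest sees its "own" device, but every request
 * funnels through this one arbitration point. */
static void vmm_device_write(const guest_t *g, int value)
{
    printf("vmm: %s requests a write\n", g->name);
    /* locking / scheduling of the shared device would go here */
    physical_device_write(g->id, value);
}

int main(void)
{
    guest_t guests[MAX_GUESTS] = {
        { 0, "GPOS" }, { 1, "RTOS-A" }, { 2, "RTOS-B" }, { 3, "legacy" }
    };
    for (int i = 0; i < MAX_GUESTS; i++)
        vmm_device_write(&guests[i], 100 + i);
    return 0;
}
```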

Historically, virtual machine management software has emulated the entire underlying machine, including all the I/O devices. However, a completely virtualized machine imposes a performance penalty that the guest OS would not incur if it interacted directly with the hardware. For example, graphics-intensive applications need access to real hardware for maximum performance. A virtual frame buffer is too slow and lacks adequate features for an application that renders moving 3D images. This poses a major problem for applications such as medical imaging systems or robotic assembly machines. In such systems, the guest OS that renders the images needs direct access to the physical frame buffer and its control I/O.
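
To make the cost difference concrete, the following sketch contrasts an emulated frame-buffer write, where every access would trap into the VMM, with a direct write to hardware the guest owns. It is purely illustrative: the "frame buffer" is an ordinary array, and the VMM's work is represented only by comments.

```c
/*
 * Conceptual contrast between an emulated frame-buffer write and a direct
 * one.  Purely illustrative: the "frame buffer" is an ordinary array and
 * the VMM's work is represented only by comments.
 */
#include <stddef.h>
#include <stdint.h>

#define FB_PIXELS (640 * 480)
static uint32_t fake_framebuffer[FB_PIXELS];  /* stand-in for video memory */

/* Emulated path: in a real system each access like this would trap into
 * the VMM, which decodes the instruction, validates the address, and then
 * performs the write on the guest's behalf -- per-access overhead. */
static void emulated_fb_write(size_t offset, uint32_t pixel)
{
    /* ... trap, decode, validate ... (omitted) */
    fake_framebuffer[offset] = pixel;
}

/* Direct path: the guest owns the device, so a write is a single store. */
static void direct_fb_write(volatile uint32_t *fb, size_t offset,
                            uint32_t pixel)
{
    fb[offset] = pixel;
}

int main(void)
{
    for (size_t i = 0; i < FB_PIXELS; i++) {
        emulated_fb_write(i, 0x00FF00u);                  /* would trap  */
        direct_fb_write(fake_framebuffer, i, 0x00FF00u);  /* plain store */
    }
    return 0;
}
```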

Direct access to I/O improves responsiveness

Given this performance setback, a different approach to virtual machine management is needed to support the latest I/O hardware enhancements and yield maximum performance in deterministic processing environments. To address this problem, a Virtual Machine Manager (VMM, shown in Figure 1) assigns specific devices directly to the I/O tasks that control them. In this system, the VMM doesn't emulate the underlying machine's entire I/O interface, only those devices that are shared. For all other devices, it ensures that only authorized operating environments can access specific performance-critical I/O. For example, as shown in the diagram, the VMM ensures that the main operator display is only accessible to the GPOS, in this case Windows.

Figure 1
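
A minimal sketch of such an assignment policy appears below, assuming a hypothetical device table in which shared devices are multiplexed by the VMM and performance-critical devices are owned by exactly one virtual machine, for example the main display owned by the GPOS. All device names, VM numbers, and function names are invented for illustration.

```c
/*
 * Hypothetical device-assignment policy: shared devices are multiplexed by
 * the VMM, while performance-critical devices are owned by exactly one
 * virtual machine.  Device names, VM numbers, and functions are invented.
 */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

typedef enum { DEV_SHARED, DEV_EXCLUSIVE } dev_policy_t;

typedef struct {
    const char  *device;    /* device name                                */
    dev_policy_t policy;    /* multiplexed by the VMM, or direct-assigned */
    int          owner_vm;  /* owning VM for exclusive devices, else -1   */
} dev_assignment_t;

static const dev_assignment_t table[] = {
    { "disk",        DEV_SHARED,    -1 },  /* multiplexed among all guests */
    { "enet0",       DEV_SHARED,    -1 },  /* enterprise Ethernet          */
    { "display0",    DEV_EXCLUSIVE,  0 },  /* main operator display: GPOS  */
    { "motion_ctrl", DEV_EXCLUSIVE,  1 },  /* response-critical I/O: RTOS  */
};

/* Returns true if vm_id may touch the named device directly. */
static bool vmm_access_allowed(int vm_id, const char *device)
{
    for (size_t i = 0; i < sizeof table / sizeof table[0]; i++) {
        if (strcmp(table[i].device, device) == 0) {
            if (table[i].policy == DEV_SHARED)
                return false;              /* must go through the VMM */
            return table[i].owner_vm == vm_id;
        }
    }
    return false;                          /* unknown device: deny    */
}

int main(void)
{
    printf("VM 0 -> display0: %s\n",
           vmm_access_allowed(0, "display0") ? "allowed" : "denied");
    printf("VM 1 -> display0: %s\n",
           vmm_access_allowed(1, "display0") ? "allowed" : "denied");
    return 0;
}
```

Denying access to unlisted devices by default is the safer choice in a sketch like this, since a device missing from the table cannot silently bypass the policy.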

This notion of assigning I/O exclusively to a specific virtual machine is essential to guaranteeing real-time responsiveness. Access to response-critical hardware must be restricted to the RTOS that controls the hardware; likewise, access to legacy I/O interfaces should be restricted to the corresponding legacy application software.

Virtualization enables legacy code migration

Running a legacy RTOS in a virtual real-time machine on its own processor core enables legacy real-time software to be migrated from obsolete hardware to modern embedded platforms. Because I/O can be virtualized, it is possible to simulate old hardware devices, which minimizes the need to rewrite proven software. For example, a VMEbus system can be converted to a less expensive Single-Board Computer (SBC) system by intercepting I/O requests to legacy VMEbus I/O and redirecting them to equivalent onboard I/O devices.
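
The sketch below shows the interception idea under stated assumptions: a hypothetical trap handler catches writes to the address window the legacy code expects and re-issues them against an equivalent onboard device. The addresses, window size, and handler interface are invented for the example.

```c
/*
 * Illustrative legacy I/O redirection: accesses to an old VMEbus-style
 * register window are intercepted and re-issued against an equivalent
 * onboard device.  Addresses, window size, and the handler interface are
 * invented for this example.
 */
#include <stdint.h>
#include <stdio.h>

#define LEGACY_VME_BASE   0x00F00000u  /* where the legacy code expects I/O */
#define LEGACY_VME_SIZE   0x1000u
#define ONBOARD_DEV_BASE  0xFE000000u  /* modern onboard replacement        */

/* Stand-in for a real hardware write; here it only logs the access. */
static void onboard_write(uint32_t addr, uint32_t value)
{
    printf("onboard: write 0x%08x -> 0x%08x\n",
           (unsigned)value, (unsigned)addr);
}

/* Invoked by the VMM when the legacy guest touches the emulated VME window. */
static void vme_trap_handler(uint32_t legacy_addr, uint32_t value)
{
    if (legacy_addr >= LEGACY_VME_BASE &&
        legacy_addr <  LEGACY_VME_BASE + LEGACY_VME_SIZE) {
        uint32_t offset = legacy_addr - LEGACY_VME_BASE;
        onboard_write(ONBOARD_DEV_BASE + offset, value);  /* redirect */
    }
}

int main(void)
{
    /* The legacy software still writes to the address it has always used. */
    vme_trap_handler(LEGACY_VME_BASE + 0x10u, 0xCAFEu);
    return 0;
}
```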

An effective VMM system distinguishes resources that can be multiplexed by the VMM from those that must be exclusive to a virtual machine. For example, devices like the disk and an enterprise Ethernet interface can be multiplexed and shared among all virtual machines. However, when determinism and performance are more important than equal access, the virtualization software should isolate resources for use by a specific virtual machine and its guest OS.
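
The routing decision can be pictured as in the sketch below, where requests against multiplexed resources pass through the VMM while requests against isolated resources go straight to the owning guest. The resource names and classification table are hypothetical.

```c
/*
 * Sketch of the routing decision: requests against multiplexed resources
 * pass through the VMM, while requests against isolated resources go
 * straight to the owning guest.  All names are hypothetical.
 */
#include <stdio.h>
#include <string.h>

typedef enum { RES_MULTIPLEXED, RES_ISOLATED } res_class_t;

typedef struct {
    const char *name;
    res_class_t cls;
} resource_t;

static const resource_t resources[] = {
    { "disk",        RES_MULTIPLEXED },  /* fair sharing is acceptable    */
    { "enet0",       RES_MULTIPLEXED },  /* enterprise network interface  */
    { "fieldbus0",   RES_ISOLATED    },  /* determinism outweighs sharing */
    { "framebuffer", RES_ISOLATED    },  /* performance-critical graphics */
};

static void route_request(const char *name)
{
    for (size_t i = 0; i < sizeof resources / sizeof resources[0]; i++) {
        if (strcmp(resources[i].name, name) == 0) {
            if (resources[i].cls == RES_MULTIPLEXED)
                printf("%s: multiplexed through the VMM\n", name);
            else
                printf("%s: isolated, accessed directly by its owner\n", name);
            return;
        }
    }
    printf("%s: unknown resource, access denied\n", name);
}

int main(void)
{
    route_request("disk");
    route_request("fieldbus0");
    return 0;
}
```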

Benefits of combining independent subsystems

Because a multicore chip can host multiple operating environments, systems that previously required multiple discrete computing modules can now be combined in a single hardware environment. By reusing proven legacy applications and supporting faster communication and coordination between RTOS and GPOS subsystems, this technology can decrease costs, improve reliability and robustness, and save design, manufacturing, and maintenance resources.

Paul Fischer is a senior technical marketing engineer at TenAsys Corporation in Beaverton, Oregon. He has more than 25 years of experience building and writing about real-time and embedded systems in a variety of engineering and marketing roles. Paul has an MSE from UC Berkeley and a BSME from the University of Minnesota.

TenAsys
503-748-4720
info@tenasys.com
www.tenasys.com
