As software content continues to grow in importance and complexity, the industry is facing challenges presented by multiple heterogeneous processors with much tighter communications than in the past. To ensure quick time to market for high-quality software, developers need a high-performance, system-level virtual prototype of the hardware, on which software can be designed, implemented, and tested. While previous prototypes have been too slow or arrived too late in the development cycle, the recently announced Open Virtual Platforms (OVP) initiative enables both early and fast virtual prototype availability.
Electronic Design Automation (EDA) flows are built on the fundamental premise that models are interoperable and freely interchangeable among vendors, meaning that models can be written or obtained from anywhere and be accepted by any vendor's tools. These features have been elusive for the abstract models necessary to support high-performance prototypes. Because of this, EDA has failed to deliver a system-level virtual prototype that provides the right levels of capability and speed of execution.
Major changes happening in both the hardware and software worlds will soon make it impossible to construct systems without an abstract model. By adopting reuse, designers now essentially assemble complex embedded systems from prebuilt blocks, much like LEGO bricks. Processor complexity has hit a wall: further performance gains come only at the expense of huge power increases, so most systems today utilize multiple heterogeneous processors rather than one central processor. As system functionality continues to grow, software must cope with the transition to a multiprocessor world. With all of these changes, designers cannot continue building systems without a viable system-level model on which this functionality and architecture can be designed and verified.
Some companies have attempted to bring the hardware and software communities together by providing virtual hardware models that can be used for software development. For example, Seamless from Mentor Graphics substituted Instruction Set Simulator (ISS) models for each processor and integrated them into a conventional Register Transfer Level (RTL) simulation environment. This model aided driver debugging but lacked sufficient performance for anything else. The Seamless product also included several performance boosters that virtualized the host memory system, which extended its usage into some low-level operating system areas.
In later years, faster models such as C or SystemC models replaced the RTL models. Although these models provided better performance, complex systems still simulated too slowly for mainstream software usage.
The industry has spent considerable time and effort constructing virtual platforms based on SystemC. Examples include platforms created and proliferated by CoWare and the proposed Eclipse Virtual Prototyping Platform (VPP) project. These prototypes provide a flexible and adaptable platform on which bus traffic, power, performance, and many other implementation attributes can be analyzed. While much faster than the RTL prototypes discussed earlier, these prototypes perform at levels that keep them in the domains of hardware verification and firmware development.
In addition, SystemC has failed to solve the model interoperability problem, an issue that the Open SystemC Initiative (OSCI) Transaction-Level Modeling (TLM) group is trying to rectify. The group's latest attempt has not impressed many in the industry, as some have called the effort "too little too late." Furthermore, this proposed standard only addresses memory-mapped interfaces, limiting its ability to define a complete system-level prototype.
Other companies such as Virtutech and VaST Systems have forsaken the standards arena and used custom languages and tools to create faster models of processors, memory systems, and some aspects of hardware. While these companies have successfully created prototypes with higher performance, they suffer from the problems of model availability and proprietary formats.
Changing needs and increasing complexity
Most prototypes today include timing, which is essential for hardware and architecture verification as well as low-level driver testing. But timing information slows down the prototype. For the software team handling applications development, timing information is unnecessary: in an untimed model, time advances simply as each processor executes instructions, and events still occur in the correct order for each thread.
To work reliably, multiprocessor applications must perform synchronization that does not depend on timing. Thus, a system-level model for the software community can dispense with timing altogether, relying instead on sequential order of execution and proper synchronization between threads. Synchronization is performed using semaphores, handshakes, or other mechanisms that ensure the two software threads that need to communicate are both in the necessary state for exchanging data.
As time progresses, developers are not as concerned about how a single block or an isolated algorithm functions as they are about controlling and coordinating blocks and algorithms to form a complete multifunction system. This additional capability leads to increased complexity. Total system complexity is proportional to the square of the number of independent nodes that communicate. These nodes can communicate with each other and collaborate to perform the total function. By implication, each of those nodes performs an independent task or coordinates with others to fulfill a more complex task. With the advent of multiprocessor Systems-on-Chip (SoCs), software has now become truly multinodal because threads can execute in a fully concurrent manner and interact with each other in real time.
Multiprocessor software demands
In the past, cross-compiling the code onto the host was quick and easy; however, this does not hold true for multiprocessor software. Even though current desktops now have two or four processors, they provide only a limited view into how software will operate or perform on the actual embedded hardware, which might have special communications between the processors or require heterogeneous processors. Multiprocessor software needs a more accurate prototype to investigate application communications and synchronization.
At the other end of the scale, many companies utilize physical prototypes to conduct software verification. While these prototypes operate at near real-time speeds and have accurate timing, they arrive too late in the development cycle for problems found in the software to drive necessary changes in the hardware. With the introduction of multiprocessor systems, it is more difficult to see what each processor is doing in real time, and operations such as single-stepping are almost impossible. Designers need a platform that provides the same level of performance but is available earlier in the design cycle.
OSCI maintains the SystemC language and provides a free simulator. While these offerings appear beneficial, they have in fact stifled commercial advancements. In addition, SystemC has failed to solve the model interoperability problem discussed earlier.
Imperas recently launched the OVP initiative to promote the open virtual platform concept. OVP encourages developers to adopt the new way of developing embedded software, especially for SoC and multiprocessor SoC platforms. The company took a different approach with OVP and OVPsim by first making the interface available to the public, thus addressing the model interoperability problem. The company offers several models that demonstrate the interface's capabilities as well as a Windows platform simulator for developers to build and debug models.
OVP comprises four C interfaces, as shown in Figure 1.
ICM ties together system blocks, such as processors, memory subsystems, peripherals, and other hardware blocks. ICM is a C interface that produces an executable model when compiled and linked with each of the models and some object files. Given that it is standard C code, any C compiler can be used to create the model. The ICM interface also allows memory images to be defined so that programs or data can be preloaded into the system model.
VMI is the virtual machine or processor interface that allows the processor model to communicate with the kernel and other components. VMI is essentially the heart of the high-performance execution provided by OVP. OVP uses a code-morphing approach with a just-in-time compiler to map the processor instructions into those provided by the host machine. In between is a set of optimized opcodes into which the processor operations are mapped; OVPsim then either interprets these opcodes or compiles them into native machine capabilities. This differs from the traditional ISS approach, which interprets every instruction. VMI also enables a form of virtualization for capabilities such as file I/O, which allows direct execution on the host using the standard libraries provided.
PPM, the peripheral modeling interface, is similar to the fourth interface, BHM, which is intended for more generalized behaviors. These models run in a second portion of the simulator called the Peripheral Simulation Engine. OVPworld states that "this is a protected runtime environment that cannot crash the simulator." It does this by creating a separate address space for each model and restricting communications to the mechanism provided by the API. The principal difference between the two interfaces is that the PPM interface understands buses and networks. It is thus similar to the OSCI TLM interface proposal in terms of functionality. The BHM more closely resembles a traditional behavioral modeling language with process activation and the ability to wait for time or a specific event.
Several different processor models and prepackaged demos are available at the OVPworld website (http://www.ovpworld.org). A free simulator is available for developers to create their own platforms. Table 1 shows the performance results obtained for each of the cores running various benchmarks.
The cornerstone of hardware/software virtual prototypes
OVP has the potential to provide a true system-level virtual prototype for both hardware and software development. It is poised to become the first general-purpose abstract modeling system that will form the cornerstone of complete flows into the hardware and software communities. While this has been accomplished before in specialized areas such as DSP designs, it has never been solved in the more general case. OVP has enabled the commercial market for these prototypes, meaning that it could garner more commercial attention than SystemC. If successful, OVP will address the model interoperability problem and thus benefit the entire industry.
- Klein, Russ. "Hardware Software co-verification." Mentor Graphics white paper.
- Harris, David; Stokes, DeVerl; and Klein, Russ. "Executing an RTOS on simulated hardware using co-verification." Mentor Graphics white paper.
- Andrews, Mike. "Managing design complexity through high-level C-model verification." Mentor Graphics white paper.
- Serughetti, Marc. "Virtual Platforms for Software Development - Adapting to the Changing Face of Software Development." CoWare white paper.
- Eclipse Virtual Prototyping Platform (VPP)
- Hellestrand, Graham. "Systems Architecture: The Empirical Way - Abstract Architectures to 'Optimal' Systems." VaST white paper.