Although capable of running embedded software through emulation and simulation, traditional electronic design automation tools do not focus on the critical concerns of embedded systems design. This deficiency, in addition to software developers’ unfamiliarity with hardware tools, requires a new approach. Software developers can interact with hardware earlier by using the same native tools they rely on to develop applications for FPGA prototypes and final silicon. This allows for early investigation, which improves hardware design and accelerates software debug and bring-up for embedded systems.
Long gone are the days when embedded software development commenced when hardware was ready. The ever-increasing size and complexity of software necessitate an earlier starting date to have a chance of shipping on time. Companies must ensure proper software integration ahead of silicon tape-out given the investment and risk associated with software development. Ever-shortening time-to-market windows further exacerbate these challenges.
The only solution is to look for ways to execute, and hence, test and debug software before the final target hardware is available. Development often must start in earnest even before the hardware design is finalized. One way to improve hardware design and verification and accelerate software bring-up is to deploy the application on hardware that is either virtual or emulated.
There is no single, precise definition of the term “virtual prototype.” For the purposes of this article, the term will be used to describe any environment in which embedded code may run, thus enabling useful development prior to the availability of actual target systems. Let’s take a closer look at the main possibilities.
Native code running on a PC
It’s an obvious first step to compile and run code on a PC. Tools are readily available, cheap, or even free, and a PC offers high levels of functionality. This environment is fine for testing algorithms and basic logic. The code will probably run faster on a PC compared to the real target, as a PC’s CPU is likely to be more powerful than an embedded processor.
Code timing data is of little value: apart from the difference in clock speed, the instruction mix on an x86 processor is very different from that of most embedded devices. And as soon as the code needs to interact with hardware or the Real-Time Operating System (RTOS), this execution environment stops being useful.
Native code run with peripheral/system models
Most RTOS manufacturers provide a host execution environment that enables either a special version of the RTOS to be run or its functionality to be emulated under Windows. This, along with a means of providing functional models of peripherals, enables further progress to be made. Timing is still misleading, however.
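The idea behind a host-side functional peripheral model can be sketched in C. In this hypothetical example (the UART register address and build flag are illustrative assumptions, not from any real part), driver logic writes to a memory-mapped transmit register on the target, while a native build substitutes a model that captures the output so the driver above it can be exercised on a PC:

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical sketch: on the real target, the driver writes to a
 * memory-mapped UART data register. In a native host build, the register
 * is replaced by a functional model that captures output in a buffer. */

#ifdef TARGET_BUILD
#define UART_TX (*(volatile unsigned char *)0x40001000u) /* assumed address */
#else
/* Host-side functional model of the UART transmit register. */
char uart_log[256];
size_t uart_len;
void uart_model_write(unsigned char c) {
    if (uart_len < sizeof uart_log - 1)
        uart_log[uart_len++] = (char)c;
}
const char *uart_model_output(void) { return uart_log; }
#endif

/* Driver code under test: identical on host and target. */
void uart_puts(const char *s) {
    while (*s) {
#ifdef TARGET_BUILD
        UART_TX = (unsigned char)*s++;
#else
        uart_model_write((unsigned char)*s++);
#endif
    }
}
```

The point is that only the lowest access layer changes between host and target, so everything above it gets tested early.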
Execution on an evaluation board
Most semiconductor vendors provide low-cost evaluation boards to facilitate fast deployment of their CPUs. Commercial RTOS products such as Mentor Embedded’s Nucleus RTOS may be available preconfigured for such boards, thus enabling rapid progress.
This execution environment is attractive, as the CPU speed and instruction mix are likely to be very close to the final target, which makes the testing of time-sensitive code viable. The accuracy of such an environment depends upon the similarity of the peripheral devices to those on the target.
Use of an Instruction Set Simulator
Although executing code on a real chip seems attractive, it has the drawback that extra code must be added in order to gain visibility into some software behavior. This is called “instrumenting” the code. An alternative approach is to use an Instruction Set Simulator (ISS), which simulates code execution on an instruction-by-instruction basis. An ISS can run close to real-time speed while offering precise, fully visible code execution. In effect, real time can be stopped, because the ISS tracks the clock cycles consumed during simulation.
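What instrumentation means in practice can be sketched as follows. This is a minimal hypothetical example (the macro and function names are illustrative): a TRACE macro is inserted into functions of interest so their execution becomes observable. On real hardware such a macro might write to a spare UART or a trace buffer; here it simply counts how often each trace point runs.

```c
/* Hypothetical instrumentation sketch: TRACE(id) records that trace
 * point (id) was reached. On target hardware this might emit a trace
 * message; here it just increments a per-site counter. */

unsigned long trace_hits[8];             /* one counter per trace point */
#define TRACE(id) (trace_hits[(id)]++)

int compute_checksum(const unsigned char *buf, int len) {
    TRACE(0);                            /* trace point 0: entry */
    int sum = 0;
    for (int i = 0; i < len; i++)
        sum += buf[i];
    TRACE(1);                            /* trace point 1: exit */
    return sum & 0xFF;
}
```

The drawback the article notes is visible here: the instrumentation itself perturbs code size and timing, which is exactly what an ISS avoids.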
Most ISS products allow some kind of functional peripheral modeling, which allows significant progress to be made with software development.
An ISS with Hardware Description Language models of peripherals (co-simulation)
Hardware is designed using a Hardware Description Language (HDL) such as VHDL or Verilog. Designers routinely use simulators to verify their HDL designs, and many of today’s development tools merge an ISS with an HDL simulator. This enables code to be executed in an accurate CPU environment that interacts with what appears to be real hardware. The software developer can use the HDL models of the final target system to develop software components such as drivers and boot code that interact closely with the hardware.
The downside of co-simulation is that greater precision comes at the cost of reduced execution speed.
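The kind of hardware-dependent driver code that co-simulation lets developers exercise early can be sketched like this. The example is hypothetical: a routine polls a device status register until a READY bit is set, with the register read routed through a function pointer so a stand-in model can play the role the HDL peripheral model would play in co-simulation.

```c
#include <stdint.h>

/* Hypothetical driver sketch: poll a status register for a READY bit.
 * The read is injected via a function pointer so that a stand-in model
 * (below) can take the place of the simulated HDL peripheral. */

#define STATUS_READY 0x01u

typedef uint32_t (*reg_read_fn)(void);

/* Returns 0 on success, -1 if the device never became ready. */
int wait_for_ready(reg_read_fn read_status, int max_polls) {
    for (int i = 0; i < max_polls; i++) {
        if (read_status() & STATUS_READY)
            return 0;
    }
    return -1;
}

/* Stand-in for the simulated peripheral: ready after three reads. */
int poll_count;
uint32_t fake_status_read(void) {
    return (++poll_count >= 3) ? STATUS_READY : 0u;
}
```

In a real co-simulation flow the read would instead cross into the HDL simulator, so the same polling logic is validated against the actual register behavior of the design.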
An HDL model of the complete system, including CPU
It would seem logical that the most accurate virtual prototype would be an HDL model of the complete system, including the CPU and peripherals. Three reasons explain why this is not really the case:
- The code execution speed on such a model would be glacial — far too slow to get anything useful done.
- An HDL model of the CPU is unlikely to be available.
- A carefully designed ISS gives up little accuracy, so substituting one for an HDL CPU model has no real downside while raising performance to a useful level.
An ISS with SystemC models of peripherals
To achieve simulation speeds high enough for software execution, the system can be modeled in a higher-abstraction language such as SystemC (a C/C++ class library). Modeling at higher abstraction levels uses loose or approximate timing, which is adequate for software execution and performance analysis.
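The loosely timed style that makes this speed possible can be illustrated with a small sketch in plain C (SystemC itself is a C++ library; the costs and function names here are illustrative assumptions). Instead of simulating every clock cycle, each transaction simply advances a virtual clock by an approximate cost:

```c
#include <stdint.h>

/* Hypothetical sketch of loosely timed modeling: each transaction charges
 * an approximate time cost to a virtual clock rather than being simulated
 * cycle by cycle. Accuracy is traded for the speed needed to run real
 * software loads. The cost figures are assumed, not measured. */

uint64_t virtual_ns;   /* simulated time, in nanoseconds */

enum { COST_REG_ACCESS_NS = 10, COST_MEM_BURST_NS = 80 };

uint32_t model_reg_read(uint32_t addr) {
    virtual_ns += COST_REG_ACCESS_NS;   /* one coarse quantum per access */
    (void)addr;
    return 0;                           /* functional value omitted here */
}

void model_mem_burst(int beats) {
    /* One approximate charge for the whole burst, not per-cycle timing. */
    virtual_ns += (uint64_t)COST_MEM_BURST_NS * (uint64_t)beats;
}
```

Because timing is charged in coarse quanta, millions of simulated instructions per second become feasible, which is what software execution and performance analysis require.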
The virtual prototyping technologies discussed thus far can be plotted on a graph of code execution speed against precision and essentially yield a straight line (see Figure 1). Developers can choose from a range of possibilities: fast, abstract simulation at one extreme and slow, exact simulation at the other. However, one technology bucks this trend and departs from the straight line.
Although HDL simulation can be sped up simply by using a more powerful desktop computer, this approach has limits, and designers always want more. The response from the Electronic Design Automation (EDA) community was to develop emulation. An emulator is specialized hardware that, in effect, offers a dedicated environment to run an HDL simulation, typically built from FPGAs.
An integrated platform built with an ISS, SystemC models, and an emulator that simulates some of the peripheral hardware breaks the mold and provides a precise, high-performance execution environment. Running a virtual target and emulation offers much deeper visibility into hardware and software execution threads and enables more efficient debug as well as system performance analysis.
Embedded software developers traditionally have focused on getting their code to function correctly. At the highest level of abstraction, this results in the device responding to stimuli in a predictable fashion in line with design specifications. This has not changed, but the developer’s brief is becoming wider. The most significant addition to the software developer’s workload is the consideration of power.
Low-power design is topical for several reasons. While this historically has been a hardware issue, today’s complex designs offer numerous opportunities for power consumption to be tuned according to the system’s current state, software, and real-time context. That state is determined by the software; hence, power management is becoming a software issue.
It is a tall order to develop and debug power management code ahead of hardware availability using a virtual prototype, but that is exactly what is required. Of course, it is all possible in principle; hardware simulation can yield power consumption figures, and the actual power consumed by a CPU can be measured. It is simply a question of communicating this information to the software developer in a meaningful way.
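One way a virtual prototype could communicate power information to the software developer can be sketched as follows. This is a purely hypothetical model (the state names and draw figures are illustrative, not from any real device or tool): each power state carries an assumed current draw, and the model integrates it over simulated time so power-management code can be evaluated before hardware exists.

```c
#include <stdint.h>

/* Hypothetical power-annotation sketch: each power state has an assumed
 * draw in milliwatts, integrated over simulated time into an energy
 * estimate. All figures are illustrative assumptions. */

typedef enum { PSTATE_RUN, PSTATE_IDLE, PSTATE_SLEEP } pstate_t;

static const unsigned draw_mw[] = { 300, 40, 2 };  /* assumed per-state draw */

typedef struct {
    pstate_t state;
    uint64_t energy_uj;   /* accumulated energy, microjoules */
} power_model_t;

/* Account for time spent in the current state, then switch states. */
void power_step(power_model_t *pm, uint64_t ms_in_state, pstate_t next) {
    /* mW x ms = microjoules */
    pm->energy_uj += (uint64_t)draw_mw[pm->state] * ms_in_state;
    pm->state = next;
}
```

A running energy total like this gives the software developer immediate feedback on whether a power-management policy (for example, how aggressively to enter sleep) actually pays off.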
The way forward
Any thought of software and hardware development being separate activities must be dismissed. The good news is that System-on-Chip (SoC) producers now recognize the need for embedded software development ahead of silicon. The bad news is that although traditional EDA hardware tools can run embedded software through emulation and simulation, they do not focus on the critical concerns of embedded systems design, including operating system context, multicore and thread handling, and caching considerations.
An integrated approach is needed: tools engineered to work in a well-coordinated fashion and to present information in a way that is familiar to both hardware and software teams. Software developers must be able to interact with hardware earlier using the same native tools they use to develop applications that will run on FPGA prototypes and final silicon. One such unified approach is available in the Mentor Embedded Platform, which incorporates familiar Mentor Graphics technologies such as virtual prototyping with Vista hardware debug and analysis and the Sourcery CodeBench Integrated Development Environment (IDE) for software development. By using this integrated embedded platform for early software development, developers can conduct performance analysis with virtual and emulated hardware, as well as investigate cache, process, thread, and core activities before silicon is available.
This early cross-discipline investigation improves hardware design and accelerates software debug and bring-up for SoCs and embedded systems. Software developers and hardware engineers alike can agree this is a move in the right direction.
Mentor Graphics Embedded Software Division