Use Transaction-Level Models to ensure hardware and software are in sync

December 01, 2011

SystemC-based Transaction-Level Models (TLMs) ease communication and synchronization between software and hardware design teams.


The consumer and wireless communications markets are more competitive than ever. The ongoing battle between aggregation and disaggregation of companies is in full swing. One example of aggregation is a decision to own more of the vertical design chain by bringing chip design in-house. This has helped companies like Apple differentiate by controlling more of the overall product design rather than being limited to the off-the-shelf chips available to everyone else.

While Apple has demonstrated the potential reward of vertical differentiation, this approach does pose significant risk, whether a company has experience designing chips or not. Specifically, how does the software team develop software that works with the hardware as shipped?

On the other side of the equation, complete disaggregation is enabled by software abstraction layers like Google’s Android operating system. It has somewhat democratized the design space, allowing all system companies to participate and differentiate using software. Android allows semiconductor vendors to participate equally by providing supporting hardware. Again, the way the software works with the hardware determines product success.

Traditional solutions to this problem do not work in today's market. Companies used to be able to start software development based on specifications and wait for chip prototypes to become available for testing. That works if the software is very simple, independent of the hardware, and has a straightforward specification, but not with today's consumer electronics, which require everything to be connected.

Furthermore, waiting a long time to begin testing pushes the debug cycle out far too late in the schedule. Many companies addressed this in recent years by moving to standard off-the-shelf chips, but that approach limits the ability to differentiate. What if you want to add a power-saving sleep mode but there’s no way to shut down the chip?

In an aggregated scenario, companies are looking to differentiate not only in software and industrial design, but also in electronic hardware. Embarking on a chip design project poses its own risks; couple that with embedded software development, and overall project risk compounds. Most companies are careful enough to spend ample time up front architecting the system, testing it, partitioning it into software and hardware, and specifying the behavior of both. But once each team begins designing, implementation assumptions are made, bugs are introduced, and features are added.

The scenario is even worse in a disaggregated world, as the responsibility now spans across company borders. Companies from the systems and semiconductor world might decide to work together to optimize hardware/software interaction and create a chip optimized for system needs. Even if there are constant synchronization meetings, design changes will sneak in unbeknownst to the software team and might not be seen until the first time the software is run on the actual hardware. This cycles back to the problem of the hardware not being available soon enough. How do engineers solve this conundrum?

A golden model for prototyping

Virtual prototypes (or virtual platforms) of hardware that come in the form of software models give the software team a model of the system hardware earlier in the process. This enables developers to begin testing against a model of the hardware specification. However, it is only a model of the specification. Most hardware design today starts with engineers reading and interpreting the specification, then writing low-level Register-Transfer Level (RTL) models in a hardware description language such as Verilog to begin the verification and implementation process. Due to the factors mentioned previously, the hardware's behavior will likely diverge from the specification.

The solution is to use a common “golden model” on which the software team can develop and with which the hardware team can begin their implementation. This is now possible with the availability of the Open SystemC Initiative (OSCI) Transaction-Level Modeling (TLM) 2.0 standard.

In short, SystemC is a C++ class library that enables hardware design in C++ by modeling hardware data types and concurrency. Because the hardware is modeled in C++, that same model can be compiled and run by the software team. The TLM extensions are important because they abstract away the signal-level protocol details the hardware needs to communicate properly with the system bus; carrying all of those details makes a model far too slow for running software. TLM raises them to higher-level transactions that can be mapped to detailed hardware during high-level synthesis. The sketch below shows what this abstraction looks like in practice.
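
As a concrete, purely illustrative example, the following minimal sketch uses the standard TLM-2.0 blocking transport interface to model a device with a single memory-mapped register. The module and register names (RegBlock, SwStub, ctrl_reg) are hypothetical and not drawn from any particular product; a real virtual prototype would model many such blocks behind a bus model:

  #include <iostream>
  #include <cstdint>
  #include <systemc>
  #include <tlm>
  #include <tlm_utils/simple_initiator_socket.h>
  #include <tlm_utils/simple_target_socket.h>

  // Hardware side: a device with one memory-mapped control register,
  // modeled as a TLM-2.0 target. One b_transport call stands in for the
  // entire signal-level bus handshake.
  struct RegBlock : sc_core::sc_module {
    tlm_utils::simple_target_socket<RegBlock> tsock;
    uint32_t ctrl_reg;  // hypothetical control register

    SC_CTOR(RegBlock) : tsock("tsock"), ctrl_reg(0) {
      tsock.register_b_transport(this, &RegBlock::b_transport);
    }

    void b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time& delay) {
      uint32_t* data = reinterpret_cast<uint32_t*>(trans.get_data_ptr());
      if (trans.get_command() == tlm::TLM_WRITE_COMMAND)
        ctrl_reg = *data;
      else
        *data = ctrl_reg;
      delay += sc_core::sc_time(10, sc_core::SC_NS);  // modeled bus latency
      trans.set_response_status(tlm::TLM_OK_RESPONSE);
    }
  };

  // Software side: an initiator that issues a write followed by a read,
  // the way driver code would poke a register.
  struct SwStub : sc_core::sc_module {
    tlm_utils::simple_initiator_socket<SwStub> isock;

    SC_CTOR(SwStub) : isock("isock") { SC_THREAD(run); }

    void transfer(tlm::tlm_command cmd, uint32_t& data) {
      tlm::tlm_generic_payload trans;
      sc_core::sc_time delay = sc_core::SC_ZERO_TIME;
      trans.set_command(cmd);
      trans.set_address(0x0);
      trans.set_data_ptr(reinterpret_cast<unsigned char*>(&data));
      trans.set_data_length(4);
      trans.set_streaming_width(4);
      isock->b_transport(trans, delay);
      wait(delay);  // consume the modeled latency
    }

    void run() {
      uint32_t value = 0xA5;
      transfer(tlm::TLM_WRITE_COMMAND, value);
      uint32_t readback = 0;
      transfer(tlm::TLM_READ_COMMAND, readback);
      std::cout << "readback = 0x" << std::hex << readback << std::endl;
    }
  };

  int sc_main(int, char*[]) {
    SwStub sw("sw");
    RegBlock regs("regs");
    sw.isock.bind(regs.tsock);
    sc_core::sc_start();
    return 0;
  }

Note that a single b_transport call replaces what would be many cycles of address, data, and handshake signal activity at the RTL level, which is precisely why TLMs run fast enough to execute software against.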

Resolutions to high-level synthesis limitations

High-level synthesis provides the automated link between the C model and the actual hardware that gets built. This removes the human factor of the hardware designers interpreting the specification and manually writing their own model to begin building the hardware. Until recently, this had rarely been used in practice because of some key limitations that have now been addressed:

  • Quality of results: The first two generations of high-level synthesis were never able to produce hardware that matched the performance, power consumption, and area achievable by manually writing RTL. Modern high-level synthesis technology has resolved this issue.
  • Refinement methodology: The high-level virtual prototype for software development is described in SystemC TLM, but the hardware team must still refine it by adding hardware architecture details so that high-level synthesis can produce optimal hardware microarchitectures. These details are too low-level for software testing and would slow the model down, but they are important for building efficient hardware (see the sketch after this list). This methodology now exists and has been proven by early-adopter customers.
  • Verification: Until very recently, engineers lacked a mature methodology for verifying the correctness of the hardware architecture in SystemC TLM and throughout the rest of the hardware implementation flow. This was mainly because an automated path into implementation did not exist, so most verification was done at lower levels, and verification became the bottleneck in the hardware development schedule. Now that the automated path exists, a corresponding verification methodology has been developed.
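
To illustrate the refinement step described above, here is a hedged sketch of the idea, assuming a trivial scale-and-clip function as the golden behavior (the function and module names are invented for this example). The fast, untimed function is what the virtual prototype calls directly; the clocked SC_CTHREAD wrapper adds the clock, reset, and pin-level I/O that high-level synthesis needs:

  #include <iostream>
  #include <systemc>

  // The golden behavior, callable directly from a fast TLM model.
  int scale_and_clip(int sample) {
    int v = sample * 3;
    return v > 255 ? 255 : v;
  }

  // Refinement for high-level synthesis: the same behavior wrapped in a
  // clocked process with explicit reset and pin-level I/O. These details
  // matter to the hardware but would only slow down software testing.
  SC_MODULE(ScaleAndClipHw) {
    sc_core::sc_in<bool> clk, rst;
    sc_core::sc_in<int>  din;
    sc_core::sc_out<int> dout;

    void work() {
      dout.write(0);          // reset state
      wait();
      while (true) {
        dout.write(scale_and_clip(din.read()));  // reuse golden behavior
        wait();               // one sample per clock cycle
      }
    }

    SC_CTOR(ScaleAndClipHw) {
      SC_CTHREAD(work, clk.pos());
      reset_signal_is(rst, true);
    }
  };

  int sc_main(int, char*[]) {
    sc_core::sc_clock clk("clk", 10, sc_core::SC_NS);
    sc_core::sc_signal<bool> rst;
    sc_core::sc_signal<int> din, dout;

    ScaleAndClipHw dut("dut");
    dut.clk(clk); dut.rst(rst); dut.din(din); dut.dout(dout);

    rst.write(true);
    din.write(100);
    sc_core::sc_start(25, sc_core::SC_NS);  // hold reset across two edges
    rst.write(false);
    sc_core::sc_start(30, sc_core::SC_NS);
    std::cout << "dout = " << dout.read() << std::endl;  // expect 255
    return 0;
  }

The key point is that both views share the same golden behavior, so software tested against the fast function and hardware synthesized from the clocked wrapper cannot silently diverge.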

Hardware design teams are familiar with these traditional barriers to designing and verifying hardware using SystemC TLM. Most, however, are not aware that these barriers have been addressed. Those who are aware now enjoy a significant competitive advantage. They can describe their hardware much more efficiently, verify it more rapidly, and reuse it in derivative chips more easily.

Virtual platform in practice

A common model of the hardware is now available much earlier as part of a virtual platform, so hardware/software interactions can be addressed sooner. That model can be delivered as part of the bigger system, either within the company in an aggregated development scenario or across companies in a disaggregated world.

One example of the way this works in practice is captured in Figure 1, which illustrates the flow provided by the Cadence System Development Suite, an integrated set of hardware/software development platforms.


Figure 1: A hardware model is refined from concept to product within the Cadence System Development Suite.



The system concept is first described as a SystemC TLM virtual prototype. In the Cadence flow, this virtual prototype is used by the Virtual System Platform to run the software on the hardware model. In parallel, the hardware design team refines the TLM to add hardware architecture details for C-to-Silicon Compiler high-level synthesis, the beginning of the implementation process that leads to silicon. A sketch of what driver-level software running against such a model might look like follows below.
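
As a purely illustrative sketch (the Device and Driver modules, the register offsets, and the 100 ns latency are all invented for this example, not taken from any Cadence platform), the following shows driver-style software starting an operation on a TLM device model and polling its status register until the operation completes:

  #include <iostream>
  #include <cstdint>
  #include <systemc>
  #include <tlm>
  #include <tlm_utils/simple_initiator_socket.h>
  #include <tlm_utils/simple_target_socket.h>

  // Hypothetical device model: writing CMD (offset 0x0) starts an operation
  // that takes 100 ns; STATUS (offset 0x4) reads back 1 once it has finished.
  struct Device : sc_core::sc_module {
    tlm_utils::simple_target_socket<Device> tsock;
    sc_core::sc_time done_at;

    SC_CTOR(Device) : tsock("tsock"), done_at(sc_core::SC_ZERO_TIME) {
      tsock.register_b_transport(this, &Device::b_transport);
    }

    void b_transport(tlm::tlm_generic_payload& t, sc_core::sc_time& delay) {
      uint32_t* d = reinterpret_cast<uint32_t*>(t.get_data_ptr());
      if (t.get_command() == tlm::TLM_WRITE_COMMAND && t.get_address() == 0x0)
        done_at = sc_core::sc_time_stamp() + sc_core::sc_time(100, sc_core::SC_NS);
      else if (t.get_address() == 0x4)
        *d = (sc_core::sc_time_stamp() >= done_at) ? 1 : 0;
      t.set_response_status(tlm::TLM_OK_RESPONSE);
    }
  };

  // Driver-style software running on the virtual prototype: kick off the
  // device, then poll its status register until the operation completes.
  struct Driver : sc_core::sc_module {
    tlm_utils::simple_initiator_socket<Driver> isock;

    SC_CTOR(Driver) : isock("isock") { SC_THREAD(run); }

    void reg_access(tlm::tlm_command cmd, uint64_t addr, uint32_t& data) {
      tlm::tlm_generic_payload t;
      sc_core::sc_time delay = sc_core::SC_ZERO_TIME;
      t.set_command(cmd);
      t.set_address(addr);
      t.set_data_ptr(reinterpret_cast<unsigned char*>(&data));
      t.set_data_length(4);
      t.set_streaming_width(4);
      isock->b_transport(t, delay);
      wait(delay);  // consume any modeled latency
    }

    void run() {
      uint32_t v = 1;
      reg_access(tlm::TLM_WRITE_COMMAND, 0x0, v);   // start the operation
      do {
        wait(10, sc_core::SC_NS);                   // polling interval
        reg_access(tlm::TLM_READ_COMMAND, 0x4, v);
      } while (v == 0);
      std::cout << sc_core::sc_time_stamp() << ": device reports done\n";
    }
  };

  int sc_main(int, char*[]) {
    Driver drv("drv");
    Device dev("dev");
    drv.isock.bind(dev.tsock);
    sc_core::sc_start();
    return 0;
  }

Because the device's latency is expressed as simulated time rather than cycle-by-cycle signal activity, the same driver loop can run far faster here than it would against RTL, yet it exercises the same register-level contract the real hardware will present.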

If bugs are uncovered during testing, the Virtual System Platform is integrated with the Incisive Verification Platform so that debug can happen on both software and hardware. This means that issues can be addressed at their source without cumbersome firmware patches. As the hardware implementation process progresses, more detailed RTL models become available to create hardware emulation models in the Verification Computing Platform or an FPGA prototype in the Rapid Prototyping Platform.

This entire process is a successive set of refinements: it begins with a fast TLM model and adds more hardware detail as it becomes available, while maintaining runtimes that are fast enough for software development. This ultimately enables the software and hardware teams – even across company borders – to work from a common model that enables earlier communication and constant synchronization. This is the type of collaboration needed to keep pace with the innovation and delivery schedules required in today's consumer market, and it is only achievable if the hardware team evolves its design and verification methodology to encompass SystemC TLM.

Michael (Mac) McNamara is VP and general manager of system-level design at Cadence Design Systems. In the early 1990s, he helped start Chronologic, which brought compiled Verilog simulation to the world; thereafter, he cofounded SureFire Verification (which became part of Verisity) to improve the state of verification software. After Cadence acquired Verisity, Mac led the effort to improve high-level design, and currently manages Cadence’s C-to-Silicon Compiler and Virtual System Platform product lines.

Cadence Design Systems
408-348-7025 • [email protected]
LinkedIn: www.linkedin.com/company/cadence-design-systems
Facebook: www.facebook.com/pages/Cadence-Design-Systems-Inc/66598923031
Twitter: @Cadence
www.cadence.com