Strategies for verifying an FPGA design

November 01, 2014


 

The latest FPGAs can support designs with more than 20 million equivalent gates, plus processor platforms and a range of communications, digital signal processing (DSP), and other functional blocks. These devices are a far cry from the simple programmable chips of yesteryear, where a designer could quickly load a few thousand gates of logic into an FPGA and immediately see them run. Today's devices require a comprehensive verification strategy every bit as exhaustive as that for an ASIC.

Traditional FPGA verification

The early FPGA design flow consisted of entering a gate-level schematic design, downloading it onto a device on a test board, and then validating the overall system with real test data. Even with just a few thousand gates, it became clear that some form of simulation of the design prior to download provided an easier and faster method to resolve issues through early detection.

With FPGA technology improvements, more advanced design techniques were inevitable. As in ASIC design, the use of hardware description languages (HDLs) became commonplace, and the golden representation of the design shifted from gates to register transfer level (RTL) code. Advanced simulation was used to verify the design's functionality thoroughly prior to synthesis, and today all of the advanced ASIC functional verification techniques are also applied to FPGA RTL code.

However, post-synthesis FPGA verification is another story.

Fabrication dependent verification

ASIC and custom IC fabrication is expensive, time consuming, and risky. This has led to a rigorous sign-off process where the final design is tested in a number of ways to ensure it is correct. Furthermore, hardware emulation is often employed for large ICs to further test the device using real-world data and/or the software that will be run on it in production.

Of course, FPGAs are different. Because an FPGA can be quickly updated with new design code as many times as required to get it right, exhaustive sign-off and separate emulation would appear to be unnecessary.

A particularly useful feature of the FPGA is the ability to rapidly prototype designs. This has proved invaluable for high-speed verification, with FPGAs even being used to prototype designs targeted for other IC types. Indeed, some emulators utilize FPGAs as their core technology, due to this property.

In the past, it has been assumed that for large FPGAs, it is sufficient to functionally test the RTL code and perform a final check on the prototype device itself. However, now that FPGAs with many millions of equivalent gates are being utilized, new design flow requirements have changed this situation.

Large FPGA design flow issues

Two types of hardware bug can be introduced into ICs, including FPGAs. Design bugs, introduced through human error, are eliminated during functional verification. Systematic issues, on the other hand, are introduced by the automated design refinement tool chain and typically are not checked by the functional verification process. These can be hard to detect and damaging if they make it into the final device.

High-quality FPGA solutions rely on tool chain effectiveness, particularly the optimizations provided by synthesis and place and route (P&R) functions. In the FPGA fabric, the ratio of registers to available inter-register logic is fixed, so sections of the logic array are wasted if this ratio is unbalanced across the design code. As such, sequential optimizations, where the positions of flip-flops are changed relative to the logic gates, are an important FPGA synthesis and P&R capability (Figure 1).
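
To make the idea concrete, here is a minimal Python sketch (not any vendor's tool, and all names are illustrative) that models a two-input function before and after a simple retiming move. In the first version each input is captured in its own flip-flop and the AND gate sits after them; in the retimed version the gate is moved in front of a single flip-flop. Simulating both cycle by cycle shows the externally visible behaviour is unchanged while the register count drops from two to one.

```python
import random

def pre_retime(a_seq, b_seq):
    """Flip-flop on each input, AND gate after the registers (2 flip-flops)."""
    ra = rb = 0                       # flip-flop reset state
    outputs = []
    for a, b in zip(a_seq, b_seq):
        outputs.append(ra & rb)       # combinational output from registered inputs
        ra, rb = a, b                 # clock edge: capture the new inputs
    return outputs

def post_retime(a_seq, b_seq):
    """AND gate moved in front of a single flip-flop -- retimed version."""
    rc = 0                            # flip-flop reset state
    outputs = []
    for a, b in zip(a_seq, b_seq):
        outputs.append(rc)            # output comes straight from the register
        rc = a & b                    # clock edge: capture the gated result
    return outputs

random.seed(0)
a_seq = [random.randint(0, 1) for _ in range(1000)]
b_seq = [random.randint(0, 1) for _ in range(1000)]
assert pre_retime(a_seq, b_seq) == post_retime(a_seq, b_seq)
print("Same cycle-by-cycle behaviour, one flip-flop fewer")
```

It is exactly this kind of transformation that rebalances the register-to-logic ratio, and also why the one-to-one correspondence between RTL registers and netlist flip-flops disappears.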

 

Figure 1: A basic FPGA design.

These requirements have driven FPGA vendors to invest in complex, state-of-the-art synthesis technology. To engineer the highest quality designs, these tools employ extremely aggressive optimizations, a key driver of the quality of results (QoR) of the overall FPGA design.

For smaller FPGAs, systematic bugs resulting from the RTL code refinement process are relatively uncommon, and those that do occur are assumed to be discovered during the final test of the FPGA in hardware. For larger FPGAs leveraging modern design flows, this assumption has proven flawed and can lead to significant design problems.

Equivalence checking solutions for systematic bugs

The combination of synthesis and P&R tools employing aggressive optimizations is prone to systematic errors. Because these tools are sensitive to seemingly small differences in the RTL code, it is impossible to test every combination of design and tool optimization in advance. The best results are therefore achieved by ratcheting up the optimization level and then checking that no systematic errors have been introduced for the specific design at hand.

Testing the gate-level design representation in large FPGAs has become a critical requirement due to the nature of systematic design problems. Systematic issues can occur anywhere in the FPGA with little relationship to the design section under development. They often produce unexpected behavior or are triggered by unusual, corner-case scenarios, making the creation of verification tests complex and time consuming. They are irritating to debug, as often the whole design must be examined with little information on the source of the problem. Worst of all, they can easily make it into the final product, causing a post-production re-spin.

Formal verification-based equivalence checking (EC) for ASIC design exhaustively compares RTL code to the derived gate-level equivalent, specifically targeting systematic problems (Figure 2). Because the RTL code has already been fully verified, this combination represents the most effective way to guarantee design functionality.
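
As a rough sketch of the principle behind EC, the Python fragment below compares a behavioural, "RTL-like" description of a 2-bit adder against a gate-level version of the same function using a miter: the designs are equivalent if and only if the XOR of their outputs is zero for every input combination. Commercial EC tools prove this with SAT solvers or BDDs rather than enumeration; the code and names here are purely illustrative.

```python
from itertools import product

def rtl_add(a, b):
    """Behavioural 'RTL' reference: a 2-bit adder with a 3-bit result."""
    return (a + b) & 0b111

def gate_add(a, b):
    """The same adder expressed gate by gate, as synthesis might produce it."""
    a0, a1 = a & 1, (a >> 1) & 1
    b0, b1 = b & 1, (b >> 1) & 1
    s0 = a0 ^ b0                      # half adder for bit 0
    c0 = a0 & b0
    s1 = a1 ^ b1 ^ c0                 # full adder for bit 1
    c1 = (a1 & b1) | (c0 & (a1 ^ b1))
    return s0 | (s1 << 1) | (c1 << 2)

# Miter check: any input pair where the outputs differ is a counterexample.
mismatches = [(a, b) for a, b in product(range(4), repeat=2)
              if rtl_add(a, b) ^ gate_add(a, b)]
print("equivalent" if not mismatches else f"counterexamples: {mismatches}")
```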

 

Figure 2: Equivalence checking must support sequential optimizations.

For FPGA design, a new breed of EC is required that can support the advanced sequential optimizations leveraged by the latest FPGA synthesis tools. With the FPGA design flow moving flip-flops within the logical design space, standard equivalence checking cannot easily map RTL registers to gate-level flip-flops. This can be resolved by utilizing advanced formal techniques more commonly associated with property checking, a new and significant capability for EC tools that is found, for example, in OneSpin's 360 EC-FPGA. It is an absolute requirement for the effective removal of systematic errors from FPGA designs.
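
Sequential optimizations remove the one-to-one register mapping that conventional EC relies on, so the comparison must reason about state rather than individual flip-flops. As a toy illustration only (this is not how 360 EC-FPGA or any commercial tool is implemented), the Python sketch below compares two modulo-4 counters, one with binary-encoded state and one with one-hot state, so that no register mapping exists, by exploring every reachable state of their product machine and asserting that the outputs always agree.

```python
from collections import deque

def step_binary(state, en):
    """Binary-encoded modulo-4 counter (2 flip-flops)."""
    nxt = (state + en) & 0b11
    return nxt, nxt == 0b11           # output asserted when the count reaches 3

def step_onehot(state, en):
    """One-hot modulo-4 counter (4 flip-flops); same behaviour, no register mapping."""
    nxt = ((state << 1) | (state >> 3)) & 0b1111 if en else state
    return nxt, nxt == 0b1000         # output asserted when the 'count 3' bit is hot

# Breadth-first exploration of the product machine from the reset states,
# checking that the two implementations agree on every reachable state.
reset = (0b00, 0b0001)
seen, queue = {reset}, deque([reset])
while queue:
    sa, sb = queue.popleft()
    for en in (0, 1):
        na, oa = step_binary(sa, en)
        nb, ob = step_onehot(sb, en)
        assert oa == ob, f"mismatch from {(sa, sb)} with en={en}"
        if (na, nb) not in seen:
            seen.add((na, nb))
            queue.append((na, nb))
print(f"outputs agree on all {len(seen)} reachable product states")
```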

The use of EC in the FPGA flow has the following benefits:

  • Confidence that any problems observed in the final FPGA test are related to the design and are not systematic, driving a faster and easier debug process.
  • Elimination of the time-consuming need to create a complex range of tests to target systematic errors or to attempt to predict systematic error fault conditions.
  • Confidence that no systematic, corner-case bugs exist in the final design, ensuring consistency between the verified RTL code and the gate-level final design.
  • Confidence to leverage the most aggressive optimizations available without concern for the introduction of errors, leading to the highest quality design.

The use of EC has a direct bearing on final design quality, reliability, design schedule, and engineering efficiency. Not surprisingly, it is in use at many electronics companies worldwide working with large FPGAs.

FPGA implementation verification

As FPGAs have become larger and more complex, their design and functional verification has come to resemble that of an ASIC. This trend is now extending into the area of implementation verification, driven by the advanced nature of the modern FPGA design flow. EC is now a mandatory part of that flow, while retaining the inherent efficiencies of the FPGA production process.

David Kelf is Vice President of Marketing at OneSpin Solutions.

OneSpin Solutions www.onespin-solutions.com @OneSpinSolutions http://opsy.st/onespinLinkedin

 
