The functional verification space has seen more innovation than any other part of the front-end design flow, and yet the amount of time and effort spent on verification continues to grow. The problem stems from rising complexity and the fact that simulation as a technology has failed to scale ever since single-processor performance stopped improving. It is compounded by the increasing number of tasks that verification is expected to perform, such as power verification.
The electronic design automation (EDA) industry has been dealing with the transition away from a single-point tool (simulation) at the heart of the verification effort, and away from the languages (SystemVerilog) and methodologies (the Universal Verification Methodology, UVM) that were developed around it. Today, verification is a flow that has to accommodate several execution platforms, as well as subsystems that are likely to contain one or more processors that cannot easily be removed from the design for the purpose of verification. When such subsystems are themselves integrated into larger systems, verifying each piece in isolation becomes impossible.
Another problem is that existing verification methodologies focus on stimulus generation and cannot take the intended purpose of the design into account. As a result, verification has become increasingly inefficient and ineffective.
Reaching “Portable Stimulus”
When looking at a suitable replacement verification methodology, several important requirements have to be considered:
- First, it has to be able to treat verification as a flow, meaning that it has to be able to work at multiple levels of abstraction and target various execution engines, ranging from virtual prototypes through simulation, emulation, and physical prototypes to actual silicon.
- The second major requirement is that it has to be composable. If a verification model is developed for a subsystem, it must be able to be incorporated easily into a higher level model without requiring extensive rework.
- A number of other requirements address issues such as readability and coverage, along with the basic capabilities of such a system, including the ability to define sequential behaviors and constraints.
As industry demand for such a solution grew, the Accellera Systems Initiative launched an effort to define such a language in February 2015. Companies such as Breker Verification Systems and Mentor Graphics had offered commercial products in this space for some time and were able to bring their market experience to the committee. During the process, new user requirements and concerns were raised, and solutions evolved to address them.
Two years later that effort is close to the first release of a new verification language that is internally being called “Portable Stimulus.” What makes this different from previous verification languages is that this is not a model that helps with the generation of stimulus, but a definition of verification intent. Instead of concentrating on what legal combinations of inputs look like, it focuses on end-to-end functionality that should exist within a design.
The objective of Portable Stimulus is to be able to write your verification intent once and use it at all stages of silicon realization (Figure 1). With a Portable Stimulus model it is possible to generate constrained random test cases, just as in UVM, but these generated tests can be self-checking instead of requiring a separate scoreboard implementation. It is also possible to produce metrics of design intent coverage directly from the model. This differs from developing a separate functional coverage model, which only tells how much of the implementation has been exercised rather than how much of the intended design functionality has actually been tested. In addition, a single Portable Stimulus model can be used as an input to synthesize tests for a variety of target execution platforms, including UVM, simulation, emulation, and post-silicon validation.
The Portable Stimulus verification intent model continues to encapsulate the notion of randomization, which has proven able to find issues that nobody thought to write a test for, and it acknowledges that software plays an increasingly important role in system functionality. Using Portable Stimulus, the processor can therefore be treated as a resource within the design to be exploited, rather than as part of the problem. Processor models and their implementations tend to be among the most thoroughly verified pieces of a design, and they can be used as trusted agents in the verification of the blocks around them and the connectivity between them.
The Portable Stimulus verification solution
The Accellera committee responsible for the Portable Stimulus initiative decided to support two methods of developing a portable stimulus model. The first is to write models using simple C++ constructs; the second is a new domain-specific language with specialized declarative semantics. The two approaches have matching semantics, so models can be translated freely between the two forms.
However, three important aspects remain for a viable Portable Stimulus solution:
- First, Portable Stimulus models must be graph-based in order to naturally capture the intended flow of execution through the design. The graph being captured is equivalent to a Unified Modeling Language (UML) activity diagram, which is essentially a flow chart.
- Second, portable stimulus must provide an abstraction of the hardware/software interface so that, for example, a tool can map a register write to either a UVM verification IP (VIP) or a software-driven test.
- Third, portable stimulus models must be composable so that lower level models can be seamlessly combined to define higher level use cases.
The general form of the resulting solution is a tool tasked with generating a self-checking test scenario. This involves finding a random, but legal, path through the graph that satisfies the constraints provided, which in turn defines the scenario that will be run. The tool may then generate software to be compiled and executed on processor cores contained within the system on chip (SoC). In addition, there may be events that need to be injected into the SoC, and it may be necessary to synchronize these with events happening inside the SoC. This requires that a testbench be constructed that is capable of performing such functions.
This approach leaves plenty of room for innovation in the capabilities of tools and in the quality of the test cases generated for various targets. It also opens up a whole new class of tools that could surround the verification intent model. What is described here can be seen as a verification synthesis tool that takes a high-level model as input and generates specific test cases. Other tools could concentrate on coverage or on prioritizing the progress of the verification effort, and debug tools could help identify paths through a system that might result in security vulnerabilities.
System-level verification, 2017 and beyond
The future is exciting, and users have not waited for the completion of the Portable Stimulus standardization effort. Many advanced design houses have already found ways to inject this new methodology into their flows, and the whole industry continues to learn more about the task of system-level verification. Thankfully, with its support for C++ models, the standard should remain extensible well into the foreseeable future and able to address new challenges as they arise. It also has the advantage that plenty of people already know the language, so getting up to speed should be fast.
2017 could be a great year for verification.
Breker Verification Systems