Functional and performance verification of SoC interconnects

May 1, 2014 OpenSystems Media

Verifying interconnect Intellectual Property (IP) – the "glue" that holds together the cores and IP blocks in a System-on-Chip (SoC) – has become more complicated with advanced SoCs, which require special interconnect IP to perform the on-chip communication function. As a result, functional and performance verification of these SoC interconnects has taken on a new level of complexity. Tools have been developed to simplify verification while providing design engineers the ability to find and fix interconnect problems much earlier in the design cycle.

Remember the days when engineers could rely on buses to perform the on-chip communication function in chips? Those days are clearly in the past, especially as our increasingly connected world demands so much more functionality from our chips. Today’s advanced SoC calls for an interconnect to serve as the communication hub for the various IP cores within the SoC. Verifying the functionality and performance of SoC interconnects can be a complex task, given the number of masters and slaves, the different protocols, the different types of transactions, and the multi-layered topology involved. A more holistic approach using tools and technologies can simplify the process of verifying the functionality and performance of SoC interconnects.

Preventing surprises with functional verification

With functional verification, designers want to ensure that the multicore chip implements the functions needed, while handling errors in a relatively smooth manner. From a practical standpoint, designers want to verify the SoC IP blocks together with the chip’s interconnect. There are two steps here. The first is verifying that the IP blocks implement the given interface protocol correctly via verification IP, which can alert designers to any protocol violations. Verification IP monitors simulation results and performs corner-case testing against the protocol specification; during this process, verification IP with embedded assertions can automatically detect protocol violations. Furthermore, test suites and verification plans bundled with the verification IP can move the verification process quickly to closure.
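The assertion-based checking described above can be illustrated with a minimal sketch. The example below checks one AXI-style handshake rule on a recorded cycle trace: once VALID is asserted, it must stay high, and the payload must stay stable, until READY completes the handshake. The signal names and trace format are illustrative assumptions; real verification IP checks hundreds of such rules directly in simulation.

```python
# Minimal sketch of VIP-style protocol assertion checking.
# Hypothetical trace format: one dict per clock cycle.

def check_valid_stable(trace):
    """trace: list of dicts with 'valid', 'ready', 'data' per cycle.
    Returns a list of (cycle, message) protocol violations."""
    violations = []
    pending = None  # payload held while waiting for READY
    for cycle, sig in enumerate(trace):
        if pending is not None:
            if not sig["valid"]:
                violations.append((cycle, "VALID dropped before READY"))
                pending = None
                continue
            if sig["data"] != pending:
                violations.append((cycle, "payload changed before READY"))
        if sig["valid"]:
            pending = sig["data"]
            if sig["ready"]:
                pending = None  # handshake completed this cycle
    return violations

trace = [
    {"valid": 1, "ready": 0, "data": 0xA},
    {"valid": 1, "ready": 1, "data": 0xA},  # clean handshake
    {"valid": 1, "ready": 0, "data": 0xB},
    {"valid": 0, "ready": 0, "data": 0xB},  # VALID dropped: violation
]
errors = check_valid_stable(trace)  # → [(3, "VALID dropped before READY")]
```

An embedded SystemVerilog assertion expresses the same rule declaratively; the point here is only that a protocol rule is a checkable invariant over simulated cycles.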

The second step in verifying the IP blocks with the interconnect is to verify that commands and data arrive at the proper destination and in the right format. Designers will want to look out for issues such as data splitting, upsizing, and downsizing. This matters because different interfaces on the interconnect subsystem may use different protocols; for example, a data transaction that entered the interconnect as a series of APB transfers can come out as an AXI burst at the destination port. Operations such as snoop conversions, snoop propagation, snoop filter operation, and cross-cache-line accesses should also be verified. In other words, designers should be sure that the cache-coherent interconnect performs its role as the coherency manager correctly. To save remote memory access time, the coherent interconnect snoops the caches of relevant masters and, based on their responses, determines whether to return the requested data from a cache or from remote memory, updating the cache line status of the relevant masters accordingly. This behavior is defined by the coherency protocol. If the interconnect isn’t following the protocol, the system would soon enter a non-coherent state and most likely crash.
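The downsizing check mentioned above can be sketched as follows: a wide write beat entering the interconnect should emerge at a narrower port as several contiguous narrow beats carrying exactly the same bytes. The function and field layout are illustrative assumptions, not a real downsizer model.

```python
# Sketch of a data-splitting (downsizing) check: one 128-bit beat in,
# four 32-bit beats out, same bytes at contiguous addresses.

def downsize(addr, data_bytes, in_width, out_width):
    """Split one wide beat into the narrower beats a downsizer emits."""
    assert in_width % out_width == 0 and len(data_bytes) == in_width
    return [(addr + i, bytes(data_bytes[i:i + out_width]))
            for i in range(0, in_width, out_width)]

wide = bytes(range(16))                 # one 16-byte (128-bit) beat
beats = downsize(0x1000, wide, 16, 4)   # four 4-byte (32-bit) beats

# Scoreboard-style checks: reassembled narrow beats must equal the
# original wide beat, and addresses must be contiguous.
assert b"".join(d for _, d in beats) == wide
assert [a for a, _ in beats] == [0x1000, 0x1004, 0x1008, 0x100C]
```

Verification IP performs this reassemble-and-compare on every transaction that crosses a width or protocol boundary, so splitting bugs surface as mismatches rather than silent data corruption.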

Meeting bandwidth and latency targets with performance verification

Performance verification is where designers should make sure that the design will meet its targeted bandwidth and latency levels. Consider an SoC design with multiple interconnects to prevent localized traffic from affecting the rest of the device’s subsystems. Interconnect IP plays an important role here, as it can tune each port for unique bus widths, address maps, and clock speed. Usually, there are also mechanisms to adjust bandwidth and latency to tune the interconnect IP in each domain.

However, there are still instances where traffic conflicts will occur, as shown in Figure 1. How can traffic in these situations be balanced? Most systems don’t have enough main memory bandwidth to accommodate all IP blocks being active simultaneously. What’s important is preventing one IP block from dominating and overwhelming the others; otherwise, system performance degrades. Performance analysis helps here by exposing such bottlenecks before they degrade system performance.
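The fairness concern described above is typically addressed by weighted arbitration at the shared memory port. The sketch below is a toy weighted round-robin arbiter, with assumed master names and weights, showing how credits keep any one requester from monopolizing the port.

```python
# Toy weighted round-robin arbiter: masters receive grants in
# proportion to their weights, so no IP block can starve the others.
# Master names and weights are illustrative.
from collections import Counter

def weighted_round_robin(weights):
    """Generator yielding the master granted on each arbitration slot."""
    credits = dict(weights)
    while True:
        for master in weights:
            if credits[master] > 0:
                credits[master] -= 1
                yield master
        if all(c == 0 for c in credits.values()):
            credits = dict(weights)  # refill credits each round

arb = weighted_round_robin({"cpu": 2, "gpu": 1, "dma": 1})
grants = Counter(next(arb) for _ in range(400))
# cpu receives half the grants; gpu and dma a quarter each
```

Real interconnect IP offers richer quality-of-service controls (priorities, rate regulation), but the principle is the same: bound each master's share of the contended resource.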

Figure 1: Traffic Management and System Performance. In this diagram, three subsystems are attempting to access the main memory simultaneously. Performance analysis helps assess whether the SoC design needs to be reconfigured.

To analyze performance, designers need to compare bandwidth and latency measurements from different SoC architectures or different SoC use cases. This comparison involves modeling, running simulations on, and measuring performance of two or more (typically several) SoC architectures (or implementations of a specific architecture), which is not practical to do manually. After all, a manual effort would entail building testbenches around various SoC architectures under comparison. In the case of complex SoCs – where performance analyzing and tuning are most important – creating the requisite testbenches can easily take a few days for an experienced engineer and much longer for the less experienced.
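The bandwidth and latency measurements being compared reduce to simple arithmetic over per-transaction records. The sketch below computes both metrics from the kind of (start cycle, end cycle, bytes) records a simulation monitor might dump; the record format, clock rate, and numbers are illustrative assumptions.

```python
# Hedged sketch of the bandwidth/latency comparison described above,
# computed from hypothetical per-transaction simulation records.

def metrics(txns, clock_mhz):
    """Return (bandwidth in MB/s, average latency in cycles)."""
    total_bytes = sum(t["bytes"] for t in txns)
    span = max(t["end"] for t in txns) - min(t["start"] for t in txns)
    latencies = [t["end"] - t["start"] for t in txns]
    bandwidth = total_bytes * clock_mhz / span  # bytes/us == MB/s
    return bandwidth, sum(latencies) / len(latencies)

# Two runs of the same workload on two candidate configurations
run_a = [{"start": 0,  "end": 20, "bytes": 64},
         {"start": 10, "end": 40, "bytes": 64}]
run_b = [{"start": 0,  "end": 25, "bytes": 64},
         {"start": 5,  "end": 60, "bytes": 64}]

bw_a, lat_a = metrics(run_a, clock_mhz=500)  # 1600 MB/s, 25 cycles
bw_b, lat_b = metrics(run_b, clock_mhz=500)  # lower bandwidth, higher latency
```

The hard part is not this arithmetic but producing the transaction records for each architecture under comparison, which is why manual testbench construction dominates the effort.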

Five important areas of focus for performance analysis

To make performance analysis as effective and efficient as possible, there are five aspects you should strive to integrate into the process:

  1. Cycle-accurate modeling – With cycle accuracy, the logic simulation yields the same ordering of events with the same timing as will be seen in the actual chip. Cycle-accurate simulation models include the RTL-level Verilog or VHDL created during the SoC design process.
  2. Automatic RTL generation – Automatically generated interconnect RTL is a step toward creating a full SoC cycle-accurate model. To determine the combination that provides the best overall performance, designers need to be able to quickly generate multiple variations of the interconnect IP.
  3. Verification IP – As previously discussed, verification IP helps find protocol violations.
  4. Testbench generation – Generating testbenches automatically saves the weeks of development it can otherwise take to create a test environment for interconnects.
  5. In-depth analysis – The ability to gather all simulation data – design assessment, the testbench, and traffic – is necessary to debug performance problems and determine how design changes might affect bandwidth and latency.

Graphical interconnect simulation comparison

A tool has been developed that provides a graphical way to compare interconnect simulation runs, for quick and accurate assessment of interconnect performance. Cadence Interconnect Workbench helps find and fix interconnect problems earlier in the design cycle to achieve the bandwidth and latency levels that the SoC requires. Using the tool, whose flow is illustrated in Figure 2, engineers can throw aside cumbersome spreadsheets and take advantage of a GUI with built-in filters to select masters and/or slaves and the path(s) to evaluate, and to perform "what if" analyses. The GUI makes it fast and easy to see how design changes affect bandwidth and latency for the simulation results of interest. For example, engineers can compare runs to find the ideal configuration for a particular use case, or evaluate multiple use cases running on a single configuration. They can quickly see what proportion of traffic goes to each slave and what its latency distribution looks like. Live filtering and analysis features eliminate what can be a very cumbersome process with spreadsheets.
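The per-slave breakdown just described (traffic share per slave plus its latency distribution) can be sketched as a simple aggregation. The record fields are assumptions for illustration, not Interconnect Workbench's actual data format.

```python
# Sketch of a per-slave traffic-share and latency summary, the kind of
# view the GUI presents. Record fields are illustrative assumptions.
from collections import defaultdict

def per_slave_stats(txns):
    """Group transactions by destination slave and summarize latency."""
    by_slave = defaultdict(list)
    for t in txns:
        by_slave[t["slave"]].append(t["latency"])
    total = len(txns)
    return {s: {"share": len(lats) / total,
                "min": min(lats), "max": max(lats),
                "mean": sum(lats) / len(lats)}
            for s, lats in by_slave.items()}

txns = [{"slave": "ddr",  "latency": 40},
        {"slave": "ddr",  "latency": 60},
        {"slave": "ddr",  "latency": 50},
        {"slave": "sram", "latency": 12}]
stats = per_slave_stats(txns)
# ddr carries 75% of the traffic with mean latency 50 cycles; sram 25% at 12
```

Doing this interactively, with live filtering by master, slave, or path, is what replaces the spreadsheet workflow the text mentions.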

Interconnect Workbench integrates with Cadence Interconnect Validator, a verification IP component that collects all transactions and verifies the correctness and completeness of data as the data passes through the SoC interconnect fabric. Interconnect Validator connects to all of the interface-level verification IP instances (which are monitoring the correct protocol behavior of the IP blocks) and, therefore, has a deep understanding of the data and commands coming in and out of the interconnect. By matching this data, the tool can verify that the data is being delivered to the right destination. If the interconnect doesn’t follow the protocol, the tool flags an error.
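The matching just described is a scoreboard: every transaction observed entering the fabric must be matched by one leaving at the port the address map selects. The sketch below illustrates the principle; the `decode()` address map, record fields, and error strings are hypothetical, not Interconnect Validator's actual behavior or API.

```python
# Scoreboard-style routing check: match ingress transactions to egress
# ones and flag anything lost or delivered to the wrong port.
# decode() and all record fields are illustrative assumptions.

def decode(addr):
    """Toy address map: which slave port should serve this address."""
    return "ddr" if addr < 0x8000_0000 else "periph"

def check_routing(ingress, egress):
    errors = []
    remaining = list(egress)
    for txn in ingress:
        expected = decode(txn["addr"])
        match = next((e for e in remaining
                      if e["addr"] == txn["addr"]
                      and e["data"] == txn["data"]), None)
        if match is None:
            errors.append(f"lost {txn['addr']:#x}")
            continue
        remaining.remove(match)
        if match["port"] != expected:
            errors.append(f"misrouted {txn['addr']:#x} to {match['port']}")
    return errors

ingress = [{"addr": 0x1000,      "data": 0xAA},
           {"addr": 0x9000_0000, "data": 0xBB}]
egress  = [{"addr": 0x1000,      "data": 0xAA, "port": "ddr"},
           {"addr": 0x9000_0000, "data": 0xBB, "port": "ddr"}]  # wrong port
issues = check_routing(ingress, egress)  # → ["misrouted 0x90000000 to ddr"]
```

A production scoreboard also tracks ordering, response codes, and protocol conversions, but end-to-end matching is the core of the correctness check.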

Figure 2: The flow of data through Cadence Interconnect Workbench. On the left, RTL, verification IP, and traffic pattern descriptions move into the tool, which automatically generates a testbench for simulation. The tool also generates other testbenches as other variations of the SoC are generated. The performance GUI provides an overview of simulation results.

Efficient and effective interconnect verification

With incessant time-to-market pressures and increasingly complex SoC designs, it would be hard to find an engineer who doesn’t want to shave time off the design cycle. Particularly at advanced nodes, verifying SoC interconnects has become a time-consuming step. However, tools can now perform cycle-accurate performance analysis and verification of interconnects efficiently and effectively.

Nick Heaton is Distinguished Engineer at Cadence.

Avi Behar is Product Director at Cadence.
