We are in the midst of a wave of automating everyday activities and business processes using “smart” appliances. Maintaining a low power profile is important, especially for battery-powered devices. Embedded systems designers are naturally drawn to LPDRAM for its low power budget, its performance, and its smaller footprint on the board.
Devices in embedded applications use a wide mix of DRAM technology, as shown in Figure 1. LPDDR is the right solution for IoT applications given the density and speed requirements of this segment. LPDDR2 is used in a number of higher-bandwidth, higher-performance applications and, according to Micron Technology, is expected to remain in use for quite some time (Figure 1).
System-on-chip (SoC) designs make extensive use of third-party IP; surveys show that IP content in a typical SoC now accounts for 70 percent or more of the design. This includes the LPDDR memory subsystem, which communicates with off-chip dynamic RAM (DRAM).
Designing a high-performance, reliable LPDDR memory subsystem is no small feat: the DDR interface is often the highest-frequency signal in the SoC, and if it fails or is unstable, the system becomes unusable.
SoC semiconductors manufactured at the 28 nm process node are among the most cost-effective in terms of low power and performance. Managing static and dynamic variations is one of the many considerations for designers implementing an SoC in an advanced process node, and these subtle variations are becoming increasingly important.
Static variations are a consequence of the chip manufacturing process: no two devices behave exactly the same. Careful design planning and execution are needed to accommodate the small differences in behavior expected across a population of devices and to ensure that the finished product performs as expected.
The chip itself is only one piece. The package, the printed circuit board (PCB) or system substrate, and the external components that interface with the chip have their own static variations, which must also be factored into the design when considering overall system performance and reliability. A poor choice of board type can greatly reduce the yield of working systems.
When the chip is in operation it experiences dynamic variations due to fluctuations in the system environment, including temperature and voltage changes and other environmental variables that are difficult to predict. Nonetheless, a system must be designed to withstand these dynamic conditions in the field. One technique is guard-banding: building in enough margin to cover a wide range of anticipated operating conditions. However, guard-banding usually sacrifices performance for reliability, and if the wrong guard band is applied to the design specification, the system can still suffer reliability problems when operating conditions fall outside the expected norm.
The designer’s challenge is to ensure that the device or system meets its performance and reliability goals. He or she spends time testing and evaluating the system across different operating conditions, with the goal of “tuning” it so that it operates across the full range of static and dynamic variations the consumer will encounter.
An adaptive type of IP can make all the difference because it can measure the parameters critical to performance and reliability, then automatically make adjustments to keep those parameters optimized. These precise measurements and corrections are made during system initialization and again at regular intervals during system operation.
Adaptive routines run quickly, have little impact on system operation and throughput, and have enough latitude to correct for a wide range of variation. Because adaptive IP is in the chip, each system is optimized for the static variations in each of its components and the dynamic variations caused by the system environment. That means the chip continually optimizes its operation to deliver the best performance with robustness and reliability to the consumer.
(Un)predictable DDR IP
Let’s consider the DDR memory subsystem found in most SoCs as a candidate for adaptive IP. Designers can, of course, draw on a variety of signal-training routines specified within the JEDEC DDR memory standards. What they will not find there is a solution to the clock-domain crossing (CDC) problem: during a read operation, the data strobe signal (DQS) generated by the DDR SDRAM, along with the associated data, must be correctly synchronized with the SoC system clock. The relationship between the phase and latency of these clock domains is subject to static and dynamic variations and is difficult to predict or model.
Typically, designers bring a DDR subsystem to the bench and test and measure multiple systems, with multiple components, across a variety of operating corners. Once they have enough data, they decide how to set up the interface timing so that all systems are likely to perform across the tested scenarios. However, this process can take days or weeks, with no guarantee that every system will perform perfectly in every operating scenario.
The solution is DDR adaptive IP. During system initialization, the adaptive IP measures phase and latency differences between the DQS and the SoC clock and programs the interface to align the two domains for that specific system. During system operation, the adaptive IP periodically rechecks the phase and latency and, if needed, re-calibrates the timing.
Using this approach, system bring-up is automated because the adaptive IP finds the best operating point for each device and system. Adaptive IP allows the best system performance to be achieved and ensures that the system maintains stable operation under varying conditions, even when targeting low-power operation in today’s advanced semiconductor process nodes.
Adaptive IP is being broadly adopted, and we predict it will become an essential requirement as we move to future LPDRAM standards that demand ever-higher performance and an ever-smaller power footprint.