HBM2 powers the next generation of high bandwidth applications

January 26, 2018

HBM2 can achieve faster data rates across more channels, which equates to higher bandwidths, and the availability of comprehensive subsystem solutions can now mitigate many integration challenges.

Image courtesy of Open-Silicon

Recent advances in HBM2 are setting the stage for new use cases in high bandwidth applications. Not only can HBM2 achieve faster data rates across more channels, which equates to higher bandwidth, but the availability of comprehensive subsystem solutions can now mitigate many of the integration challenges that have previously plagued designers. The result is greater design flexibility and less risk for the next generation of ASICs for applications in networking, deep learning, virtual reality, gaming, cloud computing, and data centers.

HBM2 is a high performance, 3D-stacked memory solution that leverages 2.5D silicon interposer technology. A typical HBM2 system has an interposer die on which two or more interfacing dies (known as top dies) are assembled into a single package. Such a system is commonly called a 2.5D system-in-package (SiP), where the 3D-stacked memory die (HBM2) and ASIC dies interface through fine-pitch routes connecting fine-pitch micro-bumps. The resulting wide interface architecture achieves very high bandwidth, low power, and a small form factor, making it the preferred architecture for high bandwidth applications. In fact, HBM2 (x1024) offers bandwidth of up to 256 GB/s, compared to 4 GB/s for DDR3 (x16), while consuming roughly one-third the power.
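The bandwidth comparison above follows directly from interface width and per-pin data rate. A quick sanity check of the quoted figures (assuming a 2 Gbps per-pin rate for both interfaces; the helper function name here is illustrative, not from any vendor API):

```python
# Peak bandwidth (GB/s) = interface width (bits) x per-pin rate (Gbps) / 8 bits per byte

def peak_bandwidth_gbps(width_bits, pin_rate_gbps):
    """Peak theoretical bandwidth in GB/s for a DRAM interface."""
    return width_bits * pin_rate_gbps / 8

# HBM2: 1024-bit interface at 2 Gbps per pin
print(peak_bandwidth_gbps(1024, 2.0))  # 256.0 GB/s

# DDR3 x16: 16-bit interface at 2 Gbps per pin
print(peak_bandwidth_gbps(16, 2.0))    # 4.0 GB/s
```

The 64x bandwidth gap comes almost entirely from interface width, which is exactly what the 2.5D interposer's fine-pitch routing makes practical.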

While 2.5D SiPs provide many advantages in terms of area reduction, high bandwidth, lower power per pin, and smaller package size, they also present challenges in interoperability, 2.5D design, overall SiP design, packaging, test, and manufacturing. Meeting these challenges requires careful planning in the physical design of the interposer, along with signal integrity analysis, static timing analysis (STA), rail analysis, and power integrity analysis. The test and debug challenges can be addressed through built-in test and diagnostic features, such as probe pads and loop-back paths for isolating issues within the various IP subsystem components. These features not only ease test and debug, but also help with yield management and yield improvement.

One of the most notable advances enabling the development of HBM2 ASIC SiPs is the HBM2 IP subsystem, which consists of the controller, PHY, and die-to-die I/O. The IP translates user requests into HBM command sequences (e.g., activate and precharge) and handles memory refresh, bank/page management, and power management on the interface. The high performance, low latency controller leverages the HBM parallel architecture and protocol efficiency to achieve maximum bandwidth.

One such subsystem solution is available from Open-Silicon. Its IP includes a scalable, optimized PHY and the die-to-die custom I/O needed to drive the interface between the logic die and the memory die-stack on the 2.5D silicon interposer. The subsystem was silicon-proven in 16 nm FinFET technology on a 2.5D HBM2 ASIC SiP platform, which successfully demonstrated high bandwidth data transfer and interoperability between the HBM2 IP subsystem and the HBM2 memory die-stack. This particular solution achieves per-pin data rates of 1.6 Gbps/2 Gbps over interposer trace lengths of up to 5 mm. That means it can support a full 8-channel connection from a 16 nm SoC to a single HBM2 memory stack at 2 Gbps, for bandwidths of up to 256 GB/s. The company is also working on its next generation HBM2 IP subsystem in 7 nm FinFET technology, which it says will feature a 2.4 Gbps per-pin data rate and bandwidths of more than 300 GB/s.
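The quoted figures for both generations line up with the interface arithmetic. A minimal sketch, assuming the standard HBM2 configuration of eight 128-bit channels per stack:

```python
# An HBM2 stack exposes 8 independent channels of 128 bits each.
channels = 8
bits_per_channel = 128
width_bits = channels * bits_per_channel  # 1024-bit interface

# Current (2.0 Gbps/pin) and claimed next-generation (2.4 Gbps/pin) rates.
for pin_rate_gbps in (2.0, 2.4):
    bandwidth_gbps = width_bits * pin_rate_gbps / 8  # GB/s
    print(f"{pin_rate_gbps} Gbps/pin -> {bandwidth_gbps} GB/s")
```

At 2.0 Gbps per pin this gives the 256 GB/s cited for the 16 nm subsystem; at 2.4 Gbps per pin it gives 307.2 GB/s, consistent with the ">300 GB/s" claim for the 7 nm generation.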