The biggest concern of any system or SoC architect is risk and how to mitigate it. Architects are left to wonder:
- Will standards change before a product with hardened system functions hits the market?
- What if newly introduced features don’t quite address the need?
- How can the design be future-proofed?
Traditionally, a system architect would try to isolate at-risk portions of the design in programmable logic. Often, however, a standalone FPGA cannot deliver the needed performance or meet system power and cost goals. Enter the embedded FPGA (eFPGA), the SoC designer's secret weapon for meeting design goals while mitigating risk. Compared to using a standalone FPGA, adding an eFPGA to an SoC yields a more flexible design, lower power, higher performance, and lower overall system cost.
The advantages of selecting an eFPGA over a standalone FPGA are numerous. First, eFPGAs occupy smaller die areas because all of the I/O circuitry that provides chip-to-chip connectivity on a PCB is eliminated, and the embedded fabric is sized specifically to the application's requirements. Because the eFPGA's die area is minimized, the additional cost to the SoC is small.
By forgoing a standalone FPGA and embedding programmable logic as a tailored combination of look-up tables, memory, and DSP blocks, eFPGAs offer fundamental improvements in signal delay, bandwidth, latency, power, and cost. Board design becomes easier, power and cooling requirements are reduced, and system reliability improves. The system BoM improves in both cost and component count, as the discrete FPGA along with all of its supporting devices – including level shifters, voltage regulators, and bypass capacitors – is eliminated, yielding significant savings in PCB real estate.
In many instances, the system architect will define his or her own custom block functions to be included in the eFPGA. These customized blocks are integrated into the logic fabric alongside the traditional building blocks of LUTs, RAMs, and DSPs, extending the eFPGA's capability with functions optimized to decrease area and/or increase performance for targeted applications.
In artificial intelligence (AI) applications, where compute workloads are considerable, requirements on both the training and inference sides are evolving rapidly. Bringing a custom ASIC dedicated to a specific application to market takes significant resources, both financially and in time to market. By the time the chip ships, the system architect may already want to deploy an optimized version of the AI algorithm – something that is not possible post-tapeout in a fixed-function ASIC. Traditional FPGAs, even though not ideally suited to tomorrow's AI requirements, will continue to fill that gap thanks to their flexibility and programmability compared to ASICs.
System architects agree that eFPGA integration is a winning proposition that can make an SoC or ASIC adaptable to a wide range of compute-intensive applications, including AI and machine learning, 5G wireless, data centers, automotive, and high-performance computing (HPC).
Alok Sanghavi is a senior manager of product marketing at Achronix Semiconductor Corporation. Prior to joining Achronix, Sanghavi was at Toshiba, where he led the product definition of several semiconductor chipsets. He holds an MBA from the University of California, Davis, and a Master of Science degree in Electrical Engineering from New York University.