COM-HPC Scales Heterogeneous Embedded Hardware into High-Performance Edge Computing

November 24, 2020

Press Release

High-speed interfaces and provisions that allow modules to host PCIe targets mean that COM-HPC can support compute architectures ranging from Arm to GPUs to FPGAs and more.

“According to analysts, by 2023, 75 percent of data will be created outside of the data center,” said John Healy, Vice President of the Internet of Things Group and General Manager of Platform Management and Customer Engineering at Intel. “And more than 50 percent of that data will be processed, stored, and analyzed at the edge.”

In specific terms, that’s about 45 zettabytes of data, which is more than the total amount of data generated worldwide in 2019. That means data center-class performance must proliferate across edge environments. Of course, achieving that is easier said than done.

Efficiency, durability, and determinism are all factors to consider when migrating to an edge computing paradigm. They are often so top-of-mind, in fact, that it’s easy to overlook the sheer amount of processing performance that’s needed for modern applications like 5G networking, autonomous vehicles, test and measurement equipment, and even retail self-checkout systems.

“Consider today’s self-checkout point-of-sale terminal at your local grocery store. It is using multiple compute systems to run various applications such as checking the weight, identifying the bar codes, etc.,” Healy said. “The newer self-checkout terminals are integrating additional functionality such as recognizing the object scanned.

“These trends are driving the need for higher compute and memory along with the requirement to connect to higher speed I/O accelerators,” he continued. “However, current COM solutions have limited scaling to server CPUs and are not able to support newer I/O technologies such as PCIe 5.0 and higher connectivity speeds such as 25 GbE or 100 GbE.

“COM-HPC builds upon the success of the COM Express specification and addresses the compute, memory, and I/O needs of emerging use cases such as AI, 5G, and edge applications. The COM-HPC specification supports larger sizes to accommodate high-powered server-class SoCs, more memory, and more I/O,” Healy added.

Introducing COM-HPC for Edge Clients and Servers

PICMG’s COM-HPC specification represents a new class of computer-on-module (COM) technology for high-performance edge computing and edge server use cases, defining five different form factors and two pinouts: one designed for Client use cases and the other for Server deployments (Figure 1).

Figure 1. The COM-HPC specification defines a Client and Server pinout across five different module sizes.

As you can see in Table 1, the two COM-HPC pinouts offer a large number of high-speed interfaces, including 25 Gbps Ethernet and PCI Express. But what’s not represented in the table is that those PCI Express links can be PCIe 5.0, and potentially beyond.

Table 1. The COM-HPC Client and Server pinouts define a generous amount of high-speed interconnects, including PCI Express Gen 5, 25 Gigabit Ethernet, and USB4.

“The COM-HPC client modules have up to four video outputs, a lot of audio stuff, and user interface-related stuff. There is more user or video interaction on this,” explained Christian Eder, Director of Marketing at congatec and Chairman of the COM-HPC Subcommittee. “Servers are headless, and that's why the COM-HPC Server type was defined, with a maximum of 64 PCI Express lanes.

“PCI Express Gen 4, Gen 5, and even Gen 6 are on the roadmap here, but won’t be possible with COM Express,” he continued. “So in a nutshell, we’re going to have about 10 times more performance available on COM-HPC.”

Support for high-speed, next-generation interfaces permits COM-HPC to meet the increasing bandwidth and throughput requirements of edge applications and, more importantly, allows the specification to host modern high-performance processors. Because COM-HPC defines a maximum module power of 358 W DC, modules can accommodate processors ranging from Intel Atom and Core devices on COM-HPC Client modules to Xeon-class devices that will support PCIe 5.0 on COM-HPC Server products.

But COM-HPC is not limited to the Intel x86-based processing options that served as the foundation of PICMG’s COM Express specification family. Its high-speed interfaces, together with provisions that allow modules to host PCIe targets, mean the specification has been designed from the ground up to support compute architectures ranging from Arm to GPUs to FPGAs and more.

A Form Factor for Heterogeneous Compute

Given the variety of workloads present at the edge, support for a wide selection of processor technologies will let COM-HPC users choose the best compute technology for their application while maximizing performance per watt. These devices can be implemented as co-processors, either added to the system as standard plug-in cards or placed directly on the COM-HPC carrier board, or they can serve as the host processor itself.

“Apart from ‘faster, better, cheaper,’ COM-HPC takes care to implement vendor- and architecture-independent interfaces, for example, by replacing LPC with eSPI, which allows easy integration of non-x86 architectures into COM-HPC,” noted Jens Hagemayer, a researcher at the University of Bielefeld and member of the COM-HPC Subcommittee.

“Because FPGAs and GPUs are different in terms of underlying hardware architecture compared to CPUs, they can be used as accelerators in combination with an x86-based CPU, but also in a standalone manner,” he continued. “Used in the right way, those heterogeneous architectures can help tackle the challenges imposed by the end of Dennard scaling 15 years ago, as well as the progressive slowdown of Moore’s Law that we see now and in the coming years.

“In addition, ARM64 has proven to be a vital alternative to x86, especially for edge applications, due to its instruction set being better suited for low-power applications,” Hagemayer added. “With RISC-V, there is another promising architecture alternative available, which will get more attention over the years to come.”

Hagemayer and other researchers at the University of Bielefeld have already begun integrating heterogeneous COM-HPC modules into microservers as part of the LEGaTO project, a publicly funded hardware and software framework that aims to simplify the programming of energy-efficient, heterogeneous IoT infrastructure for smart homes and smart cities (see Sidebar).

High-Speed Signaling for High-Performance Computing

None of this would be possible without significant advances in the connector technology that carries signals from COM-HPC modules out to a carrier board and beyond. COM-HPC defines a pair of 400-pin connectors that double the amount of available I/O compared to COM Express, and were designed to maintain signal integrity and reduce crosstalk at data rates above 16 Gbps NRZ.

“Theoretically, using only differential pairs in a ground-signal-signal-ground pattern, the connector supports a max aggregate data rate of 4096 Gbps, or 2088 Gbps/in²,” observed Burrell Best, Industry Standards Manager, Signal Integrity Group at Samtec Inc. and another member of the PICMG COM-HPC Subcommittee. “The 10 mm mated connector was designed specifically to support PCIe 5.0, while the 5 mm connector was designed to support even higher data rates, including the IEEE 802.3cd and OIF 56G PAM4 Ethernet standards. It will also likely support PCIe 6.0, which is expected to use 64 GT/s PAM4 encoding.

“The new connector can support 1.12 A per pin when using the recommended power footprint outlined in the specification,” he went on. “The increased power gives embedded system designers enormous flexibility in selecting CPUs, chipsets, and DDR4 memory options within COM-HPC form factors. Increased pin counts double I/O density with increased speeds.

“All of this offers a future-proof path for technology improvements with this connector system,” he added.
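
For readers who want a feel for those numbers, the back-of-the-envelope arithmetic is simple enough to sketch in a few lines of code. The figures below that do not appear in the article are assumptions: the differential-pair count is just one plausible reading of the 4096 Gbps total, and the 12 V input rail is assumed rather than quoted, so treat this as an illustration rather than a restatement of the specification.

```python
import math

# Illustrative arithmetic only -- the pair count and 12 V rail are ASSUMPTIONS;
# the COM-HPC specification defines the actual signal and power pin allocation.

PAIRS = 256            # assumed usable differential pairs across both 400-pin connectors
GBPS_PER_PAIR = 16     # 16 Gbps NRZ, the per-pair rate cited for signal integrity

aggregate_gbps = PAIRS * GBPS_PER_PAIR
print(f"Aggregate bandwidth: {aggregate_gbps} Gbps")          # 4096 Gbps

MODULE_POWER_W = 358   # maximum module power defined by COM-HPC
INPUT_RAIL_V = 12.0    # assumed nominal DC input voltage
AMPS_PER_PIN = 1.12    # per-pin current with the recommended power footprint

supply_current_a = MODULE_POWER_W / INPUT_RAIL_V
power_pins = math.ceil(supply_current_a / AMPS_PER_PIN)
print(f"Supply current: {supply_current_a:.1f} A, needing at least {power_pins} power pins")
```

Under those assumptions, a 358 W module draws roughly 30 A, which is why the per-pin current rating and the recommended power footprint matter as much as the raw pin count.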

Despite the attention to detail in the COM-HPC connector design, dealing with high-speed signals can get tricky for engineers developing a carrier board. To assist, Stefan Milnor, Vice President of Engineering at Kontron and Editor of the PICMG COM-HPC specification, noted that the Subcommittee is producing extensive reference documentation.

“If a COM-HPC design supports the highest speed interfaces, such as PCIe Gen 5, designers will have to adapt to practices appropriate for these speeds,” Milnor said. “This includes symmetric stripline pair routing between ground planes, the choice of suitable high-speed PCB materials, and no-stub vias, implemented either as back-drilled through vias or as blind or buried vias.

“These topics are covered fairly thoroughly in the COM-HPC specification document,” he asserted. “COM-HPC has also incorporated an Intel-defined initiative for Ethernet KR sideband signaling, known as the ‘Common Electrical Interface’ (CEI). This is different from the COM Express Type 7 KR implementation. The CEI is covered in the PICMG COM-HPC Carrier Design Guide companion document.”

A short-form preview spec is publicly available on the PICMG website, and the COM-HPC Carrier Design Guide will be available shortly after the specification is formally announced in early 2021.

COM-HPC: At the Intersection of Edge and Enterprise

To bridge the gap between enterprise-class performance and the realities of edge environments, COM-HPC integrates a range of additional features. These include a flat, rugged mechanical design that supports heatsinks, as well as a subset of the Intelligent Platform Management Interface (IPMI) specification that supports Redfish profiles and iKVM solutions, so that a single COM-HPC carrier can act as the central management instance for multiple edge server modules.
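
To make that management model more concrete, the following sketch shows the kind of query a carrier-level manager might issue over Redfish, the DMTF REST API that the article says COM-HPC’s management subset supports. The controller address and credentials here are hypothetical; only the /redfish/v1/Systems path is standard Redfish, and nothing below is taken from the COM-HPC specification itself.

```python
import requests

# Hypothetical management controller address and credentials, for illustration only.
BASE = "https://192.0.2.10"
AUTH = ("admin", "password")

# Enumerate the systems exposed by the Redfish service root (a standard DMTF path)
# and report power state and health for each managed module.
systems = requests.get(f"{BASE}/redfish/v1/Systems", auth=AUTH, verify=False).json()

for member in systems.get("Members", []):
    system = requests.get(f"{BASE}{member['@odata.id']}", auth=AUTH, verify=False).json()
    print(system.get("Name"),
          system.get("PowerState"),
          system.get("Status", {}).get("Health"))
```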

All of this is the result of thousands of man-hours donated by the 22 companies that contributed to the COM-HPC specification.

“About a dozen companies have initial designs, and products are planned to be released in early 2021. However, that is the tip of the iceberg,” said Jessica Isquith, President of PICMG. “We receive weekly calls from members not on the technical subcommittee, as well as from nonmembers, ready to design and manufacture products as soon as they can access the specification. The interest is coming from traditional COM vendors as well as server manufacturers and large data centers.

“The ecosystem will be in high growth mode for many years,” she continued. “The fact that it accommodates x86, FPGAs, GPUs, and other CPUs and accelerators increases the available market for adopting the spec, and the list of potential applications is overwhelming.”

And yes, one of those applications may be a self-checkout retail system coming to a store near you.

“New POS terminals will consolidate all of their functionality onto a single, high-performance compute system for potential capital and operational savings,” Healy projected. “In this example, a checkout terminal vendor can use a COM-HPC architecture to scale performance based on system needs, in addition to integrating next-generation compute modules without a full system redesign.

“COM-HPC provides the scalability and flexibility in system design choices that are unavailable on single-board architectures,” he added.

The COM-HPC specification will be made available to PICMG members at no charge, and nonmembers can purchase access for $750 USD.

For more information, visit https://picmg.org/com-hpc-overview.