Scheduled for the first half of 2020, there is still some time to go before the final PICMG ratification of the COM-HPC specification. Nonetheless, the PICMG subcommittee already approved two key aspects in November 2019: the physical footprints and the pinout. This enables companies involved in the definition of the specification to present their first products on the market shortly after its official ratification. Until then, the information that may be released to the public is strictly limited. Embedded Computing Design has been given the opportunity to share further details about the pinout and footprints of the COM-HPC standard, which many developers of high-speed embedded systems are certain to use with coming Intel and AMD processor generations for high-end embedded/mid-class servers. IHS Markit estimates that Computer-on-Modules will account for around 38% of total sales of embedded computing boards, modules and systems in 2020. This explains the significance of changes in this market, which, since the launch of the very first Computer-on-Modules, has created two important standards for high-end embedded computing: ETX and its successor COM Express.
Higher Performance, More Interfaces
The need for a new specification to complement COM Express is easily explained. As a result of the digital transformation, the demand for embedded computers to provide high-speed performance is growing. To serve the new class of embedded edge servers, scalability must be limitless. With its 440 pins, COM Express does not have enough interfaces for powerful edge servers. The performance of the COM Express connector is also slowly approaching its limits. While COM Express can easily handle the 8.0 GHz clock speed and 8 Gbit/s throughput of PCIe Gen 3, the verdict is still open on whether the connector can meet the demands of certain technological advances such as PCIe Gen 4.
COM-HPC specifies two different pinouts for embedded computing servers and embedded computing clients.
Headless Embedded Server Performance
The need for ultra-high embedded edge performance and extended connectivity is greatest in the new class of headless edge servers that are increasingly used as distributed systems in industrial applications for harsh environments and extended temperature ranges. Consider, for example, an autonomous vehicle that uses vision and AI logic to establish situational awareness: it simply cannot wait for an algorithm to be computed in the cloud when things get tricky. It must be able to react instantly. The same applies to collaborative robots. This requires systems to provide at least 10 GbE connectivity as well as the ability to use a large number of parallel computing units, for example, to pre-process imaging sensor data or to execute complex deep learning algorithms. Today, GPGPUs are increasingly being used to execute such flexible and multifunctional tasks. Often replacing FPGAs and DSPs, they need high-speed connectivity towards the central CPU cores. This need increases with the complexity of the tasks. With their many PCIe lanes, COM-HPC systems can accommodate significantly more accelerator cards for further performance increases than COM Express ever could.
COM-HPC specifies five different form factors: two for embedded computing servers and – similar to COM Express Basic/Compact and Mini – three for embedded computing clients. All versions provide 800 pins, compared to only 440 on COM Express – which, as long as there are no market-ready PCIe Gen 4 products, is almost the only significant difference between the two standards. A second major distinguishing feature of COM-HPC server modules is up to 8 DIMM sockets, currently providing up to 1 terabyte of RAM.
Massive Parallel Data Processing
A setup that combines powerful CPUs and massive parallel data processing capacity is also required in medical imaging, where the use of artificial intelligence is increasing to support medical diagnosis on the basis of existing findings. The same performance requirements apply to the countless vision systems used in industrial inspection systems, and to public video surveillance systems. The entire field of Industry 4.0 applications also needs more powerful connectivity as more and more formerly standalone machines and systems are being networked. All this drives up demand for high-speed interfaces in embedded systems to implement high-performance Internet solutions, including TSN support for tactile real-time behavior. In addition, more and more workloads need to be consolidated in a single system. Next to data pre-processing in vision systems and deep learning, this includes firewalls and sniffing systems for intrusion detection, which must process virtually identical loads parallel to the running applications. This doubles requirements and necessitates the use of hypervisor technologies for real-time capable virtual machines such as the RTS Hypervisor from Real-Time Systems. Other applications include data grabbers for automotive test systems and measurement technology for 5G as well as industrial storage systems with fast NVMe memory connected via PCIe. Edge logic for 5G radio towers and modular blades in industrial server racks can also benefit from high-performance Computer-on-Modules.
COM-HPC is a consistent step in the progression of the Computer-on-Module market. However, it will likely be years before COM-HPC reaches market shares similar to COM Express, since COM Express also needed about 5 years to outstrip ETX in terms of quantities. And with ETX modules still being sold today, existing COM Express customers can also expect to be able to purchase COM Express modules for many years to come.
Up to One Terabyte of RAM
COM‑HPC will cover these high-speed performance requirements with up to 100 GbE, PCIe Gen 4 and Gen 5 at up to 32 Gbit/s per lane, as well as up to 8 DIMM sockets and high-speed processors with more than 200 watts of power. The new standard distinguishes two basic variants: headless COM‑HPC server modules, which can also be called Server-on-Modules, and COM‑HPC client modules, which follow the concept of COM Express Type 6 Computer-on-Modules.
COM‑HPC Server-on-Modules will be able to host a massive 1 terabyte of RAM in their 8 DIMM sockets. They will also support up to 8x 25 GbE and up to 64 PCIe Gen 4 or Gen 5 lanes (i.e., an aggregate I/O bandwidth of up to 256 GB/s). Such ultra-fast connectivity falls within the embedded edge server class, with the new PCIe lanes offering transfer rates of up to 32 Gbit/s per lane with PCIe Gen 5. Such performance is needed, and can be implemented directly via high-performance interfaces, since components with the ability to transfer 28 Gbit/s Non-Return-to-Zero (NRZ) are already available. In addition, up to 2 extremely powerful USB 4 interfaces are planned via the 800 pins. Based on Thunderbolt 3, these interfaces offer 40 Gigabit per second (Gbps). This corresponds to about 5 Gigabyte (GB) per second and is about twice as fast as USB 3.2 with its maximum of 20 Gbps, which is also supported with up to 2 ports. An additional 4 USB 2.0 interfaces complete the USB choices on COM‑HPC server modules. Next to 2x native SATA, support for eSPI, 2x SPI, SMB, 2x I2C, 2x UART and 12 GPIOs is also provided to integrate simple peripherals and standard communication interfaces, for example for service purposes.
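The headline figures above are straightforward to sanity-check from the per-lane rates. A minimal back-of-the-envelope sketch (raw line rates only, ignoring protocol encoding overhead such as PCIe's 128b/130b):

```python
# Sanity check of the COM-HPC server I/O figures quoted above.
# Raw per-lane line rates in Gbit/s; encoding overhead is ignored.
PCIE_GEN_RATES_GBPS = {3: 8, 4: 16, 5: 32}

def aggregate_gbytes_per_s(lanes: int, gen: int) -> float:
    """Aggregate one-direction bandwidth in GB/s for a given lane count and PCIe generation."""
    return lanes * PCIE_GEN_RATES_GBPS[gen] / 8  # 8 bits per byte

print(aggregate_gbytes_per_s(64, 5))  # 64 PCIe Gen 5 lanes -> 256.0 GB/s
print(aggregate_gbytes_per_s(64, 4))  # 64 PCIe Gen 4 lanes -> 128.0 GB/s
print(40 / 8)                         # USB 4 at 40 Gbps    -> 5.0 GB/s
```

This also makes the comparison with COM Express concrete: its 24 Gen 3 lanes on Type 6 top out at 24 GB/s aggregate by the same arithmetic.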
The USB 4 interface available on COM-HPC server and client modules integrates Thunderbolt 3, which supports up to 40 Gbps, two 4K displays and up to 100 watts of power delivery, alongside the PCIe, USB, DisplayPort and Thunderbolt protocols.
Server-Class Board Management
Another new feature of COM-HPC is the integrated system management interface. This software interface, which is currently being defined by the PICMG subcommittee, aims to include a small subset of the powerful and complex IPMI definition in the COM-HPC specification to enable easy implementation of full server functionality. Thanks to this interface, COM-HPC will offer real edge server functions that can be widely expanded by integrating suitable server-class Board Management Controllers (BMC) on carrier boards. Relevant carrier board design guides will be needed to help newcomers to the standard get started. The specification will further offer the possibility to develop COM-HPC device modules for graphics processors or FPGAs. For this purpose, the specification defines PCIe clock inputs, so that COM-HPC modules can also be used as clients. This makes it possible to design flexible and compact heterogeneous computing solutions without the need for complex riser cards. Traditionally, graphics cards are developed for PCIe sockets mounted at a 90-degree angle on the motherboard, and they offer significantly fewer connectivity options. The same applies to the alternative of MXM3 graphics cards, which have only 314 pins. With COM-HPC enabling extremely thin modular designs, also for the GPGPU, it becomes possible to design thin slot cards for rack systems that combine COM-HPC server modules with accelerator modules based on GPGPUs, FPGAs or DSPs. Matching solutions for all three accelerator module variants are already being developed, so that COM-HPC is no longer just a standard for embedded edge server processors, but can also be used for GPGPU, FPGA and DSP expansion.
Compared to the maximum of 8 Gbit/s per lane supported by COM Express with PCIe 3.0 up to now, COM-HPC will enable two to four times the throughput with PCIe 4.0 and 5.0.
800 Pins Instead of 440
Next to this ultra-high-performance embedded edge server class, which sets an entirely new standard for robust embedded computing, the second category of COM-HPC client modules positions itself somewhat more discreetly above the COM Express Type 6 specification. As the smaller footprint can accommodate only up to four SO-DIMM sockets, it is mainly the number of pins that makes the key difference: 800 pins clearly offer significantly more interface options than the 440 pins of COM Express. But as long as COM Express can also handle PCIe Gen 4, which can be assumed at least with regard to downward compatibility, developers of COM Express systems don’t have to switch to COM-HPC client modules. In addition to 49 PCIe lanes (COM Express Type 6 offers only 24), there are now for the first time two 25 GbE KR interfaces and up to two 10GBASE-T interfaces, which is significantly more than the current single GbE LAN. Another attractive feature is up to two MIPI-CSI interfaces, which enable cost-effective camera connections for situational awareness and collaborative robotics. Many developers will also appreciate the convenient, versatile and extremely powerful USB 4 interfaces that are offered in addition to 4x USB 2.0. There will be up to four of them, to connect ultra-fast storage at up to 40 Gbps, or up to two 4K displays including power supply and an integrated 10 GbE network connection via a single USB-C cable. The graphics interfaces have also been tidied up: support now includes 3x dedicated DDI interfaces, while specific designs for DisplayPort, DVI-I/VGA and DVI-I, HDMI or DVI to LVDS converters are now executed on the carrier board. Further interfaces include 2x SoundWire and I2S as well as 2x SATA; eSPI, 2x SPI, SMB, 2x I2C, 2x UART and 12 GPIOs round off the feature set.
SoundWire, which has been added to the specification as a new interface, will replace the currently used HDA interface. SoundWire is a MIPI standard that requires only two lines – clock and data – with a clock rate of up to 12.288 MHz, to connect up to four audio codecs in parallel. Each codec is assigned its own ID, which is evaluated on the bus.
OEMs that have a business relationship with one of the companies involved in the new specification can already start suitable carrier board designs as long as they keep them under NDA and do not share them with third parties. The new specification will only become available as an open standard after the official release. Members of the PICMG COM-HPC subcommittee include Acromag, Adlink, Advantech, AMI, Amphenol, congatec, Elma Electronic, Emerson Machine Automation Solutions, Ept, Fastwel, GE Automation, HEITEC, Intel, Kontron, MEN, MSC Technologies, N.A.T., nVent, Samtec, Seco, TE Connectivity, Trenz Electronic, University Bielefeld, VersaLogic Corp. Adlink, congatec and Kontron are committee sponsors, while congatec Marketing Director Christian Eder acts as Chairman of the COM-HPC committee. He has also played an important role in the development of the existing COM Express standard as draft editor. Stefan Milnor from Kontron and Dylan Lang from Samtec support Christian Eder in their functions as editor and secretary of the PICMG COM-HPC committee.
NRZ and PAM4 eye diagram (source: https://gomeasure.dk/applikationer/data-communications/). Considerable technological progress is required to achieve high Ethernet speeds such as 200G/400G. Two coding schemes are possible: Non-Return-to-Zero (NRZ), also known as Pulse-Amplitude Modulation 2-Level (PAM2), and Pulse-Amplitude Modulation 4-Level (PAM4). Because of NRZ’s higher Nyquist frequency, which causes higher channel-dependent loss, PAM4 has become the more viable solution. The COM-HPC connector is future-proof as it supports both modes.
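The trade-off between the two coding schemes can be quantified: NRZ/PAM2 carries 1 bit per symbol while PAM4 carries 2, so at the same data rate PAM4 halves the symbol rate and therefore the Nyquist frequency the channel must pass. A small illustrative sketch; the 56 Gbit/s lane rate is just an example figure of the kind used in 200G/400G Ethernet:

```python
from math import log2

def nyquist_ghz(data_rate_gbps: float, levels: int) -> float:
    """Nyquist frequency (GHz) of a PAM signal: half the symbol rate.

    NRZ/PAM2 (2 levels) carries 1 bit per symbol; PAM4 (4 levels)
    carries log2(4) = 2 bits per symbol.
    """
    bits_per_symbol = log2(levels)
    symbol_rate_gbaud = data_rate_gbps / bits_per_symbol
    return symbol_rate_gbaud / 2

# Example: a 56 Gbit/s serial lane
print(nyquist_ghz(56, levels=2))  # NRZ  -> 28.0 GHz
print(nyquist_ghz(56, levels=4))  # PAM4 -> 14.0 GHz, i.e. half the channel loss burden
```

This halved Nyquist frequency is exactly why PAM4 suffers less channel-dependent loss at a given data rate, at the cost of a tighter signal-to-noise margin across its four amplitude levels.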