General Micro Systems Launches 6U Dual-Xeon OpenVPX Blade

April 26, 2018

Lots of functionality is packed into this 6U form factor, including two Intel Xeon processors and storage.

Rackmount servers have their place, yet already-deployed defense platforms and the world’s militaries often prefer tried-and-true OpenVPX-style systems for legacy card and system interoperability. Until now, upgrading those systems with rack-class server compute power wasn’t possible within OpenVPX. General Micro Systems (GMS) recently changed that paradigm with the launch of a full-functionality 6U OpenVPX server blade carrying two Intel Xeon processors and onboard storage.

With the Phoenix VPX450 OpenVPX motherboard installed in a rugged, air-cooled deployable chassis, server-room performance becomes available to airborne, shipboard, vetronics, and battlefield installations where rackmount servers don’t fit or are inappropriate due to size. Hence, there’s no need to deploy platforms with commercial servers and all the baggage that accompanies them.

Phoenix delivers raw server performance, onboard I/O, and data transfer to the rest of the OpenVPX system. The single-blade server includes 44 cores supporting up to 88 virtual machines, 1 TB of ECC DRAM, 80 lanes of PCIe Gen 3 serial interconnect, dual 40-Gigabit Ethernet, and onboard storage and I/O.
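
As a rough illustration only (not drawn from GMS documentation), an integrator running Linux on such a dual-socket blade might sanity-check the CPU and memory topology before provisioning virtual machines. The sketch below assumes a standard Linux sysfs layout and is entirely generic; nothing in it is specific to the VPX450.

```python
# Hypothetical sanity check of a dual-Xeon blade's topology under Linux.
# Paths follow the standard sysfs layout; this is a generic sketch,
# not vendor code.
import glob
import os

def numa_nodes():
    """Return NUMA node IDs exposed by the kernel (typically one
    per socket on a dual-Xeon board)."""
    return sorted(int(p.rsplit("node", 1)[1])
                  for p in glob.glob("/sys/devices/system/node/node[0-9]*"))

def logical_cpus():
    """Logical CPU count (cores x threads) visible to the scheduler."""
    return len(os.sched_getaffinity(0))

if __name__ == "__main__":
    print(f"NUMA nodes (sockets): {numa_nodes()}")
    print(f"Logical CPUs:         {logical_cpus()}")
    # On a fully populated dual 22-core Xeon blade with Hyper-Threading,
    # one would expect 2 nodes and 88 logical CPUs.
```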

Up to four different types of plug-on modules can be deployed. Dual SAM I/O PCIe-Mini sites typically host MIL-STD-1553 and legacy military I/O; these sites also accept mSATA SSDs for server data storage. An XMC front-panel module provides plug-in I/O such as a video frame grabber or a software-defined radio. Finally, an XMC carrier can be equipped with an M.2 site for either storage or additional add-in I/O.

Besides acting as a traditional OpenVPX slot-1 controller, the VPX450 server blade can serve as part of a compute cluster, with each blade delivering a PassMark score of 34,330. Inter-card communication via the 68 PCIe connections can create a high-performance cluster computing system through symmetric multiprocessing.
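
To make the cluster idea concrete, here is a minimal, hypothetical sketch of SMP-style fan-out across one blade's cores using Python's standard library. Inter-card clustering over the PCIe fabric would sit above a layer like this; that transport is not modeled here, and the work-splitting scheme shown is purely illustrative.

```python
# Illustrative sketch: stripe a compute job across all local cores.
# Generic standard-library code; not GMS software and not a model
# of the PCIe inter-card transport.
from multiprocessing import Pool
import os

def crunch(chunk):
    """Stand-in compute kernel: sum of squares over one work chunk."""
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    n = os.cpu_count() or 1
    chunks = [data[i::n] for i in range(n)]  # stripe work across workers
    with Pool(processes=n) as pool:
        total = sum(pool.map(crunch, chunks))
    print(f"{n} workers -> checksum {total}")
```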