Flex Logix Releases InferX X1 Edge Inference Co-Processor

May 6, 2019 Laura Dolan

Mountain View, CA. Flex Logix Technologies, Inc. has released its InferX X1 edge inference co-processor, which is based on an inference-optimized nnMAX core and integrates embedded FPGA (eFPGA) interconnect technology.

It delivers high throughput in edge applications using only a single DRAM, yielding high throughput per watt, and performs well at low batch sizes, which are typical in edge applications with only one camera or sensor.

InferX X1 is designed for half-height, half-length PCIe cards for edge servers and gateways and is programmed with the nnMAX Compiler using TensorFlow Lite or ONNX models.

“The difficult challenge in neural network inference is minimizing data movement and energy consumption, which is something our interconnect technology can do amazingly well,” said Flex Logix’s CEO, Geoff Tate. “While processing a layer, the datapath is configured for the entire stage using our reconfigurable interconnect, enabling InferX to operate like an ASIC, then reconfigure rapidly for the next layer. Because most of our bandwidth comes from local SRAM, InferX requires just a single DRAM, simplifying die and package, and cutting cost and power.”

Go to www.flex-logix.com to learn more.
