Flex Logix Releases InferX X1 Edge Inference Co-Processor

May 6, 2019 Laura Dolan

Mountain View, CA. Flex Logix Technologies, Inc. has released the InferX X1 edge inference co-processor, which is based on an inference-optimized nnMAX core and integrates embedded FPGA (eFPGA) interconnect technology.

It provides high throughput in edge applications with a single DRAM, yielding much higher throughput per watt than prior solutions, and performs well at the low batch sizes typical of edge applications, where there is often only a single camera or sensor.
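Why low-batch performance matters at the edge can be shown with a little arithmetic. The sketch below uses hypothetical numbers, not X1 benchmarks: an accelerator that only reaches peak throughput at large batch sizes must wait for frames to accumulate, adding latency a single-camera pipeline cannot hide.

```python
# Hypothetical illustration (not X1 benchmark data): latency cost of
# batching when a single camera produces frames one at a time.

FRAME_PERIOD_MS = 33.3  # one camera at ~30 fps

def first_result_latency_ms(batch_size, infer_ms_per_batch):
    """Time from a frame's arrival until its inference result is ready,
    if the accelerator waits to fill a batch before running."""
    wait_ms = (batch_size - 1) * FRAME_PERIOD_MS  # fill the batch
    return wait_ms + infer_ms_per_batch

# A batch-8-oriented accelerator may run each batch faster per image,
# but the first frame still waits for seven more frames to arrive.
print(first_result_latency_ms(1, infer_ms_per_batch=10.0))  # 10.0 ms
print(first_result_latency_ms(8, infer_ms_per_batch=20.0))  # ~253.1 ms
```

Even with a faster per-image batch kernel, the batch-8 path pays over 200 ms of queueing delay at 30 fps, which is why batch-1 throughput is the figure that matters for single-sensor edge workloads.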

InferX X1 is designed for half-height, half-length PCIe cards for edge servers and gateways and is programmed with the nnMAX Compiler using TensorFlow Lite or ONNX models.

“The difficult challenge in neural network inference is minimizing data movement and energy consumption, which is something our interconnect technology can do amazingly well,” said Flex Logix’s CEO, Geoff Tate. “While processing a layer, the datapath is configured for the entire stage using our reconfigurable interconnect, enabling InferX to operate like an ASIC, then reconfigure rapidly for the next layer. Because most of our bandwidth comes from local SRAM, InferX requires just a single DRAM, simplifying die and package, and cutting cost and power.”

Go to www.flex-logix.com to learn more.
