You’ve heard it many times before: AI and machine learning (ML) are moving to the Edge of the IoT. The reasons have been discussed numerous times, so I won’t go through them again. Suffice it to say, Edge processing is faster and cheaper and delivers better performance.
One aspect of this move that is often overlooked is the limited power available at the Edge. To address that constraint, Microchip Technology is offering a solution that makes it easier to perform AI/ML functions at the Edge. As part of the company’s Smart Embedded Vision initiative, Microchip is making it easier for software developers to implement their algorithms on its PolarFire FPGAs.
One element of this initiative, the VectorBlox Accelerator SDK, helps developers get the most out of the FPGAs in low-power, flexible, overlay-based neural network applications. With the SDK, developers can write their code in C/C++; the key here is that you don’t have to be an expert in FPGA design. Models can be brought in from TensorFlow or in the Open Neural Network Exchange (ONNX) format. The latter supports many popular frameworks, including Caffe2, MXNet, PyTorch, and MATLAB. In addition, the SDK works with both Linux and Windows.
A bit-accurate simulator lets developers validate hardware-level accuracy from within the software environment. The neural network IP included with the kit also supports loading different network models at run time.
Back to the devices. According to Microchip, the PolarFire FPGAs can consume up to 50% less power than competing devices while offering higher performance (up to 1.5 TOPS). The FPGA IP is available in various sizes to match the performance, power, and package size the application requires; at the low end, packages can be as small as 11 × 11 mm. The VectorBlox Accelerator SDK should be available sometime in the third quarter of this year, and the FPGAs are already in production.
About the Author: Rich Nass