Intel Vision Accelerator Solutions Speed Deep Learning and AI on Edge Devices

November 05, 2018

The latest products combine with existing Intel solutions to put artificial intelligence where it best suits the application: at the Edge or in the data center.

Intel recently unveiled its family of Vision Accelerator Design Products, aimed at boosting artificial intelligence (AI) inference and analytics performance on Edge devices, where data originate and are acted upon. The products come in two forms: one featuring an array of Intel Movidius vision processors and one built on the company’s Arria 10 FPGA. Both build on the OpenVINO software toolkit, which improves neural-network performance on a range of Intel products and brings developers closer to cost-effective, real-time image analysis and intelligence in their IoT devices.

Intel Vision Accelerator Design Products offload AI inference workloads to purpose-built accelerator cards, themselves built on Intel silicon. These deep-learning inference accelerators scale with the needs of the specific application, so AI workloads can run in the data center, on on-premises servers, or inside Edge devices. With the OpenVINO toolkit, developers can extend their existing investment in deep-learning inference applications on Intel CPUs and integrated GPUs to the new accelerator designs, saving time and money.
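Since the article stresses that the same OpenVINO application can be retargeted from CPUs and integrated GPUs to the new accelerator cards, the idea can be sketched as follows. This is a minimal sketch assuming the 2018-era Inference Engine Python API (`IENetwork`, `IEPlugin`); the `pick_device` helper, the model paths, and the exact device strings are illustrative assumptions, not code from the article.

```python
# Hedged sketch: retargeting OpenVINO inference by changing only the device name.
# Assumes the 2018-era Inference Engine Python API; paths/names are placeholders.
try:
    from openvino.inference_engine import IENetwork, IEPlugin
except ImportError:
    IENetwork = IEPlugin = None  # toolkit not installed; structure only

# Hypothetical helper mapping an accelerator form factor to an OpenVINO
# device string (names follow the plugin conventions of that era).
def pick_device(target: str) -> str:
    devices = {
        "cpu": "CPU",               # host Intel CPU
        "gpu": "GPU",               # integrated Intel GPU
        "vpu": "HDDL",              # Movidius VPU array card
        "fpga": "HETERO:FPGA,CPU",  # Arria 10 FPGA card with CPU fallback
    }
    return devices[target]

def load_model(model_xml: str, model_bin: str, target: str = "cpu"):
    """Load an IR-format model; only the device string changes per target."""
    net = IENetwork(model=model_xml, weights=model_bin)
    plugin = IEPlugin(device=pick_device(target))
    return plugin.load(network=net)  # ready for exec_net.infer(...)
```

Inference itself would then be the same `exec_net.infer(...)` call regardless of which card the network was loaded on; the accelerator choice stays a one-line configuration change rather than a rewrite.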