Major trends in the deep learning chip industry: Startup investment

August 16, 2018 Sharad Singh, Allied Analytics

Investments have been made in the development of artificial intelligence (AI) accelerators that support deep learning, while acquisitions have targeted the acceleration of machine learning applications.

The deep learning chip industry is gaining traction with an increase in investments by venture capitalists and acquisitions by leading tech giants. These entities all share the common goal of speeding the development of new AI-capable chips.

This investment trend is expected to drive significant growth in the AI industry. According to Allied Market Research, the global deep learning chip market is projected to reach $29.36 billion by 2025.

Following are major investments made in startups:

AlphaICs stacking investment dollars

Though the AI accelerator market is becoming saturated with startups and leading companies, the computing paradigm shift presented by AI continues to drive investment.

AlphaICs has designed a new instruction set architecture (ISA) optimized for deep learning and other machine learning tasks, as well as libraries, a runtime environment, and APIs. The company's target solution today is a 13 W machine learning accelerator for robots, cars, and drones, with a roadmap that includes a family of chips consisting of 16-256 cores on a 2 W to 200 W power scale. The Bangalore-based startup is also active in the creation of machine learning algorithms.

AlphaICs' first product, the 13 W RAP-E accelerator, is being developed for inferencing and learning tasks on edge devices. An FPGA version of RAP-E was compared to NVIDIA's Volta V100 device in extensive image recognition testing using videos and convolutional neural network algorithms. The RAP-E FPGA neural network accelerator delivered a 50 percent to 400 percent performance improvement over the NVIDIA GPU.

A high-end version of RAP-E, the RAP-C, is a 100 W chip that leverages high-bandwidth memory and targets data center applications, such as the construction of large neural network models. It will also be implemented on FPGA fabric.

AlphaICs has raised nearly $15 million to date and plans to raise more funds in a Series B over the next few months to work on a 7 nm version of RAP-C. The devices are programmed in C or via the TensorFlow framework, with support for other popular deep learning frameworks on the way.

RAP FPGA accelerators are scheduled for production in late 2019. 

Deep learning chip startup acquired by a giant

Leading tech giants have also been acquiring startups to integrate deep learning chip expertise, with one example being Xilinx's recent purchase of DeePhi Tech. DeePhi Tech has been active in neural network and FPGA acceleration engineering over the past few years, using Xilinx FPGA technology alongside its own proprietary software and pruning methods that enable rapid inferencing. In particular, the company's approach lends itself to convolutional neural networks (CNNs) and long short-term memory (LSTM) models.

“DeePhi has the technology to prune LSTM and CNNs in a multi-layered way, making it possible to do image classification with natural language processing at the same time,” says Ashish Sirasao, an engineer at Xilinx.
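DeePhi's actual pruning pipeline is proprietary and not public, but the general idea behind pruning for fast inference is widely used: zero out the network weights with the smallest magnitudes so the hardware can skip them. The sketch below is a minimal, generic illustration of magnitude-based pruning on a single layer's weights, not a description of DeePhi's method; the function name and sparsity parameter are illustrative.

```python
# Generic magnitude-based weight pruning -- an illustrative sketch only;
# DeePhi's proprietary multi-layer pruning methods are not public.

def prune_weights(weights, sparsity):
    """Zero out the smallest-magnitude weights, keeping roughly a
    (1 - sparsity) fraction of the largest ones."""
    if not 0.0 <= sparsity < 1.0:
        raise ValueError("sparsity must be in [0, 1)")
    ranked = sorted(weights, key=abs)          # smallest magnitudes first
    n_prune = int(len(weights) * sparsity)     # how many weights to drop
    cutoff = abs(ranked[n_prune - 1]) if n_prune else 0.0
    # Keep a weight only if its magnitude exceeds the cutoff.
    return [w if abs(w) > cutoff else 0.0 for w in weights]

layer = [0.8, -0.05, 0.3, 0.01, -0.6, 0.02]
print(prune_weights(layer, sparsity=0.5))
# -> [0.8, 0.0, 0.3, 0.0, -0.6, 0.0]
```

A pruned (sparse) network needs fewer multiply-accumulate operations, which is what makes this attractive for FPGA inference accelerators.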

Xilinx has seen a lot of momentum in merging CNN and LSTM technology, and wants to ensure there will be an optimized implementation. “This new wave in inference is what DeePhi is doing now; we are helping to create more proof points and engagements to drive research,” Sirasao says.

Commenting on the acquisition, Salil Raje, executive vice president of the Software and IP Products Group at Xilinx, said that the company will continue its investment in DeePhi to work on deploying accelerated machine learning applications at the edge and in the cloud.

This trend of investments in startups will continue to drive growth in the AI industry.
