Using algorithms derived from its neuroscience research, Numenta announced it has achieved significant performance improvements on inference tasks in deep learning networks which, per the company, come without any loss in accuracy.
The advances demonstrated by Numenta come from applying a principle of the brain called sparsity. Numenta compared sparse and dense networks by running its algorithms on Xilinx FPGAs (Field Programmable Gate Arrays) for a speech recognition task using the Google Speech Commands (GSC) dataset. Measured in words processed per second, the results show that sparse networks yield more than 50x acceleration over dense networks on a Xilinx Alveo board.
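Numenta has not published the algorithm details in this announcement, but the core idea of weight sparsity can be sketched generically: zero out most of a layer's weights so that hardware only has to perform multiply-accumulates for the few that remain. The following is a minimal, hypothetical illustration (the `sparsify` helper and the 5% density figure are assumptions for demonstration, not Numenta's method):

```python
import numpy as np

def sparsify(weights: np.ndarray, density: float) -> np.ndarray:
    """Return a copy of `weights` keeping only the top `density`
    fraction of entries by magnitude; the rest are set to zero."""
    k = max(1, int(density * weights.size))
    # k-th largest absolute value becomes the keep/drop threshold
    threshold = np.sort(np.abs(weights), axis=None)[-k]
    return np.where(np.abs(weights) >= threshold, weights, 0.0)

rng = np.random.default_rng(0)
dense = rng.standard_normal((256, 256))   # a dense layer's weight matrix
sparse = sparsify(dense, density=0.05)    # keep ~5% of the weights

# A sparse matrix-vector product only needs work for nonzero weights,
# which is the kind of computation FPGAs can exploit by skipping zeros.
print(dense.size)                      # 65536 parameters in the dense layer
print(np.count_nonzero(sparse))        # 3276 nonzero parameters remain
```

Dense hardware pipelines gain little from this, because they execute the zero multiplications anyway; the reported speedups depend on an implementation that actually skips the zeroed weights.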
In addition, Numenta demonstrated the GSC network running on a Xilinx Zynq chip, a smaller chip on which dense networks are too large to run, enabling a new set of applications that rely on low-cost, low-power solutions. Measured in words per second per watt, the company showed that sparse networks use less power than the most efficient dense network.
This proof-of-concept demonstration validates that sparsity can achieve acceleration and power efficiencies for a variety of deep learning platforms and network configurations, while maintaining competitive accuracy. This approach could enable machine learning technologists to:
- Implement larger and more complex networks using the same resources.
- Implement more network copies on the same resources.
- Implement deep learning networks on edge platforms where resource constraints prevent dense networks from running.
- Achieve large energy savings and lower costs due to scaling efficiencies.
For more information, visit: https://numenta.com/
By Tiera Oliver