Numenta Demonstrates 50x Speed Improvements on Deep Learning Networks Using Brain-Derived Algorithms

November 12, 2020 Tiera Oliver

Using algorithms derived from its neuroscience research, Numenta announced that it has achieved dramatic performance improvements on inference tasks in deep learning networks, per the company, without any loss in accuracy.

The advances demonstrated by Numenta are achieved by applying a principle of the brain called sparsity. Numenta compared sparse and dense networks by running its algorithms on Xilinx FPGAs (Field Programmable Gate Arrays) for a speech recognition task using the Google Speech Commands (GSC) dataset. Using the metric of number of words processed per second, the results show that sparse networks yield more than 50x acceleration over dense networks on a Xilinx Alveo board.
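To give a sense of why sparsity can translate into this kind of acceleration, the sketch below counts multiply-accumulate (MAC) operations for a single fully connected layer at a hypothetical 5% weight density. The dimensions, density, and magnitude-pruning scheme are illustrative assumptions, not Numenta's actual network or algorithm; the point is simply that a kernel which skips zero weights does work proportional to the number of nonzeros.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative layer dimensions (not Numenta's actual GSC network).
n_in, n_out = 1024, 1024
density = 0.05  # hypothetical: keep ~5% of weights

dense_w = rng.standard_normal((n_in, n_out))

# Magnitude pruning: zero out all but the largest-magnitude weights.
threshold = np.quantile(np.abs(dense_w), 1.0 - density)
sparse_w = np.where(np.abs(dense_w) >= threshold, dense_w, 0.0)

# A dense matrix-vector product needs n_in * n_out MACs; a sparse
# kernel that skips zeros needs only one MAC per nonzero weight.
dense_macs = n_in * n_out
sparse_macs = int(np.count_nonzero(sparse_w))

print(f"dense MACs:  {dense_macs}")
print(f"sparse MACs: {sparse_macs}")
print(f"theoretical speedup: {dense_macs / sparse_macs:.1f}x")
```

This arithmetic gives the theoretical ceiling only; realizing it in practice depends on hardware that can actually skip the zeros, which is why Numenta's FPGA demonstration is the interesting part.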

In addition, Numenta demonstrated the GSC network running on a Xilinx Zynq chip, a smaller chip where dense networks are too large to run, enabling a new set of applications that rely on low-cost, low-power solutions. Using the metric of number of words per second per watt, they show that the sparse networks use less power than the most efficient dense network.

This proof-of-concept demonstration validates that sparsity can achieve acceleration and power efficiencies for a variety of deep learning platforms and network configurations, while maintaining competitive accuracy. This approach could enable machine learning technologists to:

  • Implement larger and more complex networks using the same resources.
  • Implement more network copies on the same resources.
  • Implement deep learning networks on edge platforms where resource constraints prevent dense networks from running.
  • Achieve large energy savings and lower costs due to scaling efficiencies.

For more information, visit:

About the Author

Tiera Oliver, editorial intern for Embedded Computing Design, is responsible for web content edits as well as newsletter updates. She also assists with news content, constructing and editing stories. Before interning for ECD, Tiera graduated from Northern Arizona University, where she received her B.A. in journalism and political science and worked as a news reporter for the university's student-led newspaper, The Lumberjack.
