NVIDIA Extends Lead on MLPerf Benchmark with A100

October 22, 2020 Tiera Oliver

NVIDIA announced its AI computing platform has again set performance records in the latest round of MLPerf. According to the company, the results extend NVIDIA’s lead on the industry’s only independent benchmark measuring the AI performance of hardware, software, and services.

Per the company, NVIDIA won every test across all six application areas for data center and edge computing systems in the second version of MLPerf Inference. The tests expand beyond the original two for computer vision to cover four additional areas of AI: recommendation systems, natural language understanding, speech recognition, and medical imaging.

According to the company, organizations across industries are already tapping into the NVIDIA A100 Tensor Core GPU’s inference performance to take AI from their research groups into daily operations.

The latest MLPerf results come as NVIDIA’s footprint for AI inference has grown. With NVIDIA’s AI platform available through cloud and data center infrastructure providers, companies representing an array of industries are able to use its AI inference platform to improve their business operations and offer additional services.

NVIDIA and its partners submitted their MLPerf 0.7 results using NVIDIA’s acceleration platform, which includes NVIDIA data center GPUs, edge AI accelerators, and NVIDIA-optimized software.

According to the company, the NVIDIA A100, introduced earlier this year and featuring third-generation Tensor Cores and Multi-Instance GPU technology, increased its lead on the ResNet-50 test, beating CPUs by 30x versus 6x in the last round. Additionally, A100 outperformed the latest CPUs by up to 237x in the newly added recommender test for data center inference, according to the MLPerf Inference 0.7 benchmarks.

Also, according to the company, this means the NVIDIA DGX A100 system could provide the same performance as about 1,000 dual-socket CPU servers, offering customers cost efficiency when taking their AI recommender models from research to production.
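The roughly 1,000-server figure can be sanity-checked with back-of-the-envelope arithmetic. The sketch below assumes the DGX A100’s eight GPUs each deliver the reported 237x speedup versus a single CPU, with two CPUs per dual-socket server; the exact comparison baseline is an assumption, not something the article states:

```python
# Back-of-the-envelope check of the ~1,000-server claim.
# Assumptions (not stated in the article): the 237x figure is per
# A100 GPU versus one CPU, and each server holds two CPU sockets.
gpus_per_dgx = 8          # a DGX A100 system contains eight A100 GPUs
speedup_per_gpu = 237     # reported A100-vs-CPU speedup on the recommender test
cpus_per_server = 2       # dual-socket CPU servers

equivalent_servers = gpus_per_dgx * speedup_per_gpu / cpus_per_server
print(equivalent_servers)  # 948.0, i.e. on the order of 1,000 servers
```

Under those assumptions the arithmetic lands near 950 dual-socket servers, consistent with the company’s “about 1,000” figure.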

The benchmarks also show that the NVIDIA T4 Tensor Core GPU continues to be an ideal inference platform for mainstream enterprise, edge servers, and cost-effective cloud instances. NVIDIA T4 GPUs beat CPUs by up to 28x in the same tests, while, according to the company, the NVIDIA Jetson AGX Xavier leads among SoC-based edge devices.

Achieving these results required an optimized software stack including NVIDIA TensorRT inference optimizer and NVIDIA Triton inference serving software, both available on NGC, NVIDIA’s software catalog.  

In addition to NVIDIA’s own submissions, 11 NVIDIA partners submitted a total of 1,029 results using NVIDIA GPUs, representing over 85 percent of the total submissions in the data center and edge categories. 

For more information, visit: https://nvidianews.nvidia.com/

About the Author

Tiera Oliver, editorial intern for Embedded Computing Design, is responsible for web content edits as well as newsletter updates. She also assists with constructing and editing news stories. Before interning for ECD, Tiera graduated from Northern Arizona University, where she received her B.A. in journalism and political science and worked as a news reporter for the university's student-led newspaper, The Lumberjack.
