Why NVIDIA’s A100 marks a new era for GPU-powered analytics

1st December 2020 by Richard Heyns

NVIDIA’s release of the A100 80GB GPU is a milestone in the advancement of GPU technology. With a 3x speed-up, 2 terabytes per second of memory bandwidth, and the ability to connect 8 GPUs in a single machine, GPUs have now definitively transitioned from graphics rendering devices into purpose-built hardware for immersive enterprise analytics applications.

GPUs have been transitioning from exclusively gaming-industry technology into the wider data analytics market as extremely powerful compute devices. NVIDIA has been at the forefront of the technological development behind this transition, making incremental enhancements to GPU technology to support heightened number-crunching performance for business applications.

This approach brought releases such as the Tesla GPUs, which were optimised for high-performance, general-purpose computing. Nevertheless, these devices were still fundamentally based on designs for gaming and graphics rendering, so their capacity for large-scale enterprise applications was historically limited by the amount of on-board memory.

In 2012, the Tesla K10 combined two GPUs, each with 4GB of memory, for 8GB of total on-board memory. To support the massive datasets enterprises require, from Telecoms to Retail, it has been necessary to create GPU clusters with larger combined RAM. However, there remained limits on the volume of data organisations could access, and combining GPUs introduced overheads.

Memory has doubled every few years since, but single GPUs still could not overcome their memory and bandwidth barriers – until NVIDIA’s most recent release.

The A100 release, with 80GB of memory, is undeniably ground-breaking. A memory footprint of this size shows the hardware is built specifically for enterprise and number-crunching applications, definitively establishing GPUs as the new standard for business analytics and opening up greater possibilities for data-centric R&D, innovation and technological transformation.

The NVIDIA A100

The new A100 with HBM2e technology has double the memory of its predecessor and delivers over 2 terabytes per second of memory bandwidth. Compared with the K80’s 480GB/s, this is more than a 4x increase. These devices can be deployed on integrated baseboards of up to 8 GPUs, for a total of 640GB of GPU memory in a single machine.
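
As a quick sanity check on the figures above, the arithmetic can be sketched in a few lines of Python. The ~2,000GB/s and 480GB/s values are the approximate published memory bandwidths of the A100 80GB and the K80 respectively:

```python
# Back-of-envelope check of the bandwidth and memory figures quoted above.
# Approximate published specs: A100 80GB ~2,000 GB/s; Tesla K80 ~480 GB/s.
a100_bw_gbs = 2000
k80_bw_gbs = 480

ratio = a100_bw_gbs / k80_bw_gbs
print(f"bandwidth increase: {ratio:.1f}x")  # just over 4x

gpus_per_board = 8
mem_per_gpu_gb = 80
print(f"total GPU memory: {gpus_per_board * mem_per_gpu_gb} GB")  # 640 GB
```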

This advancement puts GPU performance for data workloads through the roof – organisations can load twice as much data with dramatically quicker data access and query response times. Bridging this memory gap opens up much wider possibilities for immersive, responsive, and dynamic data interrogation, enabling real-time, ad hoc analysis of large raw datasets. Calculations, filtering, aggregations, complex scoring and much more can now be adjusted at run time and still provide millisecond performance.
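
To make the millisecond-performance claim concrete, here is a rough estimate, under the assumption that each GPU streams its own 80GB memory partition at the full ~2TB/s in parallel with the other seven:

```python
# Rough scan-time estimate for a full pass over GPU-resident data.
# Assumption: each of the 8 GPUs streams its local 80 GB partition
# at roughly 2,000 GB/s, and all partitions are scanned in parallel,
# so total time is bounded by a single GPU's local scan.
mem_per_gpu_gb = 80
bw_gbs = 2000

scan_time_s = mem_per_gpu_gb / bw_gbs
print(f"full scan of 640 GB across 8 GPUs: ~{scan_time_s * 1000:.0f} ms")
```

At roughly 40ms for a full pass over 640GB, this leaves ample headroom within a sub-second budget for the actual filtering and aggregation work.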

What business benefit does this provide?

For the first time, organisations can work with truly meaningful data volumes on a single machine.

Retail is a good example. Performance reports that compare results year on year can consist of up to 4 terabytes of data. On legacy systems, these datasets would be split into 10% samples processed simultaneously across multiple machines. Producing the results for these reports could take up to two hours, not to mention the hours of pre-aggregation required. At this capacity, run-time analysis of ad hoc panels or product categories simply wasn’t possible.

In dramatic contrast, 640GB of RAM now allows these enterprise-level datasets, consisting of billions of rows, to be held on one device for more detailed and meaningful insight in near real time. Ad hoc queries on raw data, with no pre-aggregation, can return in sub-second time, and bespoke customer panels or product categories can be defined and analysed within seconds rather than hours.

There are countless sectors with similar datasets – including Telecoms, Logistics, Genomics and Finance – that can now enjoy the same responsive analysis and data manipulation due to this revolutionary boost in memory bandwidth and footprint.

How does Brytlyt leverage the NVIDIA A100?

BrytlytDB, our GPU-accelerated database built on PostgreSQL and NVIDIA GPU technology, adopts the latest NVIDIA GPUs as soon as they are available for testing and certification. We are currently working closely with NVIDIA to run STAC Benchmarks on the A100.

This technology will empower Brytlyt users to accelerate R&D, innovation, and analysis of vast, complex datasets through superior database performance.

Data transfer

Data transfer is an integral element of GPU database performance. The NVIDIA A100 increases both memory bandwidth and memory footprint, making data transfer of all kinds – including disk to GPU, across the network, and streaming processes – faster and smoother. At 2 terabytes per second, this GPU’s memory bandwidth is roughly 20x that of today’s CPUs, and with 8 GPUs working simultaneously the underlying hardware is up to 160x faster than CPU RAM.
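
The 20x and 160x figures imply a baseline of roughly 100GB/s of CPU memory bandwidth, which is a reasonable assumption for a contemporary multi-channel server CPU:

```python
# Implied baseline behind the 20x / 160x claims above.
# Assumption: ~100 GB/s of DRAM bandwidth for a typical server CPU.
gpu_bw_gbs = 2000
cpu_bw_gbs = 100

print(f"single A100 vs CPU RAM: {gpu_bw_gbs / cpu_bw_gbs:.0f}x")            # 20x
print(f"8-GPU baseboard vs CPU RAM: {8 * gpu_bw_gbs / cpu_bw_gbs:.0f}x")    # 160x
```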

AI and machine learning

The speed increases of the A100 mean AI and machine learning models can be trained faster and on more data, producing more accurate outputs. AI development and training require more memory and bandwidth than ever before, so this technology will help tackle compute- and memory-intensive challenges and empower organisations to make ground-breaking progress in innovative research areas.

Brytlyt has tight integration with PyTorch

Brytlyt users can access, enrich and manipulate their data with PyTorch as well as standard PostgreSQL tools. Organisations can build models and perform training, inference, testing, validation and other AI workloads, all within the same system. This will become even more efficient with NVIDIA’s new GPU hardware, enabling users to access more data through their tools and use it immediately, with no need for extraction or copying.

The future of GPUs

NVIDIA has revolutionised how GPUs can be applied to enterprise applications. Organisations can freely explore their most interesting and largest datasets in detail, to continuously deliver rapid insights for informed, real-time decision-making.

The trajectory of GPU development has been exponential. With memory roughly doubling every two years, growing from a few gigabytes to 80GB in under a decade, GPUs are undoubtedly the future of data analytics and promise to unlock even greater possibilities.