
Nvidia is building a supercomputer to address society’s biggest problems

This DGX A100-based machine will be the "world's fastest AI supercomputer in academia"


We’re all awaiting the release of next-gen Nvidia Ampere graphics cards, but, as the company reminded us in its 2020 online GTC keynote, there’s more to this manufacturing mammoth than making some of the best graphics cards for gaming. Nvidia is proving this now more than ever, as it’s set to build the “world’s fastest AI supercomputer in academia”.

The company’s press release explains that this 700-petaflop machine, built in partnership with the University of Florida (UF), will give “faculty and students within and beyond UF the tools to apply AI across a multitude of areas to address major challenges such as rising seas, ageing populations, data security, personalized medicine, urban transportation, and food insecurity”.

Deep learning and AI are certainly Nvidia’s forte – we can see that just by looking at the consumer gaming sphere, with the RTX 20-series’ Tensor Cores, DLSS technology being utilised in games, and so on. Indeed, Nvidia itself says, “Deep learning relies on GPU acceleration, both for training and inference, and Nvidia delivers it everywhere you need it – to datacenters, desktops, laptops, the cloud, and the world’s fastest supercomputers.”
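To put that “training and inference” line in concrete terms, here’s a minimal sketch of what GPU-accelerated deep learning looks like in code, using PyTorch as an example framework – the model, data, and layer sizes are purely illustrative, and the same pattern applies whether the GPU is an A100 or a consumer RTX card.

```python
import torch
import torch.nn as nn

# Use the GPU if one is available; this is where CUDA acceleration kicks in.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A tiny, purely illustrative model and batch of data, placed on the GPU.
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10)).to(device)
data = torch.randn(32, 128, device=device)
targets = torch.randint(0, 10, (32,), device=device)

# Training step: forward and backward passes both run on the GPU.
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss = nn.functional.cross_entropy(model(data), targets)
loss.backward()
optimizer.step()

# Inference: no gradients needed, still GPU-accelerated.
with torch.no_grad():
    predictions = model(data).argmax(dim=1)
```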

In the supercomputer and datacenter sphere, Nvidia has been expanding its reach with its new A100 GPUs, based on the company’s new Ampere architecture. The DGX A100 is its new take on an AI system, built around eight A100 GPUs paired with AMD EPYC processors. We’ve recently seen how AMD EPYC CPUs are helping uncover the hidden structure of the universe, and now we’re seeing how Nvidia’s A100 Tensor Cores will be put to use for more local – but no less important – purposes.


Nvidia says that “UF is the first institution of higher learning in the US to receive Nvidia DGX A100 systems. These systems are based on the modular architecture of the NVIDIA DGX SuperPOD, which enables the rapid deployment and scaling of massive AI infrastructure.

“UF’s HiPerGator 3 supercomputer will integrate 140 Nvidia DGX A100 systems powered by a combined 1,120 Nvidia A100 Tensor Core GPUs. It will include 4 petabytes of high-performance storage. An NVIDIA Mellanox HDR 200Gb/s InfiniBand network will provide the high throughput and extremely low-latency network connectivity.

“DGX A100 systems are built to make the most of these capabilities as a single software-defined platform. Nvidia DGX systems are already used by eight of the ten top US national universities.”
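Those figures hang together on a quick back-of-envelope check – a sketch using Nvidia’s own headline rating of 5 petaflops of AI performance per DGX A100 system (a marketing figure for Tensor Core throughput, not measured FP64 HPC performance):

```python
# Back-of-envelope check on the HiPerGator 3 numbers quoted above.
systems = 140                 # DGX A100 systems in HiPerGator 3
gpus_per_system = 8           # A100 GPUs per DGX A100
ai_pflops_per_system = 5      # Nvidia's stated AI performance per DGX A100

total_gpus = systems * gpus_per_system
total_ai_pflops = systems * ai_pflops_per_system

print(f"Total A100 GPUs: {total_gpus}")                        # 1,120, as quoted
print(f"Aggregate AI performance: {total_ai_pflops} petaflops")  # 700, as quoted
```

That lines up with both the 1,120 GPUs in Nvidia’s statement and the 700-petaflop figure from the press release.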

It’s certainly exciting to see Nvidia make such strides in the wonderful, mind-boggling world of giant supercomputers, artificial intelligence, and deep learning, and to think that the company might be at the forefront of life- and society-improving research. If this is what Ampere is capable of, we can keep our hopes up for next-gen gaming, too, especially where AI and deep learning technologies like Nvidia DLSS are concerned.