
NVIDIA A100

Test now in the MEGWARE Benchmark Center

Powerful End-to-End AI and HPC Data Center Platform

The NVIDIA A100 Tensor Core GPU delivers unprecedented acceleration at every scale for AI, data analytics, and high-performance computing (HPC) to tackle the world’s toughest computing challenges.

As the engine of the NVIDIA data center platform, A100 scales efficiently to thousands of GPUs or, with NVIDIA Multi-Instance GPU (MIG) technology, can be partitioned into up to seven isolated GPU instances to accelerate workloads of every size.
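To illustrate the MIG partitioning mentioned above, the sketch below enumerates the MIG instances present on one GPU. It is a minimal example, assuming the nvidia-ml-py bindings (imported as pynvml), a single A100 at device index 0, and that an administrator has already enabled MIG mode and created the instances; the device index and output format are placeholders.

    # Minimal sketch: list the MIG instances on the first GPU in the node.
    import pynvml

    pynvml.nvmlInit()
    try:
        gpu = pynvml.nvmlDeviceGetHandleByIndex(0)           # first physical GPU (assumed index)
        current, pending = pynvml.nvmlDeviceGetMigMode(gpu)  # current/pending MIG mode flags
        if current != pynvml.NVML_DEVICE_MIG_ENABLE:
            print("MIG mode is not enabled on this GPU")
        else:
            # An A100 exposes at most seven MIG slots; not every slot has to be
            # populated, so unpopulated slots are simply skipped.
            for slot in range(pynvml.nvmlDeviceGetMaxMigDeviceCount(gpu)):
                try:
                    mig = pynvml.nvmlDeviceGetMigDeviceHandleByIndex(gpu, slot)
                except pynvml.NVMLError:
                    continue
                mem = pynvml.nvmlDeviceGetMemoryInfo(mig)
                print(f"MIG slot {slot}: {mem.total / 1e9:.1f} GB of GPU memory")
    finally:
        pynvml.nvmlShutdown()

Each instance reported this way can then be assigned to a separate job, for example by setting CUDA_VISIBLE_DEVICES to that instance's UUID, so that several workloads share one physical A100 without interfering with each other.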

NVIDIA A100 Specifications

Peak FP64: 9.7 TF
Peak FP64 Tensor Core: 19.5 TF
Peak FP32: 19.5 TF
Peak TF32 Tensor Core: 156 TF | 312 TF*
Peak BFLOAT16 Tensor Core: 312 TF | 624 TF*
Peak FP16 Tensor Core: 312 TF | 624 TF*
Peak INT8 Tensor Core: 624 TOPS | 1,248 TOPS*
Peak INT4 Tensor Core: 1,248 TOPS | 2,496 TOPS*
GPU Memory: 40 GB
GPU Memory Bandwidth: 1,555 GB/s
Interconnect: NVIDIA NVLink 600 GB/s** | PCIe Gen4 64 GB/s

* With sparsity
** For the PCIe form factor, NVLink is provided via an NVLink Bridge connecting up to two GPUs
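To relate the peak rates and the memory bandwidth in the table, the short, purely illustrative calculation below reuses only the figures above and estimates how many floating-point operations a kernel must perform per byte of GPU-memory traffic before compute, rather than bandwidth, becomes the limit.

    # Illustrative arithmetic using only the table's figures (dense, no sparsity).
    peak_fp64_tc = 19.5e12      # Peak FP64 Tensor Core, FLOP/s
    peak_fp16_tc = 312e12       # Peak FP16 Tensor Core, FLOP/s
    mem_bandwidth = 1_555e9     # GPU memory bandwidth, bytes/s

    # Balance point = peak compute rate / memory bandwidth (FLOP per byte moved).
    print(f"FP64 Tensor Core balance point: {peak_fp64_tc / mem_bandwidth:.1f} FLOP/byte")
    print(f"FP16 Tensor Core balance point: {peak_fp16_tc / mem_bandwidth:.1f} FLOP/byte")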


NVIDIA A100 for PCIe

Multi-Instance GPU: Various instance sizes with up to 7 MIGs @ 5 GB
Form Factor: PCIe
Max TDP Power: 250 W
Delivered Performance of Top Apps: 90%


Test the NVIDIA A100 now in the MEGWARE Benchmark Center