Powerful End-to-End AI and HPC Data Center Platform
The NVIDIA A100 Tensor Core GPU delivers unprecedented acceleration at every scale for AI, data analytics, and high-performance computing (HPC) to tackle the world's toughest computing challenges. As the engine of the NVIDIA data center platform, A100 can efficiently scale to thousands of GPUs or, with NVIDIA Multi-Instance GPU (MIG) technology, be partitioned into as many as seven isolated GPU instances to accelerate workloads of all sizes.
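As a sketch of what the seven-way MIG partition looks like in practice, the split can be configured with `nvidia-smi` (assuming a 40 GB A100 on a Linux host with a MIG-capable driver; profile IDs can vary by driver version, so check `-lgip` output first):

```shell
# Enable MIG mode on GPU 0 (requires root; may need a GPU reset to take effect).
sudo nvidia-smi -i 0 -mig 1

# List the GPU instance profiles this A100 supports.
nvidia-smi mig -lgip

# Create seven 1g.5gb GPU instances (profile ID 19 on a 40 GB A100)
# and a default compute instance on each (-C).
sudo nvidia-smi mig -cgi 19,19,19,19,19,19,19 -C

# Verify: each MIG device now appears as its own entry.
nvidia-smi -L
```

Each resulting MIG instance has its own dedicated memory and compute slice, so workloads on one instance do not contend with the others.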
NVIDIA A100 Specifications
| Specification | Value | With Sparsity* |
| --- | --- | --- |
| Peak FP64 | 9.7 TF | |
| Peak FP64 Tensor Core | 19.5 TF | |
| Peak FP32 | 19.5 TF | |
| Peak TF32 Tensor Core | 156 TF | 312 TF |
| Peak BFLOAT16 Tensor Core | 312 TF | 624 TF |
| Peak FP16 Tensor Core | 312 TF | 624 TF |
| Peak INT8 Tensor Core | 624 TOPS | 1,248 TOPS |
| Peak INT4 Tensor Core | 1,248 TOPS | 2,496 TOPS |
| GPU Memory | 40 GB | |
| GPU Memory Bandwidth | 1,555 GB/s | |
| Interconnect | NVIDIA NVLink 600 GB/s**; PCIe Gen4 64 GB/s | |

\* With structural sparsity enabled.
\** PCIe GPUs connect via NVLink Bridge for up to two GPUs.
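The starred Tensor Core figures follow a simple pattern: each is exactly double its dense counterpart, reflecting the 2x peak speedup NVIDIA quotes for structural sparsity. A quick arithmetic check of the table's numbers (a sketch for illustration, not an official formula):

```python
# Dense peak Tensor Core throughput from the table above
# (TF for floating-point formats, TOPS for integer formats).
dense = {"TF32": 156, "BF16": 312, "FP16": 312, "INT8": 624, "INT4": 1248}

# Structural sparsity doubles each peak figure (the * column).
sparse = {fmt: 2 * rate for fmt, rate in dense.items()}

print(sparse["TF32"], sparse["FP16"], sparse["INT8"], sparse["INT4"])
# 312 624 1248 2496
```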

NVIDIA A100 for PCIe

| Specification | Value |
| --- | --- |
| Multi-Instance GPU | Various instance sizes with up to 7 MIGs @ 5 GB each |
| Form Factor | PCIe |
| Max TDP Power | 250 W |
| Delivered Performance of Top Apps | 90% |