The Universal System for AI Infrastructure
NVIDIA DGX A100 is the universal system for all AI workloads, from analytics to training to inference.
It offers unprecedented compute density, performance, and flexibility in the world’s first 5 petaFLOPS AI system.
NVIDIA DGX A100 features the world’s most advanced accelerator, the NVIDIA A100 Tensor Core GPU, enabling enterprises to consolidate training, inference, and analytics into a unified, easy-to-deploy AI infrastructure that includes direct access to NVIDIA AI experts.
DGX A100 Components
- 8x NVIDIA A100 GPUs with 320 GB Total GPU Memory
  - 12 NVLinks/GPU, 600 GB/s GPU-to-GPU Bi-directional Bandwidth
- 6x NVIDIA NVSwitches
  - 4.8 TB/s Bi-directional Bandwidth, 2X More than Previous-Generation NVSwitch
- 9x Mellanox ConnectX-6 200 Gb/s Network Interfaces
  - 450 GB/s Peak Bi-directional Bandwidth
- Dual 64-Core AMD CPUs and 1 TB System Memory
  - 3.2X More Cores to Power the Most Intensive AI Jobs
- 15 TB Gen4 NVMe SSD
  - 25 GB/s Peak Bandwidth, 2X Faster than Gen3 NVMe SSDs
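The headline bandwidth and memory figures above follow from the per-component specs; a minimal sketch of the arithmetic, assuming NVLink 3.0's published 50 GB/s bidirectional bandwidth per link and the 40 GB A100 variant (both taken from NVIDIA's public specs, not stated on this page):

```python
# Back-of-the-envelope check of the DGX A100 figures listed above.
NVLINKS_PER_GPU = 12     # NVLinks per A100
GB_PER_NVLINK = 50       # GB/s bidirectional per NVLink 3.0 link (assumption)
NUM_GPUS = 8             # A100 GPUs in a DGX A100
GPU_MEMORY_GB = 40       # per-GPU HBM2 in this configuration (assumption)

per_gpu_bw = NVLINKS_PER_GPU * GB_PER_NVLINK      # 600 GB/s GPU-to-GPU
fabric_bw_tbs = NUM_GPUS * per_gpu_bw / 1000      # 4.8 TB/s across NVSwitch
total_memory_gb = NUM_GPUS * GPU_MEMORY_GB        # 320 GB total GPU memory

print(per_gpu_bw, fabric_bw_tbs, total_memory_gb)
```

Each derived value matches the corresponding line in the component list, which is a useful sanity check when comparing configurations.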
