The NVIDIA A100 Tensor Core GPU delivers unprecedented acceleration for AI, data analytics, and high-performance computing (HPC) workloads to address the world's most complex computing challenges. As the engine of the NVIDIA data center platform, the A100 can scale to interconnect thousands of GPUs or, with Multi-Instance GPU (MIG) technology, be partitioned into as many as seven GPU instances to accelerate workloads of every size. The A100's third-generation Tensor Cores accelerate more levels of precision for different workloads, reducing data access time and time to market.
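One of the levels of acceleration the Tensor Cores add is 2:4 structured sparsity, which doubles peak throughput at each precision. The sketch below (a minimal illustration, assuming the publicly quoted A100 80 GB peak figures) shows how the dense and sparse numbers relate:

```python
# Illustrative only: dense vs. structured-sparsity peak Tensor Core
# throughput on the A100 (assumed: publicly quoted 80 GB figures).
DENSE_PEAK = {
    "TF32": 156,   # TFLOPS
    "BF16": 312,   # TFLOPS
    "FP16": 312,   # TFLOPS
    "INT8": 624,   # TOPS
}

def sparse_peak(precision: str) -> int:
    """Peak throughput with 2:4 structured sparsity: 2x the dense figure."""
    return 2 * DENSE_PEAK[precision]

for p in DENSE_PEAK:
    print(f"{p}: {DENSE_PEAK[p]} dense -> {sparse_peak(p)} with sparsity")
```

This is why some spec sheets quote, say, 312 TFLOPS for TF32: that is the 156 TFLOPS dense figure with sparsity applied.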
ACCELERATE AI WORKFLOWS
Memory: 80 GB HBM2 with ECC, 5,120-bit interface (bandwidth: 1,935 GB/s)
CUDA cores: 6912
FP64: 9.7 TFLOPS
FP32: 19.5 TFLOPS
TF32 Tensor Core: 156 TFLOPS (312 TFLOPS with sparsity)
BFLOAT16 Tensor Core: 312 TFLOPS (624 TFLOPS with sparsity)
FP16 Tensor Core: 312 TFLOPS (624 TFLOPS with sparsity)
INT8 Tensor Core: 624 TOPS (1,248 TOPS with sparsity)
Up to 7 MIG instances @ 10 GB each
Passive cooling