Ampere (microarchitecture)


Nvidia Ampere
Release date: May 14, 2020 (2020-05-14)
Fabrication process: TSMC 7 nm (N7) for A100; Samsung 8 nm (8N) for the GeForce 30 series
Predecessors: Turing, Volta
Successor: Hopper
(Image: engraving of André-Marie Ampère)

Ampere is the codename for a graphics processing unit (GPU) microarchitecture developed by Nvidia as the successor to both the Volta and Turing architectures, officially announced on May 14, 2020. It is named after the French mathematician and physicist André-Marie Ampère.[1][2] Nvidia announced the next-generation GeForce 30 series consumer GPUs at a GeForce Special Event on September 1, 2020,[3][4] and the A100 80GB GPU at SC20 on November 16, 2020.[5] Mobile RTX graphics cards and the RTX 3060 were revealed on January 12, 2021.[6] At GPU Technology Conference 2021, Nvidia announced "Ampere Next Next" for a 2024 release, and at GTC 2022 it announced Ampere's successor, Hopper.

Details

Architectural improvements of the Ampere architecture include the following:

  • CUDA Compute Capability 8.0 for A100 and 8.6 for the GeForce 30 series[7] (a runtime query sketch follows this list)
  • TSMC's 7 nm FinFET process for A100
  • Custom version of Samsung's 8 nm process (8N) for the GeForce 30 series[8]
  • Third-generation Tensor Cores with FP16, bfloat16, TensorFloat-32 (TF32) and FP64 support and sparsity acceleration.[9] On GA100, each Tensor Core performs 256 FP16 FMA operations per clock, four times the per-core throughput of the previous Tensor Core generation (two times on GA10x), while the Tensor Core count is reduced from eight to four per SM.
  • Second-generation ray tracing cores; concurrent ray tracing, shading, and compute for the GeForce 30 series
  • High Bandwidth Memory 2 (HBM2) on A100 40GB & A100 80GB
  • GDDR6X memory for GeForce RTX 3090, RTX 3080 Ti, RTX 3080, RTX 3070 Ti
  • Double FP32 cores per SM on GA10x GPUs
  • NVLink 3.0 with a 50Gbit/s per pair throughput[9]
  • PCI Express 4.0 with SR-IOV support (SR-IOV is reserved only for A100)
  • Multi-instance GPU (MIG) virtualization and GPU partitioning feature in A100 supporting up to seven instances
  • PureVideo feature set K hardware video decoding with AV1 hardware decoding[10] for the GeForce 30 series and feature set J for A100
  • Five NVDEC video decoders on the A100
  • A new hardware-based five-core JPEG decode engine (NVJPG) supporting YUV420, YUV422, YUV444, YUV400 and RGBA formats; not to be confused with NVJPEG, Nvidia's GPU-accelerated library for JPEG encoding and decoding
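
The compute capability listed above can be checked at runtime with the CUDA runtime API. The following minimal sketch (an illustrative example, not drawn from the cited sources) prints each device's compute capability and flags major version 8, which corresponds to Ampere (8.0 for GA100, 8.6 for GA10x):

```cuda
// Illustrative sketch: query compute capability and detect Ampere parts.
// Compile with nvcc; uses only standard CUDA runtime calls.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        std::printf("No CUDA device found.\n");
        return 1;
    }
    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        std::printf("Device %d: %s, compute capability %d.%d\n",
                    dev, prop.name, prop.major, prop.minor);
        if (prop.major == 8) {
            // 8.0 -> GA100 (A100); 8.6 -> GA10x (GeForce 30 series, RTX A series)
            std::printf("  Ampere-class GPU detected.\n");
        }
    }
    return 0;
}
```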

Chips

  • GA100[11]
  • GA102
  • GA103
  • GA104
  • GA106
  • GA107

Comparison of Compute Capability: GP100 vs GV100 vs GA100[12]

GPU features | NVIDIA Tesla P100 | NVIDIA Tesla V100 | NVIDIA A100
GPU codename | GP100 | GV100 | GA100
GPU architecture | NVIDIA Pascal | NVIDIA Volta | NVIDIA Ampere
Compute capability | 6.0 | 7.0 | 8.0
Threads / warp | 32 | 32 | 32
Max warps / SM | 64 | 64 | 64
Max threads / SM | 2048 | 2048 | 2048
Max thread blocks / SM | 32 | 32 | 32
Max 32-bit registers / SM | 65536 | 65536 | 65536
Max registers / block | 65536 | 65536 | 65536
Max registers / thread | 255 | 255 | 255
Max thread block size | 1024 | 1024 | 1024
FP32 cores / SM | 64 | 64 | 64
Ratio of SM registers to FP32 cores | 1024 | 1024 | 1024
Shared memory size / SM | 64 KB | Configurable up to 96 KB | Configurable up to 164 KB
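
Most of the per-SM limits in this table are exposed through cudaDeviceProp, and the larger Ampere shared-memory capacity is only available to kernels that explicitly opt in. The sketch below is a hedged illustration under that assumption; the field and attribute names are the CUDA runtime's, while the kernel itself is a placeholder:

```cuda
// Illustrative sketch: read the per-SM limits listed above and opt a kernel
// into more than 48 KB of dynamic shared memory, which Ampere allows.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void dummyKernel() {
    extern __shared__ float buf[];   // dynamic shared memory, sized at launch
    if (threadIdx.x == 0) buf[0] = 0.0f;
}

int main() {
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);
    std::printf("Warp size:              %d\n",  prop.warpSize);
    std::printf("Max threads / SM:       %d\n",  prop.maxThreadsPerMultiProcessor);
    std::printf("32-bit registers / SM:  %d\n",  prop.regsPerMultiprocessor);
    std::printf("Shared memory / SM:     %zu KB\n", prop.sharedMemPerMultiprocessor / 1024);
    std::printf("Max opt-in shared/blk:  %zu KB\n", prop.sharedMemPerBlockOptin / 1024);

    // Request the largest dynamic shared-memory carve-out the device allows.
    size_t bytes = prop.sharedMemPerBlockOptin;
    cudaFuncSetAttribute(dummyKernel, cudaFuncAttributeMaxDynamicSharedMemorySize, (int)bytes);
    dummyKernel<<<1, 32, bytes>>>();
    cudaDeviceSynchronize();
    return 0;
}
```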

Comparison of Precision Support Matrix[13][14]

GPU | CUDA core precisions (FP16 FP32 FP64 INT1 INT4 INT8 TF32 BF16) | Tensor core precisions (FP16 FP32 FP64 INT1 INT4 INT8 TF32 BF16)
NVIDIA Tesla P4 | No Yes Yes No No Yes No No | No No No No No No No No
NVIDIA P100 | Yes Yes Yes No No No No No | No No No No No No No No
NVIDIA Volta | Yes Yes Yes No No Yes No No | Yes No No No No No No No
NVIDIA Turing | Yes Yes Yes No No Yes No No | Yes No No Yes Yes Yes No No
NVIDIA A100 | Yes Yes Yes No No Yes No Yes | Yes No Yes Yes Yes Yes Yes Yes

Legend:

  • FPnn: floating point with nn bits
  • INTn: integer with n bits
  • INT1: binary
  • TF32: TensorFloat-32
  • BF16: bfloat16 (a bit-layout sketch follows this list)
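
As a rough illustration of the legend (not taken from the article's sources): TF32 keeps FP32's sign bit and 8-bit exponent but only 10 mantissa bits, bfloat16 keeps the 8-bit exponent with 7 mantissa bits, and FP16 uses a 5-bit exponent with 10 mantissa bits. The sketch below approximates these precisions by truncating an FP32 mantissa; real Tensor Core hardware rounds rather than truncates:

```cuda
// Illustrative only: approximate a float at TF32 and bfloat16 precision by
// truncating mantissa bits (hardware rounds; truncation is just for show).
// Bit layouts: FP32 = 1 sign + 8 exponent + 23 mantissa,
//              TF32 = 1 + 8 + 10 (stored in a 32-bit word),
//              BF16 = 1 + 8 + 7,  FP16 = 1 + 5 + 10.
#include <cstdio>
#include <cstdint>
#include <cstring>

static float truncate_mantissa(float x, int kept_mantissa_bits) {
    uint32_t bits;
    std::memcpy(&bits, &x, sizeof bits);                  // reinterpret FP32 bits
    bits &= ~((1u << (23 - kept_mantissa_bits)) - 1u);    // drop low mantissa bits
    std::memcpy(&x, &bits, sizeof bits);
    return x;
}

int main() {
    float v = 1.2345678f;
    std::printf("FP32 : %.9f\n", v);
    std::printf("TF32 : %.9f (10 mantissa bits)\n", truncate_mantissa(v, 10));
    std::printf("BF16 : %.9f (7 mantissa bits)\n",  truncate_mantissa(v, 7));
    return 0;
}
```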

Comparison of Decode Performance

GPU | H.264 decode (concurrent 1080p30 streams) | H.265 (HEVC) decode (concurrent 1080p30 streams) | VP9 decode (concurrent 1080p30 streams)
V100 | 16 | 22 | 22
A100 | 75 | 157 | 108

A100 accelerator and DGX A100

The Ampere-based A100 accelerator was announced and released on May 14, 2020.[9] The A100 offers 19.5 teraflops of FP32 performance, 6912 CUDA cores, 40 GB of graphics memory, and 1.6 TB/s of graphics memory bandwidth.[15] The A100 accelerator was initially available only in the third-generation DGX server, which includes eight A100s.[9] The DGX A100 also includes 15 TB of PCIe gen 4 NVMe storage,[15] two 64-core AMD Rome 7742 CPUs, 1 TB of RAM, and Mellanox-powered HDR InfiniBand interconnects. The initial price for the DGX A100 was $199,000.[9]
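
These headline figures can be reproduced approximately from the core count, boost clock and memory specifications quoted in the comparison table below, using the usual peak estimate of two FLOPs per FMA per CUDA core per clock. The per-pin HBM2 rate of 2.43 Gbit/s in the sketch is an assumption chosen to match the quoted 1555 GB/s; the table rounds it to 2.4 Gbit/s. A back-of-the-envelope check:

```cuda
// Back-of-the-envelope check of the A100 40GB headline figures.
// Peak FP32 ~= CUDA cores x 2 FLOPs per FMA x boost clock.
// Memory bandwidth ~= per-pin data rate x bus width / 8.
#include <cstdio>

int main() {
    const double cuda_cores     = 6912;
    const double boost_clock_hz = 1.41e9;   // 1410 MHz boost clock
    const double fp32_tflops    = cuda_cores * 2.0 * boost_clock_hz / 1e12;

    const double pin_rate_bps    = 2.43e9;  // assumed ~2.4 Gbit/s per pin (HBM2)
    const double bus_width_bits  = 5120;
    const double bandwidth_gbs   = pin_rate_bps * bus_width_bits / 8.0 / 1e9;

    std::printf("Peak FP32: ~%.1f TFLOPS (quoted: 19.5)\n", fp32_tflops);
    std::printf("Memory bandwidth: ~%.0f GB/s (quoted: ~1555, i.e. ~1.6 TB/s)\n", bandwidth_gbs);
    return 0;
}
```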

Comparison of accelerators used in DGX:[16][17][18]



Accelerator | H100 | A100 80GB | A100 40GB | V100 32GB | V100 16GB | P100
Architecture | Hopper | Ampere | Ampere | Volta | Volta | Pascal
Socket | SXM5 | SXM4 | SXM4 | SXM3 | SXM2 | SXM
FP32 CUDA cores | 16896 | 6912 | 6912 | 5120 | 5120 | N/A
FP64 cores (excl. Tensor) | 4608 | 3456 | 3456 | 2560 | 2560 | 1792
Mixed INT32/FP32 cores | 16896 | 6912 | 6912 | N/A | N/A | 3584
INT32 cores | N/A | N/A | N/A | 5120 | 5120 | N/A
Boost clock | 1780 MHz | 1410 MHz | 1410 MHz | 1530 MHz | 1530 MHz | 1480 MHz
Memory clock | 4.8 Gbit/s HBM3 | 3.2 Gbit/s HBM2 | 2.4 Gbit/s HBM2 | 1.75 Gbit/s HBM2 | 1.75 Gbit/s HBM2 | 1.4 Gbit/s HBM2
Memory bus width | 5120-bit | 5120-bit | 5120-bit | 4096-bit | 4096-bit | 4096-bit
Memory bandwidth | 3072 GB/s | 2039 GB/s | 1555 GB/s | 900 GB/s | 900 GB/s | 720 GB/s
VRAM | 80 GB | 80 GB | 40 GB | 32 GB | 16 GB | 16 GB
Single precision (FP32) | 60 TFLOPS | 19.5 TFLOPS | 19.5 TFLOPS | 15.7 TFLOPS | 15.7 TFLOPS | 10.6 TFLOPS
Double precision (FP64) | 30 TFLOPS | 9.7 TFLOPS | 9.7 TFLOPS | 7.8 TFLOPS | 7.8 TFLOPS | 5.3 TFLOPS
INT8 (non-Tensor) | N/A | N/A | N/A | 62 TOPS | 62 TOPS | N/A
INT8 dense Tensor | 4000 TOPS | 624 TOPS | 624 TOPS | N/A | N/A | N/A
INT32 | N/A | 19.5 TOPS | 19.5 TOPS | 15.7 TOPS | 15.7 TOPS | N/A
FP16 | N/A | 78 TFLOPS | 78 TFLOPS | 31.4 TFLOPS | 31.4 TFLOPS | 21.2 TFLOPS
FP16 dense Tensor | 2000 TFLOPS | 312 TFLOPS | 312 TFLOPS | 125 TFLOPS | 125 TFLOPS | N/A
bfloat16 dense Tensor | 2000 TFLOPS | 312 TFLOPS | 312 TFLOPS | N/A | N/A | N/A
TensorFloat-32 (TF32) dense Tensor | 1000 TFLOPS | 156 TFLOPS | 156 TFLOPS | N/A | N/A | N/A
FP64 dense Tensor | 60 TFLOPS | 19.5 TFLOPS | 19.5 TFLOPS | N/A | N/A | N/A
Interconnect (NVLink) | 900 GB/s | 600 GB/s | 600 GB/s | 300 GB/s | 300 GB/s | 160 GB/s
GPU | GH100 | GA100 | GA100 | GV100 | GV100 | GP100
L1 cache size | 25344 KB (192 KB × 132) | 20736 KB (192 KB × 108) | 20736 KB (192 KB × 108) | 10240 KB (128 KB × 80) | 10240 KB (128 KB × 80) | 1344 KB (24 KB × 56)
L2 cache size | 51200 KB | 40960 KB | 40960 KB | 6144 KB | 6144 KB | 4096 KB
TDP | 700 W | 400 W | 400 W | 350 W | 300 W | 300 W
GPU die size | 814 mm² | 826 mm² | 826 mm² | 815 mm² | 815 mm² | 610 mm²
Transistor count | 80 billion | 54.2 billion | 54.2 billion | 21.1 billion | 21.1 billion | 15.3 billion
Manufacturing process | TSMC 4 nm (N4) | TSMC 7 nm (N7) | TSMC 7 nm (N7) | TSMC 12 nm (FFN) | TSMC 12 nm (FFN) | TSMC 16 nm (FinFET+)
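
As a hedged cross-check, the A100's dense FP16 Tensor figure in the table follows from the per-core rate given in the Details section, assuming 108 active SMs (consistent with the 192 KB × 108 L1 cache entry) and four Tensor Cores per SM:

```cuda
// Cross-check of the A100 dense FP16 Tensor throughput listed above,
// assuming 108 SMs, 4 Tensor Cores per SM, and 256 FP16 FMA/clock per core.
#include <cstdio>

int main() {
    const double sms                = 108;
    const double tensor_cores_per_sm = 4;
    const double fma_per_clock      = 256;     // per third-generation Tensor Core (GA100)
    const double flops_per_fma      = 2;       // one multiply + one add
    const double boost_clock_hz     = 1.41e9;  // 1410 MHz

    const double tflops = sms * tensor_cores_per_sm * fma_per_clock
                          * flops_per_fma * boost_clock_hz / 1e12;
    std::printf("Dense FP16 Tensor: ~%.0f TFLOPS (table: 312)\n", tflops);
    return 0;
}
```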

Products using Ampere

  • GeForce MX series
    • GeForce MX570 (mobile) (GA107)
  • GeForce 20 series
    • GeForce RTX 2050 (mobile) (GA107)
  • GeForce 30 series
    • GeForce RTX 3050 (mobile) (GA107)
    • GeForce RTX 3050 (GA106 or GA107)[19]
    • GeForce RTX 3050 Ti (mobile) (GA107)
    • GeForce RTX 3060 (mobile) (GA106)
    • GeForce RTX 3060 (GA106 or GA104)[20]
    • GeForce RTX 3060 Ti (GA104 or GA103)[21]
    • GeForce RTX 3070 (mobile) (GA104)
    • GeForce RTX 3070 (GA104)
    • GeForce RTX 3070 Ti (mobile) (GA104)
    • GeForce RTX 3070 Ti (GA104)
    • GeForce RTX 3080 (mobile) (GA104)
    • GeForce RTX 3080 (GA102)
    • GeForce RTX 3080 12GB (GA102)
    • GeForce RTX 3080 Ti (mobile) (GA103)
    • GeForce RTX 3080 Ti (GA102)
    • GeForce RTX 3090 (GA102)
    • GeForce RTX 3090 Ti (GA102)
  • Nvidia Workstation GPUs (formerly Quadro)
    • RTX A2000 (mobile) (GA107)
    • RTX A2000 (GA106)
    • RTX A3000 (mobile) (GA104)
    • RTX A4000 (mobile) (GA104)
    • RTX A4000 (GA104)
    • RTX A4500 (GA102)
    • RTX A5000 (mobile) (GA104)
    • RTX A5000 (GA102)
    • RTX A5500 (GA102)
    • RTX A6000 (GA102)
  • Nvidia Data Center GPUs (formerly Tesla)
    • Nvidia A2 (GA107)
    • Nvidia A10 (GA102)
    • Nvidia A16 (4 × GA107)
    • Nvidia A30 (GA100)
    • Nvidia A40 (GA102)
    • Nvidia A100 (GA100)
    • Nvidia A100 80GB (GA100)

Products using Ampere (per chip)

  • GA107: GeForce MX570 (mobile), GeForce RTX 2050 (mobile), GeForce RTX 3050 (mobile), GeForce RTX 3050 Ti (mobile), GeForce RTX 3050[19], RTX A2000 (mobile), Nvidia A2, Nvidia A16 (4 × GA107)
  • GA106: GeForce RTX 3050, GeForce RTX 3060 (mobile), GeForce RTX 3060, RTX A2000
  • GA104: GeForce RTX 3060[20], GeForce RTX 3060 Ti, GeForce RTX 3070 (mobile), GeForce RTX 3070, GeForce RTX 3070 Ti (mobile), GeForce RTX 3070 Ti, GeForce RTX 3080 (mobile), RTX A3000 (mobile), RTX A4000 (mobile), RTX A4000, RTX A5000 (mobile)
  • GA103: GeForce RTX 3060 Ti[21], GeForce RTX 3080 Ti (mobile)
  • GA102: GeForce RTX 3080, GeForce RTX 3080 12GB, GeForce RTX 3080 Ti, GeForce RTX 3090, GeForce RTX 3090 Ti, RTX A4500, RTX A5000, RTX A5500, RTX A6000, Nvidia A10, Nvidia A40
  • GA100: Nvidia A30, Nvidia A100, Nvidia A100 80GB

References

  1. ^ "NVIDIA's New Ampere Data Center GPU in Full Production". NVIDIA Newsroom.
  2. ^ "NVIDIA Ampere Architecture In-Depth". NVIDIA Developer Blog. May 14, 2020.
  3. ^ "NVIDIA Delivers Greatest-Ever Generational Leap with GeForce RTX 30 Series GPUs". NVIDIA Newsroom.
  4. ^ "NVIDIA GeForce Ultimate Countdown". NVIDIA.
  5. ^ "NVIDIA Doubles Down: Announces A100 80GB GPU, Supercharging World's Most Powerful GPU for AI Supercomputing".
  6. ^ "Join us for an NVIDIA GeForce RTX: Game on Special Broadcast Event".
  7. ^ "I.7. Compute Capability 8.x". docs.nvidia.com. Retrieved September 23, 2020.
  8. ^ B., Dominik. "Samsung's old 8nm tech at the heart of NVIDIA's monstrous Ampere cards". SamMobile. Retrieved September 19, 2020.
  9. ^ a b c d e Smith, Ryan (May 14, 2020). "NVIDIA Ampere Unleashed: NVIDIA Announces New GPU Architecture, A100 GPU, and Accelerator". AnandTech.
  10. ^ "GeForce RTX 30 Series GPUs: Ushering In A New Era of Video Content With AV1 Decode". NVIDIA.
  11. ^ Morgan, Timothy Prickett (May 29, 2020). "Diving Deep Into The Nvidia Ampere GPU Architecture". The Next Platform. Retrieved March 24, 2022.
  12. ^ "NVIDIA A100 Tensor Core GPU Architecture" (PDF). www.nvidia.com. Retrieved September 18, 2020.
  13. ^ "NVIDIA Tensor Cores: Versatility for HPC & AI". NVIDIA.
  14. ^ "Abstract". docs.nvidia.com.
  15. ^ a b Tom Warren; James Vincent (May 14, 2020). "Nvidia's first Ampere GPU is designed for data centers and AI, not your PC". The Verge.
  16. ^ Smith, Ryan (March 22, 2022). "NVIDIA Hopper GPU Architecture and H100 Accelerator Announced: Working Smarter and Harder". AnandTech.
  17. ^ Smith, Ryan (May 14, 2020). "NVIDIA Ampere Unleashed: NVIDIA Announces New GPU Architecture, A100 GPU, and Accelerator". AnandTech.
  18. ^ "NVIDIA Tesla V100 tested: near unbelievable GPU power". TweakTown. September 17, 2017.
  19. ^ a b Igor, Wallossek (February 13, 2022). "The two faces of the GeForce RTX 3050 8GB". Igor's Lab. Retrieved February 23, 2022.
  20. ^ a b Shilov, Anton (September 25, 2021). "Gainward and Galax List GeForce RTX 3060 Cards With GA104 GPU". Tom's Hardware. Retrieved September 23, 2022.
  21. ^ a b Tyson, Mark (February 23, 2022). "Zotac Debuts First RTX 3060 Ti Desktop Cards With GA103 GPU". Tom's Hardware. Retrieved September 23, 2022.
