Turing (microarchitecture)

From Wikipedia, the free encyclopedia

Nvidia Turing
Release date: September 20, 2018
Fabrication process: TSMC 12 nm (FinFET)

Turing is the codename for a graphics processing unit (GPU) microarchitecture developed by Nvidia. It is named after the prominent mathematician and computer scientist Alan Turing. The architecture was first introduced in August 2018 at SIGGRAPH 2018 in the workstation-oriented Quadro RTX cards,[2] and one week later at Gamescom in the consumer GeForce RTX 20 series graphics cards.[3] Building on the preliminary work of Volta, its HPC-exclusive predecessor, the Turing architecture introduces the first consumer products capable of real-time ray tracing, a longstanding goal of the computer graphics industry. Key elements include dedicated artificial intelligence processors ("Tensor cores") and dedicated ray-tracing processors ("RT cores"). Turing exposes its ray-tracing capabilities through DXR, OptiX, and Vulkan. In February 2019, Nvidia released the GeForce 16 series of GPUs, which uses the Turing design but lacks the RT and Tensor cores.

Turing is manufactured using TSMC's 12 nm FinFET semiconductor fabrication process. The high-end TU102 GPU contains 18.6 billion transistors on this process.[4] Turing also uses GDDR6 memory, supplied by Samsung Electronics and previously by Micron Technology.


Die shot of the TU104 GPU used in RTX 2080 cards
Die shot of the TU106 GPU used in RTX 2060 cards
Die shot of the TU116 GPU used in GTX 1660 cards

The Turing microarchitecture combines multiple types of specialized processor core and enables an implementation of limited real-time ray tracing.[5] This is accelerated by new RT (ray-tracing) cores, which are designed to traverse bounding volume hierarchies (BVHs) and accelerate intersection tests between rays and individual triangles.
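In software terms, the innermost operation an RT core accelerates is the ray-triangle intersection test. The sketch below is purely illustrative Python (real RT cores implement this in fixed-function hardware, together with the BVH traversal that culls most candidate triangles); it uses the well-known Möller–Trumbore algorithm:

```python
def ray_triangle_intersect(orig, direction, v0, v1, v2, eps=1e-9):
    """Möller–Trumbore ray-triangle intersection.

    Returns the distance t along the ray to the hit point,
    or None if the ray misses the triangle.
    """
    def sub(a, b): return tuple(x - y for x, y in zip(a, b))
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def cross(a, b): return (a[1]*b[2] - a[2]*b[1],
                             a[2]*b[0] - a[0]*b[2],
                             a[0]*b[1] - a[1]*b[0])

    e1, e2 = sub(v1, v0), sub(v2, v0)
    h = cross(direction, e2)
    det = dot(e1, h)
    if abs(det) < eps:          # ray parallel to the triangle's plane
        return None
    f = 1.0 / det
    s = sub(orig, v0)
    u = f * dot(s, h)
    if u < 0.0 or u > 1.0:      # outside first barycentric bound
        return None
    q = cross(s, e1)
    v = f * dot(direction, q)
    if v < 0.0 or u + v > 1.0:  # outside second barycentric bound
        return None
    t = f * dot(e2, q)
    return t if t > eps else None
```

Tracing a scene requires billions of such tests per frame at real-time rates, which is why offloading both the test and the preceding BVH traversal to dedicated cores matters.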

Features

The GDDR6 memory is produced by Samsung Electronics for the Quadro RTX series.[7] The RTX 20 series initially launched with Micron memory chips, before switching to Samsung chips by November 2018.[8]


Nvidia reported rasterization (CUDA) performance gains for existing titles of approximately 30–50% over the previous generation.[9][10]


The ray tracing performed by the RT cores can be used to produce reflections, refractions and shadows, replacing traditional raster techniques such as cube maps and depth maps. Rather than replacing rasterization entirely, however, the information gathered from ray tracing can be used to augment the shading with far more photo-realistic information, particularly for off-screen geometry that rasterization cannot see. Nvidia stated that ray-tracing performance is roughly eight times that of the previous consumer architecture, Pascal.
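The hybrid approach can be illustrated with the simplest such augmentation, a shadow ray: a rasterized surface point is lit only if the ray from that point toward the light is unobstructed. The following is an illustrative Python sketch (a single sphere stands in for the scene geometry; a real renderer would query a BVH of triangles instead):

```python
import math

def in_shadow(point, light, sphere_center, sphere_radius):
    """Cast a shadow ray from a surface point toward a light.

    Returns True if the sphere occluder blocks the ray before it
    reaches the light, i.e. the point is in shadow.
    """
    to_light = tuple(l - p for l, p in zip(light, point))
    dist = math.sqrt(sum(c * c for c in to_light))
    d = tuple(c / dist for c in to_light)            # unit ray direction
    oc = tuple(p - c for p, c in zip(point, sphere_center))
    b = sum(o * k for o, k in zip(oc, d))
    c = sum(o * o for o in oc) - sphere_radius ** 2
    disc = b * b - c                                 # quadratic discriminant
    if disc < 0:                                     # ray misses the sphere
        return False
    t = -b - math.sqrt(disc)                         # nearest hit distance
    return 0 < t < dist                              # occluder before the light?
```

Unlike a shadow map, this test is exact per pixel and needs no pre-rendered depth pass, which is why ray-traced shadows avoid the aliasing and bias artifacts of raster techniques.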

Tensor cores

Generation of the final image is further accelerated by the Tensor cores, which are used to fill in the blanks in a partially rendered image, a technique known as denoising. The Tensor cores execute the result of deep-learning training to determine, for example, how to increase the resolution of images generated by a specific application or game, the approach Nvidia markets as DLSS (Deep Learning Super Sampling). In the Tensor cores' primary usage, a neural network is trained on an Nvidia supercomputer, which is taught by example what results are desired and determines a method for achieving them; the trained network is then delivered to consumers via driver updates and executed on their GPUs' Tensor cores.[9] The supercomputer itself uses a large number of Tensor cores.
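The primitive a Tensor core accelerates is a fused matrix multiply-accumulate, D = A×B + C, on small matrix tiles; neural-network inference such as denoising or upscaling decomposes into enormous numbers of these. A plain-Python sketch of the operation on a 4×4 tile (illustrative only; the hardware performs it in mixed precision, one tile per core per clock):

```python
def mma_4x4(a, b, c):
    """Fused multiply-accumulate D = A*B + C on 4x4 matrices,
    the tile-level primitive a Tensor core executes."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) + c[i][j]
             for j in range(4)] for i in range(4)]
```

Fusing the accumulate with the multiply is what lets long chains of these tiles build up a full matrix product without round-tripping partial sums through memory.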


  • TU102
  • TU104
  • TU106
  • TU116
  • TU117


Turing's development platform is called RTX. RTX ray-tracing features can be accessed through Microsoft's DXR, through OptiX, and through Vulkan extensions (the last of which are also available in the Linux drivers).[11] It includes access to AI-accelerated features through NGX. The Mesh Shader and Shading Rate Image functionalities are accessible through DirectX 12, Vulkan, and OpenGL extensions on Windows and Linux.[12]

The Windows 10 October 2018 Update includes the public release of DirectX Raytracing.[13][14]

Products using Turing

  • GeForce 16 series
    • GeForce GTX 1630
    • GeForce GTX 1650
    • GeForce GTX 1650 (Mobile)
    • GeForce GTX 1650 Max-Q (Mobile)
    • GeForce GTX 1650 (GDDR6)
    • GeForce GTX 1650 Super
    • GeForce GTX 1650 Ti (Mobile)
    • GeForce GTX 1660
    • GeForce GTX 1660 (Mobile)
    • GeForce GTX 1660 Super
    • GeForce GTX 1660 Ti
    • GeForce GTX 1660 Ti (Mobile)
    • GeForce GTX 1660 Ti Max-Q (Mobile)
  • GeForce 20 series
    • GeForce RTX 2060
    • GeForce RTX 2060 12GB
    • GeForce RTX 2060 (Mobile)
    • GeForce RTX 2060 Max-Q (Mobile)
    • GeForce RTX 2060 Super
    • GeForce RTX 2060 Super (Mobile)
    • GeForce RTX 2070
    • GeForce RTX 2070 (Mobile)
    • GeForce RTX 2070 Max-Q (Mobile)
    • GeForce RTX 2070 Max-Q Refresh (Mobile)
    • GeForce RTX 2070 Super
    • GeForce RTX 2070 Super (Mobile)
    • GeForce RTX 2070 Super Max-Q (Mobile)
    • GeForce RTX 2080
    • GeForce RTX 2080 (Mobile)
    • GeForce RTX 2080 Max-Q (Mobile)
    • GeForce RTX 2080 Super
    • GeForce RTX 2080 Super (Mobile)
    • GeForce RTX 2080 Super Max-Q (Mobile)
    • GeForce RTX 2080 Ti
    • Titan RTX
  • Nvidia Quadro
    • Quadro RTX 3000 (Mobile)
    • Quadro RTX 4000
    • Quadro RTX 5000
    • Quadro RTX 6000
    • Quadro RTX 8000
    • Nvidia T400
    • Nvidia T600
    • Nvidia T1000
  • Nvidia Tesla
    • Tesla T4

References


  1. ^ Tom Warren; James Vincent (May 14, 2020). "Nvidia's first Ampere GPU is designed for data centers and AI, not your PC". The Verge. New “RTX 3080” cards could be just months away then, but we still don’t know for sure if they’ll be using this new Ampere architecture.
  2. ^ "NVIDIA Reveals Next-Gen Turing GPU Architecture: NVIDIA Doubles-Down on Ray Tracing, GDDR6, & More".
  3. ^ "NVIDIA Announces the GeForce RTX 20 Series: RTX 2080 Ti & 2080 on Sept. 20th, RTX 2070 in October". Anandtech.
  4. ^ "NVIDIA TURING GPU ARCHITECTURE: Graphics Reinvented" (PDF). Nvidia. 2018. Retrieved June 28, 2019.
  5. ^ "Nvidia announces RTX 2000 GPU series with '6 times more performance' and ray-tracing". The Verge. Retrieved August 20, 2018.
  6. ^ "The NVIDIA Turing GPU Architecture Deep Dive: Prelude to GeForce RTX". AnandTech.
  7. ^ Mujtaba, Hassan (August 14, 2018). "Samsung GDDR6 Memory Powers NVIDIA's Turing GPU Based Quadro RTX Cards". wccftech.com. Retrieved June 19, 2019.
  8. ^ Maislinger, Florian (November 21, 2018). "Faulty RTX 2080 Ti: Nvidia switches from Micron to Samsung for GDDR6 memory". PC Builder's Club. Retrieved July 15, 2019.
  9. ^ a b "#BeForTheGame". Twitch.tv.
  10. ^ Jeff Fisher (August 20, 2018). "GeForce RTX Propels PC Gaming's Golden Age with Real-Time Ray Tracing". Nvidia.
  11. ^ "NVIDIA RTX platform". Nvidia. July 20, 2018.
  12. ^ "Turing Extensions for Vulkan and OpenGL". Nvidia. September 11, 2018.
  13. ^ "Windows 10 October 2018 Update a Catalyst for Ray-Traced Games | NVIDIA Blog". October 2, 2018.
  14. ^ "DirectX Raytracing and the Windows 10 October 2018 Update". October 2, 2018.

External links