Kepler (microarchitecture)

Nvidia Kepler
Release date: April 2012
Fabrication process: TSMC 28 nm
History
Predecessor: Fermi
Successor: Maxwell

Kepler is the codename for a GPU microarchitecture developed by Nvidia, first introduced at retail in April 2012,[1] as the successor to the Fermi microarchitecture. Kepler was Nvidia's first microarchitecture to focus on energy efficiency. Most GeForce 600 series, most GeForce 700 series, and some GeForce 800M series GPUs were based on Kepler, all manufactured in 28 nm. Kepler also found use in the GK20A, the GPU component of the Tegra K1 SoC, as well as in the Quadro Kxxx series, the Quadro NVS 510, and Nvidia Tesla computing modules. Kepler was followed by the Maxwell microarchitecture and used alongside Maxwell in the GeForce 700 series and GeForce 800M series.

The architecture is named after Johannes Kepler, a German mathematician and key figure in the 17th-century scientific revolution.

Overview

Die shot of a GK110 A1 GPU, found inside GeForce GTX Titan cards

Where the goal of Nvidia's previous architecture was to increase performance on compute and tessellation, with the Kepler architecture Nvidia focused on efficiency, programmability and performance.[2][3] The efficiency aim was achieved through the use of a unified GPU clock, simplified static scheduling of instructions and a higher emphasis on performance per watt.[4] By abandoning the separate shader clock found in its previous GPU designs, Nvidia increased efficiency, even though additional cores are required to reach higher levels of performance. This is not only because the cores are more power-friendly (two Kepler cores use about 90% of the power of one Fermi core, according to Nvidia's numbers), but also because the change to a unified GPU clock scheme delivers a 50% reduction in power consumption in that area.[5]

The programmability aim was achieved with Kepler's Hyper-Q, Dynamic Parallelism and several new Compute Capability 3.x features. These allowed higher GPU utilization and simplified code management on GK GPUs, enabling more flexibility in programming for Kepler GPUs.[6]

Finally, the performance aim was addressed through additional execution resources (more CUDA cores, registers and cache) and through Kepler's ability to run its memory at clock speeds of up to 7 GHz, increasing Kepler's performance compared to previous Nvidia GPUs.[5][7]

Features

The GK series GPUs contain features from both the older Fermi and newer Kepler generations. Kepler-based members add the following standard features:

  • PCI Express 3.0 interface
  • DisplayPort 1.2
  • HDMI 1.4a 4K x 2K video output
  • PureVideo VP5 hardware video acceleration (up to 4K x 2K H.264 decode)
  • Hardware H.264 encoding acceleration block (NVENC)
  • Support for up to 4 independent 2D displays, or 3 stereoscopic/3D displays (NV Surround)
  • Next Generation Streaming Multiprocessor (SMX)
  • Polymorph-Engine 2.0
  • Simplified Instruction Scheduler
  • Bindless Textures
  • CUDA Compute Capability 3.0 to 3.5
  • GPU Boost (Upgraded to 2.0 on GK110)
  • TXAA Support
  • Manufactured by TSMC on a 28 nm process
  • New Shuffle Instructions
  • Dynamic Parallelism
  • Hyper-Q (Hyper-Q's MPI functionality reserved for Tesla only)
  • Grid Management Unit
  • NVIDIA GPUDirect (GPUDirect's RDMA functionality reserved for Tesla only)

Next Generation Streaming Multiprocessor (SMX)

The Kepler architecture employs a new streaming multiprocessor architecture called "SMX". SMXs are the reason for Kepler's power efficiency, as the whole GPU uses a single unified clock speed.[5] Although SMX's use of a single unified clock increases power efficiency (as noted above, two lower-clocked Kepler CUDA cores consume about 90% of the power of one higher-clocked Fermi CUDA core), additional processing units are needed to execute a whole warp per cycle. Doubling the CUDA cores per array from 16 to 32 solves the warp execution problem; the SMX front end is also doubled, with more warp schedulers and dispatch units, and the register file is doubled to 64K entries to feed the additional execution units. At the risk of inflating die area, the SMX PolyMorph Engines were enhanced to version 2.0 rather than doubled alongside the execution units, enabling them to output polygons in fewer cycles. There are 192 shaders per SMX.[8] Dedicated FP64 CUDA cores are also used, since the regular Kepler CUDA cores are not FP64-capable, a choice made to save die space. With the improvements Nvidia made to the SMX, the results include an increase in GPU performance and efficiency. With GK110, the 48 KB texture cache is unlocked for compute workloads, where it becomes a read-only data cache specializing in unaligned memory accesses. Error detection capabilities were also added to make it safer for workloads that rely on ECC. The maximum register count per thread is also increased in GK110, to 255 registers per thread.
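
On devices of compute capability 3.5 (GK110), this read-only data cache path is available to CUDA code through the __ldg() intrinsic or through pointers qualified as const __restrict__. The following is a minimal sketch; the kernel and buffer names are illustrative only and not taken from any NVIDIA sample.

```
#include <cstdio>
#include <cuda_runtime.h>

// Scales an input array, reading it through the read-only data cache.
// On GK110 (compute capability 3.5+) __ldg() routes the load through the
// 48 KB texture/read-only cache described above.
__global__ void scale(const float* __restrict__ in, float* out, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = factor * __ldg(&in[i]);   // read-only cached load
}

int main()
{
    const int n = 1 << 20;
    float *in, *out;
    cudaMalloc(&in, n * sizeof(float));
    cudaMalloc(&out, n * sizeof(float));

    scale<<<(n + 255) / 256, 256>>>(in, out, 2.0f, n);
    cudaDeviceSynchronize();

    cudaFree(in);
    cudaFree(out);
    printf("done\n");
    return 0;
}
```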

Simplified Instruction Scheduler

Additional die space reduction and power savings were achieved by removing a complex hardware block that handled the prevention of data hazards.[3][5][9][10]

GPU Boost

GPU Boost is a new feature which is roughly analogous to turbo boosting on a CPU. The GPU is always guaranteed to run at a minimum clock speed, referred to as the "base clock". This clock speed is set to the level which will ensure that the GPU stays within TDP specifications, even at maximum load.[3] When loads are lower, however, there is room for the clock speed to be increased without exceeding the TDP. In these scenarios, GPU Boost will gradually increase the clock speed in steps until the GPU reaches a predefined power target (which is 170 W by default).[5] By taking this approach, the GPU ramps its clock up or down dynamically, so that it provides the maximum speed possible while remaining within TDP specifications.

The power target, as well as the size of the clock increase steps that the GPU will take, are both adjustable via third-party utilities and provide a means of overclocking Kepler-based cards.[3]
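
The resulting clock behaviour can be observed from software; for example, NVIDIA's NVML library reports the current graphics clock and board power draw. The sketch below is a minimal monitoring loop, assuming a single NVML-capable GPU at index 0; it only observes GPU Boost from the outside and is not part of the mechanism itself.

```
#include <cstdio>
#include <chrono>
#include <thread>
#include <nvml.h>

// Samples the current graphics clock and board power once per second,
// which makes GPU Boost's clock ramping visible under load.
// Build against NVML, e.g. by linking with -lnvidia-ml.
int main()
{
    nvmlDevice_t dev;
    unsigned int clockMHz = 0, powerMilliwatts = 0;

    if (nvmlInit() != NVML_SUCCESS) return 1;
    if (nvmlDeviceGetHandleByIndex(0, &dev) != NVML_SUCCESS) return 1;

    for (int i = 0; i < 10; ++i) {
        nvmlDeviceGetClockInfo(dev, NVML_CLOCK_GRAPHICS, &clockMHz);
        nvmlDeviceGetPowerUsage(dev, &powerMilliwatts);
        std::printf("clock: %u MHz, power: %.1f W\n",
                    clockMHz, powerMilliwatts / 1000.0);
        std::this_thread::sleep_for(std::chrono::seconds(1));
    }

    nvmlShutdown();
    return 0;
}
```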

Microsoft Direct3D Support

Nvidia Fermi and Kepler GPUs of the GeForce 600 series support the Direct3D 11.0 specification. Nvidia originally stated that the Kepler architecture has full DirectX 11.1 support, which includes the Direct3D 11.1 path.[11] The following "Modern UI" Direct3D 11.1 features, however, are not supported:[12][13]

  • Target-Independent Rasterization (2D rendering only).
  • 16xMSAA Rasterization (2D rendering only).
  • Orthogonal Line Rendering Mode.
  • UAV (Unordered Access View) in non-pixel-shader stages.

According to Microsoft's definition, Direct3D feature level 11_1 must be complete, otherwise the Direct3D 11.1 path cannot be executed.[14] The integrated Direct3D features of the Kepler architecture are the same as those of the GeForce 400 series Fermi architecture.[13]

Next Microsoft Direct3D Support

NVIDIA Kepler GPUs of the GeForce 600/700 series support Direct3D 12 feature level 11_0.[15]

TXAA Support

Exclusive to Kepler GPUs, TXAA is a new anti-aliasing method from Nvidia that is designed for direct implementation into game engines. TXAA is based on the MSAA technique and custom resolve filters. It is designed to address a key problem in games known as shimmering or temporal aliasing; TXAA resolves it by smoothing out the scene in motion, making sure that the in-game scene is cleared of aliasing and shimmering.[3]

Shuffle Instructions

At a low level, GK110 adds instructions and operations to further improve performance. New shuffle instructions allow threads within a warp to share data without going back to memory, making the process much quicker than the previous load/share/store method. Atomic operations are also overhauled, speeding up their execution and adding some FP64 operations that were previously only available for FP32 data.[9]
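
As an illustration, the sketch below sums one value per thread across a warp with the shuffle-down primitive, so the 32 lanes exchange data through registers rather than shared memory. It uses __shfl_down_sync() from current CUDA toolkits; Kepler-era code used the equivalent __shfl_down() without the mask argument.

```
#include <cstdio>
#include <cuda_runtime.h>

// Reduces one value per thread to a single sum per warp using shuffle-down,
// avoiding the load/share/store round trip through shared memory.
__device__ float warpSum(float v)
{
    for (int offset = 16; offset > 0; offset >>= 1)
        v += __shfl_down_sync(0xffffffff, v, offset);
    return v;   // lane 0 ends up holding the warp total
}

__global__ void kernel(float* out)
{
    float total = warpSum(1.0f);          // every lane contributes 1.0
    if (threadIdx.x % 32 == 0)
        out[threadIdx.x / 32] = total;    // one result per warp
}

int main()
{
    float* d_out;
    cudaMalloc(&d_out, sizeof(float));
    kernel<<<1, 32>>>(d_out);

    float h_out = 0.0f;
    cudaMemcpy(&h_out, d_out, sizeof(float), cudaMemcpyDeviceToHost);
    printf("warp sum = %f\n", h_out);     // expected: 32.0
    cudaFree(d_out);
    return 0;
}
```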

Hyper-Q

Hyper-Q expands the number of GK110 hardware work queues from 1 to 32. The significance of this is that having a single work queue meant Fermi could be under-occupied at times, as there was not enough work in that queue to fill every SM. By having 32 work queues, GK110 can in many scenarios achieve higher utilization by placing different task streams on what would otherwise be an idle SMX. The simple nature of Hyper-Q is further reinforced by the fact that it maps easily onto MPI, a common message-passing interface frequently used in HPC. Legacy MPI-based algorithms that were originally designed for multi-CPU systems and became bottlenecked by false dependencies now have a solution: by increasing the number of MPI jobs, it is possible to use Hyper-Q on these algorithms to improve efficiency without changing the code itself.[9]
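
From a programmer's perspective, Hyper-Q is exercised simply by issuing independent work on separate CUDA streams, which on GK110 can map to separate hardware work queues instead of serializing behind a single one. A minimal sketch, with an illustrative placeholder kernel:

```
#include <cuda_runtime.h>

// Placeholder kernel standing in for independent pieces of work.
__global__ void busyKernel(float* data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] = data[i] * 2.0f + 1.0f;
}

int main()
{
    const int nStreams = 8;
    const int n = 1 << 16;
    cudaStream_t streams[nStreams];
    float* buffers[nStreams];

    // Independent streams: on GK110, Hyper-Q lets these occupy separate
    // hardware work queues, so small kernels from different streams can
    // run concurrently instead of queuing behind one another.
    for (int s = 0; s < nStreams; ++s) {
        cudaStreamCreate(&streams[s]);
        cudaMalloc(&buffers[s], n * sizeof(float));
        busyKernel<<<(n + 255) / 256, 256, 0, streams[s]>>>(buffers[s], n);
    }

    cudaDeviceSynchronize();

    for (int s = 0; s < nStreams; ++s) {
        cudaStreamDestroy(streams[s]);
        cudaFree(buffers[s]);
    }
    return 0;
}
```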

Dynamic Parallelism

Dynamic Parallelism is the ability for kernels to dispatch other kernels. With Fermi, only the CPU could dispatch a kernel, which incurs a certain amount of overhead from having to communicate back to the CPU. By giving kernels the ability to dispatch their own child kernels, GK110 can both save time by not having to go back to the CPU and free up the CPU to work on other tasks.[9]
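
In CUDA, Dynamic Parallelism appears as an ordinary kernel launch written inside device code; it requires compute capability 3.5 or higher and compilation with relocatable device code (for example, nvcc -arch=sm_35 -rdc=true -lcudadevrt). The kernel names in the sketch below are illustrative:

```
#include <cstdio>
#include <cuda_runtime.h>

// Child kernel, launched from the GPU rather than from the host.
__global__ void childKernel(int parentBlock)
{
    printf("child launched by parent block %d, thread %d\n",
           parentBlock, threadIdx.x);
}

// Parent kernel: thread 0 of each block dispatches a child grid directly,
// without a round trip to the CPU.
__global__ void parentKernel()
{
    if (threadIdx.x == 0)
        childKernel<<<1, 4>>>(blockIdx.x);
}

int main()
{
    parentKernel<<<2, 32>>>();
    cudaDeviceSynchronize();   // waits for the parents and their children
    return 0;
}
```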

Grid Management Unit

Enabling Dynamic Parallelism requires a new grid management and dispatch control system. The new Grid Management Unit (GMU) manages and prioritizes grids to be executed. The GMU can pause the dispatch of new grids and queue pending and suspended grids until they are ready to execute, providing the flexibility to enable powerful runtimes, such as Dynamic Parallelism. The CUDA Work Distributor in Kepler holds grids that are ready to dispatch, and is able to dispatch 32 active grids, which is double the capacity of the Fermi CWD. The Kepler CWD communicates with the GMU via a bidirectional link that allows the GMU to pause the dispatch of new grids and to hold pending and suspended grids until needed. The GMU also has a direct connection to the Kepler SMX units, permitting grids that launch additional work on the GPU via Dynamic Parallelism to send the new work back to the GMU to be prioritized and dispatched. If the kernel that dispatched the additional workload pauses, the GMU will hold it inactive until the dependent work has completed.[10]

NVIDIA GPUDirect

NVIDIA GPUDirect is a capability that enables GPUs within a single computer, or GPUs in different servers located across a network, to directly exchange data without needing to go to CPU/system memory. The RDMA feature in GPUDirect allows third party devices such as SSDs, NICs, and IB adapters to directly access memory on multiple GPUs within the same system, significantly decreasing the latency of MPI send and receive messages to/from GPU memory.[16] It also reduces demands on system memory bandwidth and frees the GPU DMA engines for use by other CUDA tasks. Kepler GK110 also supports other GPUDirect features including Peer-to-Peer and GPUDirect for Video.
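
The Peer-to-Peer portion of GPUDirect is exposed through the CUDA runtime: once peer access is enabled between two GPUs, buffers can be copied directly between them without staging through system memory. The following is a minimal sketch that assumes a system with two peer-capable GPUs:

```
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    int canAccess01 = 0, canAccess10 = 0;
    cudaDeviceCanAccessPeer(&canAccess01, 0, 1);
    cudaDeviceCanAccessPeer(&canAccess10, 1, 0);
    if (!canAccess01 || !canAccess10) {
        printf("GPUs 0 and 1 are not peer-capable on this system\n");
        return 0;
    }

    const size_t bytes = 1 << 20;
    float *buf0, *buf1;

    cudaSetDevice(0);
    cudaDeviceEnablePeerAccess(1, 0);   // let device 0 access device 1
    cudaMalloc(&buf0, bytes);

    cudaSetDevice(1);
    cudaDeviceEnablePeerAccess(0, 0);   // let device 1 access device 0
    cudaMalloc(&buf1, bytes);

    // Direct GPU-to-GPU copy; with peer access enabled this does not
    // bounce through CPU/system memory.
    cudaMemcpyPeer(buf1, 1, buf0, 0, bytes);
    cudaDeviceSynchronize();

    cudaFree(buf1);
    cudaSetDevice(0);
    cudaFree(buf0);
    printf("peer-to-peer copy complete\n");
    return 0;
}
```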

Video decompression/compression

NVDEC

NVENC

NVENC is Nvidia's power-efficient fixed-function encoder that is able to take codecs, decode, preprocess, and encode H.264-based content. NVENC input formats are limited to H.264 output. Even so, through its limited format, NVENC can support encoding at resolutions of up to 4096x4096.[17]

Like Intel's Quick Sync, NVENC is currently exposed through a proprietary API, though Nvidia does have plans to provide NVENC usage through CUDA.[17]

Performance

The theoretical single-precision processing power of a Kepler GPU in GFLOPS is computed as 2 (operations per FMA instruction per CUDA core per cycle) × number of CUDA cores × core clock speed (in GHz). Note that, like the previous-generation Fermi, Kepler is not able to benefit from increased processing power by dual-issuing MAD+MUL as the Tesla architecture could.
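
As a worked example, the sketch below computes this figure from the device properties reported by the CUDA runtime, assuming the Kepler value of 192 CUDA cores per SMX.

```
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);

    // Kepler SMX units contain 192 single-precision CUDA cores each.
    const int coresPerSM = 192;
    int cudaCores = prop.multiProcessorCount * coresPerSM;
    double clockGHz = prop.clockRate / 1.0e6;   // clockRate is reported in kHz

    // 2 operations per FMA per CUDA core per cycle.
    double gflops = 2.0 * cudaCores * clockGHz;

    printf("%s: %d CUDA cores at %.3f GHz -> %.0f GFLOPS (single precision)\n",
           prop.name, cudaCores, clockGHz, gflops);
    return 0;
}
```

For a GeForce GTX 680, with 1536 CUDA cores at a 1006 MHz base clock, the formula evaluates to roughly 2 × 1536 × 1.006 ≈ 3090 GFLOPS.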

The theoretical double-precision processing power of a Kepler GK110/210 GPU is 1/3 of its single-precision performance. This double-precision processing power is, however, only available on professional Quadro, Tesla, and high-end TITAN-branded GeForce cards, while drivers for consumer GeForce cards limit it to 1/24 of the single-precision performance.[18] The lower-performance GK10x chips are similarly capped at 1/24 of the single-precision performance.[19]

Kepler chips

  • GK104
  • GK106
  • GK107
  • GK110
  • GK208
  • GK210
  • GK20A (Tegra K1)

See also

References

  1. ^ Mujtaba, Hassan (18 February 2012). "NVIDIA Expected to launch Eight New 28nm Kepler GPU's in April 2012".
  2. ^ "Inside Kepler" (PDF). Retrieved 2015-09-19.
  3. ^ a b c d e "Introducing The GeForce GTX 680 GPU". Nvidia. March 22, 2012. Retrieved 2015-09-19.
  4. ^ Nvidia. "NVIDIA's Next Generation CUDA Compute Architecture: Kepler GK110" (PDF). www.nvidia.com.
  5. ^ a b c d e Smith, Ryan (March 22, 2012). "NVIDIA GeForce GTX 680 Review: Retaking The Performance Crown". AnandTech. Retrieved November 25, 2012.
  6. ^ "Efficiency Through Hyper-Q, Dynamic Parallelism, & More". Nvidia. November 12, 2012. Retrieved 2015-09-19.
  7. ^ "GeForce GTX 770 | Specifications | GeForce". www.nvidia.com. Retrieved 2022-06-07.
  8. ^ "GeForce GTX 680 2 GB Review: Kepler Sends Tahiti On Vacation". Tom;s Hardware. March 22, 2012. Retrieved 2015-09-19.
  9. ^ a b c d "NVIDIA Launches Tesla K20 & K20X: GK110 Arrives At Last". AnandTech. 2012-11-12. Retrieved 2015-09-19.
  10. ^ a b "NVIDIA Kepler GK110 Architecture Whitepaper" (PDF). Retrieved 2015-09-19.
  11. ^ "NVIDIA Launches First GeForce GPUs Based on Next-Generation Kepler Architecture". Nvidia. March 22, 2012. Archived from the original on June 14, 2013.
  12. ^ Edward, James (November 22, 2012). "NVIDIA claims partially support DirectX 11.1". TechNews. Archived from the original on June 28, 2015. Retrieved 2015-09-19.
  13. ^ a b "Nvidia Doesn't Fully Support DirectX 11.1 with Kepler GPUs, But… (Web Archive Link)". BSN. Archived from the original on December 29, 2012.
  14. ^ "D3D_FEATURE_LEVEL enumeration (Windows)". MSDN. Retrieved 2015-09-19.
  15. ^ Henry Moreton (March 20, 2014). "DirectX 12: A Major Stride for Gaming". Retrieved 2015-09-19.
  16. ^ "NVIDIA GPUDirect". NVIDIA Developer. 2015-10-06. Retrieved 2019-02-05.
  17. ^ a b Chris Angelini (March 22, 2012). "Benchmark Results: NVEnc And MediaEspresso 6.5". Tom's Hardware. Retrieved 2015-09-19.
  18. ^ Angelini, Chris (7 November 2013). "Nvidia GeForce GTX 780 Ti Review: GK110, Fully Unlocked". Tom's Hardware. p. 1. Retrieved 6 December 2015. The card's driver deliberately operates GK110's FP64 units at 1/8 of the GPU's clock rate. When you multiply that by the 3:1 ratio of single- to double-precision CUDA cores, you get a 1/24 rate
  19. ^ Smith, Ryan (13 September 2012). "The NVIDIA GeForce GTX 660 Review: GK106 Fills Out The Kepler Family". AnandTech. p. 1. Retrieved 6 December 2015.