GeForce 600 series
Release date | March 22, 2012 |
---|---|
Codename | GK104, GK106, GK107 |
Models | GeForce series |
Transistors | 292M (40 nm, GF119) to 3,540M (28 nm, GK104) |
Cards | |
Entry-level | GT 605M, GT 610M, GT 620, GT 620M, GT 630M, GT 635M, GT 640, GT 645M |
Mid-range | GTX 650, GT 650M, GTX 650 Ti, GTX 650 Ti Boost |
High-end | GTX 660, GTX 660 Ti, GTX 670 |
Enthusiast | GTX 680, GTX 680M, GTX 680MX, GTX 690 |
API support | |
DirectX | Direct3D 12.0 (feature level 11_0)[1] |
OpenCL | OpenCL 1.1 |
OpenGL | OpenGL 4.4 |
History | |
Predecessor | GeForce 500 series |
Successor | GeForce 700 series |
The GeForce 600 series is a family of graphics processing units developed by Nvidia, used in desktop and laptop PCs. It introduced the Kepler architecture (GK-codenamed chips), named after the German mathematician, astronomer, and astrologer Johannes Kepler. The first GeForce 600 series cards were released in 2012.
Overview
Where the goal of the previous architecture, Fermi, was to increase raw performance (particularly for compute and tessellation), Nvidia's goal with the Kepler architecture was to increase performance per watt while still improving overall performance.[2] The primary way it achieved this was through the use of a unified clock. By abandoning the separate shader clock found in its previous GPU designs, Nvidia increased efficiency, even though more cores are required to achieve similar levels of performance. This is not only because the cores themselves are more power efficient (two Kepler cores use about 90% of the power of one Fermi core, according to Nvidia's figures), but also because the reduction in clock speed delivers a 50% reduction in power consumption in that area.[3]
Kepler also introduced a new form of texture handling known as bindless textures. Previously, textures had to be bound by the CPU to a particular slot in a fixed-size table before the GPU could reference them. This led to two limitations: first, because the table was fixed in size, only as many textures could be in use at one time as would fit in the table (128); second, the CPU was doing unnecessary work, since it had to load each texture and also bind each loaded texture to a slot in the binding table.[2] With bindless textures, both limitations are removed: the GPU can access any texture loaded into memory, increasing the number of available textures and removing the performance penalty of binding.
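The difference is easiest to see from the programming side. The following is a minimal sketch (not Nvidia's reference code) of how bindless texturing is exposed to CUDA through texture objects, which require a Kepler-class GPU (compute capability 3.0): the kernel receives the texture as an ordinary handle value rather than through a slot in a fixed binding table.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// The kernel takes the texture as a plain handle parameter; no CPU-side
// binding to a fixed-size slot table is involved.
__global__ void sampleKernel(cudaTextureObject_t tex, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = tex1Dfetch<float>(tex, i);  // fetch through the handle
}

int main() {
    const int n = 256;
    float host[n];
    for (int i = 0; i < n; ++i) host[i] = float(i);

    float *devIn, *devOut;
    cudaMalloc(&devIn, n * sizeof(float));
    cudaMalloc(&devOut, n * sizeof(float));
    cudaMemcpy(devIn, host, n * sizeof(float), cudaMemcpyHostToDevice);

    // Describe the memory backing the texture; no binding-table entry is
    // consumed, so the number of live textures is not capped at 128.
    cudaResourceDesc resDesc = {};
    resDesc.resType = cudaResourceTypeLinear;
    resDesc.res.linear.devPtr = devIn;
    resDesc.res.linear.desc = cudaCreateChannelDesc<float>();
    resDesc.res.linear.sizeInBytes = n * sizeof(float);

    cudaTextureDesc texDesc = {};
    texDesc.readMode = cudaReadModeElementType;

    cudaTextureObject_t tex = 0;
    cudaCreateTextureObject(&tex, &resDesc, &texDesc, nullptr);

    sampleKernel<<<(n + 127) / 128, 128>>>(tex, devOut, n);
    cudaMemcpy(host, devOut, n * sizeof(float), cudaMemcpyDeviceToHost);
    printf("out[42] = %f\n", host[42]);

    cudaDestroyTextureObject(tex);
    cudaFree(devIn);
    cudaFree(devOut);
    return 0;
}
```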
Finally, with Kepler, Nvidia was able to increase the memory clock to 6 GHz. To accomplish this, they needed to design an entirely new memory controller and bus. While still shy of the theoretical 7 GHz limitation of GDDR5, this is well above the 4 GHz speed of the memory controller for Fermi.[3]
Architecture
The GeForce 600 Series contains products from both the older Fermi and newer Kepler generations of Nvidia GPUs. Kepler based members of the 600 series add the following standard features to the GeForce family:
- PCI Express 3.0 interface
- DisplayPort 1.2
- HDMI 1.4a 4K x 2K video output
- PureVideo VP5 hardware video acceleration (up to 4K x 2K H.264 decode)
- Hardware H.264 encoding acceleration block (NVENC)
- Support for up to 4 independent 2D displays, or 3 stereoscopic/3D displays (NV Surround)
- Next Generation Streaming Multiprocessor (SMX)
- A New Instruction Scheduler
- Bindless Textures
- CUDA Compute Capability 3.0 (see the query sketch after this list)
- GPU Boost
- TXAA
- Manufactured by TSMC on a 28 nm process
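As a brief illustration of the CUDA items above, software can verify that it is running on a Kepler-class part through the standard CUDA runtime, which reports compute capability 3.x for all Kepler GPUs in this series:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Minimal sketch: list installed GPUs and flag Kepler-class
// (compute capability 3.x) parts such as the GeForce 600 series.
int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int d = 0; d < count; ++d) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, d);
        printf("%s: compute capability %d.%d%s\n", prop.name,
               prop.major, prop.minor,
               prop.major == 3 ? " (Kepler)" : "");
    }
    return 0;
}
```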
Next Generation Streaming Multiprocessor (SMX)
The Kepler architecture employs a new streaming multiprocessor design called SMX. The SMX is the key to Kepler's power efficiency, as the whole GPU runs off a single "core clock" rather than the double-pumped "shader clock" of earlier designs.[3] Running everything at a single unified clock improves power efficiency (two Kepler CUDA cores consume about 90% of the power of one Fermi CUDA core), but it means the SMX needs additional processing units to execute a whole warp per cycle. Kepler also needed to increase raw GPU performance to remain competitive. Accordingly, the SMX doubles the CUDA cores per array from 16 to 32, goes from 3 CUDA core arrays to 6, and from 1 load/store and 1 SFU group to 2 of each. The scheduling resources are doubled as well: from 2 warp schedulers to 4, from 4 dispatch units to 8, and the register file is doubled to 64K entries. Because doubling the processing units and resources consumes die space, the PolyMorph Engine was not doubled but enhanced, making it capable of outputting a polygon every 2 cycles instead of every 4.[4] Since Nvidia had to pursue area efficiency as well as power efficiency, and the regular Kepler CUDA cores are not FP64-capable, it opted for 8 dedicated FP64 CUDA cores per SMX, saving die space while still offering FP64 capability. The net result of these changes is an increase in graphics performance at the cost of FP64 (double-precision) performance.
A New Instruction Scheduler
Additional die area was freed by replacing the complex hardware scheduler with a simpler software-assisted one. With software scheduling, warp scheduling moved into Nvidia's compiler, and because the GPU's math pipeline now has a fixed latency, the design can exploit instruction-level parallelism and superscalar execution in addition to thread-level parallelism. Since instructions are statically scheduled and the latency of the math pipeline is known in advance, dynamic scheduling inside a warp becomes redundant. The result is a saving in die area and a gain in power efficiency.[3][5][2]
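As a rough illustration of what the compiler can now exploit, consider the following hypothetical kernel (illustrative only, not Nvidia code): each thread carries several independent arithmetic chains, and because the math pipeline's latency is fixed, those chains can be interleaved statically at compile time rather than discovered by a hardware scheduler at run time.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Illustrative only: four independent multiply-add chains per thread.
// With a fixed-latency math pipeline, the compiler can schedule these
// back to back (instruction-level parallelism) without run-time
// dependency checking by hardware.
__global__ void ilpKernel(const float* in, float* out, int n) {
    int i = (blockIdx.x * blockDim.x + threadIdx.x) * 4;
    if (i + 3 < n) {
        // Four independent accumulators: no chain depends on another.
        float a = in[i]     * 2.0f + 1.0f;
        float b = in[i + 1] * 2.0f + 1.0f;
        float c = in[i + 2] * 2.0f + 1.0f;
        float d = in[i + 3] * 2.0f + 1.0f;
        out[i] = a; out[i + 1] = b; out[i + 2] = c; out[i + 3] = d;
    }
}

int main() {
    const int n = 1024;
    float *in, *out;
    cudaMalloc(&in, n * sizeof(float));
    cudaMalloc(&out, n * sizeof(float));
    cudaMemset(in, 0, n * sizeof(float));
    ilpKernel<<<(n / 4 + 127) / 128, 128>>>(in, out, n);
    cudaDeviceSynchronize();
    printf("done: %s\n", cudaGetErrorString(cudaGetLastError()));
    cudaFree(in);
    cudaFree(out);
    return 0;
}
```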
GPU Boost
GPU Boost is a new feature, roughly analogous to the turbo boost found on CPUs. The GPU is always guaranteed to run at a minimum clock speed, referred to as the "base clock". This clock speed is set to the level which will ensure that the GPU stays within its TDP specification even at maximum load.[2] When loads are lower, however, there is room for the clock speed to be increased without exceeding the TDP. In these scenarios, GPU Boost gradually increases the clock speed in steps until the GPU reaches a predefined power target (170 W by default on the GTX 680).[3] By taking this approach, the GPU ramps its clock up or down dynamically, providing the maximum amount of speed possible while remaining within TDP specifications.
The power target, as well as the size of the clock steps that the GPU will take, are both adjustable via third-party utilities, which provides a means of overclocking Kepler-based cards.[2]
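The behavior described above can be summarized by a simplified conceptual model (an illustration of the described algorithm, not Nvidia's actual implementation; the 13 MHz step size is the boost bin size reported for GPU Boost on the GTX 680):

```cuda
#include <cstdio>

// Simplified conceptual model of GPU Boost: starting from the base
// clock, raise the clock in fixed steps while measured board power is
// under the power target, and lower it again when it is not.
const float kBaseClockMHz = 1006.0f;   // GTX 680 base clock
const float kPowerTargetW = 170.0f;    // GTX 680 default power target
const float kStepMHz      = 13.0f;     // reported boost step size

float nextClock(float currentMHz, float measuredPowerW) {
    if (measuredPowerW < kPowerTargetW)
        return currentMHz + kStepMHz;          // headroom: step up
    if (currentMHz - kStepMHz >= kBaseClockMHz)
        return currentMHz - kStepMHz;          // over target: step down
    return kBaseClockMHz;                      // never below base clock
}

int main() {
    float clock = kBaseClockMHz;
    const float samples[] = {120.0f, 140.0f, 165.0f, 175.0f, 168.0f};
    for (float p : samples) {
        clock = nextClock(clock, p);
        printf("power %.0f W -> clock %.0f MHz\n", p, clock);
    }
    return 0;
}
```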
Microsoft DirectX Support
This series will support DirectX 12.[6]
Nvidia has stated that it will support the DX12 API on all the DX11-class GPUs it has shipped, which belong to the Fermi, Kepler and Maxwell architectural families, and claims that, with more than 50% market share (65% for discrete graphics) among DX11-based gaming systems, it alone provides game developers the majority of the potential DX12 installed base.
TXAA
Exclusive to Kepler GPUs, TXAA is a new anti-aliasing method from Nvidia designed for direct implementation into game engines. TXAA is based on the MSAA technique combined with custom resolve filters. It addresses a key problem in games known as shimmering, or temporal aliasing: TXAA smooths out the scene in motion, clearing in-game scenes of aliasing and shimmering.[7]
NVENC
NVENC is Nvidia's power-efficient, fixed-function video encode block, able to handle the decoding, preprocessing, and encoding of H.264-based content. NVENC's output is limited to H.264, but even within that limit it supports encodes at resolutions up to 4096x4096.[8]
Like Intel’s Quick Sync, NVENC is currently exposed through a proprietary API, though Nvidia does have plans to provide NVENC usage through CUDA.[8]
New driver features
In the R300 drivers, released alongside the GTX 680, Nvidia introduced a new feature called Adaptive VSync. This feature is intended to combat the limitation of v-sync that, when the framerate drops below 60 FPS, there is stuttering as the v-sync rate is reduced to 30 FPS, then down to further factors of 60 if needed. However, when the framerate is below 60 FPS, there is no need for v-sync as the monitor will be able to display the frames as they are ready. To address this issue (while still maintaining the advantages of v-sync with respect to screen tearing), Adaptive VSync can be turned on in the driver control panel. It will enable VSync if the framerate is at or above 60 FPS, while disabling it if the framerate lowers. Nvidia claims that this will result in a smoother overall display.[2]
While the feature debuted alongside the GTX 680, this feature is available to users of older Nvidia cards who install the updated drivers.[2]
History
In September 2010, Nvidia first announced Kepler.[9]
In early 2012, details of the first members of the 600 series emerged. These initial parts were entry-level laptop GPUs based on the older Fermi architecture.
On March 22, 2012, Nvidia unveiled the first 600 series GPUs: the GTX 680 for desktop PCs and the GeForce GT 640M, GT 650M, and GTX 660M for notebook/laptop PCs. The GK104 (which powers the GTX 680) has 1536 CUDA cores, in eight groups of 192, and 3.5 billion transistors. The GK107 (GT 640M/GT 650M/GTX 660M) has 384 CUDA cores.
On April 29, 2012, the GTX 690 was announced as the first dual-GPU Kepler product. The GTX 690 carries two GTX 680 GPUs, for a total of 3072 CUDA cores and a combined 512-bit (2 × 256-bit) memory interface.[10]
On May 10, 2012, the GTX 670 was officially announced. The card features 1344 CUDA cores, 2 GB of GDDR5 VRAM, and a 256-bit memory bus.[11]
On June 4, 2012, the GTX 680M was officially announced. This mobile GPU, based on the GTX 670, features 1344 CUDA cores, 4 GB of GDDR5 VRAM, and a 256-bit memory bus.
On August 16, 2012, the GTX 660 Ti was officially announced. The card has 1344 CUDA cores along with 2 GB of GDDR5 VRAM and a 192-bit memory bus.[12]
On September 13, 2012, the GTX 660 and GTX 650 were officially announced. The GTX 660 has 960 CUDA cores, 2 GB of GDDR5 VRAM, and a 192-bit memory bus; the GTX 650 has 384 CUDA cores, 1 GB of GDDR5 VRAM, and a 128-bit memory bus.[13]
On October 9, 2012, the GTX 650 Ti was officially announced. The card features 768 CUDA cores along with 1 GB of GDDR5 VRAM and a 128-bit memory bus.[14]
On March 26, 2013, the GTX 650 Ti Boost was officially announced. The card features 768 CUDA cores along with 1 GB or 2 GB of GDDR5 VRAM and a 192-bit memory bus.[15]
Products
GeForce 600 (6xx) series
- 1 Core configuration: unified shaders (SPs, shader processors) : texture mapping units : render output units
- 2 The GeForce 605 (OEM) card is a rebranded GeForce 510.
- 3 The GeForce GT 610 card is a rebranded GeForce GT 520.
- 4 The GeForce GT 620 (OEM) card is a rebranded GeForce GT 520.
- 5 The GeForce GT 620 card is a rebranded GeForce GT 530.
- 6 This revision of GeForce GT 630 (DDR3) card is a rebranded GeForce GT 440 (DDR3).
- 7 The GeForce GT 630 (GDDR5) card is a rebranded GeForce GT 440 (GDDR5).
- 8 The GeForce GT 640 (OEM) card is a rebranded GeForce GT 545 (DDR3).
- 9 The GeForce GT 645 (OEM) card is a rebranded GeForce GTX 560 SE.
Model | Launch | Code Name | Fab (nm) | Transistors (Million) | Die Size (mm2) | Die Count | Bus interface | Memory (MiB) | SM Count | Core Configuration1 | Clock Rate | Fillrate | Memory Configuration | API Support (version) | GFLOPS (FMA) | TDP (Watts) | GFLOPS/W | Release Price (USD) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Core (MHz) | Average Boost (MHz) | Max. Boost (MHz) | Shader (MHz) | Memory (MHz) | Pixel (GP/s) | Texture (GT/s) | Bandwidth (GB/s) | DRAM Type | Bus Width (bit) | DirectX | OpenGL | OpenCL |
GeForce 605 2 | April 3, 2012 | GF119 | 40 | 292 | 79 | 1 | PCIe 2.0 x16 | 512, 1024 | 1 | 48:8:4 | 523 | — | — | 1046 | 1798 | 2.1 | 4.3 | 14.4 | DDR3 | 64 | 11.0 | 4.4 | 1.1 | 100.4 | 25 | 4.02 | OEM |
GeForce GT 610 3 | May 15, 2012 | GF119 | 40 | 292 | 79 | 1 | PCIe 2.0 x16, PCI | 1024 | 1 | 48:8:4 | 810 | — | — | 1620 | 1800 | 3.24 | 6.5 | 14.4 | DDR3 | 64 | 11.0 | 4.4 | 1.1 | 155.5 | 29 | 5.36 | Retail |
GeForce GT 620 4 | April 3, 2012 | GF119 | 40 | 292 | 79 | 1 | PCIe 2.0 x16, PCI | 512, 1024 | 1 | 48:8:4 | 810 | — | — | 1620 | 1798 | 3.24 | 6.5 | 14.4 | DDR3 | 64 | 11.0 | 4.4 | 1.1 | 155.5 | 30 | 5.18 | OEM |
GeForce GT 620 5 | May 15, 2012 | GF108 | 40 | 585 | 116 | 1 | PCIe 2.0 x16, PCI | 1024 | 2 | 96:16:4 | 700 | — | — | 1400 | 1800 | 2.8 | 11.2 | 14.4 | DDR3 | 64 | 11.0 | 4.4 | 1.1 | 268.8 | 49 | 5.49 | Retail |
GeForce GT 625 | February 19, 2013 | GF119 | 40 | 292 | 79 | 1 | PCIe 2.0 x16 | 512, 1024 | 1 | 48:8:4 | 810 | — | — | 1620 | 1798 | 3.24 | 6.5 | 14.4 | DDR3 | 64 | 11.0 | 4.4 | 1.1 | 155.5 | 30 | 5.18 | OEM |
GeForce GT 630 | April 24, 2012 | GK107 | 28 | 1300 | 118 | 1 | PCIe 3.0 x16 | 1024, 2048 | 1 | 192:16:16 | 875 | — | — | 875 | 1782 | 7 | 14 | 28.5 | DDR3 | 128 | 11.0 | 4.4 | 1.1 | 336 | 50 | 6.72 | OEM |
GeForce GT 630 (DDR3) 6 | May 15, 2012 | GF108 | 40 | 585 | 116 | 1 | PCIe 2.0 x16, PCI | 1024 | 2 | 96:16:4 | 810 | — | — | 1620 | 1800 | 3.2 | 13 | 28.8 | DDR3 | 128 | 11.0 | 4.4 | 1.1 | 311 | 65 | 4.79 | Retail |
GeForce GT 630 (Rev. 2) | May 29, 2013 | GK208 | 28 | 1300 | 79 | 1 | PCIe 2.0 x8 | 1024, 2048 | 2 | 384:16:8 | 902 | — | — | 902 | 1800 | 7.22 | 14.44 | 14.4 | DDR3 | 64 | 11.0 | 4.4 | 1.1 | 692.7 | 25 | | |
GeForce GT 630 (GDDR5) 7 | May 15, 2012 | GF108 | 40 | 585 | 116 | 1 | PCIe 2.0 x16, PCI | 1024 | 2 | 96:16:4 | 810 | — | — | 1620 | 3200 | 3.2 | 13 | 51.2 | GDDR5 | 128 | 11.0 | 4.4 | 1.1 | 311 | 65 | 4.79 | Retail |
GeForce GT 635 | February 19, 2013 | GK208 | 28 | | 79 | 1 | PCIe 3.0 x16 | 1024, 2048 | 1 | 192:16:16 | 875 | — | — | 875 | 1782 | 7 | 14 | 28.5 | DDR3 | 128 | 11.0 | 4.4 | 1.1 | 336 | 50 | 6.72 | OEM |
GeForce GT 640 8 | April 24, 2012 | GF116 | 40 | 1170 | 238 | 1 | PCIe 2.0 x16 | 1536, 3072 | 3 | 144:24:24 | 720 | — | — | 1440 | 1782 | 17.3 | 17.3 | 42.8 | DDR3 | 192 | 11.0 | 4.4 | 1.1 | 414.7 | 75 | 5.53 | OEM |
GeForce GT 640 (DDR3) | April 24, 2012 | GK107-301-A2 | 28 | 1300 | 118 | 1 | PCIe 3.0 x16 | 1024, 2048 | 2 | 384:32:16 | 797 | — | — | 797 | 1782 | 12.8 | 25.5 | 28.5 | DDR3 | 128 | 11.0 | 4.4 | 1.1 | 612.1 | 50 | 12.24 | OEM |
GeForce GT 640 (DDR3) | June 5, 2012 | GK107 | 28 | 1300 | 118 | 1 | PCIe 3.0 x16 | 2048 | 2 | 384:32:16 | 900 | — | — | 900 | 1782 | 14.4 | 28.8 | 28.5 | DDR3 | 128 | 11.0 | 4.4 | 1.1 | 691.2 | 65 | 10.63 | $100 |
GeForce GT 640 (GDDR5) | April 24, 2012 | GK107 | 28 | 1300 | 118 | 1 | PCIe 3.0 x16 | 1024, 2048 | 2 | 384:32:16 | 950 | — | — | 950 | 5000 | 15.2 | 30.4 | 80 | GDDR5 | 128 | 11.0 | 4.4 | 1.1 | 729.6 | 75 | 9.73 | OEM |
GeForce GT 640 (Rev. 2) | May 29, 2013 | GK208 | 28 | | 79 | 1 | PCIe 2.0 x8 | 1024 | 2 | 384:16:8 | 1046 | — | — | 1046 | 5010 | 8.37 | 16.7 | 40.1 | GDDR5 | 64 | 11.0 | 4.4 | 1.1 | 803.3 | 49 | | |
GeForce GT 645 9 | April 24, 2012 | GF114-400-A1 | 40 | 1950 | 332 | 1 | PCIe 2.0 x16 | 1024 | 6 | 288:48:24 | 776 | — | — | 1552 | 1914 | 18.6 | 37.3 | 91.9 | GDDR5 | 192 | 11.0 | 4.4 | 1.1 | 894 | 140 | 6.39 | OEM |
GeForce GTX 645 | April 22, 2013 | GK106 | 28 | 2540 | 221 | 1 | PCIe 3.0 x16 | 1024 | 3 | 576:48:16 | 823.5 | 888.5 | — | 823 | 1000 (4000) | 9.88 | 39.5 | 64 | GDDR5 | 128 | 11.0 | 4.4 | 1.1 | 948.1 | 64 | | OEM |
GeForce GTX 650 | September 13, 2012 | GK107-450-A2 | 28 | 1300 | 118 | 1 | PCIe 3.0 x16 | 1024, 2048 | 2 | 384:32:16 | 1058 | — | — | 1058 | 5000 | 16.9 | 33.8 | 80 | GDDR5 | 128 | 11.0 | 4.4 | 1.1 | 812.5 | 64 | 12.7 | $110 |
GeForce GTX 650 Ti | October 9, 2012 | GK106-220-A1 | 28 | 2540 | 221 | 1 | PCIe 3.0 x16 | 1024, 2048 | 4 | 768:64:16 | 928 | — | — | 928 | 5400 | 14.8 | 59.2 | 86.4 | GDDR5 | 128 | 11.0 | 4.4 | 1.1 | 1420.8 | 110 | 12.92 | $150 |
GeForce GTX 650 Ti Boost | March 26, 2013 | GK106-240-A1 | 28 | 2540 | 221 | 1 | PCIe 3.0 x16 | 1024, 2048 | 4 | 768:64:24 | 980 | 1033 | — | 980 | 6002 | 23.5 | 62.7 | 144.2 | GDDR5 | 192 | 11.0 | 4.4 | 1.1 | 1505.28 | 134 | | $170 |
GeForce GTX 660[16] | September 13, 2012 | GK106-400-A1 | 28 | 2540 | 221 | 1 | PCIe 3.0 x16 | 2048, 3072 | 5 | 960:80:24 | 980 | 1033 | 1084 | 980 | 6000 | 23.5 | 78.5 | 144.2 | GDDR5 | 192 | 11.0 | 4.4 | 1.1 | 1881.6 | 140 | 13.44 | $230 |
GeForce GTX 660 (OEM[17]) | August 22, 2012 | GK104-200-KD-A2 | 28 | 3540 | 294 | 1 | PCIe 3.0 x16 | 1536, 2048 | 6 | 1152:96:24, 1152:96:32 | 823 | 888 | Unknown | 823 | 5800 | 19.8 | 79 | 134 | GDDR5 | 192, 256 | 11.0 | 4.4 | 1.1 | 2108.6 | 130 | 16.22 | OEM |
GeForce GTX 660 Ti | August 16, 2012 | GK104-300-KD-A2 | 28 | 3540 | 294 | 1 | PCIe 3.0 x16 | 2048, 3072 | 7 | 1344:112:24 | 915 | 980 | 1058 | 915 | 6008 | 22.0 | 102.5 | 144.2 | GDDR5 | 192 | 11.0 | 4.4 | 1.1 | 2460 | 150 | 16.40 | $300 |
GeForce GTX 670 | May 10, 2012 | GK104-325-A2 | 28 | 3540 | 294 | 1 | PCIe 3.0 x16 | 2048, 4096 | 7 | 1344:112:32 | 915 | 980 | 1084 | 915 | 6008 | 29.3 | 102.5 | 192.256 | GDDR5 | 256 | 11.0 | 4.4 | 1.1 | 2460 | 170 | 14.47 | $400 |
GeForce GTX 680 | March 22, 2012 | GK104-400-A2 | 28 | 3540 | 294 | 1 | PCIe 3.0 x16 | 2048, 4096 | 8 | 1536:128:32 | 1006[2] | 1058 | 1110 | 1006 | 6008 | 32.2 | 128.8 | 192.256 | GDDR5 | 256 | 11.0 | 4.4 | 1.1 | 3090.4 | 195 | 15.85 | $500 |
GeForce GTX 690 | April 29, 2012 | 2× GK104-355-A2 | 28 | 2× 3540 | 2× 294 | 2 | PCIe 3.0 x16 | 2× 2048 | 2× 8 | 2× 1536:128:32 | 915 | 1019 | 1058[18] | 915 | 6008 | 2× 29.28 | 2× 117.12 | 2× 192.256 | GDDR5 | 2× 256 | 11.0 | 4.4 | 1.1 | 2× 2810.88 | 300 | 18.74 | $1000 |
Model | Launch | Code Name | Fab (nm) | Transistors (Million) | Die Size (mm2) | Die Count | Bus interface | Memory (MiB) | SM Count | Core Configuration1 | Clock Rate | Fillrate | Memory Configuration | API Support (version) | GFLOPS (FMA) | TDP (Watts) | GFLOPS/W | Release Price (USD) |
Core (MHz) | Average Boost (MHz) | Max. Boost (MHz) | Shader (MHz) | Memory (MHz) | Pixel (GP/s) | Texture (GT/s) | Bandwidth (GB/s) | DRAM Type | Bus Width (bit) | DirectX | OpenGL | OpenCL |
GeForce 600M (6xxM) series
The GeForce 600M series comprises GPUs for notebooks. Processing power is obtained by multiplying the shader clock speed by the number of cores and by the number of instructions each core can perform per cycle; a worked example follows below.
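For example, the 641.3 GFLOPS figure listed for both the GT 650M and the GTX 660M in the table below follows directly from this formula, as the short sketch shows (the helper function is illustrative, not a standard API):

```cuda
#include <cstdio>

// Worked example of the formula above: single-precision processing
// power = cores x shader clock x 2 instructions (one fused multiply-add)
// per core per cycle. 384 cores at 835 MHz gives 641.3 GFLOPS.
float gflops(int cores, float shaderClockMHz, int instrPerCycle) {
    return cores * shaderClockMHz * instrPerCycle / 1000.0f;
}

int main() {
    printf("GT 650M:  %.1f GFLOPS\n", gflops(384, 835.0f, 2));  // 641.3
    printf("GTX 660M: %.1f GFLOPS\n", gflops(384, 835.0f, 2));  // 641.3
    return 0;
}
```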
Model | Launch | Code Name | Fab (nm) | Bus interface | Memory (MiB) | Core Configuration1 | Clock Speed | Fillrate | Memory | API Support (version) | Processing Power2 (GFLOPS) | TDP (Watts) | Notes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Core (MHz) | Shader (MHz) | Memory (MT/s) | Pixel (GP/s) | Texture (GT/s) | Bandwidth (GB/s) | Bus Type | Bus Width (bit) | DirectX | OpenGL | OpenCL |
GeForce 610M[19] | Dec 2011 | GF119 (N13M-GE) | 40 | PCIe 2.0 x16 | 1024, 2048 | 48:8:4 | 900 | 1800 | 1800 | 3.6 | 7.2 | 14.4 | DDR3 | 64 | 11.0 | 4.4 | 1.1 | 142.08 | 12 | OEM. Rebadged GT 520MX |
GeForce GT 620M[20] | Apr 2012 | GF117 (N13M-GS) | 28 | PCIe 2.0 x16 | 1024, 2048 | 96:16:4 | 625 | 1250 | 1800 | 2.5 | 10 | 14.4, 28.8 | DDR3 | 64, 128 | 11.0 | 4.4 | 1.1 | 240 | 15 | OEM. Die-shrink GF108 |
GeForce GT 625M | October 2012 | GF117 (N13M-GS) | 28 | PCIe 2.0 x16 | 1024, 2048 | 96:16:4 | 625 | 1250 | 1800 | 2.5 | 10 | 14.4 | DDR3 | 64 | 11.0 | 4.4 | 1.1 | 240 | 15 | OEM. Die-shrink GF108 |
GeForce GT 630M[20][21][22] | Apr 2012 | GF108 (N13P-GL), GF117 | 40, 28 | PCIe 2.0 x16 | 1024, 2048 | 96:16:4 | 660, 800 | 1320, 1600 | 1800, 4000 | 2.6, 3.2 | 10.7, 12.8 | 28.8, 32.0 | DDR3, GDDR5 | 128, 64 | 12.0 | 4.4 | 1.1 | 258.0, 307.2 | 33 | GF108: OEM. Rebadged GT 540M. GF117: OEM. Die-shrink GF108 |
GeForce GT 635M[20][23][24] | Apr 2012 | GF106 (N12E-GE2), GF116 | 40 | PCIe 2.0 x16 | 2048, 1536 | 144:24:24 | 675 | 1350 | 1800 | 16.2 | 16.2 | 28.8, 43.2 | DDR3 | 128, 192 | 12.0 | 4.4 | 1.1 | 289.2, 388.8 | 35 | GF106: OEM. Rebadged GT 555M. GF116: 144 unified shaders |
GeForce GT 640M LE[20] | March 22, 2012 | GF108, GK107 (N13P-LP) | 40, 28 | PCIe 2.0 x16, PCIe 3.0 x16 | 1024, 2048 | 96:16:4, 384:32:16 | 762, 500 | 1524, 500 | 3130, 1800 | 3, 8 | 12.2, 16 | 50.2, 28.8 | GDDR5, DDR3 | 128 | 11.0 | 4.4 | 1.1 | 292.6, 384 | 32, 20 | GF108: Fermi. GK107: Kepler architecture |
GeForce GT 640M[20][25] | March 22, 2012 | GK107 (N13P-GS) | 28 | PCIe 3.0 x16 | 1024, 2048 | 384:32:16 | 625 | 625 | 1800, 4000 | 10 | 20 | 28.8, 64.0 | DDR3, GDDR5 | 128 | 12.0 | 4.4 | 1.1 | 480 | 32 | Kepler architecture |
GeForce GT 645M | October 2012 | GK107 (N13P-GS) | 28 | PCIe 3.0 x16 | 1024, 2048 | 384:32:16 | 710 | 710 | 1800, 4000 | 11.36 | 22.72 | 28.8, 64.0 | DDR3, GDDR5 | 128 | 12.0 | 4.4 | 1.1 | 545 | 32 | Kepler architecture |
GeForce GT 650M[20][26][27] | March 22, 2012 | GK107 (N13P-GT) | 28 | PCIe 3.0 x16 | 1024, 2048 | 384:32:16 | 835, 745, 900* | 835, 745, 900* | 1800, 4000, 5000* | 13.4, 11.9, 14.4* | 26.7, 23.8, 28.8* | 28.8, 64.0, 80.0* | DDR3, GDDR5 | 128 | 11.0 | 4.4 | 1.1 | 641.3, 572.2, 691.2 | 45 | Kepler architecture * |
GeForce GTX 660M[20][27][28][29] | March 22, 2012 | GK107 (N13E-GE) | 28 | PCIe 3.0 x16 | 2048 | 384:32:16 | 835 | 835 | 5000 | 13.4 | 26.7 | 80.0 | GDDR5 | 128 | 11.0 | 4.4 | 1.1 | 641.3 | 50 | Kepler architecture |
GeForce GTX 670M[20] | April 2012 | GF114 (N13E-GS1-LP) | 40 | PCIe 2.0 x16 | 1536, 3072 | 336:56:24 | 598 | 1196 | 3000 | 14.35 | 33.5 | 72.0 | GDDR5 | 192 | 11.0 | 4.4 | 1.1 | 803.6 | 75 | OEM. Rebadged GTX 570M |
GeForce GTX 670MX | October 2012 | GK106 (N13E-GR) | 28 | PCIe 3.0 x16 | 1536, 3072 | 960:80:24 | 600 | 600 | 2800 | 14.4 | 48.0 | 67.2 | GDDR5 | 192 | 11.0 | 4.4 | 1.1 | 1152 | 75 | Kepler architecture |
GeForce GTX 675M[20] | April 2012 | GF114 (N13E-GS1) | 40 | PCIe 2.0 x16 | 2048 | 384:64:32 | 620 | 1240 | 3000 | 19.8 | 39.7 | 96.0 | GDDR5 | 256 | 11.0 | 4.4 | 1.1 | 952.3 | 100 | OEM. Rebadged GTX 580M |
GeForce GTX 675MX | October 2012 | GK106 (N13E-GSR) | 28 | PCIe 3.0 x16 | 4096 | 960:80:32 | 600 | 600 | 3600 | 19.2 | 48.0 | 115.2 | GDDR5 | 256 | 11.0 | 4.4 | 1.1 | 1152 | 100 | Kepler architecture |
GeForce GTX 680M | June 4, 2012 | GK104 (N13E-GTX) | 28 | PCIe 3.0 x16 | 4096 | 1344:112:32 | 720 | 720 | 3600 | 23 | 80.6 | 115.2 | GDDR5 | 256 | 11.0 | 4.4 | 1.1 | 1935.4 | 100 | Kepler architecture |
GeForce GTX 680MX | October 23, 2012 | GK104 | 28 | PCIe 3.0 x16 | 4096 | 1536:128:32 | 720 | 720 | 5000 | 23 | 92.2 | 160 | GDDR5 | 256 | 11.0 | 4.4 | 1.1 | 2234.3 | 100+ | Kepler architecture |
Model | Launch | Code Name | Fab (nm) | Bus interface | Memory (MiB) | Core Configuration1 | Clock Speed | Fillrate | Memory | API Support (version) | Processing Power2 (GFLOPS) | TDP (Watts) | Notes |
Core (MHz) | Shader (MHz) | Memory (MT/s) | Pixel (GP/s) | Texture (GT/s) | Bandwidth (GB/s) | Bus Type | Bus Width (bit) | DirectX | OpenGL | OpenCL |
Chipset table
See also
References
- ^ http://blogs.nvidia.com/blog/2014/03/20/directx-12/
- ^ a b c d e f g h NVIDIA GeForce GTX 680 whitepaper (PDF), page 6 of 29.
- ^ a b c d e Smith, Ryan (March 22, 2012). "NVIDIA GeForce GTX 680 Review: Retaking The Performance Crown". AnandTech. Retrieved November 25, 2012.
- ^ "GK104: The Chip And Architecture GK104: The Chip And Architecture". Tom;s Hardware. March 22, 2012.
- ^ "NVIDIA Kepler GK110 Architecture Whitepaper" (PDF).
- ^ http://blogs.nvidia.com/blog/2014/03/20/directx-12/
- ^ "Introducing The GeForce GTX 680 GPU". Nvidia. March 22, 2012.
- ^ a b "Benchmark Results: NVEnc And MediaEspresso 6.5". Tom’s Hardware. March 22, 2012.
- ^ Yam, Marcus (September 22, 2010). "Nvidia roadmap". Tom's Hardware US.
- ^ "Performance Perfected: Introducing the GeForce GTX 690". GeForce. April 1, 2012. Retrieved March 1, 2014.
- ^ "Introducing The GeForce GTX 670 GPU". GeForce. March 19, 2012. Retrieved March 1, 2014.
- ^ "Meet Your New Weapon: The GeForce GTX 660 Ti. Borderlands 2 Included". GeForce. August 15, 2012. Retrieved March 1, 2014.
- ^ "Kepler For Every Gamer: Meet The New GeForce GTX 660 & 650". GeForce. September 12, 2012. Retrieved March 1, 2014.
- ^ "Kepler Family Complete : Introducing the GeForce GTX 650 Ti". GeForce. October 9, 2012. Retrieved March 1, 2014.
- ^ "GTX 650 Ti BOOST: Tuned For Sweet Spot Gaming". GeForce. March 26, 2013. Retrieved March 1, 2014.
- ^ "Test: NVIDIA GeForce GTX 660". Hardwareluxx.com. September 13, 2012. Retrieved May 7, 2013.
- ^ "GeForce GTX 660 (OEM)". GeForce.com. Retrieved September 13, 2012.
- ^ "NVIDIA GeForce GTX 690 Review: Ultra Expensive, Ultra Rare, Ultra Fast". AnandTech. Retrieved May 7, 2013.
- ^ "GeForce 610M Graphics Card with Optimus technology | NVIDIA". Nvidia.in. Retrieved May 7, 2013.
- ^ a b c d e f g h i "NVIDIA's GeForce 600M Series: Mobile Kepler and Fermi Die Shrinks". AnandTech. Retrieved May 7, 2013.
- ^ "GeForce GT 630M Graphics Card with Optimus technology | NVIDIA". Nvidia.in. Retrieved May 7, 2013.
- ^ "GT 630M GPU with NVIDIA Optimus Technology". GeForce. Retrieved May 7, 2013.
- ^ "GeForce GT 635M GPU with NVIDIA Optimus technology | NVIDIA". Nvidia.in. Retrieved May 7, 2013.
- ^ "GT 635M GPU with NVIDIA Optimus Technology". GeForce. Retrieved May 7, 2013.
- ^ "Acer Aspire TimelineU M3: Life on the Kepler Verge". AnandTech. Retrieved May 7, 2013.
- ^ "HP Lists New Ivy Bridge 2012 Mosaic Design Laptops, Available April 8th". Laptopreviews.com. March 18, 2012. Retrieved May 7, 2013.
- ^ a b "Help Me Choose | Dell". Content.dell.com. April 13, 2012. Retrieved May 7, 2013.
- ^ Wollman, Dana (January 8, 2012). "Lenovo unveils six mainstream consumer laptops (and one desktop replacement)". Engadget.com. Retrieved May 7, 2013.
- ^ 660M power draw tested in Asus G75VW.
External links
- Introducing the GeForce GTX 680 GPU
- Kepler Whitepaper
- Introducing The GeForce GTX 680M Mobile GPU
- GeForce 600M Notebooks: Powerful and Efficient
- GeForce GTX 690
- GeForce GTX 680
- GeForce GTX 670
- GeForce GTX 660 Ti
- GeForce GTX 660
- GeForce GTX 650 Ti BOOST
- GeForce GTX 650 Ti
- GeForce GTX 650
- GeForce GT 640
- GeForce GTX 680MX
- GeForce GTX 680M
- GeForce GTX 675MX
- GeForce GTX 670MX
- GeForce GTX 660M
- GeForce GT 650M
- GeForce GT 645M
- GeForce GT 640M
- A New Dawn
- Nvidia Nsight