|Introduced||August 31, 1999|
|Type||Consumer graphics cards|
GeForce is a brand of graphics processing units (GPUs) designed by Nvidia. As of 2013, there have been twelve iterations of the design. The first GeForce products were discrete GPUs designed for use on add-on graphics boards, intended for the high-margin PC gaming market. Later diversification of the product line covered all tiers of the PC graphics market, from cost-sensitive, motherboard-integrated GPUs to mainstream, add-in, retail boards. Most recently, GeForce technology has been introduced into Nvidia's line of embedded application processors, designed for electronic handhelds and mobile handsets.
With respect to discrete GPUs found on add-in graphics boards, Nvidia's GeForce and AMD's Radeon are the only remaining competitors in the high-end market. Along with the Radeon, the GeForce architecture is moving toward GPGPU (general-purpose computing on graphics processing units). GPGPU is expected to expand GPU functionality beyond the traditional rasterization of 3D graphics, turning the GPU into a high-performance computing device able to execute arbitrary program code as a CPU does, but with different strengths (highly parallel execution of straightforward calculations) and weaknesses (worse performance on complex branching code).
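The division of labor described above can be sketched in plain Python. The kernel/launch structure below mirrors how GPGPU frameworks such as CUDA organize work; the names and the sequential loop are illustrative stand-ins, not a real GPU API:

```python
# Illustrative sketch of the GPGPU programming model (not real GPU code).
# A GPU runs the same small "kernel" across thousands of data elements in
# parallel; heavy per-element branching would serialize and slow it down.

def saxpy_kernel(i, a, x, y):
    """One 'thread': a straightforward calculation on element i."""
    return a * x[i] + y[i]

def launch(kernel, n, *args):
    """Stand-in for a parallel launch: on a GPU, all n invocations
    would execute concurrently rather than in a Python loop."""
    return [kernel(i, *args) for i in range(n)]

x = [1.0, 2.0, 3.0, 4.0]
y = [10.0, 20.0, 30.0, 40.0]
print(launch(saxpy_kernel, len(x), 2.0, x, y))  # [12.0, 24.0, 36.0, 48.0]
```

The kernel itself contains no loop: the parallelism lives entirely in the launch, which is why simple, uniform calculations map so well onto GPUs.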
The "GeForce" name originated from a contest Nvidia held in early 1999 called "Name That Chip", in which the company invited the public to name the successor to the RIVA TNT2 line of graphics boards. Over 12,000 entries were received, and seven winners each received a RIVA TNT2 Ultra graphics card as a reward.
Graphics processor generations
- GeForce 256
- Launched on August 31, 1999, the GeForce 256 (NV10) was the first consumer-level PC graphics chip with hardware transform, lighting, and shading, although 3D games utilizing these features did not appear until later. Initial GeForce 256 boards shipped with SDR SDRAM memory; later boards shipped with faster DDR SDRAM memory.
- GeForce 2 Series
- Launched in April 2000, the first GeForce2 (NV15) was another high-performance graphics chip. Nvidia moved to a twin texture processor per pipeline (4x2) design, doubling texture fillrate per clock compared to GeForce 256. Later, Nvidia released the GeForce2 MX (NV11), which offered performance similar to the GeForce 256 but at a fraction of the cost. The MX was a compelling value in the low/mid-range market segments and was popular with OEM PC manufacturers and users alike. The GeForce 2 Ultra was the high-end model in this series.
- GeForce 3 Series
- Launched in February 2001, the GeForce3 (NV20) introduced programmable vertex and pixel shaders to the GeForce family and to consumer-level graphics accelerators. It had good overall performance and shader support, making it popular with enthusiasts, although it never hit the midrange price point. The NV2A developed for the Microsoft Xbox game console is a derivative of the GeForce 3.
- GeForce 4 Series
- Launched in February 2002, the high-end GeForce4 Ti (NV25) was mostly a refinement to the GeForce3. The biggest advancements included enhancements to anti-aliasing capabilities, an improved memory controller, a second vertex shader, and a manufacturing process size reduction to increase clock speeds. Another "family member," the budget GeForce4 MX, was based on the GeForce2, with a few additions from the new GeForce4 Ti line. It targeted the value segment of the market and lacked pixel shaders. Most of these models used the AGP4x interface, but a few began the transition to AGP8x.
- GeForce FX Series
- Launched in 2003, the GeForce FX (NV30) was a huge change in architecture compared to its predecessors. The GPU was designed not only to support the new Shader Model 2 specification but also to perform well on older titles. However, initial models like the GeForce FX 5800 Ultra suffered from weak floating-point shader performance and excessive heat, which required infamously noisy two-slot cooling solutions. Products in this series carry the 5000 model number, as it is the fifth generation of GeForce, though Nvidia marketed the cards as GeForce FX instead of GeForce 5 to show off "the dawn of cinematic rendering".
- GeForce 6 Series
- Launched in April 2004, the GeForce 6 (NV40) added Shader Model 3.0 support to the GeForce family, while correcting the weak floating point shader performance of its predecessor. It also implemented high dynamic range imaging and introduced SLI (Scalable Link Interface) and PureVideo capability (integrated partial hardware MPEG-2, VC-1, Windows Media Video, and H.264 decoding and fully accelerated video post-processing).
- GeForce 7 Series
- The 7th generation GeForce (G70/NV47) was launched in June 2005 and was the last Nvidia video card series to support the AGP bus. The design was a refined version of the GeForce 6, the major improvements being a widened pipeline and an increase in clock speed. The GeForce 7 also offered new transparency supersampling and transparency multisampling anti-aliasing modes (TSAA and TMAA), which were later enabled for the GeForce 6 series as well. The GeForce 7950GT featured the highest-performance GPU with an AGP interface in the Nvidia line. This era began the transition to the PCI-Express interface.
- A 128-bit, 8 ROP variant of the 7950 GT, called the RSX 'Reality Synthesizer', is used as the main GPU in the Sony PlayStation 3.
- GeForce 8 Series
- Released on November 8, 2006, the 8th generation GeForce (originally G80) was the first GPU to fully support Direct3D 10. Manufactured on an 80 nm process and built on the new Tesla microarchitecture, it implemented the unified shader model. Originally only the 8800GTX was available; the GTS arrived months into the product line's life, and it took nearly six months for mid-range and OEM/mainstream cards to be integrated into the 8 series. A die shrink to 65 nm and a revision of the G80 design, codenamed G92, were implemented into the 8 series with the 8800GS, 8800GT, and 8800GTS-512, first released on October 29, 2007, almost a full year after the initial G80 release.
- GeForce 9 Series / GeForce 100 Series
- The first product was released on February 21, 2008, less than four months after the initial G92 release. All 9-series designs are revisions of existing late 8-series products. The 9800GX2 uses two G92 GPUs, as used in later 8800 cards, in a dual-PCB configuration while still requiring only a single PCI-Express 16x slot. The 9800GX2 utilizes two separate 256-bit memory buses, one for each GPU and its respective 512 MB of memory, for a total of 1 GB of memory on the card (although the SLI configuration of the chips necessitates mirroring the frame buffer between the two chips, effectively halving the memory performance of a 256-bit/512 MB configuration). The later 9800GTX features a single G92 GPU, a 256-bit data bus, and 512 MB of GDDR3 memory. Prior to the release, no concrete information was known, apart from officials claiming that the next-generation products would have close to 1 TFLOPS of performance with the GPU cores still manufactured on the 65 nm process, and reports of Nvidia downplaying the significance of Direct3D 10.1. In March 2009, several sources reported that Nvidia had quietly launched a new series of GeForce products, designated the GeForce 100 Series, consisting of rebadged 9-series parts. GeForce 100 products were not available for individual purchase.
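The memory arithmetic above can be made explicit. A minimal sketch, assuming simple frame-buffer mirroring across the two GPUs:

```python
# Effective memory of a dual-GPU SLI card such as the 9800GX2 (illustrative).
# Each GPU has its own 256-bit bus and its own memory, but SLI mirrors the
# frame buffer, so usable capacity equals one GPU's memory, not the sum.

def sli_memory(per_gpu_mb, gpu_count):
    total_mb = per_gpu_mb * gpu_count  # what the box advertises
    effective_mb = per_gpu_mb          # mirrored: every GPU holds a copy
    return total_mb, effective_mb

total, effective = sli_memory(512, 2)
print(total, effective)  # 1024 512
```

This is why a "1 GB" dual-GPU card behaves, capacity-wise, like a single 512 MB card.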
- GeForce 200 Series / GeForce 300 Series
- Based on the GT200 graphics processor, consisting of 1.4 billion transistors and codenamed Tesla, the 200 series was launched on June 16, 2008. This generation takes the card-naming scheme in a new direction, replacing the series number (such as 8800 for 8-series cards) with a GTX or GTS suffix (which used to go at the end of card names, denoting their 'rank' among similar models) and adding model numbers such as 260 and 280 after it. The series features the new GT200 core on a 65 nm die. The first products were the GeForce GTX 260 and the more expensive GeForce GTX 280. The GeForce 310, released on November 27, 2009, is a rebrand of the GeForce 210. The 300 series cards are rebranded DirectX 10.1-compatible GPUs from the 200 series.
- GeForce 400 Series / GeForce 500 Series
- Nvidia announced and released the GeForce GTX 470 and GTX 480 on April 7, 2010, the first cards based on the new Fermi architecture (codenamed GF100) and the first Nvidia GPUs to utilize 1 GB or more of the newer GDDR5 memory. The GTX 470 and GTX 480 were heavily criticized for high power use, high temperatures, and loud noise that were not balanced by the performance offered, even though the GTX 480 was the fastest DirectX 11 card at its introduction. They were, however, not as noisy as the GeForce FX 5800 Ultra. Later that year, Nvidia introduced the GeForce GTX 465, a cut-down, cheaper version of the GF100 chip aimed at mainstream users. The GTX 465 was quickly replaced by the GTX 460, based on the GF104 chip, which featured lower power consumption and better performance. Soon after, Nvidia released mainstream versions of the Fermi architecture, the GF106 and GF108, for consumers as well as OEMs. Nvidia also released a flagship GPU based on an enhanced GF100 design (GF110), called the GTX 580, which featured higher performance, lower power consumption, less heat, and less noise than the GTX 480, and which received much better reviews. Nvidia later released two updates to the GTX 470 and GTX 460, the GTX 570 and GTX 560 Ti, both of which also offered better performance than their predecessors. The GTX 480 and GTX 470 were then phased out, while the GTX 460 remained in production as a lower-budget high-end card. The series was capped by the GTX 590, a combination of two GTX 580 GPUs on a single card.
- GeForce 600 series / GeForce 700 series / GeForce 800M series
- In September 2010, Nvidia announced that the successor to the Fermi microarchitecture would be the Kepler microarchitecture, manufactured with the TSMC 28 nm fabrication process. Earlier, Nvidia had been contracted to supply its top-end GK110 cores for the Oak Ridge National Laboratory's "Titan" supercomputer, leading to a shortage of GK110 cores. After AMD launched its own annual refresh in early 2012, the Radeon HD 7000 series, whose performance fell well below that of GK110, Nvidia began the release of the GeForce 600 series in March 2012. The GK104 core, originally intended for the mid-range segment of the lineup, became the flagship GTX 680. It introduced significant improvements in performance, heat, and power efficiency compared to the Fermi architecture and closely matched AMD's flagship Radeon HD 7970. It was quickly followed by the dual-GK104 GTX 690 and the GTX 670, which featured an only slightly cut-down GK104 core and came very close to the GTX 680 in performance.
- In the following months, Nvidia released the GTX 660 Ti, based on a further cut-down GK104 core; the GTX 660 and 650 Ti, based on the GK106 core; and the GTX 650 and GT 640, based on the GK107 core. In February 2013, 11 months after the launch of the 600 series, Nvidia announced a new card based on the GK110 core, branded the GeForce GTX TITAN and named for the Titan supercomputer in which the cores were first used. With 2688 CUDA cores, it easily outperformed the previous top-end cards, the 1536-core GTX 680 and 2048-core Radeon HD 7970, by 60-70% on average, nearly rivaling the performance of two GTX 680 cards working together. It was equipped with 6 GB of memory, twice as much as the Radeon HD 7970 and three times as much as the GTX 680. The TITAN also retained the full double-precision capability of the Kepler architecture, which is typically crippled on GeForce cards and left intact only on Quadro and Tesla cards. It was paired with an advanced cooler that was both quiet and effective despite the high power draw of the GK110 core.
- With the GTX TITAN, Nvidia also released GPU Boost 2.0, which allows the GPU clock speed to increase until a user-set temperature limit is reached, without exceeding a user-specified maximum fan speed. The final GeForce 600 series release was the GTX 650 Ti BOOST, based on the GK106 core, in response to AMD's Radeon HD 7790. At the end of May 2013, Nvidia announced the 700 series, still based on the Kepler architecture but featuring a GK110-based card at the top of the lineup. The GTX 780 was a slightly cut-down TITAN that achieved nearly the same performance for two-thirds of the price. It featured the same advanced reference cooler design, but did not have the unlocked double-precision cores and was equipped with 3 GB of memory.
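The boost behavior described above amounts to a feedback loop: raise the clock step by step while both user-set limits hold. A loose sketch with hypothetical clock steps and thermal/fan models (none of these values come from Nvidia):

```python
# Loose sketch of GPU Boost 2.0's described behavior (hypothetical values).
# Raise the clock while the GPU stays under a user-set temperature limit
# and the fan stays under a user-set maximum speed.

def boost_clock(base_mhz, temp_at, fan_at, temp_limit_c, fan_limit_rpm,
                step_mhz=13, max_steps=100):
    clock = base_mhz
    for _ in range(max_steps):
        trial = clock + step_mhz
        if temp_at(trial) > temp_limit_c or fan_at(trial) > fan_limit_rpm:
            break                      # next step would violate a limit
        clock = trial
    return clock

# Toy models: temperature and fan speed rise linearly with clock speed.
temp = lambda mhz: 30 + 0.05 * (mhz - 900)
fan = lambda mhz: 1000 + 2 * (mhz - 900)
print(boost_clock(900, temp, fan, temp_limit_c=80, fan_limit_rpm=3000))  # 1888
```

Raising the temperature limit (or the fan ceiling) lets the loop settle on a higher clock, which is exactly the trade-off GPU Boost 2.0 exposes to the user.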
- At the same time, Nvidia announced ShadowPlay, a screen-capture solution that used an H.264 encoder built into the Kepler architecture that Nvidia had not previously revealed. It could record gameplay without a capture card and with negligible performance decrease compared to software recording solutions, and it was available even on the previous-generation GeForce 600 series cards. The software beta for ShadowPlay, however, experienced multiple delays and was not released until the end of October 2013. A week after the release of the GTX 780, Nvidia announced the GTX 770 as a rebrand of the GTX 680. It was followed shortly by the GTX 760, also based on the GK104 core and similar to the GTX 660 Ti. No more 700 series cards were set for release in 2013, although Nvidia announced G-Sync, another previously unmentioned feature of the Kepler architecture, which allows the GPU to dynamically control the refresh rate of G-Sync-compatible monitors (released in 2014) to combat tearing and judder. In October, however, AMD released the R9 290X, which came in at $100 less than the GTX 780. In response, Nvidia cut the price of the GTX 780 by $150 and released the GTX 780 Ti, which featured a full 2880-core GK110 even more powerful than the GTX TITAN, along with enhancements to the power delivery system that improved overclocking, and managed to pull ahead of AMD's new release.
- The GeForce 800M series consists of rebranded 700M series parts based on the Kepler architecture and some lower-end parts based on the newer Maxwell architecture.
- GeForce 900 series
- In March 2013, Nvidia announced that the successor to Kepler would be the Maxwell microarchitecture. It was released in 2014.
- GeForce 1000 Series / GeForce 1100 Series
- In March 2014, Nvidia announced that the successor to Maxwell would be the Pascal microarchitecture, due in 2016; this successor microarchitecture was initially called Volta. Architectural improvements include the following:
- 3D memory – layers of DRAM chips stacked into dense modules with wide buses, and moved onto the same package with the GPU.
- Unified memory – memory architecture unified so CPU and GPU can access both main system memory and memory on the graphics card.
- NVLink – a power-efficient high-speed bus between the CPU and GPU, and between multiple GPUs. Allows much higher transfer speeds than those achievable by using PCI Express; estimated to provide between 80 and 200 GB/s.
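For a rough sense of scale, and assuming a PCI Express 3.0 x16 link at about 0.985 GB/s of usable bandwidth per lane, the quoted NVLink estimate works out to roughly 5 to 13 times the PCIe figure. Illustrative arithmetic, not a benchmark:

```python
# Rough bandwidth comparison (illustrative figures): NVLink's estimated
# 80-200 GB/s versus a PCI Express 3.0 x16 link (~0.985 GB/s per lane).

pcie3_x16_gbs = 16 * 0.985            # ~15.8 GB/s usable
nvlink_low, nvlink_high = 80, 200     # GB/s, per the estimate above

print(round(nvlink_low / pcie3_x16_gbs, 1),   # speedup at the low estimate
      round(nvlink_high / pcie3_x16_gbs, 1))  # speedup at the high estimate
```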
Mobile GPUs
Since the GeForce2, Nvidia has produced a number of graphics chipsets for notebook computers under the GeForce Go branding. Most of the features present in the desktop counterparts are present in the mobile ones. These GPUs have lower power consumption but also lower performance than their desktop counterparts.
Beginning with the GeForce 8 series, the GeForce Go brand was discontinued and mobile GPUs were integrated into the main GeForce line, their names suffixed with an M.
Small form factor GPUs
Similar to the mobile GPUs, Nvidia also released a few small form factor GPUs for use in all-in-one desktops. These GPUs are suffixed with an S, similar to the M used for mobile products.
Integrated desktop motherboard GPUs
Beginning with the nForce 4, Nvidia started including onboard graphics solutions in their motherboard chipsets. These onboard graphics solutions were called mGPUs (motherboard GPUs). Nvidia discontinued the nForce range, including these mGPUs, in 2009.
After the nForce range was discontinued, Nvidia released its Ion line in 2009, which consisted of an Intel Atom CPU partnered with a low-end GeForce 9 series GPU fixed on the motherboard. Nvidia released an upgraded Ion 2 in 2010, this time containing a low-end GeForce 300 series GPU.
Nomenclature
From the GeForce 4 series until the GeForce 9 series, the naming scheme below was used.
|Category||Number range||Suffix[a]||Price range[b] (USD)||Shader amount[c]||Memory type||Memory bus width[c]||Memory size[c]||Example products|
|Low-end (mainstream) graphics cards||000–550||SE, LE, No suffix, GS, GT, Ultra||<$100||<25%||DDR, DDR2||25–50%||~25%||GeForce 9400GT, GeForce 9500GT|
|Mid-range (performance) graphics cards||600–750||VE, LE, XT, No suffix, GS, GSO, GT, GTS, Ultra||$100–$175||25–50%||DDR2, GDDR3||50–75%||50–75%||GeForce 9600GT, GeForce 9600GSO|
|High-end (enthusiast) graphics cards||800–950||VE, LE, ZT, XT, No suffix, GS, GSO, GT, GTO, GTS, GTX, GTX+, Ultra, Ultra Extreme, GX2||>$175||50–100%||GDDR3||75–100%||50–100%||GeForce 9800GT, GeForce 9800GTX|
Since the release of the GeForce 100 series of GPUs, NVIDIA changed their product naming scheme to the one below.
|Category||Prefix||Number range (last 2 digits)||Price range[b] (USD)||Shader amount[c]||Memory type||Memory bus width[c]||Memory size[c]||Example products|
|Low-end (mainstream) graphics cards||No prefix, G, GT||00–35||<$100||<25%||DDR2, GDDR3, GDDR5||25–50%||~25%||GeForce GT 620, GeForce GT 630|
|Mid-range (performance) graphics cards||GT, GTS, GTX||40–65||$100–$300||25–50%||GDDR3, GDDR5||50–75%||75–100%||GeForce GTX 650, GeForce GTX 760|
|High-end (enthusiast) graphics cards||GTX||70–95||>$300||50–100%||GDDR3, GDDR5||75–100%||50–100%||GeForce GTX 690, GeForce GTX 770|
- Suffixes indicate a card's performance tier; those listed are ordered from weakest to most powerful. Suffixes from lesser categories can still be used on higher-performance cards, for example the GeForce 8800 GT.
- Price range only applies to the most recent generation and is a generalization based on pricing patterns.
- Shader amount compares the number of shader pipelines or units in that particular model range to the highest model possible in the generation.
- Earlier cards such as the GeForce4 follow a similar pattern.
- cf. Nvidia's performance graph.
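The newer naming scheme in the second table can be summarized as a small helper; the function name and tier labels below are illustrative, not official Nvidia terminology:

```python
# Hypothetical helper sketching the post-GeForce-100 naming scheme described
# above: the last two digits of the model number place a card in a tier.

def classify(model_number):
    last_two = model_number % 100
    if last_two <= 35:
        return "low-end (mainstream)"
    if last_two <= 65:
        return "mid-range (performance)"
    if last_two <= 95:
        return "high-end (enthusiast)"
    return "unknown"

print(classify(630))  # GeForce GT 630  -> low-end (mainstream)
print(classify(650))  # GeForce GTX 650 -> mid-range (performance)
print(classify(770))  # GeForce GTX 770 -> high-end (enthusiast)
```

The prefix (G, GT, GTS, GTX) overlaps between tiers, so the last two digits, not the prefix alone, determine where a card sits in the lineup.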
Graphics device drivers
Nvidia develops and publishes GeForce drivers for Windows XP x86/x86-64 and later, Linux x86/x86-64/ARMv7-A, OS X 10.5 and later, Solaris x86/x86-64, and FreeBSD x86/x86-64. A current version can be downloaded from Nvidia's website, and some Linux distributions carry it in their repositories. Nvidia GeForce driver 340.24, from July 8, 2014, supports the EGL interface, enabling support for Wayland. This may differ for the Nvidia Quadro brand, which is based on identical hardware but features OpenGL-certified graphics device drivers.
The Nvidia GeForce driver supports all features advertised for the GeForce brand.
Free and open-source
Community-created, free and open-source drivers exist as an alternative to the drivers released by Nvidia. Open-source drivers are developed primarily for Linux, though there may be ports to other operating systems. The most prominent alternative driver is the reverse-engineered free and open-source nouveau graphics device driver. Nvidia has publicly announced that it will not provide any support for such additional device drivers, although Nvidia has contributed code to the nouveau driver.
Free and open-source drivers support a large portion (but not all) of the features available in GeForce-branded cards. For example, as of January 2014 the nouveau driver lacks support for GPU and memory clock frequency adjustments and for the associated dynamic power management. In comparison benchmarks, Nvidia's proprietary drivers consistently perform better than nouveau. However, as of August 2014 and version 3.16 of the mainline Linux kernel, contributions by Nvidia allowed partial support for GPU and memory clock frequency adjustments to be implemented.
After Maxwell, the next architecture is code-named Pascal; after Pascal, the next is code-named Volta.