GeForce 3 series

| Release date | 2001 |
|---|---|
| Codename | NV20 |
| Cards | |
| Entry-level | None |
| Mid-range | GeForce 3, Ti 200 |
| High-end | GeForce 3, Ti 500 |
| API support | |
| DirectX | Direct3D 8.0 (Vertex Shader 1.1, Pixel Shader 1.1) |
| OpenGL | OpenGL 1.2 |
| History | |
| Predecessor | GeForce 2 Series |
| Successor | GeForce 4 Series |
The GeForce 3 (NV20) is the third generation of NVIDIA's GeForce graphics processing units. Introduced in March 2001, it advanced the GeForce architecture by adding programmable pixel and vertex shaders and multisample anti-aliasing, and by improving the overall efficiency of the rendering process.
The GeForce 3 was unveiled during the 2001 Macworld conference and powered real-time demos of Pixar's Junior Lamp and id Software's Doom 3. Apple would later announce launch rights for the card on its new line of computers.
The GeForce 3 family comprises three consumer models: the GeForce 3, the GeForce 3 Ti200, and the GeForce 3 Ti500. A separate professional version, with a feature set tailored for computer-aided design, was sold as the Quadro DCC. A derivative of the GeForce 3, known as the NV2A, is used in the Microsoft Xbox game console.
Programmable shaders and new features
Introduced three months after NVIDIA acquired 3dfx and marketed as the nFinite FX Engine, the GeForce 3 was the first Microsoft Direct3D 8.0-compliant 3D card. Its programmable shader architecture enabled applications to execute custom visual-effects programs written in Microsoft shader language 1.1. In terms of raw pixel and texel throughput, the GeForce 3 has four pixel pipelines, each of which can sample two textures per clock: the same configuration as the GeForce 2, excluding the slower GeForce 2 MX line.
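The theoretical fill rates follow directly from those figures. A quick sketch of the arithmetic, using the shipping core clocks listed under Product positioning below:

```python
# Back-of-the-envelope fill rates: 4 pixel pipelines, each sampling
# 2 textures per clock, multiplied by the core clock in MHz.
PIPELINES = 4
TEXTURES_PER_PIPE = 2

for name, core_mhz in [("GeForce 3", 200), ("Ti200", 175), ("Ti500", 240)]:
    pixel_rate = PIPELINES * core_mhz            # megapixels per second
    texel_rate = pixel_rate * TEXTURES_PER_PIPE  # megatexels per second
    print(f"{name}: {pixel_rate} Mpixel/s, {texel_rate} Mtexel/s")
```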
To take better advantage of available memory performance, the GeForce 3 has a memory subsystem dubbed Lightspeed Memory Architecture (LMA). It comprises several mechanisms that reduce overdraw, conserve memory bandwidth by compressing the z-buffer (depth buffer), and manage interaction with the DRAM more efficiently.
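The effect of such mechanisms is easiest to see with a toy model. The sketch below is not NVIDIA's actual algorithm; the 4:1 compression ratio and 50% rejection rate are assumed figures, chosen only to illustrate the shape of the saving:

```python
def z_traffic_megabytes(width, height, depth_complexity, bytes_per_z=4,
                        compression_ratio=1.0, early_reject_rate=0.0):
    """Approximate z-buffer traffic per frame, in megabytes.

    depth_complexity  -- average depth tests per screen pixel
    compression_ratio -- lossless z-compression ratio (assumed figure)
    early_reject_rate -- fraction of occluded fragments discarded
                         before their writes (assumed figure)
    """
    tests = width * height * depth_complexity
    reads = tests * bytes_per_z / compression_ratio
    writes = tests * (1.0 - early_reject_rate) * bytes_per_z / compression_ratio
    return (reads + writes) / 1e6

naive = z_traffic_megabytes(1024, 768, 3)
lma_style = z_traffic_megabytes(1024, 768, 3, compression_ratio=4.0,
                                early_reject_rate=0.5)
print(f"naive: {naive:.1f} MB/frame, LMA-style: {lma_style:.1f} MB/frame")
```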
Other architectural changes include improvements to anti-aliasing functionality. Previous GeForce chips could perform only super-sampled anti-aliasing (SSAA), a demanding process that renders the image at a larger internal resolution and then scales it down to the final output resolution. The GeForce 3 adds multi-sample anti-aliasing (MSAA) and Quincunx anti-aliasing, both of which perform significantly better than super-sampling at the expense of some image quality. With multi-sampling, the render output units super-sample only the depth and stencil buffers and use that information to determine whether a pixel covers more than one polygonal object. This spares the pixel/fragment shader from rendering multiple fragments for pixels in which a single object covers all of the sub-pixels. The method fails, however, with texture maps that have varying transparency (e.g. a texture map representing a chain-link fence). Quincunx anti-aliasing combines 2x MSAA with a form of blur filter that shifts the rendered image half a pixel up and half a pixel left to create sub-pixels, which are then averaged together in a diagonal cross pattern; this removes jagged edges but also destroys some overall image detail. Finally, the GeForce 3's texture sampling units were upgraded to support 8-tap anisotropic filtering, compared to the previous limit of 2-tap on the GeForce 2. With 8-tap anisotropic filtering enabled, distant textures are noticeably sharper.
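The Quincunx resolve can be modelled as a five-tap filter. The sketch below assumes the commonly cited weights (1/2 for the centre tap, 1/8 for each diagonal neighbour) and shows why the pattern softens edges but also blurs legitimate texture detail:

```python
import numpy as np

def quincunx_filter(img):
    """Toy model of the Quincunx resolve: each output pixel is a
    weighted average of itself and its four diagonal neighbours
    (weights assumed: 1/2 centre, 1/8 per diagonal tap)."""
    p = np.pad(img, 1, mode="edge")
    centre = p[1:-1, 1:-1]
    diagonals = p[:-2, :-2] + p[:-2, 2:] + p[2:, :-2] + p[2:, 2:]
    return 0.5 * centre + 0.125 * diagonals

# A hard vertical edge: the filter softens the step, but it also
# smears intensity one pixel into the flat regions on either side.
edge = np.zeros((4, 6))
edge[:, 3:] = 1.0
print(quincunx_filter(edge))
```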
Performance
Despite the various improvements made to the GeForce 3, the original card and the Ti200 sometimes lose to the GeForce 2 Ultra, because the GeForce 3 GPU has the same pixel and texel throughput per clock as the GeForce 2 (NV15). Although the GeForce 2 is less efficient than the GeForce 3 overall, the GeForce 2 Ultra GPU is clocked 25% faster than the original GeForce 3 and 43% faster than the Ti200, and it has considerable memory bandwidth, matched only by the GeForce 3 Ti500. However, when anti-aliasing is enabled the GeForce 3 is clearly superior because of its improved anti-aliasing support and its better memory bandwidth and fill-rate management.
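The quoted percentages are plain clock arithmetic, using the GeForce 2 Ultra's 250 MHz core clock implied by the comparison:

```python
# Core-clock ratios behind the percentages above (clocks in MHz).
gf2_ultra, gf3, ti200 = 250, 200, 175
print(f"GeForce 2 Ultra vs GeForce 3: +{gf2_ultra / gf3 - 1:.0%}")    # +25%
print(f"GeForce 2 Ultra vs Ti200:     +{gf2_ultra / ti200 - 1:.0%}")  # +43%
```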
The GeForce 3 had no DirectX 8-compliant competition until the arrival of the Radeon 8500. The Radeon 8500, when clocked at retail specifications, is superior to the original GeForce 3 and the Ti200, while the Ti500 is comparable to it. The GeForce 3 also supports multi-sample anti-aliasing, which is much less demanding than super-sampling and thus more usable in contemporary games; the feature is not available on the Radeon 8500.

Product positioning
NVIDIA refreshed the lineup in October 2001 with the release of the GeForce 3 Ti200 and Ti500, coinciding with ATI's releases of the Radeon 8500 and Radeon 7500. The Ti500 has higher core and memory clocks (240 MHz core / 250 MHz RAM) than the original GeForce 3 (200 MHz / 230 MHz) and generally matches the Radeon 8500. The Ti200 was the slowest and lowest-priced GeForce 3 release; although clocked lower (175 MHz / 200 MHz), it surpasses the Radeon 7500 in speed and feature set, apart from dual-monitor support.
The GeForce 2 and GeForce 3 lines were replaced in early 2002 by the GeForce 4 MX and Ti lines, respectively. The GeForce 4 Ti was very similar to its predecessor; the main differences were higher core and memory speeds, a revised memory controller, improved vertex and pixel shaders, hardware anti-aliasing and DVD playback. Proper dual-monitor support was also brought over from the GeForce 2 MX. With the GeForce 4 Ti 4600 as the new flagship product, this was the beginning of the end for the GeForce 3 Ti500, which was already difficult to produce due to poor yields; it was later completely replaced by the Ti 4200.

However, the GeForce 3 Ti200 was kept in production for a short while, as it occupied a spot between the (delayed) GeForce 4 Ti 4200 and the GeForce 4 MX 460 in performance. Despite this positioning, which could have kept the chip going until the end of 2002, it was discontinued due to naming confusion with the GeForce 4 MX and Ti lines. The discontinuation of the GeForce 3 Ti200 and Radeon 8500LE disappointed many enthusiasts, because the performance-oriented Ti 4200 had not yet fallen to midrange prices, while the mass-market Radeon 9000 was not as fast as the Ti200 and 8500LE.

The original GeForce 3 and the Ti500 derivative were released only in 64 MiB configurations throughout their lifetimes; some third parties sold 128 MiB versions of the Ti200.
Specifications
Discontinued support
NVIDIA has ceased driver support for the GeForce 3 series.
Successor
The GeForce 4 series (non-MX), introduced in April 2002, was a revision of the GeForce 3 architecture. The budget variant, dubbed the GeForce 4 MX, was closer in design to the GeForce 2.
Final drivers include
- Windows 9x & Windows Me: 81.98, released on December 21, 2005
- Windows 2000, 32-bit Windows XP & Media Center Edition: 93.71, released on November 2, 2006 (despite claims in the documentation that 94.24 supports the GeForce 3 series, it does not)