GeForce 256

From Wikipedia, the free encyclopedia
GeForce 256 Series
  • Release date: October 11, 1999 (SDR); February 1, 2000 (DDR)
  • Codename: NV10
  • Cards: GeForce 256 SDR (mid-range); GeForce 256 DDR (high-end)
  • API support: Direct3D 7.0; OpenGL 1.3 (T&L)
  • Predecessor: RIVA TNT2
  • Successor: GeForce 2 Series

The GeForce 256 is the original release in Nvidia's "GeForce" product line. Announced on August 31, 1999, and released on October 11, 1999, the GeForce 256 improved on its predecessor (RIVA TNT2) by increasing the number of fixed pixel pipelines, offloading host geometry calculations to a hardware transform and lighting (T&L) engine, and adding hardware motion compensation for MPEG-2 video. It offered a notably large leap in 3D gaming performance and was the first fully Direct3D 7-compliant 3D accelerator.

Architecture

GeForce 256 (NV10) GPU
Quadro (NV10GL) GPU

GeForce 256 was marketed as "the world's first 'GPU', or Graphics Processing Unit", a term Nvidia defined at the time as "a single-chip processor with integrated transform, lighting, triangle setup/clipping, and rendering engines that is capable of processing a minimum of 10 million polygons per second."[1]

The "256" in its name stems from the "256-bit QuadPipe Rendering Engine", a term describing the four 64-bit pixel pipelines of the NV10 chip. In single-textured games the NV10 could output four pixels per clock, while a dual-textured scenario limited this to two multitextured pixels per clock, as the chip still had only one TMU per pipeline, just like the TNT2.[2] In terms of rendering features, the GeForce 256 also added support for Cube Environment Mapping[3] and Dot Product (Dot3) Bump Mapping.[4]
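The pipeline arithmetic above can be sketched numerically. Note that the 120 MHz core clock used below is the commonly cited figure for the GeForce 256 and is an assumption of this illustration, not stated in this article:

```python
# Sketch of GeForce 256 (NV10) theoretical fill-rate arithmetic.
# Assumes the commonly cited 120 MHz core clock; 4 pipelines, 1 TMU each.

CORE_CLOCK_HZ = 120_000_000  # assumed core clock (illustrative)
PIPELINES = 4                # four 64-bit pixel pipelines ("QuadPipe")
TMUS_PER_PIPELINE = 1        # one texture unit per pipeline, as on TNT2

def fill_rate(textures_per_pixel: int) -> int:
    """Pixels per second when each pixel samples `textures_per_pixel` textures.

    With one TMU per pipeline, each texture layer beyond the first costs
    an extra clock, so multitexturing halves (or worse) the pixel output.
    """
    clocks_per_pixel = max(1, -(-textures_per_pixel // TMUS_PER_PIPELINE))
    return CORE_CLOCK_HZ * PIPELINES // clocks_per_pixel

print(fill_rate(1))  # single-texture: 480000000 pixels/s
print(fill_rate(2))  # dual-texture:   240000000 pixels/s
```

This is why the article notes that two-textured scenes drop the chip to two multitextured pixels per cycle: the pixel pipelines stall on the second texture fetch.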

The integration of the transform and lighting hardware into the GPU itself set the GeForce 256 apart from older 3D accelerators that relied on the CPU to perform these calculations (also known as software transform and lighting). This integration reduced the complexity of a complete 3D graphics solution, bringing its cost to a new low and making hardware T&L available on inexpensive consumer graphics cards rather than only on the expensive, professionally oriented boards previously built for computer-aided design (CAD). NV10's T&L engine also allowed Nvidia to enter the CAD market with dedicated cards for the first time, with a product called Quadro. The Quadro line uses the same silicon chips as the GeForce cards, but has different driver support and certifications tailored to the unique requirements of CAD applications.[5]
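As a rough illustration of the per-vertex work the T&L engine took over from the CPU, here is a minimal sketch of a fixed-function-style transform and diffuse lighting step. The function names, matrix layout, and single directional light are illustrative assumptions, not Nvidia's implementation:

```python
# Minimal sketch of fixed-function "transform and lighting" work:
# transform a vertex by a 4x4 matrix, then shade it with one
# directional diffuse light. All names here are illustrative.

def transform(matrix, v):
    """Multiply a 4x4 row-major matrix by a 4-component vertex."""
    return [sum(matrix[r][c] * v[c] for c in range(4)) for r in range(4)]

def diffuse(normal, light_dir):
    """Lambertian term clamp(N . L, 0, 1), as fixed-function lighting computes."""
    d = sum(n * l for n, l in zip(normal, light_dir))
    return max(0.0, min(1.0, d))

identity = [[1.0 if r == c else 0.0 for c in range(4)] for r in range(4)]
vertex = [1.0, 2.0, 3.0, 1.0]
print(transform(identity, vertex))                # [1.0, 2.0, 3.0, 1.0]
print(diffuse([0.0, 0.0, 1.0], [0.0, 0.0, 1.0]))  # 1.0
```

Before hardware T&L, loops of exactly this kind ran on the host CPU for every vertex in a scene; the NV10 performed them in dedicated silicon instead.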

Product comparisons

Compared to previous high-end 3D game accelerators, such as the 3dfx Voodoo3 3500 and Nvidia RIVA TNT2 Ultra, the GeForce provided a frame-rate improvement of 50% or more in some game titles (ones specifically written to take advantage of the hardware T&L) when coupled with a very low-budget CPU. The later release and widespread adoption of GeForce 2 MX/4 MX cards with the same feature set meant unusually long support for the GeForce 256, until approximately 2006, in games such as Star Wars: Empire at War or Half-Life 2, the latter of which featured a Direct3D 7 path targeting the fixed-function pipeline of these GPUs.

Without broad application support at the time, critics pointed out that the T&L technology had little real-world value. Initially, it was only somewhat beneficial in certain situations in a few OpenGL-based 3D first-person shooters, most notably Quake III Arena. Benchmarks using low-budget CPUs like the Celeron 300A gave favourable results for the GeForce 256, but benchmarks done with some CPUs such as the Pentium II 300 gave better results with some older graphics cards like the 3dfx Voodoo 2. 3dfx and other competing graphics card companies pointed out that a fast CPU could more than make up for the lack of a T&L unit. Software support for hardware T&L was not commonplace until several years after the release of the first GeForce. Early drivers were buggy and slow, while 3dfx cards enjoyed efficient, high-speed, mature Glide API and/or MiniGL support for the majority of games. Only after the GeForce 256 was replaced by the GeForce 2, and ATI's T&L-equipped Radeon was also on the market, did hardware T&L become a widely utilized feature in games.

The GeForce 256 was also quite expensive for the time and did not offer tangible advantages over competitors' products outside of 3D acceleration. For example, its GUI and video playback acceleration were not significantly better than those offered by the competition or even by older Nvidia products. Additionally, some GeForce cards were plagued with poor analog signal circuitry that caused display output to be blurry.[citation needed]

As CPUs became faster, the GeForce 256 exposed a disadvantage of hardware T&L: if the CPU is fast enough, it can perform T&L calculations faster than the GPU, making the GPU's T&L unit a hindrance to rendering performance. This changed the way the graphics market functioned, encouraging shorter graphics card lifetimes and placing less emphasis on the CPU for gaming.

Motion Compensation

The GeForce 256 introduced[6] Motion Compensation as a functional unit of the NV10 chip.[7][8][9] This first-generation unit was succeeded by Nvidia's HDVP (High-Definition Video Processor) in the GeForce 2 GTS.

Specifications

Discontinued support

Nvidia has ceased driver support for the GeForce 256 series.

VisionTek GeForce 256 DDR

Final Drivers Include
  • Windows 9x & Windows Me: 71.84, released on March 11, 2005 (Product Support List Windows 95/98/Me – 71.84).
  • Windows 2000 & 32-bit Windows XP: 71.89, released on April 14, 2005 (Product Support List Windows XP/2000 – 71.89).
  • The Windows 2000/XP drivers may be installed on later versions of Windows, such as Windows 7; however, they do not support the Aero effects of Windows 7.
