Shader

From Wikipedia, the free encyclopedia

Shaders are most commonly used to produce lighting and shadow in 3D modeling. This image illustrates Phong shading, one of the first computer shading models ever developed.
Shaders can also be used for special effects: for example, a digital photograph from a webcam, shown unshaded on the left, and the same image on the right with a special-effects shader applied that replaces all light areas of the image with white and the dark areas with a brightly colored texture.

In the field of computer graphics, a shader is a computer program that runs on the graphics processing unit and is used to do shading (the production of appropriate levels of light and darkness within an image) or, in the modern era, also to produce special effects or perform postprocessing.

Shaders calculate rendering effects on graphics hardware with a high degree of flexibility. Shading languages are used to program the programmable rendering pipeline of the graphics processing unit (GPU), which has mostly superseded the fixed-function pipeline that allowed only common geometry transformation and pixel-shading functions; with shaders, customized effects can be used. The position, hue, saturation, brightness, and contrast of all pixels, vertices, or textures used to construct a final image can be altered on the fly, using algorithms defined in the shader, and can be modified by external variables or textures introduced by the program calling the shader.

Shaders are used widely in cinema postprocessing, computer-generated imagery, and video games to produce a seemingly infinite range of effects. Beyond simple lighting models (see List of common shading algorithms), more complex uses include altering the hue, saturation, brightness, and/or contrast of an image; producing blur, bokeh, cel shading, posterization, bump mapping, distortion, chroma keying (so-called "bluescreen/greenscreen" effects), edge detection, and motion detection; and creating psychedelic effects, among a wide range of others.

History

The modern use of "shader" was introduced to the public by Pixar with its "RenderMan Interface Specification, Version 3.0", originally published in May 1988.

As graphics processing units evolved, major graphics software libraries such as OpenGL and Direct3D began to support shaders. The first shader-capable GPUs only supported pixel shading, but vertex shaders were quickly introduced once developers realized the power of shaders. Geometry shaders were recently introduced with Direct3D 10 and OpenGL 3.2, but are currently supported only by high-end video cards.

Technology overview

Shaders are simple programs that describe the traits of either a vertex or a pixel. Vertex shaders describe the traits (position, texture coordinates, colors, etc.) of a vertex, while pixel shaders describe the traits (color, z-depth and alpha value) of a pixel. A vertex shader is called for each vertex in a primitive (possibly after tessellation); thus one vertex in, one (updated) vertex out. Each vertex is then rendered as a series of pixels onto a surface (block of memory) that will eventually be sent to the screen.

Shaders replace a section of video hardware typically called the Fixed Function Pipeline (FFP) – so-called because it performs lighting and texture mapping in a hard-coded manner. Shaders provide a programmable alternative to this hard-coded approach.[1]
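
For illustration, the following minimal sketch (in the same legacy GLSL dialect as the example later in this article) shows how a vertex shader can reproduce the FFP's hard-coded vertex transform in a single programmable statement:

// Vertex Shader: a minimal programmable replacement for the
// fixed-function vertex transform.
void main()
{
  // Multiply the vertex by the combined modelview-projection matrix;
  // this is the same result the FFP (and the built-in ftransform()) produces.
  gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}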

Simplified graphics processing unit pipeline

  • The CPU sends instructions (compiled shading language programs) and geometry data to the graphics processing unit, located on the graphics card.
  • Within the vertex shader, the geometry is transformed.
  • If a tessellation shader is present in the graphics processing unit and active, the geometry in the scene can be subdivided.
  • If a geometry shader is present in the graphics processing unit and active, it can modify the geometry in the scene.
  • The calculated geometry is triangulated (subdivided into triangles).
  • Triangles are broken down into fragment quads (one fragment quad is a 2 × 2 fragment primitive).
  • Fragment quads are modified according to the fragment shader.
  • The depth test is performed; fragments that pass it are written to the screen and might be blended into the frame buffer.

The graphics pipeline uses these steps to transform three-dimensional (and/or two-dimensional) data into useful two-dimensional data for display. In general, the result is a large pixel matrix or "frame buffer".

Types of shaders

There are three types of shaders in common use. While older graphics cards use separate processing units for each shader type, newer cards feature unified shaders, which are capable of executing any type of shader. This allows graphics cards to make more efficient use of processing power.

Vertex shaders

Vertex shaders are run once for each vertex given to the graphics processor. The purpose is to transform each vertex's 3D position in virtual space to the 2D coordinate at which it appears on the screen (as well as a depth value for the Z-buffer). Vertex shaders can manipulate properties such as position, color, and texture coordinate, but cannot create new vertices. The output of the vertex shader goes to the next stage in the pipeline, which is either a geometry shader if present, or the pixel shader and rasterizer otherwise. Vertex shaders can enable powerful control over the details of position, movement, lighting, and color in any scene involving 3D models.
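
As an illustrative sketch of these capabilities (legacy GLSL; the uniform name and the displacement effect are hypothetical, not from any particular engine), the following vertex shader displaces each vertex along its normal and forwards its color to later stages:

// Vertex Shader: displaces each vertex along its normal (for example,
// to "inflate" a model) and passes the vertex color through.
uniform float displacement; // hypothetical value set by the calling program

void main()
{
  // Push the vertex outward along its normal.
  vec4 displaced = gl_Vertex + vec4(gl_Normal * displacement, 0.0);
  // Transform the displaced position from model space to clip space.
  gl_Position = gl_ModelViewProjectionMatrix * displaced;
  // Forward the per-vertex color to the rasterizer.
  gl_FrontColor = gl_Color;
}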

Geometry shaders

Geometry shaders are a relatively new type of shader, introduced in Direct3D 10 and OpenGL 3.2; they were formerly available in OpenGL 2.0+ through extensions.[2] This type of shader can generate new graphics primitives, such as points, lines, and triangles, from the primitives that were sent to the beginning of the graphics pipeline.[3]

Geometry shader programs are executed after vertex shaders. They take as input a whole primitive, possibly with adjacency information. For example, when operating on triangles, the three vertices are the geometry shader's input. The shader can then emit zero or more primitives, which are rasterized and their fragments ultimately passed to a pixel shader.

Typical uses of a geometry shader include point sprite generation, geometry tessellation, shadow volume extrusion, and single-pass rendering to a cube map. A typical real-world example of the benefits of geometry shaders is automatic mesh complexity modification: a series of line strips representing control points for a curve is passed to the geometry shader, and depending on the complexity required, the shader can automatically generate extra lines, each of which provides a better approximation of the curve. A minimal structural sketch follows.
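
The following sketch shows the structure of a geometry shader in GLSL 1.50 (the version introduced with OpenGL 3.2); it simply passes each input triangle through unchanged, where a real shader would emit additional primitives:

// Geometry Shader (GLSL 1.50): a minimal pass-through sketch.
#version 150
layout(triangles) in;                         // one triangle in
layout(triangle_strip, max_vertices = 3) out; // up to one triangle out

void main()
{
  // Re-emit the three input vertices unchanged; additional EmitVertex()
  // and EndPrimitive() calls here could generate new primitives instead.
  for (int i = 0; i < 3; ++i)
  {
    gl_Position = gl_in[i].gl_Position;
    EmitVertex();
  }
  EndPrimitive();
}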

Pixel shaders

Pixel shaders, also known as fragment shaders, compute color and other attributes of each fragment. Pixel shaders range from always outputting the same color, to applying a lighting value, to doing bump mapping, shadows, specular highlights, translucency and other phenomena. They can alter the depth of the fragment (for Z-buffering), or output more than one color if multiple render targets are active. In 3D graphics, a pixel shader alone cannot produce very complex effects, because it operates only on a single fragment, without knowledge of a scene's geometry. However, pixel shaders do have knowledge of the screen coordinate being drawn, and can sample the screen and nearby pixels if the contents of the entire screen are passed as a texture to the shader. This technique can enable a wide variety of two-dimensional postprocessing effects, such as blur, or edge detection/enhancement for cartoon/cel shaders. Pixel shaders may also be applied in intermediate stages to any two-dimensional images in the pipeline, whereas vertex shaders always require a 3D model. For instance, a pixel shader is the only kind of shader that can act as a postprocessor or filter for a video stream after it has been rasterized.
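
As an illustrative sketch of such a postprocessing filter (legacy GLSL; the sampler name is hypothetical), the following fragment shader converts a scene that has been rendered into a texture to grayscale:

// Fragment Shader: converts the rendered scene, bound as a texture,
// to grayscale.
uniform sampler2D sceneTexture; // hypothetical: the frame rendered to a texture

void main()
{
  // Sample the scene at this fragment's texture coordinate
  // (gl_TexCoord[0] is assumed to be set by the vertex stage).
  vec3 color = texture2D(sceneTexture, gl_TexCoord[0].st).rgb;
  // Weight the channels by perceived luminance (Rec. 601 coefficients).
  float luma = dot(color, vec3(0.299, 0.587, 0.114));
  gl_FragColor = vec4(vec3(luma), 1.0);
}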

Parallel processing

Shaders are written to apply transformations to a large set of elements at a time, for example, to each pixel in an area of the screen, or for every vertex of a model. This is well suited to parallel processing, and most modern GPUs have multiple shader pipelines to facilitate this, vastly improving computation throughput.

Programming shaders

The language in which shaders are programmed depends on the target environment. The official OpenGL and OpenGL ES shading language is OpenGL Shading Language, also known as GLSL, and the official Direct3D shading language is High Level Shader Language, also known as HLSL. Cg, a third-party shading language developed by Nvidia, can output shaders for both OpenGL and Direct3D.[4]

Example: GLSL program for shading normals without light or texture

// Vertex Shader
varying vec4 color;

void main()
{
  // Map each component of the normal from [-1, 1] to [0, 1] and
  // treat the resulting (x, y, z) values as (r, g, b) color components.
  color = vec4(clamp((gl_Normal + 1.0) * 0.5, 0.0, 1.0), 1.0);

  gl_Position = ftransform();
}

// Fragment Shader
varying vec4 color;

void main()
{
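  // Output the color interpolated across the primitive from the vertex shader.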
  gl_FragColor = color;
}

References

  1. ^ Search ARB_shader_objects for the issue "32) Can you explain how uniform loading works?". This is an example of how a complex data structure must be broken down into basic data elements.
  2. ^ The required machinery was introduced in OpenGL by ARB_multitexture, but this specification is no longer available separately since its integration into core OpenGL 1.2.
  3. ^ Search again ARB_shader_objects for the issue "25) How are samplers used to access textures?". You may also want to check out "Subsection 2.14.4 Samplers".
  4. ^ See http://http.developer.nvidia.com/CgTutorial/cg_tutorial_chapter01.html for more details on Cg.
