Texture mapping

"Texture maps" redirects here. For the 2003 ambient album, see Texture Maps: The Lost Pieces Vol. 3.
1 = 3D model without textures
2 = 3D model with textures

Texture mapping[1][2][3] is a method for adding detail, surface texture (a bitmap or raster image), or color to a computer-generated graphic or 3D model. Its application to 3D graphics was pioneered by Edwin Catmull in 1974.

Originally a method that simply wrapped and mapped pixels from a texture onto a 3D surface (now more technically called diffuse mapping, to distinguish it from more complex mappings), the technique has in recent decades been extended by multi-pass rendering and complex mappings such as height mapping, bump mapping, normal mapping, displacement mapping, reflection mapping, mipmapping, occlusion mapping, and many other variations, which have made it possible to simulate near-photorealism in real time by vastly reducing the number of polygons and lighting calculations needed to construct a realistic and functional 3D scene.

Examples of multitexturing: 1: Untextured sphere, 2: Texture and bump maps, 3: Texture map only, 4: Opacity and texture maps.

A texture map[4][5] is applied (mapped) to the surface of a shape or polygon.[6] This process is akin to applying patterned paper to a plain white box. Every vertex in a polygon is assigned a texture coordinate (which in the 2D case is also known as a UV coordinate), either via explicit assignment or by procedural definition. Image sampling locations are then interpolated across the face of a polygon to produce a visual result that seems to have more richness than could otherwise be achieved with a limited number of polygons. Multitexturing is the use of more than one texture at a time on a polygon.[7] For instance, a light map texture may be used to light a surface as an alternative to recalculating that lighting every time the surface is rendered. Another multitexture technique is bump mapping, which allows a texture to directly control the facing direction of a surface for the purposes of its lighting calculations; it can give a very good appearance of a complex surface, such as tree bark or rough concrete, that takes on lighting detail in addition to the usual detailed coloring. Bump mapping has become popular in recent video games as graphics hardware has become powerful enough to accommodate it in real time.

The way the resulting pixels on the screen are calculated from the texels (texture pixels) is governed by texture filtering. The fastest method is to use the nearest-neighbour interpolation, but bilinear interpolation or trilinear interpolation between mipmaps are two commonly used alternatives which reduce aliasing or jaggies. In the event of a texture coordinate being outside the texture, it is either clamped or wrapped.
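
In OpenGL, for example, these filtering and wrap choices are made per texture with glTexParameteri. The following minimal sketch in C is illustrative only (the function name and the particular mix of clamp and repeat modes are arbitrary; a GL 2.0+ context with a valid texture object is assumed):

    #include <GL/gl.h>

    /* Select filtering and wrap behaviour for a 2D texture. */
    void configure_texture_sampling(GLuint tex)
    {
        glBindTexture(GL_TEXTURE_2D, tex);

        /* Bilinear magnification; trilinear minification between mipmaps.
         * GL_NEAREST would give the faster nearest-neighbour filtering instead. */
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);

        /* Coordinates outside the texture are clamped on one axis ... */
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        /* ... and wrapped (repeated) on the other, purely for illustration. */
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
    }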

Terminology

Polygon mesh

Collection of vertices, edges and faces that defines the shape of a polyhedral object in 3D computer graphics and solid modeling. The faces usually consist of triangles (triangle mesh), quadrilaterals, or other simple convex polygons, since this simplifies rendering, but may also be composed of more general concave polygons, or polygons with holes.

Pixel

In digital imaging, a pixel, pel,[1] or picture element[2] is a physical point in a raster image, or the smallest addressable element in an all points addressable display device; it is therefore the smallest controllable element of a picture represented on the screen. The address of a pixel corresponds to its physical coordinates.

Texel

A texture element, or texture pixel, is the fundamental unit of texture space,[1] used in computer graphics. Textures are represented by arrays of texels, just as pictures are represented by arrays of pixels.

Texture

An OpenGL Object that contains one or more images that all have the same image format. A texture is a container of one or more images, but it does not store arbitrary images: a texture places specific constraints on the images it can contain. There are three defining characteristics of a texture, each defining part of those constraints: the texture type, the texture size, and the image format used for the images in the texture. A texture can be used in two ways: it can be the source of a texture access from a Shader, or it can be used as a render target.

Texture type

Defines the arrangement of images within the texture.

  • GL_TEXTURE_1D: Images in this texture all are 1-dimensional. They have width, but no height or depth.
  • GL_TEXTURE_2D: Images in this texture all are 2-dimensional. They have width and height, but no depth.
  • GL_TEXTURE_3D: Images in this texture all are 3-dimensional. They have width, height, and depth.
  • GL_TEXTURE_RECTANGLE: The image in this texture (only one image. No mipmapping) is 2-dimensional. Texture coordinates used for these textures are not normalized.
  • GL_TEXTURE_BUFFER: The image in this texture (only one image. No mipmapping) is 1-dimensional. The storage for this data comes from a Buffer Object.
  • GL_TEXTURE_CUBE_MAP: There are exactly 6 distinct sets of 2D images, all of the same size. They act as 6 faces of a cube.
  • GL_TEXTURE_1D_ARRAY: Images in this texture all are 1-dimensional. However, it contains multiple sets of 1-dimensional images, all within one texture. The array length is part of the texture's size.
  • GL_TEXTURE_2D_ARRAY: Images in this texture all are 2-dimensional. However, it contains multiple sets of 2-dimensional images, all within one texture. The array length is part of the texture's size.
  • GL_TEXTURE_CUBE_MAP_ARRAY: Images in this texture are all cube maps. It contains multiple sets of cube maps, all within one texture. The array length * 6 (number of cube faces) is part of the texture size.
  • GL_TEXTURE_2D_MULTISAMPLE: The image in this texture (only one image. No mipmapping) is 2-dimensional. Each pixel in these images contains multiple samples instead of just one value.
  • GL_TEXTURE_2D_MULTISAMPLE_ARRAY: Combines 2D array and 2D multisample types. No mipmapping.

Texture size

Defines the size of the images in the texture.

Texture image format

Defines the format that all of these images share.
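
In OpenGL these three characteristics come together when storage for a texture is allocated: the type is the target the texture is bound to, while the size and image format are arguments of the allocation call. A minimal sketch in C (the function name and the 256x256 RGBA8 choice are arbitrary examples, not taken from the cited sources):

    #include <GL/gl.h>

    /* Create a texture with type GL_TEXTURE_2D, size 256x256 and
     * image format GL_RGBA8; "pixels" must point to 256*256 RGBA bytes. */
    GLuint create_example_texture(const void *pixels)
    {
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);       /* texture type           */
        glTexImage2D(GL_TEXTURE_2D, 0,           /* mipmap level 0         */
                     GL_RGBA8,                   /* image format           */
                     256, 256, 0,                /* texture size, border 0 */
                     GL_RGBA, GL_UNSIGNED_BYTE,  /* layout of source data  */
                     pixels);
        return tex;
    }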

Texture filtering/Texture smoothing

Method used to determine the texture color for a texture mapped pixel, using the colors of nearby texels (pixels of the texture).

OpenGL Object

An OpenGL construct that contains some state. When an object is bound to the context, the state that it contains is mapped into the context's state; changes to that context state are stored in the object, and functions that act on that context state use the state stored in the object. OpenGL is defined as a "state machine": the various API calls change the OpenGL state, query some part of that state, or cause OpenGL to use its current state to render something. Objects are always containers for state, and each particular kind of object is defined by the particular state that it contains. An OpenGL object is thus a way to encapsulate a particular group of state and change all of it in one function call.

Shader

A program designed to run on some stage of a graphics processor; its purpose is to execute one of the programmable stages of the rendering pipeline. Shaders are written in the OpenGL Shading Language. Shaders have access to a wide variety of resources: they can access Textures, uniforms, uniform blocks, image variables, atomic counters, shader storage buffers, and potentially other information. The OpenGL rendering pipeline defines the following shader stages, with their enumerator names (a minimal compile-and-link sketch follows the list):

  • Vertex Shaders: GL_VERTEX_SHADER
  • Tessellation Control and Evaluation Shaders: GL_TESS_CONTROL_SHADER and GL_TESS_EVALUATION_SHADER. (requires GL 4.0 or ARB_tessellation_shader)
  • Geometry Shaders: GL_GEOMETRY_SHADER
  • Fragment Shaders: GL_FRAGMENT_SHADER
  • Compute Shaders: GL_COMPUTE_SHADER. (requires GL 4.3 or ARB_compute_shader)
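
A minimal sketch in C of compiling one stage and linking it into a program object (error checking with glGetShaderiv/glGetProgramiv is omitted, the function name is an arbitrary example, and a GL 2.0+ context or loader is assumed):

    #include <GL/gl.h>

    /* Compile a fragment shader from GLSL source and link it into "program". */
    GLuint build_fragment_stage(const char *glsl_source, GLuint program)
    {
        GLuint shader = glCreateShader(GL_FRAGMENT_SHADER);  /* pick a stage enum */
        glShaderSource(shader, 1, &glsl_source, NULL);       /* attach the source */
        glCompileShader(shader);
        glAttachShader(program, shader);
        glLinkProgram(program);          /* combines all stages attached so far */
        return shader;
    }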

Renderbuffer Objects

OpenGL Objects that contain images. They are created and used specifically with Framebuffer Objects. They are optimized for use as render targets, whereas Textures may not be.

Framebuffer Objects

OpenGL Objects, which allow for the creation of user-defined Framebuffers. With them, one can render to non-Default Framebuffer locations, and thus render without disturbing the main screen.

Framebuffer

A collection of buffers that can be used as the destination for rendering. OpenGL has two kinds of framebuffers: the Default Framebuffer, which is provided by the OpenGL Context, and user-created framebuffers called Framebuffer Objects (FBOs). The buffers for the default framebuffer are part of the context and usually represent a window or display device. The buffers for FBOs reference images from either Textures or Renderbuffers; they are never directly visible.
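
A minimal sketch in C (assuming GL 3.0+ or ARB_framebuffer_object; names and formats are illustrative) of an FBO that renders colour into a texture and depth into a renderbuffer, leaving the default framebuffer untouched:

    #include <GL/gl.h>

    GLuint create_offscreen_target(GLuint colour_tex, int width, int height)
    {
        GLuint fbo, depth_rb;

        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);

        /* A texture as render target: it can later be sampled from a Shader. */
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, colour_tex, 0);

        /* A renderbuffer: optimized for rendering into, never sampled. */
        glGenRenderbuffers(1, &depth_rb);
        glBindRenderbuffer(GL_RENDERBUFFER, depth_rb);
        glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                                  GL_RENDERBUFFER, depth_rb);

        return fbo;  /* bind framebuffer 0 to return to the default framebuffer */
    }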

Image Format

Describes the way that the images in Textures and Renderbuffers store their data, defining the meaning of that data. There are three basic kinds of image formats: color, depth, and depth/stencil.

Mip maps

The problem arises with animation. When the camera slowly zooms out on a texture, aliasing artifacts start to appear. These are caused by sampling fewer than all of the texels; the choice of which texels are sampled changes between different frames of the animation. Even with linear filtering (see the texture filtering entry above), artifacts appear as the camera zooms out. To solve this problem, mipmaps are used: pre-shrunk versions of the full-sized image. Each mipmap is half the size of the previous one in the chain, with the number of levels determined by the largest dimension of the image (smaller dimensions are clamped at 1). So a 64x16 2D texture can have 6 mipmaps: 32x8, 16x4, 8x2, 4x1, 2x1, and 1x1. OpenGL does not require that the entire mipmap chain be complete; you can specify which range of mipmaps in a texture is available.
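
The chain for the 64x16 example can be reproduced by repeatedly halving each dimension and clamping at 1 (in OpenGL the chain is usually built with glGenerateMipmap); a small self-contained sketch in C:

    #include <stdio.h>

    /* Prints the mip chain of a 64x16 texture:
     * level 0: 64x16, then 32x8, 16x4, 8x2, 4x1, 2x1, 1x1 (6 mipmaps). */
    int main(void)
    {
        int w = 64, h = 16, level = 0;
        printf("level %d: %dx%d\n", level, w, h);
        while (w > 1 || h > 1) {
            w = w > 1 ? w / 2 : 1;   /* halve each dimension, clamping at 1 */
            h = h > 1 ? h / 2 : 1;
            printf("level %d: %dx%d\n", ++level, w, h);
        }
        return 0;
    }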

UV mapping technique

When texturing a mesh, you need a way to tell OpenGL which part of the image is to be used for each triangle. This is done with UV coordinates. Each vertex has, in addition to its position, a pair of floats, U and V, and these coordinates are used to access the texture.

The most flexible way of mapping a 2D texture over a 3D object is a process called "UV mapping". In this process, you take your three-dimensional (X, Y and Z) mesh and unwrap it to a flat two-dimensional image; the unwrapped image is thus in two dimensions (2D). We use U and V to refer to these "texture-space coordinates" instead of the usual X and Y, which are always used (along with Z) to refer to 3D space. A UV map describes what part of the texture should be attached to each polygon in the model: each polygon vertex is assigned 2D coordinates that define which part of the image gets mapped onto it. These 2D coordinates are called UVs (compare this to the XYZ coordinates in 3D). The operation of generating these UV maps is also called "unwrapping", since it is as if the mesh were unfolded onto a 2D plane. For most simple 3D models, Blender has an automatic set of unwrapping algorithms that you can easily apply. For more complex 3D models, regular cubic, cylindrical or spherical mapping is usually not sufficient; for even and accurate projection, use seams to guide the UV mapping. This approach can be used to apply textures to arbitrary and complex shapes, like human heads or animals. Often these textures are painted images, created in image editing and manipulation software such as GIMP or Photoshop.

This process projects a texture map onto a 3D object. The letters "U" and "V" denote the axes of the 2D texture[note 1] because "X", "Y" and "Z" are already used to denote the axes of the 3D object in model space. UV texturing permits the polygons that make up a 3D object to be painted with color from an image. The image is called a UV texture map,[1] but it is just an ordinary image. The UV mapping process involves assigning pixels in the image to surface mappings on the polygon, usually done by "programmatically" copying a triangle-shaped piece of the image map and pasting it onto a triangle on the object.[2] UV is the alternative to XY: it maps into a texture space rather than into the geometric space of the object, and the rendering computation uses the UV texture coordinates to determine how to paint the three-dimensional surface. When a model is created as a polygon mesh using a 3D modeler, UV coordinates can be generated for each vertex in the mesh. One way is for the 3D modeler to unfold the triangle mesh at the seams, automatically laying out the triangles on a flat page. If the mesh is a UV sphere, for example, the modeler might transform it into an equirectangular projection. Once the model is unwrapped, the artist can paint a texture on each triangle individually, using the unwrapped mesh as a template. When the scene is rendered, each triangle will map to the appropriate texture from the "decal sheet". At its simplest, the UV mapping process requires three steps: unwrapping the mesh, creating the texture, and applying the texture.
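
In an OpenGL renderer, the UVs produced by unwrapping end up as a per-vertex attribute stored alongside the positions. A minimal sketch in C (attribute locations 0 and 1 are assumptions matching a hypothetical shader, and the triangle data is arbitrary):

    #include <GL/gl.h>

    /* One triangle: x, y, z position followed by its u, v texture coordinate. */
    static const float triangle[] = {
        /*   x      y     z      u     v  */
        -0.5f, -0.5f, 0.0f,  0.0f, 0.0f,
         0.5f, -0.5f, 0.0f,  1.0f, 0.0f,
         0.0f,  0.5f, 0.0f,  0.5f, 1.0f,
    };

    void upload_uv_mapped_triangle(GLuint vbo)
    {
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, sizeof triangle, triangle, GL_STATIC_DRAW);

        /* Attribute 0: position (3 floats); attribute 1: UV (2 floats). */
        glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 5 * sizeof(float), (void *)0);
        glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 5 * sizeof(float),
                              (void *)(3 * sizeof(float)));
        glEnableVertexAttribArray(0);
        glEnableVertexAttribArray(1);
    }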

Assigning Texture Coordinates (UV or STRQ)

As you draw your texture-mapped scene, you must provide both object coordinates and texture coordinates for each vertex. After transformation, the object coordinates determine where on the screen that particular vertex is rendered. The texture coordinates determine which texel in the texture map is assigned to that vertex. In exactly the same way that colors are interpolated between two vertices of shaded polygons and lines, texture coordinates are also interpolated between vertices. (Remember that textures are rectangular arrays of data.) Texture coordinates can comprise one, two, three, or four coordinates. They are usually referred to as the s, t, r, and q coordinates to distinguish them from object coordinates (x, y, z, and w) and from evaluator coordinates (u and v). For one-dimensional textures, you use the s coordinate; for two-dimensional textures, you use s and t. In OpenGL 1.1 the r coordinate is ignored (some implementations have 3D texture mapping as an extension, and that extension uses the r coordinate). The q coordinate, like w, is typically given the value 1 and can be used to create homogeneous texture coordinates; it is considered an advanced feature.
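
Conceptually, a four-component texture coordinate behaves like a homogeneous position: with q different from 1, the effective lookup location becomes (s/q, t/q, r/q), as used in projective texturing. A tiny illustrative sketch in C (the type names are invented for the example):

    /* Homogeneous texture coordinate (s, t, r, q) and its projected form. */
    typedef struct { float s, t, r, q; } TexCoord4;
    typedef struct { float s, t, r; } TexCoord3;

    /* Divide through by q to obtain the coordinate actually used for lookup. */
    TexCoord3 project_texcoord(TexCoord4 c)
    {
        TexCoord3 out = { c.s / c.q, c.t / c.q, c.r / c.q };
        return out;
    }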

A Bézier curve is a vector-valued function of one variable C(u) = [X(u) Y(u) Z(u)] where u varies in some domain (say [0,1]). A Bézier surface patch is a vector-valued function of two variables S(u,v) = [X(u,v) Y(u,v) Z(u,v)] where u and v can both vary in some domain. The range isn't necessarily three-dimensional as shown here. You might want two-dimensional output for curves on a plane or texture coordinates, or you might want four-dimensional output to specify RGBA information. Even one-dimensional output may make sense for gray levels.
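
As a concrete instance of such a vector-valued function, a cubic Bézier curve with control points P0..P3 can be evaluated directly from its Bernstein form; a minimal sketch in C with a two-dimensional range, as one might use for texture coordinates (names are illustrative):

    typedef struct { float u, v; } Vec2;

    /* Cubic Bézier C(t) = (1-t)^3 P0 + 3(1-t)^2 t P1 + 3(1-t) t^2 P2 + t^3 P3,
     * for t in [0,1]; the output here is 2D, e.g. a texture coordinate. */
    Vec2 bezier_cubic(Vec2 p0, Vec2 p1, Vec2 p2, Vec2 p3, float t)
    {
        float s = 1.0f - t;
        float b0 = s * s * s;
        float b1 = 3.0f * s * s * t;
        float b2 = 3.0f * s * t * t;
        float b3 = t * t * t;
        Vec2 c = { b0 * p0.u + b1 * p1.u + b2 * p2.u + b3 * p3.u,
                   b0 * p0.v + b1 * p1.v + b2 * p2.v + b3 * p3.v };
        return c;
    }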

Reflection mapping/Environment mapping technique

An image-based lighting technique for approximating the appearance of a reflective surface by means of a precomputed texture image. The texture is used to store the image of the distant environment surrounding the rendered object. It is more efficient than the classical ray-tracing approach of computing the exact reflection by tracing a ray and following its optical path.

Cube mapping/skybox technique

A method of environment mapping that uses the six faces of a cube as the map shape. The environment is projected onto the sides of a cube and stored as six square textures, or unfolded into six regions of a single texture. The cube map is generated by rendering the scene six times from a single viewpoint, with the views defined by a 90-degree view frustum for each cube face.

Cube mapping projects a mesh onto six separate planes, creating six UV islands. In the UV editor these will appear overlapped, but they can be moved apart.
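
In OpenGL a cube map is a single texture object with six square faces addressed through consecutive face targets; a minimal creation sketch in C (the face size and RGBA8 format are arbitrary examples):

    #include <GL/gl.h>

    /* Create a cube map from six equally sized square face images. */
    GLuint create_cube_map(const void *faces[6], int size)
    {
        GLuint tex;
        int i;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_CUBE_MAP, tex);
        for (i = 0; i < 6; ++i) {
            /* Face targets are consecutive enums: +X, -X, +Y, -Y, +Z, -Z. */
            glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, GL_RGBA8,
                         size, size, 0, GL_RGBA, GL_UNSIGNED_BYTE, faces[i]);
        }
        return tex;
    }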

Sphere mapping technique

A form of reflection mapping that approximates reflective surfaces by treating the environment as an infinitely far-away spherical wall. The environment is stored as a texture depicting what a mirrored sphere would look like if it were placed in the environment, using an orthographic projection (as opposed to one with perspective). This texture contains reflective data for the entire environment, except for the spot directly behind the sphere.
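
The lookup follows the classic sphere-map formulation (as in OpenGL's GL_SPHERE_MAP texture coordinate generation): given an eye-space reflection vector, the (s, t) coordinate into the mirrored-sphere image is recovered as sketched below in C (the function name is illustrative):

    #include <math.h>

    /* Sphere-map coordinates from a reflection vector (rx, ry, rz):
     * m = 2 * sqrt(rx^2 + ry^2 + (rz + 1)^2), s = rx/m + 1/2, t = ry/m + 1/2.
     * Note the singularity at (0, 0, -1), the spot directly behind the sphere. */
    void sphere_map_coords(float rx, float ry, float rz, float *s, float *t)
    {
        float m = 2.0f * sqrtf(rx * rx + ry * ry + (rz + 1.0f) * (rz + 1.0f));
        *s = rx / m + 0.5f;
        *t = ry / m + 0.5f;
    }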

Normal mapping technique

A technique used to add detail without using more polygons. A common use is to greatly enhance the appearance and detail of a low-polygon model by generating a normal map from a high-polygon model or height map.

Heightmap/heightfield

A raster image used to store values, such as surface elevation data, for display in 3D computer graphics. Heightmaps are used in bump mapping to calculate where the 3D data would create shadow in a material, in displacement mapping to displace the actual geometric positions of points over the textured surface, and for terrain, where the heightmap is converted into a 3D mesh.

Bump mapping technique

A technique for simulating bumps and wrinkles on the surface of an object. This is achieved by perturbing the surface normals of the object and using the perturbed normals during lighting calculations.
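
One common way to obtain the perturbed normal is to take finite differences of a heightmap (see the heightmap entry above) and build a tangent-space normal from the slopes; a minimal sketch in C, where the strength factor is an arbitrary illustrative parameter:

    #include <math.h>

    /* Tangent-space normal from a heightmap via central differences.
     * "height" is a w-by-h array of height values; "strength" scales the bumps. */
    void bump_normal(const float *height, int w, int h, int x, int y,
                     float strength, float n[3])
    {
        /* Clamp neighbour lookups at the map borders. */
        int xm = x > 0 ? x - 1 : x, xp = x < w - 1 ? x + 1 : x;
        int ym = y > 0 ? y - 1 : y, yp = y < h - 1 ? y + 1 : y;

        float dx = (height[y * w + xp] - height[y * w + xm]) * strength;
        float dy = (height[yp * w + x] - height[ym * w + x]) * strength;

        /* Normal of the surface z = height(x, y): (-dh/dx, -dh/dy, 1), normalised. */
        float len = sqrtf(dx * dx + dy * dy + 1.0f);
        n[0] = -dx / len;
        n[1] = -dy / len;
        n[2] = 1.0f / len;
    }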

Displacement mapping technique

Displacement Heightmap

Displacement mapping is a geometrical technique allowing intricate surface detail with moderate storage requirements. Unlike texture mapping, displacement mapping deforms the basic object geometry and creates a new dense triangle mesh for the object. With trilinear displacement mapping the mesh density can be made dependent on the object depth, and geomorphing allows the mesh density to be smoothly interpolated between different levels of detail.

Displacement mapping allows a texture input to manipulate the position of vertices on rendered geometry. Unlike normal or bump mapping, where the shading is distorted to give an illusion of a bump (discussed above), displacement maps create real bumps, creases, ridges, etc. in the actual mesh. The mesh deformations can therefore cast shadows, occlude other objects, and do everything that changes in real geometry can do.
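
At its core the technique moves each vertex along its surface normal by the value sampled from the displacement (height) map; a minimal per-vertex sketch in C (the scale parameter is an illustrative assumption):

    /* Displace a vertex along its unit normal by a sampled height value. */
    void displace_vertex(float position[3], const float normal[3],
                         float sampled_height, float scale)
    {
        position[0] += normal[0] * sampled_height * scale;
        position[1] += normal[1] * sampled_height * scale;
        position[2] += normal[2] * sampled_height * scale;
    }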

Perspective correctness

Because affine texture mapping does not take into account the depth information of a polygon's vertices, it produces a noticeable defect wherever the polygon is not perpendicular to the viewer.

Texture coordinates are specified at each vertex of a given triangle, and these coordinates are interpolated using an extended Bresenham's line algorithm. If these texture coordinates are linearly interpolated across the screen, the result is affine texture mapping. This is a fast calculation, but there can be a noticeable discontinuity between adjacent triangles when these triangles are at an angle to the plane of the screen (the checkerboard textures appear bent).

Perspective correct texturing accounts for the vertices' positions in 3D space, rather than simply interpolating a 2D triangle. This achieves the correct visual effect, but it is slower to calculate. Instead of interpolating the texture coordinates directly, the coordinates are divided by their depth (relative to the viewer), and the reciprocal of the depth value is also interpolated and used to recover the perspective-correct coordinate. This correction makes it so that in parts of the polygon that are closer to the viewer the difference from pixel to pixel between texture coordinates is smaller (stretching the texture wider), and in parts that are farther away this difference is larger (compressing the texture).

Affine texture mapping directly interpolates a texture coordinate u_\alpha between two endpoints u_0 and u_1:

u_\alpha = (1 - \alpha) u_0 + \alpha u_1, where 0 \le \alpha \le 1.

Perspective-correct mapping interpolates after dividing by depth z, then uses the interpolated reciprocal of z to recover the correct coordinate:

u_\alpha = \frac{(1 - \alpha) \frac{u_0}{z_0} + \alpha \frac{u_1}{z_1}}{(1 - \alpha) \frac{1}{z_0} + \alpha \frac{1}{z_1}}
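
The two formulas translate directly into code; a small sketch in C comparing them for a single interpolation parameter alpha:

    /* Affine interpolation of a texture coordinate between two endpoints. */
    float interp_affine(float u0, float u1, float alpha)
    {
        return (1.0f - alpha) * u0 + alpha * u1;
    }

    /* Perspective-correct interpolation: interpolate u/z and 1/z linearly,
     * then divide to recover the texture coordinate at this position. */
    float interp_perspective(float u0, float z0, float u1, float z1, float alpha)
    {
        float u_over_z   = (1.0f - alpha) * (u0 / z0) + alpha * (u1 / z1);
        float one_over_z = (1.0f - alpha) * (1.0f / z0) + alpha * (1.0f / z1);
        return u_over_z / one_over_z;
    }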

All modern 3D graphics hardware implements perspective correct texturing.

Doom renders vertical spans (walls) with affine texture mapping.

Development

Classic texture mappers generally did only simple mapping with at most one lighting effect, and the perspective correctness was about 16 times more expensive. To achieve two goals (faster arithmetic results, and keeping the arithmetic mill busy at all times), every triangle is further subdivided into groups of about 16 pixels. For perspective texture mapping without hardware support, a triangle is broken down into smaller triangles for rendering, which improves details in non-architectural applications. Software renderers generally preferred screen subdivision because it has less overhead. Additionally, they try to do linear interpolation along a line of pixels to simplify the set-up (compared to 2D affine interpolation) and thus again reduce the overhead (also, affine texture mapping does not fit into the low number of registers of the x86 CPU; the 68000 or any RISC is much better suited). For instance, Doom restricted the world to vertical walls and horizontal floors/ceilings. This meant the walls would be at a constant distance along a vertical line and the floors/ceilings would be at a constant distance along a horizontal line, so a fast affine mapping could be used along those lines because it would be correct. A different approach was taken for Quake, which calculates perspective-correct coordinates only once every 16 pixels of a scanline and linearly interpolates between them, effectively running at the speed of linear interpolation because the perspective-correct calculation runs in parallel on the co-processor.[8] The polygons are rendered independently, hence it may be possible to switch between spans and columns or diagonal directions depending on the orientation of the polygon normal to achieve a more constant z, but the effort seems not to be worth it.
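
The Quake-style scheme amounts to performing the expensive perspective divide only at every 16th pixel of a span and interpolating affinely in between; a simplified sketch in C of one scanline span (the sample callback and parameter names are illustrative, not taken from the cited source):

    /* Draw pixels x0..x1-1 of a span. u_z and inv_z hold u/z and 1/z at the
     * span's left edge; du_z and dinv_z are their per-pixel increments.
     * A true perspective divide is done only once per 16-pixel block. */
    void draw_span_16(int x0, int x1, float u_z, float inv_z,
                      float du_z, float dinv_z,
                      void (*sample)(int x, float u))
    {
        float u_left = u_z / inv_z;          /* correct value at the left edge */
        int x = x0;
        while (x < x1) {
            int step = (x1 - x < 16) ? (x1 - x) : 16;

            /* One perspective divide at the right end of this block. */
            float u_z_r   = u_z + du_z * step;
            float inv_z_r = inv_z + dinv_z * step;
            float u_right = u_z_r / inv_z_r;

            /* Cheap affine interpolation across the block. */
            float du = (u_right - u_left) / step;
            float u = u_left;
            int i;
            for (i = 0; i < step; ++i) {
                sample(x + i, u);
                u += du;
            }

            x += step;
            u_z = u_z_r;
            inv_z = inv_z_r;
            u_left = u_right;
        }
    }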

Screen-space subdivision techniques. Top left: Quake-like, top right: bilinear, bottom left: const-z

Another technique was subdividing the polygons into smaller polygons, like triangles in 3D space or squares in screen space, and using an affine mapping on them. The distortion of affine mapping becomes much less noticeable on smaller polygons. Yet another technique was approximating the perspective with a faster calculation, such as a polynomial. Still another technique uses the 1/z values of the last two drawn pixels to linearly extrapolate the next value. The division is then done starting from those values so that only a small remainder has to be divided,[9] but the amount of bookkeeping makes this method too slow on most systems. Finally, some programmers extended the constant-distance trick used for Doom by finding the line of constant distance for arbitrary polygons and rendering along it.

References

  1. ^ http://web.cse.ohio-state.edu/~whmin/courses/cse5542-2013-spring/15-texture.pdf
  2. ^ http://www.inf.pucrs.br/flash/tcg/aulas/texture/texmap.pdf
  3. ^ http://www.cs.uregina.ca/Links/class-info/405/WWW/Lab5/#References
  4. ^ http://www.microsoft.com/msj/0199/direct3d/direct3d.aspx
  5. ^ http://homepages.gac.edu/~hvidsten/courses/MC394/projects/project5/texture_map_guide.html
  6. ^ Jon Radoff, Anatomy of an MMORPG, http://radoff.com/blog/2008/08/22/anatomy-of-an-mmorpg/
  7. ^ Blythe, David. Advanced Graphics Programming Techniques Using OpenGL. Siggraph 1999. (see: Multitexture)
  8. ^ Abrash, Michael. Michael Abrash's Graphics Programming Black Book Special Edition. The Coriolis Group, Scottsdale Arizona, 1997. ISBN 1-57610-174-6 (PDF) (Chapter 70, pg. 1282)
  9. ^ US 5739818, "Apparatus and method for performing perspectively correct interpolation in computer graphics", issued 1998-04-14 
