
Computer graphics (computer science)

From Wikipedia, the free encyclopedia


A modern render of the [[Utah teapot]], an iconic model in 3D computer graphics created by Martin Newell in 1975.

'''Computer graphics''' is a sub-field of [[computer science]] which studies methods for digitally synthesizing and manipulating visual content. Although the term often refers to the study of three-dimensional computer graphics, it also encompasses two-dimensional graphics and [[image processing]].

== Overview ==
Computer graphics studies the manipulation of visual and geometric information using computational techniques. It focuses on the ''mathematical'' and ''computational'' foundations of image generation and processing rather than purely [[aesthetic]] issues. Computer graphics is often differentiated from the field of [[visualization (graphic)|visualization]], although the two fields have many similarities.

Connected studies include:
* [[Scientific visualization]]
* [[Information visualization]]
* [[Computer vision]]
* [[Image processing]]
* [[Computational Geometry]]
* [[Computational topology|Computational Topology]]
* [[Applied mathematics]]

Applications of computer graphics include:
*[[Special effect]]s
*[[Visual effects]]
*[[Video games]]
*[[Digital art]]

== History ==
One of the first displays of computer animation was ''[[Futureworld]]'' (1976), which included an [[animation]] of a human face and hand — produced by [[Edwin Catmull|Ed Catmull]] and [[Fred Parke]] at the [[University of Utah]].

There are several international conferences and journals where the most significant results in computer graphics are published. Among them are the [[SIGGRAPH]] and [[Eurographics]] conferences and the [[Association for Computing Machinery]] (ACM) Transactions on Graphics journal. The joint Eurographics and ACM SIGGRAPH symposium series features the major venues for the more specialized sub-fields: the [http://www.geometryprocessing.org Symposium on Geometry Processing], the [http://www.eg.org/events Symposium on Rendering], and the Symposium on Computer Animation. As in the rest of computer science, conference publications in computer graphics are generally more significant than journal publications (and consequently have lower acceptance rates)<ref name="cra memo">[http://www.cra.org/reports/tenure_review.html Best Practices Memo<!-- Bot generated title -->]</ref><ref name="ernst note">[http://people.csail.mit.edu/mernst/advice/conferences-vs-journals.html Choosing a venue: conference or journal?<!-- Bot generated title -->]</ref><ref name="graphics acceptance rates">[http://vrlab.epfl.ch/~ulicny/statistics/ Graphics/vision publications acceptance rates statistics<!-- Bot generated title -->]</ref>.<ref>An extensive history of computer graphics can be found at [http://accad.osu.edu/~waynec/history/lessons.html this page].</ref>


== Subfields in computer graphics ==
A broad classification of major subfields in computer graphics might be:
# '''Geometry''': studies ways to represent and process surfaces
# '''Animation''': studies ways to represent and manipulate motion
# '''Rendering''': studies algorithms to reproduce light transport
# '''Imaging''': studies image acquisition and image editing

=== Geometry ===
Successive approximations of a surface computed using quadric error metrics.

The subfield of geometry studies the representation of three-dimensional objects in a discrete digital setting. Because the appearance of an object depends largely on its exterior, [[boundary representation]]s are most commonly used. Two-dimensional surfaces are a good representation for most objects, though they may be non-manifold. Since surfaces are not finite, discrete digital approximations are used. [[Polygon mesh|Polygonal meshes]] (and to a lesser extent [[subdivision surface]]s) are by far the most common representation, although point-based representations have become more popular recently (see for instance the Symposium on Point-Based Graphics). These representations are ''Lagrangian'', meaning the spatial locations of the samples are independent. Recently, ''Eulerian'' surface descriptions (i.e., where spatial samples are fixed) such as [[level set]]s have been developed into a useful representation for deforming surfaces which undergo many topological changes (with fluids being the most notable example[5]).

;Subfields
* [[Implicit surface]] modeling - an older subfield which examines the use of algebraic surfaces, [[constructive solid geometry]], etc., for surface representation.
* Digital geometry processing - surface reconstruction, simplification, fairing, mesh repair, parameterization, remeshing, mesh generation, surface compression, and surface editing all fall under this heading.[6][7][8]
* Discrete differential geometry - a nascent field which defines geometric quantities for the discrete surfaces used in computer graphics.[9]
* Point-based graphics - a recent field which focuses on points as the fundamental representation of surfaces.
* [[Subdivision surface]]s
* Out-of-core mesh processing - another recent field which focuses on mesh datasets that do not fit in main memory.
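
The boundary representations described above can be sketched concretely as an indexed triangle mesh: a shared list of vertex positions plus triangles stored as index triples. The sketch below is illustrative only (the names are not taken from any particular library) and computes unnormalized per-face normals for a toy tetrahedron.

<syntaxhighlight lang="python">
# Minimal indexed triangle mesh: shared vertex list + triangles as index triples.
# Illustrative sketch only; names are not from any particular library.

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def sub(a, b):
    return (a[0]-b[0], a[1]-b[1], a[2]-b[2])

def face_normal(v0, v1, v2):
    # Unnormalized normal; its length is twice the triangle's area.
    return cross(sub(v1, v0), sub(v2, v0))

# A small tetrahedron as an example mesh.
vertices = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
triangles = [(0, 2, 1), (0, 1, 3), (0, 3, 2), (1, 2, 3)]

normals = [face_normal(vertices[i], vertices[j], vertices[k]) for i, j, k in triangles]
</syntaxhighlight>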


=== Animation ===
The subfield of [[Computer animation|animation]] studies descriptions for surfaces (and other phenomena) that move or deform over time. Historically, most work in this field has focused on parametric and data-driven models, but [[physical simulation]] has recently become more popular as computers have become computationally more powerful. Characters are often animated by rigging the character's skin to a [[Skeletal animation|skeleton]]; as an animator moves the skeleton, the skin deforms to follow it. Another useful animation tool is [[motion capture]], in which markers placed on a real actor are tracked by cameras and the captured motion is used to animate a virtual character. Besides character animation, many phenomena are best described using [[physics|physical simulations]] such as [[rigid body]] simulation, [[cloth modeling]], and [[fluid dynamics]].

;Subfields
* Performance capture
* Character animation
* Physical simulation (e.g. [[cloth modeling]], animation of [[fluid dynamics]], etc.)
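
As a minimal illustration of the physically based models mentioned above, the sketch below advances a single damped spring-mass particle with semi-implicit (symplectic) Euler steps, a common choice in simple interactive simulations; cloth and fluid solvers couple many such elements. All names and constants are illustrative assumptions, not taken from the article.

<syntaxhighlight lang="python">
# Minimal physical simulation sketch: a damped spring-mass system
# advanced with semi-implicit Euler time steps (illustrative only).

def step(position, velocity, dt, stiffness=40.0, damping=0.8, rest=0.0, mass=1.0):
    force = -stiffness * (position - rest) - damping * velocity
    acceleration = force / mass
    velocity = velocity + acceleration * dt   # update velocity first ...
    position = position + velocity * dt       # ... then position (semi-implicit Euler)
    return position, velocity

pos, vel = 1.0, 0.0
for frame in range(100):
    pos, vel = step(pos, vel, dt=1.0 / 60.0)
</syntaxhighlight>

Smaller time steps trade speed for stability and accuracy, which is one reason animation and simulation research pays close attention to the choice of integrator.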


=== Rendering ===
[[Image:Cornellbox_pathtracing_irradiancecaching.png|thumb|right|250px|Indirect diffuse scattering simulated using [[path tracing]] and [[irradiance caching]].]]


The process of generating images from three-dimensional models is called [[rendering (computer graphics)|rendering]]. Rendering may simulate [[light transport theory|light transport]] to create realistic images, or it may create images in a particular artistic style, as in [[non-photorealistic rendering]]. The two basic operations in realistic rendering are transport (how much light passes from one place to another) and scattering (how surfaces interact with light). See [[Rendering (computer graphics)]] for more information.


;Transport
[[light transport theory|Transport]] describes how illumination in a scene gets from one place to another. [[visibility (geometry)|Visibility]] is a major component of light transport. Humans [[perception|perceive]] light that travels through the [[lens (anatomy)|lens]] of the [[human eye|eye]] and [[graphical projection|projects]] onto the surface of the [[retina]]; the [[map (mathematics)|mapping]] between the 3D world and the 2D image it forms within a [[camera]] or the eye is described by [[perspective projection]]. [[Opacity (optics)|Opaque]] objects block light traveling from objects behind them. To determine whether part of an object is visible, various [[hidden surface removal]] algorithms can be used, such as [[z-buffering]], and objects above, below, or otherwise outside the [[viewport]] are not visible and can be [[Clipping (computer graphics)|clipped]].
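
The mapping from the 3D scene to the 2D image described above can be sketched as a simple pinhole (perspective) projection; the focal-length parameter and function name below are illustrative assumptions.

<syntaxhighlight lang="python">
# Pinhole-camera perspective projection (illustrative sketch).
# The camera sits at the origin and looks down the -z axis; points are
# projected onto an image plane at distance `focal` in front of the camera.

def project(point, focal=1.0):
    x, y, z = point
    if z >= 0.0:
        return None          # behind the camera: not visible, would be clipped
    return (focal * x / -z, focal * y / -z)

print(project((0.5, 0.25, -2.0)))   # -> (0.25, 0.125)
</syntaxhighlight>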


;Scattering
Models of ''scattering'' and ''shading'' are used to describe the appearance of a surface. In graphics these problems are often studied within the context of rendering since they can substantially affect the design of rendering algorithms. Shading can be broken down into two orthogonal issues, which are often studied independently:


# '''scattering''' - how light interacts with the surface ''at a given point''
# '''shading''' - how material properties vary across the surface

The former problem refers to [[scattering]], i.e., the relationship between incoming and outgoing illumination at a given point. Descriptions of scattering are usually given in terms of a [[bidirectional scattering distribution function]] or BSDF. The latter issue addresses how different types of scattering are distributed across the surface (i.e., which scattering function applies where). Descriptions of this kind are typically expressed with a program called a [[shader]]. (Note that there is some confusion since the word "shader" is sometimes used for programs that describe local ''geometric'' variation.)

The [[irradiance|amount of light]] that bounces off an object depends on how its surface is shaped, what material it is made of, and how it is oriented with respect to any light sources and the camera. These factors all influence the appearance of an object and can be computed using various [[shading]] techniques such as [[Phong shading]]. The material properties of an object, such as its [[color]], often vary over its surface; a poster, for example, may not have a uniform color but different colors at different points on its surface. Changes in color and other material properties can be rendered using [[texture mapping]].
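
A minimal sketch of point-wise scattering: evaluating a simple diffuse-plus-Phong reflection model for one surface point, one light, and one viewer. Real BSDFs are usually far more involved; all names and coefficients here are illustrative assumptions.

<syntaxhighlight lang="python">
# Evaluate a simple shading model (Lambertian diffuse + Phong specular)
# at a single surface point. Vectors are assumed normalized. Illustrative only.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def reflect(l, n):
    # Reflect the light direction l about the surface normal n.
    d = 2.0 * dot(l, n)
    return tuple(d * ni - li for li, ni in zip(l, n))

def shade(normal, to_light, to_eye, kd=0.8, ks=0.2, shininess=32):
    diffuse = kd * max(dot(normal, to_light), 0.0)
    r = reflect(to_light, normal)
    specular = ks * max(dot(r, to_eye), 0.0) ** shininess
    return diffuse + specular

print(shade((0, 0, 1), (0, 0, 1), (0, 0, 1)))  # light and eye along the normal -> 1.0
</syntaxhighlight>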


;Other subfields
* physically-based rendering - concerned with generating images according to the laws of [[geometric optics]]
* [[Real Time rendering|real time rendering]] - focuses on rendering for interactive applications, typically using specialized hardware like [[graphics processing unit|GPUs]]
* [[non-photorealistic rendering]]
* relighting - a recent area concerned with quickly re-rendering scenes

;Advanced Rendering Techniques
Various techniques are designed to render [[shadow]]s, using [[Shadow mapping|shadow maps]] or [[Ray tracing (graphics)|ray tracing]].

[[Ray tracing (graphics)|Ray tracing]] is used to render [[reflection (mathematics)|reflections]] off [[gloss (material appearance)|glossy]] and shiny [[specular]] objects, [[refraction]]s through [[transparency (optics)|transparent]] objects like [[glass]], and [[shadow]]s. [[Specular reflection]]s and refractions are rendered by tracing rays of light through a 3D scene [[recursion|recursively]].
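
A minimal sketch of the recursive idea behind ray tracing: find the nearest ray-sphere intersection, shade it, and recurse along the mirror-reflection direction up to a fixed depth. Refraction, shadows, and acceleration structures are omitted, and every name and constant is an illustrative assumption.

<syntaxhighlight lang="python">
# Minimal recursive ray tracing sketch (spheres and mirror reflection only).
import math

def dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
def add(a, b): return (a[0]+b[0], a[1]+b[1], a[2]+b[2])
def sub(a, b): return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
def mul(a, s): return (a[0]*s, a[1]*s, a[2]*s)
def normalize(a):
    length = math.sqrt(dot(a, a))
    return mul(a, 1.0 / length)

def hit_sphere(origin, direction, center, radius):
    # Solve |origin + t*direction - center|^2 = radius^2 for the nearest t > 0.
    oc = sub(origin, center)
    b = 2.0 * dot(oc, direction)
    c = dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * c          # direction is unit length, so a == 1
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 1e-4 else None

def trace(origin, direction, spheres, depth=2):
    hits = [(hit_sphere(origin, direction, c, r), c, col) for c, r, col in spheres]
    hits = [h for h in hits if h[0] is not None]
    if not hits:
        return (0.1, 0.1, 0.2)                       # background colour
    t, center, colour = min(hits, key=lambda h: h[0])
    point = add(origin, mul(direction, t))
    normal = normalize(sub(point, center))
    if depth == 0:
        return colour
    # Recurse along the mirror-reflection direction and mix the result in.
    refl_dir = sub(direction, mul(normal, 2.0 * dot(direction, normal)))
    reflected = trace(point, refl_dir, spheres, depth - 1)
    return tuple(0.7 * c + 0.3 * r for c, r in zip(colour, reflected))

spheres = [((0.0, 0.0, -3.0), 1.0, (1.0, 0.0, 0.0)),
           ((1.5, 0.0, -4.0), 1.0, (0.0, 1.0, 0.0))]
print(trace((0.0, 0.0, 0.0), (0.0, 0.0, -1.0), spheres))
</syntaxhighlight>

The recursion depth bounds how many reflection bounces are simulated, which is the essential trade-off between image quality and cost in this family of algorithms.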

While ray tracing is suited for rendering specular reflections off multiple shiny objects, light can also reflect off [[gloss (material appearance) | dull]] objects through [[diffuse reflection]]s. Diffuse reflections can be rendered using [[radiosity]].
The most advanced rendering techniques are often called [[global illumination]] techniques and attempt to fully solve the [[rendering equation]]. These methods render both diffuse and specular reflections and can create effects like [[caustic (optics)|caustics]] which cannot be generated using [[Ray tracing (graphics)|ray tracing]] or [[radiosity]] alone. Global illumination techniques can produce nearly photorealistic results, but may require enormous computing power. Popular global illumination methods include [[path tracing]], [[Metropolis light transport]], and [[photon mapping]].
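
For reference, a common statement of the [[rendering equation]] that these methods approximate: the outgoing radiance <math>L_o</math> at a point <math>x</math> in direction <math>\omega_o</math> is the emitted radiance plus the incoming radiance weighted by the BSDF <math>f_r</math> over the hemisphere <math>\Omega</math>:

:<math>L_o(x, \omega_o) = L_e(x, \omega_o) + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\, (\omega_i \cdot n)\, \mathrm{d}\omega_i</math>

Path tracing, for example, estimates this integral by Monte Carlo sampling of light paths.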

;Alternative Rendering Techniques

Most rendering techniques render the 2D surfaces of objects, but [[volume rendering]] is used to render 3D data sets such as those acquired from a [[computed axial tomography|CT]] or [[magnetic resonance imaging|MRI]] scanner. Usually, this data is stored in a regular volumetric grid with each volume element, or [[voxel]], recording some measured value. Surfaces can be extracted from volume data using the [[Marching cubes]] algorithm.
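
A minimal sketch of marching a ray through a regular voxel grid, keeping the maximum sample along the ray (a maximum intensity projection). Practical volume renderers use interpolation, transfer functions, and compositing; the grid layout and names below are illustrative assumptions.

<syntaxhighlight lang="python">
# Maximum-intensity projection through a regular voxel grid (illustrative sketch).

def sample(volume, x, y, z):
    # Nearest-neighbour lookup; returns 0 outside the grid.
    xi, yi, zi = int(round(x)), int(round(y)), int(round(z))
    if 0 <= zi < len(volume) and 0 <= yi < len(volume[0]) and 0 <= xi < len(volume[0][0]):
        return volume[zi][yi][xi]
    return 0.0

def max_intensity(volume, origin, direction, steps=64, step_size=0.5):
    best = 0.0
    x, y, z = origin
    dx, dy, dz = direction
    for _ in range(steps):
        best = max(best, sample(volume, x, y, z))
        x, y, z = x + dx * step_size, y + dy * step_size, z + dz * step_size
    return best

# 2x2x2 toy volume (indexed volume[z][y][x]) with a single bright voxel.
volume = [[[0.0, 0.0], [0.0, 0.0]],
          [[0.0, 0.9], [0.0, 0.0]]]
print(max_intensity(volume, origin=(1.0, 0.0, 0.0), direction=(0.0, 0.0, 1.0)))  # -> 0.9
</syntaxhighlight>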

[[Image-based modeling and rendering|Image-based rendering]] techniques do not use 3D geometric models or volume data, but instead render new images from collections of existing images. While most rendering techniques aim for photorealistic results, [[non-photorealistic rendering]] methods are intended to produce images in an artistic style, such as [[toon shading]], which produces images that resemble [[cartoon]]s.
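
A minimal sketch of the toon-shading idea mentioned above: the smooth Lambertian diffuse term is quantized into a few discrete bands, producing the flat, cartoon-like look. The band count and function names are illustrative assumptions.

<syntaxhighlight lang="python">
# Toon (cel) shading sketch: quantize the diffuse term into discrete bands.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def toon_shade(normal, to_light, bands=3):
    diffuse = max(dot(normal, to_light), 0.0)        # smooth Lambertian term in [0, 1]
    level = min(int(diffuse * bands), bands - 1)     # which band this intensity falls into
    return (level + 1) / bands                       # flat intensity for the whole band

print(toon_shade((0.0, 0.0, 1.0), (0.0, 0.6, 0.8)))  # -> 1.0 (top band)
</syntaxhighlight>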



== Notable researchers in computer graphics ==

== See also ==

== References ==
<references/>

== University Groups ==

== Industry ==

Industrial labs doing "blue sky" graphics research include:

Major film studios notable for graphics research include:
