Virtual cinematography

From Wikipedia, the free encyclopedia

Virtual cinematography is the set of cinematographic techniques performed in a computer graphics environment. It covers a wide variety of subjects, such as photographing real objects, often with a stereo or multi-camera setup, in order to recreate them as three-dimensional objects, and algorithms for the automated creation of real and simulated camera viewpoints.

History

Virtual cinematography came into prominence following the release of the Matrix films, especially the last two, The Matrix Reloaded and The Matrix Revolutions. The directors, Andy & Larry Wachowski, tasked visual effects supervisor John Gaeta (who coined the term) with developing techniques that would allow realistic computer-generated imagery to be "filmed" virtually. Gaeta, along with George Borshukov, Kim Libreri, and their crew at ESC Entertainment, succeeded in creating photo-realistic CGI versions of performers, sets, and action. Their work was based on the findings of Paul Debevec et al. on acquiring and subsequently simulating the reflectance field of the human face, first captured with a simple light stage in 2000.[1] Famous scenes that would have been impossible or exceedingly time-consuming to produce with traditional cinematography include the Burly Brawl in The Matrix Reloaded, in which Neo fights up to 100 Agent Smiths, and the start of the final showdown in The Matrix Revolutions, in which Agent Smith's cheekbone is punched in by Neo while the digital look-alike naturally remains unhurt. Another film series of the same era that made heavy use of virtual cinematography, with trademark virtual camera runs that could not be achieved with conventional cinematography, is The Lord of the Rings trilogy. Other studios and graphics houses that had, or nearly had, the ability to create digital look-alikes in the early 2000s include Sony Pictures Imageworks (Superman 2 and 3, 2003), Square Pictures (The Animatrix: Final Flight of the Osiris, a prequel to The Matrix Reloaded, 2003), and Image Metrics (Digital Emily, 2009), followed in the 2010s by Disney (the antagonist CLU in Tron: Legacy, 2010) and Activision (Digital Ira, 2013).

Virtual cinematography has evolved greatly since then and is now used prolifically across a spectrum of digital media formats. Component technologies of virtual cinematography include computational photography, machine vision, sensor-based volumetric video, and image-based rendering.

Methods

Once the 3D geometry, textures, reflectance field, and motion capture are complete, and the BSDF over all needed surfaces has been adequately captured and simulated, the virtual content can be assembled into a scene within a 3D engine. There it can be creatively composed, relit, and re-photographed from other angles by a virtual camera, as if the action were happening for the first time.
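The relighting step can be illustrated with the simplest possible reflectance model. The sketch below shades a single surface point with Lambertian reflection under two light directions; a captured reflectance field generalizes this by sampling the response per view and light direction instead of assuming one analytic model. All values here are illustrative, not from any production pipeline.

```python
import math

def normalize(v):
    """Return v scaled to unit length."""
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def relight(normal, albedo, light_dir):
    """Lambertian shading: intensity = albedo * max(0, N . L).

    A stand-in for evaluating a captured BSDF: a full reflectance
    field would be looked up per view and light direction instead.
    """
    n = normalize(normal)
    l = normalize(light_dir)
    ndotl = sum(a * b for a, b in zip(n, l))
    return albedo * max(0.0, ndotl)

# The same surface point "re-photographed" under two light setups.
point_normal = (0.0, 0.0, 1.0)       # surface facing the camera
front_light = relight(point_normal, 0.8, (0.0, 0.0, 1.0))
side_light = relight(point_normal, 0.8, (1.0, 0.0, 1.0))
print(front_light)  # 0.8: light hits the surface head-on
print(side_light)   # dimmer: light arrives at 45 degrees
```

Because the scene is synthetic, the same point can be relit any number of times without re-shooting, which is the practical appeal described above.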

  • Geometry can be acquired with an XYZRGB 3D scanner such as Arius3D or Cyberware, or from multiple photographs using a machine-vision technique called photogrammetry. ESC Entertainment used an Arius3D scanner in the making of the Matrix sequels to capture details as small as 100 µm, such as fine wrinkles and skin pores.[1]
  • Textures can be captured easily from photographs.
  • The reflectance field is captured as BSDFs over the surface of the XYZRGB object using a light stage.
  • Dense (also known as markerless) motion capture was performed for the Matrix movies with a multi-camera setup (similar to the bullet-time rig), using a photogrammetric capture technique called optical flow to build the digital look-alikes.[2]
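The multi-camera photogrammetric idea above can be reduced to its simplest case: a rectified two-camera rig, where the depth of a matched feature follows directly from its disparity between the two views. The numbers below are illustrative, not measurements from any actual rig.

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Depth of a feature from a rectified stereo pair: Z = f * B / d.

    focal_px     -- focal length in pixels
    baseline_m   -- distance between the two cameras in metres
    disparity_px -- horizontal shift of the feature between images
    """
    if disparity_px <= 0:
        raise ValueError("feature must be visible in both cameras")
    return focal_px * baseline_m / disparity_px

# A feature seen at x=640 in the left image and x=600 in the right
# image has a disparity of 40 px; with a 10 cm baseline:
z = stereo_depth(focal_px=1000.0, baseline_m=0.1, disparity_px=40.0)
print(z)  # 2.5 (metres)
```

Full photogrammetry generalizes this to many unrectified cameras and solves for the camera poses and 3D points jointly, but the depth-from-parallax principle is the same.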

Modification, re-direction, and enhancement of the scene are possible as well. The rendered result can appear highly realistic, or rather "photo-realistic". Virtual cinematography is the creation process; virtual effects are stylistic modifications applied within this format; virtual cinema is the result. Its main applications are in the movie, video game, leisure, and disinformation industries.

The art of "photographing" any computer-generated imagery content with a virtual camera is still virtual cinematography by means of taking a 2D photo of a three-dimensional model, where as virtual cinematography is a capturing process of four-dimensional (XYZT) events into higher dimension functions such as a bidirectional texture function (7D) or a collection of BSDF over the target.

The advent of virtual worlds has given a new push to this concept, since they allow real-time animation to be created using camera moves and avatar-control techniques that are not possible with traditional film-making methods.

Retroactively obtaining camera movement data from the captured footage is known as match moving or camera tracking. It is a form of motion estimation.
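In its simplest form, motion estimation recovers the camera's movement from how tracked feature points shift between frames. The sketch below assumes a pure 2D pan, for which the least-squares translation is just the mean displacement of the tracks; real match-moving software solves for a full 3D camera pose per frame instead. The coordinates are made up for illustration.

```python
def estimate_translation(pts_a, pts_b):
    """Least-squares 2D translation between matched feature tracks.

    For a pure pan, minimizing the squared residuals gives the mean
    displacement; production match moving fits a full 3D pose.
    """
    n = len(pts_a)
    dx = sum(b[0] - a[0] for a, b in zip(pts_a, pts_b)) / n
    dy = sum(b[1] - a[1] for a, b in zip(pts_a, pts_b)) / n
    return dx, dy

# Three features tracked across two frames of footage:
frame1 = [(10.0, 20.0), (30.0, 40.0), (50.0, 25.0)]
frame2 = [(12.0, 19.0), (32.5, 39.0), (51.5, 24.0)]
print(estimate_translation(frame1, frame2))  # (2.0, -1.0)
```

The recovered motion can then be applied to a virtual camera so that inserted CGI stays locked to the live-action plate.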

References

  1. ^ a b "Debevec", "Paul"; "Tim Hawkins, Chris Tchou, Haarm-Pieter Duiker, Westley Sarokin, Mark Sagar" (2000). ["http://dl.acm.org/citation.cfm?id=344855" "Acquiring the reflectance field of a human face"]. ACM. doi:10.1145/344779.344855. Retrieved 2013-07-21. 
  2. ^ "Debevec", "Paul"; "J. P. Lewis" (2005). ["http://dl.acm.org/citation.cfm?id=1198593" "Realistic human face rendering for "The Matrix Reloaded""]. ACM. doi:10.1145/1198555.1198593. Retrieved 2013-08-10.