Digital cinematography

Digital cinematography is the process of capturing motion pictures as digital images, rather than on film. Digital capture may occur on tape, hard disks, flash memory, or other media which can record digital data. As digital technology has improved, this practice has become increasingly common, and many mainstream Hollywood movies are now shot partly or fully digitally.

Many vendors have brought products to market, including traditional film camera vendors like Arri and Panavision, as well as new vendors like RED and Silicon Imaging, and companies which have traditionally focused on consumer and broadcast video equipment, like Sony and Panasonic.

Digital cinematography's acceptance was cemented in 2009, when Slumdog Millionaire became the first movie shot mainly in digital to win the Academy Award for Best Cinematography.[1] Avatar, the highest-grossing movie in the history of cinema, was not only shot on digital cameras but also earned most of its box-office revenue from digital rather than film projection. In 2010 the Academy Award for Best Cinematography was again won by a digitally shot movie, and the Academy Award for Best Foreign Language Film went to El secreto de sus ojos, which was also shot digitally.

History

Beginning in the late 1980s, Sony began marketing the concept of "electronic cinematography," utilizing its analog HDTV cameras. The effort met with very little success. In 1998, with the introduction of HDCAM recorders and 1920 × 1080 pixel digital video cameras based on CCD technology, the idea, now re-branded as "digital cinematography," finally began to gain traction in the market.

In May 2002 Star Wars Episode II: Attack of the Clones became the first high-profile, high-budget movie released that was shot on 24 frame-per-second high-definition digital video, using a Sony HDW-F900 camera. Two lesser-known movies, Vidocq (2001) and Russian Ark (2002), had previously been shot with the same camera, the latter notably consisting of a single shot (no cuts).

In parallel with these developments in the world of traditional high-budget cinematography, a digital cinema revolution was occurring from the bottom up, among low-budget filmmakers outside the Hollywood system. Beginning in the mid-1990s, with the introduction of Sony's DCR-VX1000, the digital MiniDV format began to emerge. MiniDV offered much greater quality than the analog formats that preceded it, at the same price point. While its quality was not considered as good as film, these MiniDV camcorders, in conjunction with non-linear editing software that could run on personal computers, allowed a large number of people who had previously been priced out of shooting on film to begin making movies.

Today, cameras from companies like Sony, Panasonic, JVC and Canon offer a variety of choices for shooting high-definition video with less than $10,000 worth of camera equipment. Additionally, some digital SLR photo cameras from vendors like Canon and Nikon have started adding 24 or 30 frame per second video modes. While these photo cameras still have significant limitations as motion imaging devices, their large sensors, good low-light performance, interchangeable lenses and low cost have made them attractive to some low-budget moviemakers.

At the high end of the market, cameras aimed specifically at digital cinema have emerged. These cameras from Sony, Vision Research, Arri, Silicon Imaging, Panavision, Grass Valley and Red offer resolution and dynamic range that exceed those of traditional video cameras, which are designed for the limited resolution and dynamic range of broadcast television.

Technology

Digital cinematography captures motion pictures digitally, in a process analogous to digital photography. While there is no clear technical distinction that separates the images captured in digital cinematography from video, the term "digital cinematography" is usually applied only in cases where digital acquisition is substituted for film acquisition, such as when shooting a feature film. The term is not generally applied when digital acquisition is substituted for analog video acquisition, as with live broadcast television programs.

Sensors

Digital cinematography cameras capture images using CMOS or CCD sensors, usually in one of two arrangements.

Single chip cameras designed specifically for the digital cinematography market often use a single sensor (much like digital photo cameras), with dimensions similar in size to a 16 or 35 mm film frame or even (as with the Vision 65) a 65 mm film frame. An image can be projected onto a single large sensor exactly the same way it can be projected onto a film frame, so cameras with this design can be made with PL, PV and similar mounts, in order to use the wide range of existing high-end cinematography lenses available. Their large sensors also let these cameras achieve the same shallow depth of field as 35 or 65 mm motion picture film cameras, which is important because many cinematographers consider selective focus an essential visual tool.

Other cameras use three 1/3" or 2/3" sensors in conjunction with a prism, with each sensor capturing a different color. Camera vendors like Sony and Panasonic have leveraged their experience with these designs into three-chip products targeted specifically at the digital cinematography market. The Thomson Viper also uses a three-chip design. These designs offer benefits in terms of color separation and rolling shutter, but are incompatible with traditional cinematography lenses and are incapable of achieving 35 mm depth of field unless used with depth-of-field adaptors, which result in some loss of light. New lines of high-end lenses such as the Zeiss DigiPrimes have been developed with these cameras in mind.
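
The depth-of-field gap between small three-chip sensors and 35 mm-sized sensors can be approximated with a simple crop-factor calculation. The sketch below is a rough illustration only: the sensor widths are approximate assumed values, and the equivalent_aperture helper is hypothetical, not taken from any camera specification.

```python
# Rough depth-of-field comparison via crop factor (illustrative sketch only).
# Sensor widths below are approximate, assumed values for the example.
SUPER35_WIDTH_MM = 24.9      # approximate Super 35 aperture width
TWO_THIRDS_WIDTH_MM = 9.6    # approximate 2/3-inch sensor width

def equivalent_aperture(f_number, sensor_width_mm, reference_width_mm=SUPER35_WIDTH_MM):
    """For the same field of view and framing, a smaller sensor gives roughly the
    depth of field of the reference sensor stopped down by the crop factor."""
    crop_factor = reference_width_mm / sensor_width_mm
    return f_number * crop_factor

if __name__ == "__main__":
    # An f/2.8 lens on a 2/3-inch sensor behaves, depth-of-field-wise,
    # roughly like an f/7.3 lens on Super 35.
    print(round(equivalent_aperture(2.8, TWO_THIRDS_WIDTH_MM), 1))
```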

Video Formats

Unlike other video formats, which are specified in terms of vertical resolution (e.g. 1080p, which is 1920x1080 pixels), digital cinema formats are usually specified in terms of horizontal resolution. As a shorthand, these resolutions are often given in "nK" notation, where n is the multiplier of 1024 such that the horizontal resolution of a corresponding full-aperture, digitized film frame is exactly n × 1024 pixels. Here the 'K' has its customary, if technically imprecise, meaning of 1024; strictly speaking this is the binary prefix "kibi" (Ki).

For instance, a 2K image is 2048 pixels wide, and a 4K image is 4096 pixels wide. Vertical resolution varies with the aspect ratio: a 2K image with an HDTV (16:9) aspect ratio is 2048x1152 pixels, a 2K image with an SDTV or Academy (4:3) ratio is 2048x1536 pixels, and one with a CinemaScope (2.35:1) ratio is 2048x872 pixels, and so on. Because the "nK" notation does not pin down a single horizontal resolution for every format, a 2K image that omits, for example, the space normally reserved for the 35mm film soundtrack is only 1828 pixels wide, with the vertical resolution scaling accordingly. The result is a plethora of motion-picture-related resolutions, which can be confusing and is often redundant given the small number of projection standards in use today.
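
As a rough illustration of the "nK" convention, the snippet below derives the pixel dimensions quoted above from a horizontal width of n × 1024 and a target aspect ratio. The function name and the simple rounding are assumptions for this sketch; real delivery formats round to specific standardized values.

```python
def nk_dimensions(n, aspect_ratio):
    """Width and height of an 'nK' frame at a given aspect ratio (sketch only)."""
    width = n * 1024
    height = round(width / aspect_ratio)
    return width, height

print(nk_dimensions(2, 16 / 9))  # (2048, 1152) - 2K, HDTV ratio
print(nk_dimensions(2, 4 / 3))   # (2048, 1536) - 2K, Academy/SDTV ratio
print(nk_dimensions(2, 2.35))    # (2048, 871)  - 2K, 2.35:1 (871 or 872 depending on rounding)
print(nk_dimensions(4, 16 / 9))  # (4096, 2304) - 4K, HDTV ratio
```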

All formats designed for digital cinematography are progressive scan, and capture usually occurs at the same 24 frame per second rate established as the standard for 35mm film.

The DCI standard for cinema usually relies on a 1.89:1 aspect ratio, thus defining the maximum container size for 4K as 4096x2160 pixels and for 2K as 2048x1080 pixels [1] (at either 24 or 48 fps). When distributed in the form of a Digital Cinema Package (DCP), content is letterboxed or pillarboxed as appropriate to fit within one of these container formats.
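
The letterbox/pillarbox fit described above can be sketched as follows: content wider than the roughly 1.90:1 container is fitted to the full width and letterboxed, while narrower content is fitted to the full height and pillarboxed. This is a naive approximation; the exact active-area sizes in the DCI specification (e.g. 1998x1080 for 1.85:1 "flat" and 2048x858 for "scope" at 2K) may differ by a pixel from this simple rounding.

```python
def fit_in_dci_container(content_aspect, container_w=2048, container_h=1080):
    """Approximate active image area inside a DCI container
    (naive rounding; the DCI spec defines exact standardized sizes)."""
    container_aspect = container_w / container_h
    if content_aspect >= container_aspect:
        # Wider than the container: use the full width, letterbox top and bottom.
        return container_w, round(container_w / content_aspect)
    # Narrower than the container: use the full height, pillarbox left and right.
    return round(container_h * content_aspect), container_h

print(fit_in_dci_container(1.85))  # ~(1998, 1080): pillarboxed "flat"
print(fit_in_dci_container(2.39))  # ~(2048, 857):  letterboxed "scope" (spec uses 858)
```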

In the last few years, 2K has been the most common format for digitally acquired major motion pictures; however, as new camera systems gain acceptance, 4K is becoming more prominent (as 1080p had before it). During 2009 at least two major Hollywood films, Knowing and District 9, were shot in 4K on the RED ONE camera.

Data Storage

Broadly, there are two paradigms used for data acquisition and storage in the digital cinematography world.

Tape-based workflows

In a videotape-based workflow, video is recorded to tape on set. This video is then ingested into a computer running non-linear editing software, using a deck. Upon ingestion, the digital video stream from the tape is converted to computer files. These files can be edited directly or converted to an intermediate format for editing. The video is then output in its final format, possibly to a film recorder for theatrical exhibition, or back to video tape for broadcast use. The original video tapes are kept as an archival medium. The files generated by the non-linear editing application contain the information necessary to retrieve footage from the proper tapes, should the footage stored on the computer's hard disk be lost.

File-based workflows

Digital cinematography is gradually shifting towards a "tapeless" or "file-based" workflow. This trend has accelerated as the capacity of non-linear storage media such as hard disk drives, optical discs and solid-state memory has increased and their cost has fallen. In a tapeless workflow, digital video is recorded as digital files onto random-access media like optical discs, hard disk drives or flash-memory-based digital "magazines". These files can easily be copied to another storage device, typically to a large RAID connected to an editing system. Such RAID arrays, whether "managed" (e.g. SANs and NAS systems) or "unmanaged" (e.g. JBODs), are necessary because of the enormous throughput required for real-time or near-real-time playback in post-production (roughly 320 MB/s for 2K at 24 fps), far more than a single hard disk drive, however fast, can sustain. Storage meeting these requirements is often termed "on-line" storage. Post-production tasks that do not require real-time playback (such as lettering, subtitling, versioning and similar visual effects work) can be handled on somewhat slower RAID storage. Once the data has been copied from the digital magazines, they are erased and returned to the set for more shooting.
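
The 320 MB/s figure quoted above can be roughly reproduced by assuming uncompressed 2K full-aperture frames stored as 10-bit RGB DPX files, where three 10-bit components pack into 4 bytes per pixel. The exact number depends on frame size, bit depth and file overhead, so treat this as an order-of-magnitude sketch.

```python
# Order-of-magnitude check of the ~320 MB/s figure for real-time 2K playback.
# Assumptions: 2048x1556 full-aperture frames, 10-bit RGB packed as 4 bytes/pixel.
WIDTH, HEIGHT = 2048, 1556
BYTES_PER_PIXEL = 4          # 3 x 10-bit components packed into a 32-bit word
FPS = 24

bytes_per_frame = WIDTH * HEIGHT * BYTES_PER_PIXEL
throughput_mb_s = bytes_per_frame * FPS / 1_000_000
print(round(throughput_mb_s))  # ~306 MB/s before file headers and filesystem overhead
```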

Short-term archiving, when it is done at all, is accomplished by moving the digital files onto "slower" RAID arrays (still of either managed or unmanaged type, but with lower performance), where playback capability is poor to nonexistent (except via proxy images) but minimal editing and metadata harvesting are still feasible. Such intermediate requirements fall into the "mid-line" storage category. Long-term archiving is accomplished by backing up the digital files from the RAID, using standard practices and equipment for data backup from the IT industry, often to data tapes (such as LTO).

Compression

Digital cinema cameras are capable of generating extremely large amounts of data, often hundreds of megabytes per second [2]. To help manage this huge data flow, many cameras, and the recording devices designed to be used with them, offer compression. Prosumer cameras typically use high compression ratios in conjunction with chroma subsampling. While this allows footage to be comfortably handled even on fairly modest personal computers, the convenience comes at the expense of image quality.
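
To get a feel for how aggressive prosumer compression is, the sketch below compares the uncompressed data rate of an 8-bit 4:2:0 1080p signal with a typical recording bitrate of about 25 Mb/s. The frame rate and bitrate are assumed, illustrative values, not the specification of any particular camera.

```python
# Illustrative compression-ratio estimate for a prosumer 1080p camera.
# Assumed values: 8-bit 4:2:0 sampling, 24 fps, ~25 Mb/s recorded bitrate.
WIDTH, HEIGHT, FPS = 1920, 1080, 24
BYTES_PER_PIXEL_420 = 1.5            # 1 byte luma + 0.5 byte chroma per pixel (4:2:0)
RECORDED_MBIT_S = 25

uncompressed_mbit_s = WIDTH * HEIGHT * BYTES_PER_PIXEL_420 * 8 * FPS / 1_000_000
print(round(uncompressed_mbit_s))                    # ~597 Mb/s uncompressed
print(round(uncompressed_mbit_s / RECORDED_MBIT_S))  # ~24:1 compression ratio
```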

High-end digital cinematography cameras or recording devices typically support recording at much lower compression ratios, or in uncompressed formats. Additionally, digital cinematography camera vendors are not constrained by the standards of the consumer or broadcast video industries, and often develop proprietary compression technologies that are optimized for use with their specific sensor designs or recording technologies.

Lossless vs. lossy compression

A lossless compression system is capable of reducing the size of digital data in a fully reversible way—that is, in a way that allows the original data to be completely restored, byte for byte. This is done by removing redundant information from a signal. Digital cinema cameras rarely use only lossless compression methods, because much higher compression ratios (lower data rates) can be achieved with lossy compression. With a lossy compression scheme, information is discarded to create a simpler signal. Due to limitations in human visual perception, it is possible to design algorithms which do this with little visual impact.
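
As a toy illustration of reversibility, the run-length coder below restores its input byte for byte. Real digital cinema codecs use far more sophisticated transforms, but the "fully reversible" property is the same; the function names here are hypothetical, for this sketch only.

```python
def rle_encode(data):
    """Toy lossless coder: collapse runs of identical bytes into (value, count) pairs."""
    runs = []
    for b in data:
        if runs and runs[-1][0] == b:
            runs[-1] = (b, runs[-1][1] + 1)
        else:
            runs.append((b, 1))
    return runs

def rle_decode(runs):
    """Expand (value, count) pairs back into the original byte string."""
    return bytes(b for value, count in runs for b in [value] * count)

sample = bytes([10, 10, 10, 10, 200, 200, 10])
assert rle_decode(rle_encode(sample)) == sample   # fully reversible, byte for byte
```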

Chroma subsampling

Most digital cinematography systems further reduce data rate by subsampling color information. Because the human visual system is much more sensitive to luminance than to color, lower resolution color information can be overlaid with higher resolution luma (brightness) information, to create an image that looks very similar to one in which both color and luma information are sampled at full resolution. This scheme may cause pixelation or color bleeding under some circumstances. High quality digital cinematography systems are capable of recording full resolution color data (4:4:4) or raw sensor data.
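
A back-of-the-envelope sketch of the savings, assuming 8-bit samples for simplicity: 4:4:4 stores three samples per pixel, 4:2:2 averages two, and 4:2:0 one and a half, so subsampling alone cuts the data rate by a third to a half before any other compression is applied.

```python
# Average bytes per pixel at 8-bit depth for common chroma subsampling schemes.
BYTES_PER_PIXEL = {
    "4:4:4": 3.0,   # full-resolution Y, Cb and Cr for every pixel
    "4:2:2": 2.0,   # chroma halved horizontally
    "4:2:0": 1.5,   # chroma halved horizontally and vertically
}

for scheme, bpp in BYTES_PER_PIXEL.items():
    saving = 1 - bpp / BYTES_PER_PIXEL["4:4:4"]
    print(f"{scheme}: {bpp} bytes/pixel, {saving:.0%} smaller than 4:4:4")
```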

Bitrate

Video and audio compression systems are often characterized by their bitrates. Bitrate describes how much data is required to represent one second of media. One cannot directly use bitrate as a measure of quality, because different compression algorithms perform differently. A more advanced compression algorithm at a lower bitrate may deliver the same quality as a less advanced algorithm at a higher bitrate.
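
A quick conversion makes bitrates easier to reason about: dividing by 8 gives bytes per second, and multiplying by 60 gives data per minute. This covers the video essence only; the per-minute figures in the codec table later in this article also include audio and container overhead, so they run a little higher.

```python
def megabytes_per_minute(bitrate_mbit_s):
    """Approximate video-only storage per minute for a given bitrate."""
    return bitrate_mbit_s / 8 * 60

print(megabytes_per_minute(25))    # 187.5 MB/min for a 25 Mb/s stream
print(megabytes_per_minute(100))   # 750.0 MB/min for a 100 Mb/s stream
```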

Intra- vs. Inter-frame compression

Most compression systems used for acquisition in the digital cinematography world compress footage one frame at a time, as if a video stream were a series of still images. This is called intra-frame compression. Inter-frame compression systems can further compress data by examining and eliminating redundancy between frames. This leads to higher compression ratios, but displaying a single frame usually requires the playback system to decompress a number of frames from before and after it. In normal playback this is not a problem, as each successive frame is played in order, so the preceding frames have already been decompressed. In editing, however, it is common to jump around to specific frames and to play footage backwards or at different speeds. Because of the need to decompress extra frames in these situations, inter-frame compression can cause performance problems for editing systems.

Inter-frame compression is also disadvantageous because the loss of a single frame (say, due to a flaw writing data to a tape) will typically ruin all the frames until the next keyframe occurs. In the case of the HDV format, for instance, this may result in as many as 6 frames being lost with 720p recording, or 15 with 1080i recording [3]. An inter-frame compressed video stream consists of groups of pictures (GOPs), each of which has only one full frame, with the remaining frames referring to it. If the full frame, called the I-frame, is lost due to a transmission or media error, none of the P-frames or B-frames (the frames that reference it) can be displayed, and the whole GOP is lost.
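
The knock-on effect of losing a frame in an inter-frame stream can be sketched with a toy model of a GOP: if the I-frame is lost the whole group is undecodable, a lost P-frame is treated, simplifying, as taking out everything from that point to the end of the group, and a lost B-frame affects only itself. Real codecs have more involved reference rules (and open GOPs), so this is only meant to illustrate the paragraph above.

```python
def frames_lost(gop, lost_index):
    """Toy model of error propagation in one GOP, e.g. gop = 'IBBPBBPBB'.
    Returns the indices of frames that can no longer be displayed."""
    frame_type = gop[lost_index]
    if frame_type == "I":
        return list(range(len(gop)))              # the whole GOP depends on the I-frame
    if frame_type == "P":
        return list(range(lost_index, len(gop)))  # later frames reference this P-frame
    return [lost_index]                           # a B-frame is referenced by nothing

print(frames_lost("IBBPBBPBB", 0))  # I-frame lost: all 9 frames unusable
print(frames_lost("IBBPBBPBB", 3))  # first P-frame lost: frames 3..8 unusable
print(frames_lost("IBBPBBPBB", 1))  # a B-frame lost: only that frame unusable
```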

Digital acquisition codecs compared

Format | Bit depth | Resolution | Chroma sampling | Bitrate | File size | Inter-frame? | Algorithm type
DV | 8 bits | 720×480 (NTSC), 720×576 (PAL) | 4:1:1 or 4:2:0 | 25 Mb/s | 217 MB/min. | No | DCT (lossy)
DVCPRO50 | 8 bits | 720×480 (NTSC), 720×576 (PAL) | 4:2:2 | 50 Mb/s | 423 MB/min. | No | DCT (lossy)
AVCHD | 8 bits | 1920×1080, 1440×1080, 1280×720 | 4:2:0 | 24 Mb/s | | Yes | DCT (lossy)
AVC Intra | 10 bits | 1920×1080, 1440×1080, 1280×720 | 4:2:2 | 50 or 100 Mb/s | | No | DCT (lossy)
HDV | 8 bits | 1280×720, 1440×1080 | 4:2:0 | 19-25 Mb/s | 142 MB/min. (720p), 190 MB/min. (1080i) | Yes | DCT (lossy)
XDCAM HD422 | 8 bits | 1280×720, 1920×1080 | 4:2:2 | 50 Mb/s | | Yes | DCT (lossy)
XDCAM EX | 8 bits | 1280×720, 1920×1080, 1440×1080 | 4:2:0 | 25-35 Mb/s | 190 MB/min., 262 MB/min. | Yes | DCT (lossy)
DVCPRO HD | 8 bits | 960×720, 1280×1080, 1440×1080 | 4:2:2 | 100 Mb/s | 423 MB/min. (720p60), 835 MB/min. (1080i60) | No | DCT (lossy)
HDCAM | 8 bits | 1440×1080 | 3:1:1 | 144 Mb/s | | No | DCT (lossy)
HDCAM SR | 10 bits | 1920×1080 | 4:2:2 or 4:4:4 | 440 or 880 Mb/s | | No | DCT (lossy)
Panavision SSR | 10-bit PanaLog | 1920×1080 | 4:2:2 or 4:4:4 | up to 3 Gb/s | | No | Uncompressed
CineForm RAW (SI-2K) | 10-bit log | 2048×1152 | Raw Bayer | 100-140 Mb/s | 900 MB/min. | No | Wavelet (lossy)
REDCODE RAW | 12 bits | 4520×2540, 4480×1920, 4096×2304 | Raw Bayer | 224-336 Mb/s | 1.6-2.5 GB/min. | No | Wavelet (lossy)
ARRIRAW | 12 bits | 2880×2160 | Raw Bayer | ~5.6 Gb/s | 42 GB/min. | No | Uncompressed
"DALSA" RAW | 16 bits | 4096×2048 | Raw Bayer | ~3.2 Gb/s | | No | Uncompressed

Distribution Formats

Movies shot digitally may be released theatrically, on DVD, or in a high-definition format such as Blu-ray.

Digital Theatrical Distribution

In the USA, more than 4,000 theaters have digital projectors, and movies may be distributed to them digitally, either shipped on hard drives or sent via the Internet or satellite networks. Digital Cinema Initiatives, LLC, a joint venture of Disney, Fox, MGM, Paramount, Sony Pictures Entertainment, Universal and Warner Bros. Studios, has established standards for digital cinema projection. In July 2005 it released the first version of the Digital Cinema System Specification, which encompasses 2K and 4K theatrical projection. It also offers compliance testing for exhibitors and equipment suppliers.

Distributors prefer digital distribution, because it saves them the expense of making film prints, which may cost as much as $2000 each. Digital projection also offers advantages over traditional film projection such as lack of jitter, flicker, dust, scratches, and grain.

Theater owners initially balked at installing digital projection systems because of the high cost and concern over increased technical complexity. However, new funding models, in which distributors pay a "digital print" fee to theater owners, have helped to alleviate these concerns. Digital projection also offers increased flexibility in showing trailers and pre-show advertisements, and allows theater owners to more easily move films between screens or change how many screens a film is playing on. In addition, the higher quality of digital projection provides a better experience to help attract consumers who can now access high-definition content at home. These factors have made digital projection an increasingly attractive prospect for theater owners, and the pace of adoption has increased.

In the UK, 300 cinema screens were converted to digital projection as part of the Digital Screen Network (DSN), a UK Film Council initiative funded by National Lottery money to advance digital theatrical distribution in the UK. The first film screened digitally on the DSN was King's Game, a Danish film.

Film-based Theatrical Distribution

Since not all theaters currently have digital projection systems, even if a movie is shot and post-produced digitally, it must be transferred to film if a large theatrical release is planned. Typically, a film recorder will be used to print digital image data to film, to create a 35 mm internegative. After that the duplication process is identical to that of a traditional negative from a film camera.

Digital cinematography cameras

Professional cameras include the Sony HDCAM series, RED ONE, Arriflex D-20 and D-21, Panavision Genesis, Silicon Imaging SI-2K, Thomson Viper, Vision Research Phantom, Weisscam HS-1 and HS-2, GS Vitec noX, and the Fusion Camera System. Independent filmmakers have also pressed low-cost consumer and prosumer cameras into service for digital filmmaking.

Digital vs. film cinematography

Technical considerations

Predictability

When shooting on film, response to light is determined by what film stock is chosen. A cinematographer can choose a film stock he is familiar with, and expose film on set with a high degree of confidence about how it will turn out. Because the film stock is the main determining factor, results will be substantially similar regardless of what camera model is being used. In contrast, when shooting digitally, response to light is determined by the CMOS or CCD sensor(s) in the camera, so the cinematographer needs familiarity with the specific camera model.

With digital cinematography, however, on-set monitoring allows the cinematographer to see the actual images that are captured, immediately on the set, which is impossible with film. With a properly calibrated high-definition display, on-set monitoring, in conjunction with data displays such as histograms, waveforms, RGB parades, and various types of focus assist, can give the cinematographer a far more accurate picture of what is being captured than is possible with film, where a final image cannot be viewed until the film stock is processed. However, some of this equipment may impose costs in terms of time and money, and may not be possible to utilize in difficult shooting situations.

Film cameras do often have a video assist that captures video through the camera lens to allow for on-set playback, but its usefulness is largely restricted to judging action and framing. Because this video is not derived from the image that is actually captured to film, it is not very useful for judging lighting, and because it is typically only NTSC-resolution, it is often useless for judging focus.

Portability

35 mm film cameras cannot be reduced below a certain size and weight, because they require space for a film magazine and a film transport mechanism whose minimum size is effectively determined by the physical size of the film. While some digital cinematography cameras are large and bulky, even compared to full-sized film cameras, others, such as the SI-2K, are extremely compact and offer features such as the ability to detach the camera head from the rest of the camera, allowing high-quality images to be captured with an extremely compact package. The tapes, hard drives and flash memory magazines that digital cameras record onto are also far more compact than the film magazines used by film cameras. These factors can give digital cinematography systems substantial portability advantages. Moreover, as announcements from several digital cinema camera vendors illustrate (both Arri and Red have announced upcoming cameras more compact than their current models), it seems likely that, as with cell phones, laptops, and innumerable other electronic devices, the minimum size of digital cinematography cameras will continue to shrink.

Dynamic Range

The sensors in most high-end digital video cameras have less exposure latitude (dynamic range) than modern motion picture film stocks. In particular, they tend to 'blow out' highlights, losing detail in very bright parts of the image. If highlight detail is lost, it is impossible to recapture in post-production. Cinematographers can learn how to adjust for this type of response using techniques similar to those used when shooting on reversal film, which has a similar lack of latitude in the highlights. They can also use on-set monitoring and image analysis to ensure proper exposure. In some cases it may be necessary to 'flatten' a shot, or reduce the total contrast that appears in the shot, which may require more lighting to be used.

Many people also believe that highlights are less visually pleasing with digital acquisition, because digital sensors tend to 'clip' them very sharply, whereas film produces a 'softer' roll-off in over-bright regions of the image. Some more recent digital cinema cameras attempt to more closely emulate the way film handles highlights and are used on many A-budget productions, intercut with film. A few cinematographers have started deliberately using the 'harsh' look of digital highlights for aesthetic purposes. One notable example of such use is Battlestar Galactica.

Digital acquisition typically offers better performance than film in low-light conditions, allowing less lighting and in some cases completely natural or practical lighting to be used for shooting, even indoors. This low-light sensitivity also tends to bring out shadow detail. Some directors have tried a "best for the job" approach, using digital acquisition for indoor or night shoots, and traditional film for daylight exteriors.

Resolution

Substantive debate over film resolution versus digital image resolution is clouded by the fact that it is difficult to meaningfully and objectively determine the resolution of either. However, the vast majority of blockbuster movies of the 2000s were finished at 2K digital intermediate resolution, a figure that both film and digital camera systems can easily exceed.

Unlike a digital sensor, a film frame does not have a regular grid of discrete pixels. Rather, it has an irregular pattern of differently sized grains. As a film frame is scanned at higher and higher resolutions, image detail is increasingly masked by grain, but it is difficult to determine at what point there is no more useful detail to extract. Moreover, different film stocks have widely varying ability to resolve detail.

Determining resolution in digital acquisition seems straightforward, but is significantly complicated by the way digital camera sensors work in the real world. This is particularly true in the case of high-end digital cinematography cameras that use a single large Bayer pattern CMOS sensor. A Bayer pattern sensor does not sample full RGB data at every point; each pixel is biased toward red, green or blue [4], and a full color image is assembled from this checkerboard of color by processing the image through a demosaicing algorithm. Generally with a Bayer pattern sensor, actual resolution will fall somewhere between the "native" value and half this figure, with different demosaicing algorithms producing different results. Additionally, most digital cameras (both Bayer and three-chip designs) employ optical low-pass filters to avoid aliasing. Such filters reduce resolution.
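
A minimal sketch of what a demosaicing step does, assuming an RGGB layout and plain neighborhood averaging; real cameras use far more sophisticated, often proprietary, algorithms, which is exactly why effective resolution varies between them.

```python
import numpy as np

def demosaic_bilinear(raw):
    """Naive demosaic of an RGGB Bayer mosaic by 3x3 neighborhood averaging.
    raw is a 2-D array of sensor values; returns an H x W x 3 RGB image."""
    h, w = raw.shape
    # Which colour each photosite actually sampled (RGGB layout assumed).
    r_mask = np.zeros((h, w), dtype=bool); r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), dtype=bool); b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)

    def neighborhood_sum(a):
        """Sum of each pixel's 3x3 neighborhood (zero-padded at the edges)."""
        p = np.pad(a, 1)
        return sum(p[dy:dy + h, dx:dx + w] for dy in range(3) for dx in range(3))

    rgb = np.zeros((h, w, 3))
    for c, mask in enumerate((r_mask, g_mask, b_mask)):
        samples = np.where(mask, raw, 0.0)
        counts = neighborhood_sum(mask.astype(float))
        rgb[..., c] = neighborhood_sum(samples) / np.maximum(counts, 1.0)
    return rgb

# Example: a tiny 4x4 mosaic; every output pixel gets interpolated R, G and B values.
print(demosaic_bilinear(np.arange(16, dtype=float).reshape(4, 4)).shape)  # (4, 4, 3)
```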

In general, it is widely accepted that an original film camera negative exceeds the resolution of HDTV formats and the 2K digital cinema format, but there is still significant debate about whether 4K digital acquisition can match the results achieved by scanning 35 mm film at 4K, as well as whether 4K scanning actually extracts all the useful detail from 35 mm film in the first place. However, from 2000 to 2009, the overwhelming majority of films that used a digital intermediate were mastered at 2K, independent of their budget. Additionally, 2K projection is chosen for most permanent digital cinema installations, often even when 4K projection is available. These trends indicate that 2K is widely considered to be acceptable resolution for digital mastering and distribution of feature films.

One important thing to note is that the process of optical duplication, used to produce theatrical release prints for movies that originate both on film and digitally, causes significant loss of resolution. If a 35 mm negative does capture more detail than 4K digital acquisition, ironically this may only be visible when a 35 mm movie is scanned and projected on a 4K digital projector. When digital cinema projection is not used, however, the most limiting factor is the end of the exhibition chain: for mechanical projection, SMPTE standards allow flutter and weave of up to 0.2%, which reduces the projected resolution to roughly 1K. Well-maintained mechanical projectors, however, can operate at 0.05%, which comes close to 2K resolution.

Grain & noise

Film has a characteristic grain structure. Visible grain is regarded as an undesirable imaging artifact by some people, and, perhaps because of its association with the look of major motion pictures, as an aesthetically pleasing phenomenon by others. Different film stocks have different grain, and cinematographers may use this for artistic effect.

Digitally acquired footage lacks this grain structure. Electronic noise is sometimes visible in digitally acquired footage, particularly in dark areas of an image or when footage was shot in low lighting conditions and gain was used. Some people believe such noise is a workable aesthetic substitute for film grain, while others believe it has a harsher look that detracts from the image.

Well-shot, well-lit images from high-end digital cinematography cameras can look almost eerily clean. Some people believe this makes them look "plasticky" or computer-generated, while others find it to be an interesting new look, and argue that film grain can be emulated in post-production if desired.

Since most theatrical exhibition still occurs via film prints, the super-clean look of digital acquisition is often lost before moviegoers get to see it, because of the grain in the film stock of the release print.

Digital Intermediate Workflow

The process of using digital intermediate workflow, where movies are color graded digitally instead of via traditional photochemical finishing techniques, has become common, largely because of the greater artistic control it provides to filmmakers. In 2007, all of the 10 most successful movies released used the digital intermediate process.

In order to utilize digital intermediate workflow with film, the camera negative must first be processed and then scanned to a digital format. High quality film scanning is expensive (up to $4 a frame, although the costs of this are continually dropping). With digital acquisition, the scanning step is not necessary. Footage can go directly into a digital intermediate pipeline as digital data, although with some digital acquisition systems, it may need to be processed into suitable formats before it can be worked with.

Some filmmakers have years of experience achieving their artistic vision using the techniques available in a traditional photochemical workflow, and prefer that finishing/editing process. While it would be theoretically possible to use such a process with digital acquisition by creating a film negative on a film recorder, in general digital acquisition is not a suitable choice if a traditional finishing process is desired. However, traditional photochemical finishes have become extremely rare for Hollywood features.

Sound

Films are traditionally shot with dual-system recording, where picture is recorded on camera, and sync sound is recorded to a separate sound recording device. Picture and sound are then synced up in post-production. In the past this was done manually by lining up the image of the just-closed clapper board sticks with their characteristic clap on the sound recording. Today it is often done automatically using timecode data burnt onto the edge of the film emulsion and timecode displayed on digital clapper slates.

Most cameras used for digital cinematography can record sound internally, already in sync with picture. In theory this eliminates the need for syncing in post, which can lead to faster workflows. However, most sound recording is done by specialist operators, and the sound will likely be separated and further processed in post-production anyway. Moreover, high-end dedicated audio recording devices typically record better-quality sound than the audio subsystems of cameras, so most higher-end productions use dual-system recording even with cameras that are capable of recording sound internally. On such productions, internal camera audio may be used to record "scratch tracks" as an aid to the editor when syncing with the separately recorded master audio, to allow footage to be edited or viewed with sound prior to being synced with the master audio, or as a backup in case something is wrong with the master audio. Like modern film cameras, digital cinematography cameras can typically accept timecode data from external devices and record it with each frame of footage. Digital clapper slates are also commonly used.
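
Syncing dual-system sound ultimately comes down to comparing timecodes. The sketch below (hypothetical helper names, non-drop-frame timecode assumed) converts "HH:MM:SS:FF" stamps to absolute frame counts so the offset between the camera scratch track and the external recorder can be applied in frames.

```python
def timecode_to_frames(tc, fps=24):
    """Convert a non-drop-frame 'HH:MM:SS:FF' timecode to an absolute frame count."""
    hours, minutes, seconds, frames = (int(part) for part in tc.split(":"))
    return ((hours * 60 + minutes) * 60 + seconds) * fps + frames

def sync_offset_frames(camera_tc, recorder_tc, fps=24):
    """Frames by which the external audio must be shifted to line up with picture."""
    return timecode_to_frames(camera_tc, fps) - timecode_to_frames(recorder_tc, fps)

print(sync_offset_frames("01:00:10:12", "01:00:10:00"))  # 12 frames, half a second at 24 fps
```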

Archiving

Many people feel there is significant value in having a film negative master for archival purposes. There are, after all, numerous extant examples of original 19th-century film footage, manufactured under primitive conditions with no consideration given to archival value, whose original images are still clearly visible and recoverable with relatively simple equipment. As long as the negative does not completely degrade, it will always be possible to recover the images from it in the future, regardless of changes in technology, since all that will be involved is simple photographic reproduction. In contrast, even if digital data is stored on a medium that will preserve its integrity, highly specialized digital equipment will always be required to reproduce it. Changes in technology may thus render the format unreadable or expensive to recover over time. For this reason, film studios distributing digitally originated films often make film-based separation masters of them for archival purposes.

Economics

Low-budget / Independent Filmmaking

For the last 25 years, many respected filmmakers like George Lucas have predicted that electronic or digital cinematography would bring about a revolution in filmmaking, by dramatically lowering costs.

For low-budget and so-called "no-budget" productions, digital cinematography on prosumer cameras clearly has cost benefits over shooting on 35 mm or even 16 mm film. The cost of film stock, processing, telecine, negative cutting, and titling for a feature film can run to tens of thousands of dollars according to From Reel to Deal, a book on independent film production by Dov S-S Simens, based on his 2-day film course. Costs directly attributable to shooting a low-budget feature on 35 mm film could be $50,000 on the low side, and over twice that on the high side. In contrast, obtaining a high-definition prosumer camera and sufficient tape stock to shoot a feature can easily be done for under $10,000, or significantly less if, as is typically the case with 35 mm shoots, the camera is rented.

Most extremely low-budget movies never receive wide distribution, so the impact of low-budget video acquisition on the industry remains to be seen. It is possible that as a result of new distribution methods and the long tail effects they may bring into play, more such productions may find profitable distribution in the future. Traditional distributors may also begin to acquire more low-budget movies as better affordable digital acquisition eliminates the liability of low picture quality, and as they look for a means to escape the increasingly drastic "boom and bust" financial situation created by spending huge amounts of money on a relatively small number of very large movies, not all of which succeed.

Hollywood

On higher budget productions, the direct cost advantages of digital cinematography are not as significant in relation to the total budget, primarily because the costs imposed by working with film typically account for no more than a few percent of such large budgets.

Digital acquisition, however, offers numerous significant advantages on high-budget shoots, such as the ability to work faster (with fewer magazine changes and less concern over shooting large amounts of footage), to back up footage on set for additional safety, and to check important shots immediately, potentially avoiding costly reshoots. Digital workflow may also allow shots to be delivered to post-production pipelines for color grading, visual effects work or editorial assembly even before principal photography ends. Some of these functions may even be performed on set so that, for instance, if a cinematographer has a specific stylized look in mind for color grading, he or she can see almost immediately how footage will appear with that look applied, and take this into account while shooting it.

Rick McCallum, a producer on Star Wars Episode II: Attack of the Clones, has commented that the production spent $16,000 for 220 hours of digital tape, where a comparable amount of film would have cost $1.8 million. With disk-based systems such as the Red One, the cost would be even lower, and exact backups can be stored at different locations on different media as well. However, this does not necessarily indicate the actual cost savings percentage, as the very low incremental cost of shooting additional footage may encourage filmmakers to use far higher shooting ratios with digital.

Industry acceptance of digital cinematography

Throughout the 20th century, virtually all movies were shot on film, and nearly every film student learned about how to handle 16 mm and 35 mm film. While most major motion pictures are still shot on film, digital cinematography has gained widespread acceptance over the last few years. In 2009, the Academy Award for Best Cinematography was awarded for a movie mostly shot digitally, Slumdog Millionaire[2]. Another nominee, The Curious Case of Benjamin Button, was also shot digitally.

Digital cinematography accounts for a larger fraction of feature movie shooting every year, and seems destined to eventually eclipse film-based acquisition, much as digital photo cameras have largely replaced film based photo cameras in the still photography world.[citation needed]

Some notable high-profile directors and producers that have shot with digital equipment include:

Some directors have expressed openness to either format, such as Jean-Jacques Annaud, who used 35 mm and HDCAM together for Two Brothers [5], or Quentin Tarantino, who, while he ended up shooting his contribution on film, expressed an interest in digital acquisition for Grindhouse [6].

Other filmmakers haven't directed digitally acquired films, but have produced them. For instance, Ridley Scott produced the 2007 series "The Company," which was shot on the Arri D-20 [7].

Lower-budget and limited-release movies have adopted digital cinematography at a somewhat faster pace, although some filmmakers still choose to shoot such productions on 16 mm film, the traditional medium for that market segment.

As the digital intermediate process gains wider use for finishing movies shot on film, and as digital acquisition technology continues to improve, it seems likely digital cinematography will continue to gain wider acceptance.

Digital technology has eclipsed analog alternatives in many other content creation and distribution markets. On the content creation side, digital photo cameras significantly outsell film photo cameras, digital video tape formats like MiniDV have superseded analog tape formats, digital audio workstations have almost entirely replaced multi-track tape recorders, digital non-linear editing systems have displaced Moviola/Steenbeck equipment as the standard means of editing movies, and page layout software running on desktop computers has come to dominate the graphic design industry. On the distribution side, CDs have largely replaced LPs, DVDs have largely replaced VHS tapes, and digital cable systems are displacing analog cable systems. It seems likely[citation needed] that despite current resistance on the part of some in the industry, digital technology will eventually be similarly successful in the feature film acquisition and theatrical exhibition markets.

List of major films shot in digital

In the last decade a large number of movies have been shot digitally. Some of them are independent, low-budget productions, while others are major Hollywood- and Europe-based productions.

See also

References

Bala's Naan Kadavul first Indian DI film [8]

  1. ^ http://www.siliconimaging.com/DigitalCinema/News/PR_01_31_09_Slumdog.html
  2. ^ "SI-2K-Shot Slumdog Millionaire". NewBay Media, LLC. 2008-12-03. Retrieved 2009-05-05.

External links