Interlaced video
Interlaced video is a technique for doubling the perceived frame rate of a video display without consuming extra bandwidth. The interlaced signal contains two fields of a video frame captured at two different times. This enhances motion perception to the viewer, and reduces flicker by taking advantage of the phi phenomenon.
This effectively doubles the time resolution (also called temporal resolution) as compared to non-interlaced footage (for frame rates equal to field rates). Interlaced signals require a display that is natively capable of showing the individual fields in a sequential order. CRT displays and ALiS plasma displays are made for displaying interlaced signals.
Interlaced scan refers to one of two common methods for "painting" a video image on an electronic display screen (the other being progressive scan) by scanning or displaying each line or row of pixels. This technique uses two fields to create a frame. One field contains all odd-numbered lines in the image; the other contains all even-numbered lines.
A Phase Alternating Line (PAL)-based television set display, for example, scans 50 fields every second (25 odd and 25 even). The two sets of 25 fields work together to create a full frame every 1/25 of a second (or 25 frames per second), but with interlacing create a new half frame every 1/50 of a second (or 50 fields per second).[1] To display interlaced video on progressive scan displays, playback applies deinterlacing to the video signal (which adds input lag).
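As a minimal sketch of how two fields combine into one frame, the following Python/NumPy fragment interleaves a top (odd-line) and bottom (even-line) field; the array shapes and function name are illustrative assumptions, not part of any video standard:

```python
import numpy as np

def weave_fields(top_field: np.ndarray, bottom_field: np.ndarray) -> np.ndarray:
    """Interleave two half-height fields into one full frame (a sketch;
    real equipment works on an analog or serial-digital signal, not arrays)."""
    half_height, width = top_field.shape
    frame = np.empty((2 * half_height, width), dtype=top_field.dtype)
    frame[0::2] = top_field     # frame lines 0, 2, 4, ... from the first field
    frame[1::2] = bottom_field  # frame lines 1, 3, 5, ... from the second field
    return frame

# Example: two 288-line PAL fields form one 576-line frame,
# i.e. 50 fields per second yield 25 full frames per second.
top = np.zeros((288, 720), dtype=np.uint8)
bottom = np.ones((288, 720), dtype=np.uint8)
print(weave_fields(top, bottom).shape)  # (576, 720)
```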
The European Broadcasting Union has argued against interlaced video in production and broadcasting. They recommend 720p 50 fps (frames per second) for the current production format—and are working with the industry to introduce 1080p 50 as a future-proof production standard. 1080p 50 offers higher vertical resolution, better quality at lower bitrates, and easier conversion to other formats, such as 720p 50 and 1080i 50.[2][3] The main argument is that no matter how complex the deinterlacing algorithm may be, the artifacts in the interlaced signal cannot be completely eliminated because some information is lost between frames.
Despite arguments against it,[4][5] television standards organizations continue to support interlacing. It is still included in digital video transmission formats such as DV, DVB, and ATSC. New video compression standards in development, like High Efficiency Video Coding, do not support interlaced coding tools and target high-definition progressive video such as ultra high definition television.
Description
Progressive scan captures, transmits, and displays an image in a path similar to text on a page—line by line, top to bottom. The interlaced scan pattern in a CRT display also completes such a scan, but in two passes (two fields). The first pass displays the first and all odd numbered lines, from the top left corner to the bottom right corner. The second pass displays the second and all even numbered lines, filling in the gaps in the first scan.
This scan of alternate lines is called interlacing. A field is an image that contains only half of the lines needed to make a complete picture. Persistence of vision makes the eye perceive the two fields as a continuous image. In the days of CRT displays, the afterglow of the display's phosphor aided this effect.
Interlacing provides full vertical detail with the same bandwidth that would be required for a full progressive scan of twice the perceived frame rate and refresh rate. To prevent flicker, all analog broadcast television systems used interlacing.
Format identifiers like 576i 50 and 720p 50 specify the frame rate for progressive scan formats, but for interlaced formats they typically specify the field rate (which is twice the frame rate). This can lead to confusion, because industry-standard SMPTE timecode formats always deal with frame rate, not field rate. To avoid confusion, SMPTE and EBU always use frame rate to specify interlaced formats, e.g., 480i 60 is 480i/30, 576i 50 is 576i/25, and 1080i 50 is 1080i/25. This convention assumes that one complete frame in an interlaced signal consists of two fields in sequence.
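To make the two naming conventions concrete, here is a small hypothetical Python helper (the names are illustrative, not from any SMPTE or EBU library):

```python
def interlaced_frame_rate(field_rate: float) -> float:
    """Two fields in sequence make one frame, so the SMPTE/EBU frame-rate
    notation is half the field-rate notation for the same signal."""
    return field_rate / 2.0

# "480i60" is 480i/30, "576i50" is 576i/25, "1080i50" is 1080i/25:
for label, fields_per_second in [("480i", 60), ("576i", 50), ("1080i", 50)]:
    print(f"{label}{fields_per_second} = {label}/{interlaced_frame_rate(fields_per_second):g}")
```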
Benefits of interlacing
One of the most important factors in analog television is signal bandwidth, measured in megahertz. The greater the bandwidth, the more expensive and complex the entire production and broadcasting chain. This includes cameras, storage systems, broadcast systems, and reception systems: terrestrial, cable, satellite, Internet, and end-user displays (TVs and computer monitors).
For a fixed bandwidth, interlace provides a video signal with twice the display refresh rate for a given line count (versus progressive scan video at a similar frame rate; for instance, 1080i at 60 half-frames per second vs. 1080p at 30 full frames per second). The higher refresh rate improves the appearance of an object in motion, because it updates its position on the display more often; when an object is stationary, human vision combines information from multiple similar half-frames to produce the same perceived resolution as a progressive full frame. This technique is only useful, though, if the source material is available at the higher refresh rate: cinema movies are typically recorded at 24 fps and therefore do not benefit from interlacing. In the European 625-line systems, for example, interlacing kept the maximum video bandwidth to 5 MHz without reducing the effective picture refresh rate of 50 Hz.
Given a fixed bandwidth and high refresh rate, interlaced video can also provide a higher spatial resolution than progressive scan. For instance, 1920×1080 pixel resolution interlaced HDTV with a 60 Hz field rate (known as 1080i60 or 1080i/30) has a similar bandwidth to 1280×720 pixel progressive scan HDTV with a 60 Hz frame rate (720p60 or 720p/60), but achieves approximately twice the spatial resolution for low-motion scenes.
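As a back-of-the-envelope illustration (a sketch only: it counts raw visible pixels and ignores blanking intervals, chroma handling, and compression), the pixel rates of the formats mentioned above can be compared in a few lines of Python:

```python
# Raw visible-pixel rates, in millions of pixels per second.
formats = {
    "1080i60 (1920x1080, 30 full frames/s)": 1920 * 1080 * 30,
    "720p60  (1280x720,  60 full frames/s)": 1280 * 720 * 60,
    "1080p60 (1920x1080, 60 full frames/s)": 1920 * 1080 * 60,
}
for name, pixels_per_second in formats.items():
    print(f"{name}: {pixels_per_second / 1e6:.1f} Mpixel/s")
# 1080i60 (~62.2) and 720p60 (~55.3) are in the same ballpark, while
# 1080p60 (~124.4) needs roughly double: the trade-off described above.
```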
However, these bandwidth benefits apply only to analog or uncompressed digital video signals. With digital video compression, as used in all current digital TV standards, interlacing introduces additional inefficiencies.[citation needed] EBU tests have shown that the bandwidth savings of interlaced video over progressive video are minimal, even with twice the frame rate: a 1080p50 signal produces roughly the same bit rate as a 1080i50 (1080i/25) signal,[3] and 1080p50 actually requires less bandwidth to be perceived as subjectively better than its 1080i/25 (1080i50) equivalent when encoding a "sports-type" scene.[7]
VHS, and most other analog video recording methods that use a rotary drum to record video on tape, benefit from interlacing. On VHS, the drum makes one full revolution per frame and carries two picture heads, each of which sweeps the tape surface once per revolution. If the device were made to record progressively scanned video, the switchover between heads would fall in the middle of the picture and appear as a horizontal band. Interlacing allows the switchovers to occur at the top and bottom of the picture, areas which on a standard TV set are invisible to the viewer. The device can also be made more compact than if each sweep recorded a full frame, since that would require a drum of double diameter rotating at half the angular velocity, making longer, shallower sweeps on the tape to accommodate the doubled line count per sweep. However, when a still image is produced from an interlaced video tape recording, most older consumer-grade units stop the tape and have both heads repeatedly read the same field, essentially halving the vertical resolution until playback resumes. The alternative is to capture a full frame (both fields) when the pause button is pressed, just before the tape actually stops, and then reproduce it repeatedly from a frame buffer. The latter method can produce a sharper image, but some degree of deinterlacing is usually required for a noticeable visual benefit. The former method produces horizontal artifacts toward the top and bottom of the picture, because the heads cannot traverse exactly the same path along the tape surface as when recording on a moving tape; this misalignment would actually be worse with progressive recording.
Interlacing can be exploited to produce 3D TV programming, especially with a CRT display and especially with color-filtered glasses, by transmitting the color-keyed picture for each eye in alternating fields. This requires no significant alterations to existing equipment. Shutter glasses can also be used, provided they are synchronized with the field rate. If a progressive scan display is used to view such programming, any attempt to deinterlace the picture will render the effect useless. For color-filtered glasses, the picture must either be buffered and shown as if it were progressive, with alternating color-keyed lines, or each field must be line-doubled and displayed as a discrete frame. The latter procedure is the only way to suit shutter glasses on a progressive display.
Interlacing problems
Interlaced video is designed to be captured, stored, transmitted, and displayed in the same interlaced format. Because each interlaced video frame is two fields captured at different moments in time, interlaced video frames can exhibit motion artifacts known as interlacing effects, or combing, if recorded objects move fast enough to be in different positions when each individual field is captured. These artifacts may be more visible when interlaced video is displayed at a slower speed than it was captured, or in still frames.
There are simple methods for producing somewhat satisfactory progressive frames from an interlaced image, for example by doubling the lines of one field and omitting the other (halving vertical resolution), or by anti-aliasing the image along the vertical axis to hide some of the combing; more elaborate methods can sometimes produce far superior results, as sketched below. If there is only sideways (X axis) motion between the two fields, and this motion is uniform throughout the full frame, the scanlines can be aligned and the left and right ends that exceed the frame area cropped to produce a visually satisfactory image. Minor Y axis motion can be corrected similarly, by aligning the scanlines in a different sequence and cropping the excess at the top and bottom. The middle of the picture is usually the most important area to get right; whether X or Y axis alignment correction, or both, is applied, most remaining artifacts will occur toward the edges of the picture. However, even these simple procedures require motion tracking between the fields, and a rotating or tilting object, or one that moves along the Z axis (away from or toward the camera), will still produce combing, possibly looking even worse than if the fields were joined by a simpler method. Some deinterlacing processes analyze each frame individually and choose the best method. In these cases, the best, and only perfect, conversion is to treat each frame as a separate image, but that may not always be possible. For framerate conversions and zooming, it is usually best to line-double each field to produce a double rate of progressive frames, resample the frames to the desired resolution, and then re-scan the stream at the desired rate, in either progressive or interlaced mode.
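A minimal Python/NumPy sketch of two of the simple methods described above (line-doubling one field, and compensating uniform sideways motion by shifting one field before weaving); the 8-bit grayscale field arrays and function names are illustrative assumptions, not any standard API:

```python
import numpy as np

def bob(field: np.ndarray) -> np.ndarray:
    """Line-double one field and discard the other (halves vertical detail)."""
    return np.repeat(field, 2, axis=0)

def shift_and_weave(top: np.ndarray, bottom: np.ndarray, dx: int) -> np.ndarray:
    """Shift the second field dx pixels to undo uniform X-axis motion,
    weave the fields, then crop the columns that left the frame area."""
    shifted = np.roll(bottom, dx, axis=1)
    frame = np.empty((2 * top.shape[0], top.shape[1]), dtype=top.dtype)
    frame[0::2] = top
    frame[1::2] = shifted
    # np.roll wraps pixels around the edge; crop the wrapped columns.
    return frame[:, dx:] if dx >= 0 else frame[:, :dx]
```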
Interline twitter
Interlace introduces a potential problem called interline twitter, a form of moiré. This aliasing effect shows up only under certain circumstances, when the subject contains vertical detail that approaches the vertical resolution of the video format. For instance, a finely striped jacket on a news anchor may produce a shimmering effect: this is twitter. Television professionals avoid wearing clothing with fine striped patterns for this reason. Professional video cameras and computer-generated imagery systems apply a vertical low-pass filter to the signal to prevent interline twitter.
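A minimal sketch of such vertical filtering, assuming a grayscale frame as a NumPy array and a simple 1/4-1/2-1/4 kernel (real cameras use more sophisticated optical and digital filters; this is only illustrative):

```python
import numpy as np

def vertical_lowpass(frame: np.ndarray) -> np.ndarray:
    """Spread each pixel slightly over adjacent lines so that no detail is
    confined to a single scanline (and hence to a single field)."""
    padded = np.pad(frame.astype(np.float32), ((1, 1), (0, 0)), mode="edge")
    filtered = 0.25 * padded[:-2] + 0.5 * padded[1:-1] + 0.25 * padded[2:]
    return filtered.astype(frame.dtype)
```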
Interline twitter is the primary reason that interlacing is less suited for computer displays. Each scanline on a high-resolution computer monitor typically displays discrete pixels, each of which does not span the scanline above or below. When the overall interlaced framerate is 30 frames per second, a pixel that spans only one scanline is visible for 1/30 of a second followed by 1/30 of a second of darkness, reducing the per-line/per-pixel framerate to 15 frames per second.
To avoid this, standard interlaced television sets typically don't display sharp detail. When computer graphics appear on a standard television set, the screen is treated as if it were half the resolution of what it actually is or even lower. If text is displayed, it is large enough so that horizontal lines are never one scanline wide. Most fonts for television programming have wide, fat strokes, and do not include fine-detail serifs that would make the twittering more visible.
Figure: Interlacing example (warning: high rate of flickering).
Deinterlacing
ALiS plasma panels and the old CRTs can display interlaced video directly, but modern computer video displays and TV sets are mostly based on LCD technology, which uses progressive scanning.
To display interlaced video on a progressive scan display requires a process called deinterlacing. This is an imperfect technique, and generally lowers resolution and causes various artifacts—particularly in areas with objects in motion. Providing the best picture quality for interlaced video signals requires expensive and complex devices and algorithms. For television displays, deinterlacing systems are integrated into progressive scan TV sets that accept interlaced signal, such as broadcast SDTV signal.
Most modern computer monitors do not support interlaced video, apart from some legacy text-only display modes. Playing back interlaced video on a computer display requires some form of deinterlacing in the software player, which often uses very simple deinterlacing methods. This means that interlaced video often has visible artifacts on computer systems. Computer systems may be used to edit interlaced video, but the disparity between computer video display systems and interlaced television signal formats means that the video content being edited cannot be viewed properly without separate video display hardware.
Currently manufactured TV sets employ a system of intelligently extrapolating the extra information that would be present in a progressive signal entirely from an interlaced original. In theory, this should simply be a matter of applying the appropriate algorithms to the interlaced signal, as all the information should be present in it. In practice, results are variable and depend on the quality of the input signal and the amount of processing power applied to the conversion. The biggest impediment, at present, is artifacts in lower quality interlaced signals (generally broadcast video), as these are not consistent from field to field. On the other hand, high bit rate interlaced signals, such as those from HD camcorders operating in their highest bit rate mode, work well.
Deinterlacing algorithms temporarily store a few frames of interlaced images and then extrapolate extra frame data to make a smooth flicker-free image. This frame storage and processing results in a slight display lag that is visible in showrooms with a large number of different models on display: unlike the old unprocessed NTSC signal, the screens do not all follow motion in perfect synchrony, and some models appear to update slightly faster or slower than others. Similarly, the audio can have an echo effect due to different processing delays.
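A crude motion-adaptive sketch of the idea behind such processing: weave where successive fields agree (static areas keep full detail) and interpolate where they differ (moving areas avoid combing). The threshold, array layout, and names are illustrative assumptions, not any manufacturer's actual algorithm:

```python
import numpy as np

def motion_adaptive_deinterlace(top: np.ndarray, bottom: np.ndarray,
                                prev_bottom: np.ndarray,
                                threshold: int = 10) -> np.ndarray:
    """Weave both fields, then repair pixels that changed between the stored
    previous bottom field and the current one (the storage causes the lag)."""
    frame = np.empty((2 * top.shape[0], top.shape[1]), dtype=np.float32)
    frame[0::2] = top
    frame[1::2] = bottom
    # Per-pixel motion estimate from the change between same-parity fields.
    moving = np.abs(bottom.astype(np.int16) - prev_bottom.astype(np.int16)) > threshold
    # In moving areas, replace bottom-field lines with the average of the
    # neighbouring top-field lines (simple vertical interpolation; np.roll
    # wraps at the last line, which a real implementation would clamp).
    t = top.astype(np.float32)
    interp = 0.5 * (t + np.roll(t, -1, axis=0))
    frame[1::2][moving] = interp[moving]
    return frame.astype(top.dtype)
```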
History
When motion picture film was developed, the movie screen had to be illuminated at a high rate to prevent visible flicker. The exact rate necessary varies by brightness: 40 Hz is acceptable in dimly lit rooms, while up to 80 Hz may be necessary for bright displays that extend into peripheral vision. The film solution was to project each frame of film three times using a three-bladed shutter: a movie shot at 16 frames per second illuminated the screen 48 times per second. Later, when sound film became available, the higher projection speed of 24 frames per second enabled a two-bladed shutter to produce 48 illuminations per second, but only in projectors incapable of projecting at the lower speed.
This solution could not be used for television. Storing a full video frame and displaying it twice requires a frame buffer: electronic memory (RAM) sufficient to store a video frame. This method did not become feasible until the late 1980s. In addition, avoiding on-screen interference patterns caused by studio lighting and the limits of vacuum tube technology required that CRTs for TV be scanned at AC line frequency (60 Hz in the US, 50 Hz in Europe).
In the domain of mechanical television, Léon Theremin demonstrated the concept of interlacing. He had been developing a mirror-drum-based television, starting with 16 lines of resolution in 1925, then 32 lines, and eventually 64 using interlacing in 1926. As part of his thesis, on May 7, 1926, he electrically transmitted and projected near-simultaneous moving images on a five-foot square screen.[8]
In 1930, German Telefunken engineer Fritz Schröter first formulated and patented the concept of breaking a single video frame into interlaced lines.[9] In the USA, RCA engineer Randall C. Ballard patented the same idea in 1932.[10][11] Commercial implementation began in 1934 as cathode ray tube screens became brighter, increasing the level of flicker caused by progressive (sequential) scanning.[12]
In 1936, when the UK was setting analog standards, CRTs could only scan at around 200 lines in 1/50 of a second. Using interlace, a pair of 202.5-line fields could be superimposed to become a sharper 405 line frame. The vertical scan frequency remained 50 Hz, but visible detail was noticeably improved. As a result, this system supplanted John Logie Baird's 240 line mechanical progressive scan system that was also used at the time.
From the 1940s onward, improvements in technology allowed the US and Europe to adopt systems using progressively more bandwidth to scan higher line counts and achieve better pictures. However, the fundamentals of interlaced scanning were at the heart of all of these systems. The US adopted the 525-line system known as NTSC, Europe adopted the 625-line system, and the UK switched from its 405-line system to 625 lines to avoid having to develop a unique method of color TV. France switched from its unique 819-line system to the more standard European 625 lines. Although the term PAL is often used to describe the line and frame standard of the TV system, this is in fact incorrect; it refers only to the method of superimposing the color information on the standard 625-line broadcast. The French adopted their own SECAM system, which was also adopted by some other countries, notably the Soviet Union and its satellites. PAL color has also been used on some otherwise NTSC-style 525-line broadcasts, notably in Brazil.
Interlacing was ubiquitous in displays until the 1970s, when the needs of computer monitors resulted in the reintroduction of progressive scan. Interlace is still used for most standard definition TVs, and the 1080i HDTV broadcast standard, but not for LCD, micromirror (DLP), or plasma displays; these displays do not use a raster scan to create an image, and so cannot benefit from interlacing: in practice, they have to be driven with a progressive scan signal. The deinterlacing circuitry to get progressive scan from a normal interlaced broadcast television signal can add to the cost of a television set using such displays. Currently, progressive displays dominate the HDTV market.
Interlace and computers
In the 1970s, computers and home video game systems began using TV sets as display devices. At that point, a 480-line NTSC signal was well beyond the graphics abilities of low-cost computers, so these systems used a simplified video signal in which each video field scanned directly on top of the previous one, rather than each line falling between two lines of the previous field. This marked the return of progressive scanning, not seen since the 1920s. Since each field became a complete frame on its own, modern terminology would call this 240p on NTSC sets and 288p on PAL. While consumer devices were permitted to create such signals, broadcast regulations prohibited TV stations from transmitting video like this. Computer display standards such as CGA further simplified the NTSC signal, improving picture quality by omitting modulation of color and allowing a more direct connection between the computer's graphics system and the CRT.
By the mid-1980s, computers had outgrown these video systems and needed better displays. The Apple IIgs suffered from the use of the old scanning method, with its highest display resolution being 640×200, resulting in severely distorted, tall, narrow pixels that made the display of realistically proportioned images difficult. Solutions from various companies varied widely. Because PC monitor signals did not need to be broadcast, they could consume far more than the 6, 7, and 8 MHz of bandwidth to which NTSC and PAL signals were confined. IBM's Monochrome Display Adapter and Enhanced Graphics Adapter, as well as the Hercules Graphics Card and the original Macintosh computer, generated video signals of close to 350 progressive lines. The Commodore Amiga, by contrast, created a true interlaced NTSC signal (as well as RGB variations). This ability made the Amiga dominant in the video production field until the mid-1990s, but the interlaced display mode caused flicker problems for more traditional PC applications where single-pixel detail is required. 1987 saw the introduction of VGA, on which PCs soon standardized; Apple followed suit some years later with the Mac, when the VGA standard was improved to match Apple's proprietary 24-bit color video standard, also introduced in 1987.
In the late 1980s and early 1990s, monitor and graphics card manufacturers introduced newer high resolution standards that once again included interlace. These monitors ran at very high refresh rates, in the hope that this would alleviate flicker problems. Such monitors proved very unpopular: while flicker was not obvious on them at first, eyestrain and lack of focus nevertheless became a serious problem. The industry quickly abandoned the practice, and for the rest of the decade all monitors carried the assurance that their stated resolutions were "non-interlaced". This experience is why the PC industry today remains opposed to interlace in HDTV and lobbied for the 720p standard; the industry continues to lobby for standards beyond 720p, namely 1080p/60 for NTSC legacy countries and 1080p/50 for PAL legacy countries.
See also
- Field (video): In interlaced video, one of the many still images displayed sequentially to create the illusion of motion on the screen.
- 480i: standard-definition interlaced video usually used in traditionally NTSC countries (North and parts of South America, Japan)
- 576i: standard-definition interlaced video usually used in traditionally PAL and SECAM countries
- 1080i: high-definition television (HDTV) digitally broadcast in 16:9 (widescreen) aspect ratio standard
- Progressive scan: the opposite of interlacing; the image is displayed line by line.
- Deinterlacing: converting an interlaced video signal into a non-interlaced one
- Progressive segmented frame: a scheme designed to acquire, store, modify, and distribute progressive-scan video using interlaced equipment and media
- Telecine: a method for converting film frame rates to television frame rates using interlacing
- Federal Standard 1037C: defines interlaced scanning
- Moving image formats
- Wobulation: a variation of interlacing used in DLP displays
References
- ^ "Interlacing". Luke's Video Guide. Retrieved February 12, 2014.
- ^ "EBU R115-2005: FUTURE HIGH DEFINITION TELEVISION SYSTEMS". EBU. May 2005. Archived from the original (PDF) on 2009-05-27. Retrieved 2009-05-24.
- ^ a b "10 things you need to know about... 1080p/50" (PDF). EBU. September 2009. Retrieved 2010-06-26.
- ^ Philip Laven (January 25, 2005). "EBU Technical Review No. 300 (October 2004)". EBU.
- ^ Philip Laven (January 26, 2005). "EBU Technical Review No. 301 (January 2005)". EBU.
- ^ "Deinterlacing Guide". HandBrake.
- ^ Hoffman, Itagaki, Wood, Bock (2006-12-04). "Studies on the Bit Rate Requirements for a HDTV Format With 1920x1080 pixel Resolution, Progressive Scanning at 50 Hz Frame Rate Targeting Large Flat Panel Displays" (PDF). IEEE Transactions on Broadcasting, Vol. 52, No. 4. Retrieved 2011-09-08.
It has been shown that the coding efficiency of 1080p/50 is very similar (simulations) or even better (subjective tests) than 1080i/25 despite the fact that twice the number of pixels have to be coded. This is due to the higher compression efficiency and better motion tracking of progressively scanned video signals compared to interlaced scanning.
- ^ Glinsky, Albert (2000). Theremin: Ether Music and Espionage. Urbana, Illinois: University of Illinois Press. pp. 41–45. ISBN 0-252-02582-2.
- ^ Registered by the German Reich patent office, patent no. 574085.
- ^ "Pioneering in Electronics". David Sarnoff Collection. Archived from the original on 2006-08-21. Retrieved 2006-07-27.
- ^ U.S. patent 2,152,234. Interestingly, reducing flicker is listed only fourth in a list of objectives of the invention.
- ^ R.W. Burns, Television: An International History of the Formative Years, IET, 1998, p. 425. ISBN 978-0-85296-914-4.
External links
- Fields: Why Video Is Crucially Different from Graphics – An article that describes field-based, interlaced, digitized video and its relation to frame-based computer graphics with lots of illustrations
- Digital Video and Field Order - An article that explains with diagrams how the field order of PAL and NTSC has arisen, and how PAL and NTSC is digitized
- 100FPS.COM – Video Interlacing/Deinterlacing
- Interlace / Progressive Scanning - Computer vs. Video
- Sampling theory and synthesis of interlaced video
- Interlaced versus progressive