
Talk:Deinterlacing

From Wikipedia, the free encyclopedia

This is an old revision of this page, as edited by 86.184.24.140 (talk) at 16:30, 16 June 2011 (→Repeated vandalism by a pair of sockpuppets). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

what is it?

Actually, what is deinterlacing? I still cannot understand it. Can someone rewrite this article using simple English and terminology? Not everyone can understand the way it is described. Help is much appreciated. Thanks. From Senyuman

—The preceding unsigned comment was added by 60.50.137.61 (talk) 12:22, 13 January 2007 (UTC).[reply]

100fps.com??

Would anyone be opposed to linking 100fps.com in this article? I know some of the information is outdated, but it provides a huge number of screenshots, video clips, etc.

Linear Interlacing?

It would be nice to have an explanation of what the interlacing type 'linear' means.


I deleted this link as it crashes Firefox hard!

A few suggestions

A pedantic take on the terminology:

- the output of the de-interlacing process is progressive frames;

- frames are considered progressive; fields denote interlaced content; evenness/oddness of a field refers to the set of lines from the original progressive frame that comprise the field: lines 0, 2, ..., 2k form the top field; correspondingly, the odd lines compose the bottom field.

- it is not necessary that interlaced video content be presented as pairs of fields; this type of content is made of single-field video samples.

- Line doubling is a term used mainly in Home Theatre discussions. Interpolation would be a more appropriate term, since the missing lines aren't necessarily doubled.

- the two fundamental approaches to deinterlacing could be better termed as spatial and temporal; of course, combinations of the two also exist;

- the deinterlacing techniques are categorized as follows:

 * BOB line replication: the missing lines are simply copied from one of the existing lines
 * BOB vertical stretch: the missing lines are interpolated from the existing lines of the current field; the standard algorithms use either 2 {1/2, 1/2} or 4 {-1/16, 9/16, 9/16, -1/16} lines;
 * median filtering: a median selection gives the value of the calculated pixels
 * edge filtering: this technique eliminates the combing effect by attempting to detect an edge in the current field; the interpolated pixels are calculated by a filter that follows the edge;
 * field adaptive: depending on the amount of motion detected, the interpolation of pixels is performed by either a spatial filter (within the current field) or a temporal one (using future fields or backward reference frames);
 * pixel adaptive: similar to field adaptive, but at a pixel level (thus much costlier);
 * motion vector steered: requires multiple fields/reference frames, attempts to predict the trajectory of moving objects within the picture.

—The above unsigned remarks were made by 71.112.10.95 on 12 April 2006
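The bob "vertical stretch" variant listed above can be sketched in a few lines. This is a hedged illustration only (plain Python on lists of luma values, using the 2-tap {1/2, 1/2} filter mentioned in the remarks; the function name and data layout are my own assumptions, not from any particular implementation):

```python
# Sketch of bob "vertical stretch" deinterlacing for a single top field.
# `field` is a list of rows (each row a list of pixel luma values);
# the result has twice as many rows, with the missing lines interpolated
# by the 2-tap {1/2, 1/2} averaging filter described above.

def bob_vertical_stretch(field):
    frame = []
    n = len(field)
    for i, row in enumerate(field):
        frame.append(row)  # existing line passes through unchanged
        below = field[i + 1] if i + 1 < n else row  # repeat the edge line at the bottom
        # missing line = average of the lines above and below ({1/2, 1/2} taps)
        frame.append([(a + b) / 2 for a, b in zip(row, below)])
    return frame

field = [[0, 0], [100, 100]]        # two lines of a top field
print(bob_vertical_stretch(field))  # -> [[0, 0], [50.0, 50.0], [100, 100], [100.0, 100.0]]
```

The 4-tap {-1/16, 9/16, 9/16, -1/16} variant works the same way but weighs four surrounding field lines, giving a sharper interpolation at slightly higher cost.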

I basically like the above remarks a lot. Except perhaps for the statement that "frames are considered progressive". It is certainly common to refer to interlaced video frames (SMPTE timecode has a frame counter, etc.), so I think that not all frames should be considered progressive. To me, a frame is a pairing of a top field and a bottom field that are temporally consecutive or simultaneous, regardless of whether these represent an interlaced-scan (consecutive) or progressive-scan (simultaneous) sampling. Also, I have some trouble with the idea of "the original progressive frame". In some interlaced-scan systems there is no original progressive frame - the material is produced using an interlaced scan from the very beginning. Also, in my experience, many people use the term "line doubling" loosely (and in my opinion not so accurately) to refer to any form of deinterlacing process. —SudoMonas 05:59, 1 July 2006 (UTC)[reply]

Basic question

If a video has to be deinterlaced for showing on a TFT display, why isn't it just "deinterlaced" like it would be on a CRT display (by the afterglow)? That is, if a video is 50 interlaced fields per second, why don't we just show field 1 and field 2, then field 3 and field 2, then field 3 and field 4, and so on, each for 1/50 second? --Victor--H 16:30, 3 April 2006 (UTC)[reply]

I'm not sure what you meant by "afterglowing". A deinterlacing method similar to the one in your description is weaving, which only produces satisfactory results with static video content. Any sort of moving content would display visible artifacts (combing). 71.112.132.83 07:38, 12 April 2006 (UTC)[reply]
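The weaving method mentioned in the reply above is the simplest of all: it just interleaves the lines of two consecutive fields back into one frame. A minimal sketch (my own illustrative code, assuming fields are lists of rows):

```python
# "Weave" deinterlacing: interleave the lines of a top and bottom field
# into one full frame. Perfect for static content; any motion between
# the two fields shows up as combing artifacts.

def weave(top_field, bottom_field):
    frame = []
    for top_line, bottom_line in zip(top_field, bottom_field):
        frame.append(top_line)     # lines 0, 2, 4, ... come from the top field
        frame.append(bottom_line)  # lines 1, 3, 5, ... come from the bottom field
    return frame

print(weave([[1, 1], [3, 3]], [[2, 2], [4, 4]]))
# -> [[1, 1], [2, 2], [3, 3], [4, 4]]
```

Because the two woven fields were sampled 1/50 s apart, any object that moved between them lands on alternating lines in different places, which is exactly the combing the reply describes.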

  • There is no significant afterglow on a CRT TV display. I once took a photograph at high shutter speed to test this, and found that the picture faded over about a quarter of the picture height, in other words in 1/200th of a second. Nor is 'persistence of vision' the simple thing it seems. I believe, from experiments, that it is processed image content in the brain that persists, not the image on the retina. Interlacing works because of this. The brain does not combine subsequent frames; if it did we would see mice-teeth on moving verticals, as we do on computer images or stills, which in fact we don't see on a CRT display. The brain also cannot register fine detail in moving images, hence I think little is lost on a proper CRT interlaced display, while motion judder is reduced as well as flicker. Modern LCD and plasma displays seem to me to be fundamentally unsuited to interlaced video, since they necessitate de-interlacing in the TV with inevitable loss. In theory, it is not impossible to make an interlaced plasma or LCD display, in which the two fields were lit up alternately, but in practice this would halve brightness, even if the response was fast enough. In view of this, I think it is a great pity that the 1080i HD standard was created, since it is unlikely ever to be viewed except as a de-interlaced compromise on modern displays. If 1080p/25 (UK) were encouraged worldwide, then de-interlacing would not be needed. 1080p at 25fps requires no more bandwidth than 1080i of course, but it has the advantage of being close enough to 24fps to avoid 'pull-down' on telecine from movies (in the UK we just run movies 4% fast and get smoother motion). It also fits well with the commonly used 75Hz refresh rate of many computer monitors, which would just repeat each frame three times for smooth motion.
In high-end TVs, processing using motion detection could be used to generate intermediate frames at 50 or 75Hz, as was done fairly successfully in some last-generation 100Hz TVs. Reducing motion judder in this way as an option is a better use of motion detection than de-interlacing, because it can be turned off or improved as technology progresses. I note that the EBU has advised against the use of interlace, especially in production, where it recommends that 1080p/50fps be used as a future standard. --Lindosland 15:26, 23 June 2006 (UTC)[reply]

When should "Odd interpolate" and "Even interpolate" be used? 213.40.111.40 (talk) 17:23, 29 May 2009 (UTC)[reply]

Correcting common misunderstandings over interlace

I have taken out the statement that the interlaced image contains only half the information. In terms of information theory it contains the same information, assuming we are de-interlacing to the same frame rate and not attempting to generate twice as many frames. Arguably interlaced video contains MORE visible information, since by sampling the image twice as often it provides more temporal information. Though this is of course at the expense of detail, the point about true interlacing (on a CRT) is that we do not miss the detail, even on motion, as we do not perceive detail so well in motion (see my above comments regarding the real nature of 'visual persistence') - we have to concentrate to see it. I think a lot of misunderstanding has arisen from the fact that interlaced video is now being judged on LCD and plasma displays, via an unknown deinterlacer which introduces actual visible blur onto anything that moves. --Lindosland 16:00, 23 June 2006 (UTC)[reply]

Then you were wrong to do so. Interlaced video really does contain half the information that progressive video does. This is betrayed by its raison d'être which is to use half the bandwidth of progressive video for the same vertical scan rate.
Since this seems to be a difficult concept for some people, permit me to elaborate. A 50fps progressive video stream of 1080/50p format transmits 50 1920x1080 frames per second - that's data for 103,680,000 pixels every second (there is some extra housekeeping data, but let's ignore that for this argument).
In the interlaced format (1080/50i), only every other line of pixels is transmitted with each field, the 'missing' lines are transmitted on the next field. This means that although half the pixels are transmitted 50 times a second, the entire 1920x1080 pixel image is only transmitted 25 times per second - that's data for 51,840,000 pixels per second. The important issue here is that for each 1/50th second vertical scan, 1080 lines of image are transmitted in the progressive system, but only 540 lines of data in the interlaced system (every other line). 86.177.31.209 (talk) 18:09, 8 January 2011 (UTC)[reply]
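The pixel-rate arithmetic in the two comments above is easy to verify (a quick sanity check only, ignoring blanking and other housekeeping data, as the comments do):

```python
# Pixel rates for 1080/50p vs 1080/50i, ignoring blanking/housekeeping.
width, height, rate = 1920, 1080, 50

progressive = width * height * rate        # 1080/50p: full 1080-line frames, 50x per second
interlaced = width * (height // 2) * rate  # 1080/50i: 540-line fields, 50x per second

print(progressive)  # 103680000 pixels per second
print(interlaced)   # 51840000 pixels per second
```

This confirms the figures quoted: the interlaced stream carries exactly half the pixel data per second of the progressive stream at the same vertical scan rate.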

More 3:2 pulldown info needed

The article absolutely needs to explain its relationship to, and clearly explain how it differs from, 3:2 pulldown and reverse telecine, because this is a major subject in the audiovisual industry. There also needs to be more information about the close relationship between deinterlacing and reverse telecine (see the reverse telecine section). Sometimes the two are even confused. However, 3:2 pulldown removal applies to movies broadcast as video, while deinterlacing applies to video broadcast as video. Because both movies and video are broadcast, many modern devices (line doublers, upconversion in HDTV sets, some progressive scan DVD players, etc.) automatically switch between reverse telecine (3:2 pulldown removal) and deinterlacing, based on algorithms that automatically analyze the video for the presence of a pulldown sequence. Chips such as DCDi perform this task. As proof, there are over 70,000 search results that cover both "3:2 pulldown" and "deinterlacing" on the same page: Search ... A lot of confusion exists because the two go hand-in-hand in modern consumer devices (HDTVs, line doublers, progressive-scan DVD players, etc.). Therefore more consistency needs to exist between the deinterlacing article and the reverse telecine section of the telecine article. I've added a few sentences referring each to the other, because of the close relationship that exists here (especially with the explosion of modern end-user video equipment, such as HDTV sets which, when upconverting NTSC 480i material to high-def, automatically do either deinterlacing or reverse telecine, depending on the video material being displayed). --Mdrejhon 22:56, 7 August 2006 (UTC)[reply]

question -where deinterlacing is done

Hi Guys, Trying to learn more about deinterlacing and as usual turning to wikipedia.

My basic question is: Am I better off

  1. playing a regular DVD on a regular player, and having the HDTV de-interlace and upscale the DVD? or
  2. playing a regular DVD on a player which de-interlaces and upscales the DVD, then sends to HDTV?

Not sure if this article is the place to answer such a question - but your heading "where deinterlacing is done" seemed ideal :) Greg 12:58, 22 March 2007 (UTC)[reply]

Answer:

You have two deinterlacer/upscalers (one in the DVD player and one in the HDTV). Use whichever choice gives better quality; there is a lot of variation in quality amongst deinterlacers and scalers. You probably also have the option of selecting progressive (but not upscaled) output from the player, thus using the deinterlacer in the DVD player and the upscaler in the HDTV.

216.191.144.135 (talk) 13:28, 30 May 2008 (UTC)[reply]

matched the properties of CRT screens

The second paragraph of the article says: "analog television employed this technique because it allowed for less transmission bandwidth and matched the properties of CRT screens."

The part about CRT properties is wrong. There is no property of CRT screens that mandates interlace. Millions of CRT computer monitors displaying progressive images prove it. --Xerces8 (talk) 12:32, 6 November 2010 (UTC)[reply]

In fact, CRTs do have specific properties which allow them to display interlaced video directly. Unlike current displays, CRTs have no fixed pixels but just lots of subpixels (RGB triads) which are scanned continuously by the electron beam. This allows CRTs to directly support various resolutions with no perceived quality loss or artifacts, because the electron beam and the shadow mask perform a kind of analog "filtering" and "scaling" of the video signal. Also, the phosphors in the RGB triads excite very fast, and they fall off very fast as well, allowing the fields to be perceived by the eye as two separate half-resolution frames at twice the framerate.
On the other hand, fixed-pixel displays like LCD, plasma, DLP, FED/SED, etc. feature fixed resolution and have much worse pixel response times, so you cannot directly feed an analog interlaced video signal to these displays; it first needs to be deinterlaced, scaled to match the native resolution, and frame-rate conversion needs to be performed as well. --Dmitry (talk · contribs) 18:51, 13 December 2010 (UTC)[reply]
Whilst you are correct, the phrase, "... and matched the properties of CRT screens." does imply that there is some characteristic of a CRT whereby it will operate better with an interlaced signal. This is, of course, not the case. They operate satisfactorily with either interlaced or progressive video (or even vector scan video). 86.163.86.51 (talk) 08:38, 5 June 2011 (UTC)[reply]

Handbrake, VisualHub, Wondershare

These video encoding programs seem to be able to deinterlace, detelecine, etc., almost any kind of video source. How? And should it be mentioned? (With proper citation, etc.) Apple8800 (talk) 09:24, 15 February 2011 (UTC)[reply]

Provided you don't make it read like an advert for those products, then they probably should be. I look forward to your edit. DieSwartzPunkt (talk) 14:49, 3 May 2011 (UTC)[reply]

Repeated vandalism by a pair of sockpuppets

The article is repeatedly being vandalised by a pair of IP addresses. The vandalism persistently removes the table of comparison of various deinterlacing methods.

The IP addresses that are vandalising are:

User:188.123.231.4 and

User:82.179.218.11

The edit history of the two users also suggests that they have near identical interests.

The vandalism is identical from the two users and neither user leaves an edit summary (always a reliable sign). I would have submitted a sockpuppetry report, but the system seems to have changed and I can't figure out how to do it. Can someone else oblige or at least tell me where I am going wrong? 86.184.24.140 (talk) 16:29, 16 June 2011 (UTC)[reply]