LCD monitors for computers do NOT simply upscale a "VGA signal".
The passage I deleted in this article gave an example of upscaling as a standard LCD computer monitor upscaling a 640x480 VGA signal to a much larger resolution. This is preposterous... computers haven't output standard VGA for over a decade, perhaps two, except for the POST sequence. LCD monitors actually ADJUST to the output of modern video cards, which output a far greater resolution than 640x480. I suppose it's assumptions like this (all PCs output VGA, it's just "upscaled") that make the youth of today actually believe that true "upscaling" is even possible. Raster images are simply not scalable upward, so all that's taking place is pixel doubling.
Another example was given about the image being processed and filtered to "retain original detail", which is completely fallacious. Most of this article does seem to acknowledge that NO lost detail can be regained by "upscaling", but that particular example was completely misleading at best. Image processing can only attempt to smooth out the effects of pixel doubling, but it CANNOT reconstruct original detail. Image processing algorithms, no matter how advanced, can't tell a house from a tree, and have no way of "knowing" what detail was there to begin with. The ONLY way to see 1080 lines of detail is to view a source containing 1080 lines of resolution. Upscaling a 480-line source (standard DVD, for example) to fit a 1080-line display will only give you 480 lines that are simply LARGER than they were originally. Adding "image processing" will only BLUR those lines, not add any extra detail or improve image quality.
When I originally contributed to this entry (both content and images), I was working at a home theater video-processor company by the name of Anchor Bay Technologies (makers of DVDO, now part of the Silicon Image unit of Lattice Semiconductor). This was my first and only contribution to Wikipedia, and was the result of a ton of customers calling in to the support line with either no clue or horrible information. As a rebuttal to your comment about an LCD scaling a VESA-standard 640x480 VGA signal to whatever resolution the LCD glass is: that is exactly what they do. Your comment that "LCD monitors actually adjust to the output of modern video cards" is actually 100% wrong (and is similar to one of the reasons I had to write this article in the first place). Displays, including CRTs, have what is called an EDID ROM, which communicates over the DDC channel of the video interface (analog RGB, DVI, HDMI, DisplayPort). This ROM contains a VESA-standard table of the display's capabilities. The video card (or really any video source that uses the standard interfaces) then reads the table and outputs either what the display says is its preferred resolution, or whatever it can produce that is in the table of capabilities. I am personally familiar with the VESA, DVI, HDMI, and DisplayPort standards; however, they are covered under strict NDAs, so copying technical data verbatim is not allowed, and at the time I was having trouble presenting 100% accurate information without violating those NDAs. I also can't go into the technical magic that actually did the scaling for various product lines, although I can point to patents and coach another writer on what certain things mean.
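For readers unfamiliar with the EDID handshake described above, here is a rough sketch of how a source device discovers a display's preferred resolution. This is a hypothetical illustration (the function name and sample bytes are invented, and real sources read the ROM over DDC/I2C rather than from a byte string); the offsets follow the publicly documented VESA E-EDID base-block layout:

```python
# Hedged sketch: parsing the preferred video mode out of a display's
# 128-byte EDID base block. In a real system these bytes arrive over
# the DDC (I2C) channel of the video connector; here they are just a
# synthetic byte string for illustration.

def preferred_mode(edid: bytes):
    """Parse the first detailed timing descriptor (offset 54) of an
    EDID base block and return (width, height) in active pixels.
    Per the E-EDID layout, the first descriptor is the display's
    preferred timing."""
    # Every EDID base block starts with this fixed 8-byte header.
    assert len(edid) >= 128, "EDID base block is 128 bytes"
    assert edid[0:8] == bytes.fromhex("00FFFFFFFFFFFF00"), "bad EDID header"
    d = edid[54:72]  # first 18-byte detailed timing descriptor
    # Low 8 bits and high 4 bits of the active-pixel counts are split
    # across separate bytes in the descriptor.
    h_active = d[2] | ((d[4] & 0xF0) << 4)   # horizontal active pixels
    v_active = d[5] | ((d[7] & 0xF0) << 4)   # vertical active lines
    return h_active, v_active
```

A source that can't produce the preferred mode falls back to another entry in the capability table, which is why a modern card never blindly sends 640x480 to a panel that advertises 1920x1080.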
My intention was to go back and fill in more details on how video scaling was actually done. There are various mathematical processes that occur and the order is very important, and I can describe those various elements using some public domain information.
The first of two interpretations of "keep the original content" that I had not developed well in the article is that, at face value, the video in = the video out, and the intention is to reduce or eliminate scaling-induced artifacts. Yes, you discovered the old adage "garbage in, garbage out", but each signal processing step is not perfect; it can be ideal under given circumstances, but rarely ever perfect, and that was where I wanted to go with that portion of the article.

As an example, the simplest form of a true scaler is merely a sample rate converter: a given number of samples ("pixels on a line") needs to be converted to a different number of samples on a line in the same period of time. For example, converting a 640-pixel-wide line to a 1920-pixel-wide line in the same frame period means that you are increasing the pixel sample frequency for that line (because we aren't generating any new content; we are limited by the content the source provides). Take a look at the Wikipedia article on sample rate conversion and it will get into why certain types of conversion are preferred over others (though I do note a circular reference to this article for the Apollo Mission under Film/TV).

The problem is that any signal processing you do adds error to the signal, and error appears as visual noise. It could manifest as a "ring", where the scaler overshoots the target pixel intensity, then undershoots, then overshoots (etc.) as it homes in on the correct value, all the while outputting pixels (because time doesn't stand still). This can look like a ripple around an edge if not handled properly, and will make the edge look softer to the viewer (even if they can't articulate why). By changing the type of processing that is done around certain danger areas, one can mitigate the effects that are likely to happen around them, thereby "retaining the original detail". See, it's not dishonest; the reader just needs more information.
Merely doubling pixels will not get you a properly scaled image all the time: 1920/1280 = 1.5, which is not quite double. Assuming that a scaler only ever doubles pixels or makes simple mathematical averages to produce new pixels (and yes, some very bad scalers do) is a disservice to the readers.
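The sample-rate-conversion view of scaling described above can be sketched in a few lines. This is a deliberately minimal illustration using linear interpolation (real scalers use longer multi-tap filter kernels, which is exactly where the ringing/overshoot trade-offs mentioned earlier come in), but it shows why non-integer ratios like 1280 to 1920 pose no special problem to a true scaler while defeating naive pixel doubling:

```python
# Illustrative sketch only (not any vendor's algorithm): horizontal
# scaling treated as sample rate conversion of one scanline.

def resample_line(line, out_width):
    """Resample one scanline of pixel intensities to out_width samples
    using linear interpolation between the two nearest input pixels.
    Works for any ratio, including non-integer ones such as x1.5."""
    in_width = len(line)
    out = []
    for i in range(out_width):
        # Map the output sample position back into input coordinates.
        x = i * (in_width - 1) / (out_width - 1)
        x0 = int(x)                       # nearest input pixel to the left
        x1 = min(x0 + 1, in_width - 1)    # nearest input pixel to the right
        frac = x - x0                     # fractional position between them
        out.append(line[x0] * (1 - frac) + line[x1] * frac)
    return out
```

Note that no new detail appears: every output pixel is a weighted mix of existing input pixels, which is the "limited by the content the source provides" point made above.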
As for further input to your concern that I was less than honest that original content could be preserved: there are actually several cases where this is true in a video processor. An interlaced video signal that is converted into a progressive video signal can have detectable artifacts that identify which lines were not part of the original signal. This, coupled with the cadence of the video signal, can give an algorithm confidence that certain lines are not original, and they can be removed (restoring the original interlaced video signal). For the company I was working for at the time, this technology was in fact productized and marketed under the trade name "PReP", which stood for Progressive Re-Processing (take a poorly de-interlaced progressive signal, strip off the bad lines that were not original content, thereby returning it to the original interlaced signal, then re-process the interlaced signal with a de-interlacer).
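To make the "detectable artifacts" idea concrete, here is a toy sketch of one such tell. This is my own simplified illustration, not Anchor Bay's actual (patented) PReP method: if a deinterlacer fabricated the missing field by averaging neighboring lines, those fabricated lines sit suspiciously close to the mean of the lines above and below them, and can be flagged and stripped:

```python
# Hedged toy example: spotting lines in a progressive frame that were
# fabricated by simple neighbor-averaging interpolation. A real
# detector would also use the field cadence over many frames to build
# confidence, as described above.

def fabricated_lines(frame, tol=1e-6):
    """frame: list of scanlines, each a list of pixel intensities.
    Return indices of interior lines that match the average of their
    neighbors to within tol -- a telltale of interpolation."""
    hits = []
    for y in range(1, len(frame) - 1):
        avg = [(a + b) / 2 for a, b in zip(frame[y - 1], frame[y + 1])]
        if all(abs(p - q) <= tol for p, q in zip(frame[y], avg)):
            hits.append(y)
    return hits
```

Stripping the flagged lines leaves the original field, which can then be handed to a proper de-interlacer, which is the PReP workflow in miniature.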
The last rebuttal to your concern is that there are several algorithms out in the field that can look at compression artifacts and determine what caused them. This can actually be reversed to a point, with the result that you end up seeing more of the original detail than you would by viewing the same input source unprocessed. See the Teranex line, which was a spinoff of Lockheed Martin's satellite imagery software, implemented in hardware for video.
What was removed from this article that made it more clear was the section on the video processor, which was meant to call out that a video processor contains several of the various functions now listed under the "See Also" section. These functions need to be designed to work together to retain the original detail; poorly designed hardware and algorithms, or algorithms combined with incompatible algorithms, will result in larger image corruption than doing it well. I agree that an article for each of those topics is and was warranted, and I'm glad that others continued the work I started. However, the way it was broken up leaves a lot to be desired.
Frame rate conversion is different from scaling (converting from, say, 24Hz to 60Hz); however, a version of "frame rate scaling" is being done now with a technology called frame rate interpolation. It's really the same concept as scaling a line of video, applied to entire frames of video over time: an original set of frames happens at a given spacing in time, and the algorithm needs to figure out what should go between them. At that high level, frame rate conversion and scaling are the same thing. Cheers! Tim292stro (talk) 22:22, 13 October 2015 (UTC)
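The "scaling over time" analogy above can be sketched directly: the same interpolate-between-nearest-samples idea used on pixels of a scanline, applied to whole frames on a time axis. This is a minimal illustration assuming simple linear blending of the two nearest source frames (real frame rate interpolators are motion-compensated and far more sophisticated):

```python
# Hedged sketch: frame rate conversion as sample rate conversion in
# time. Each frame is represented as a flat list of pixel values.

def convert_frame_rate(frames, src_hz, dst_hz):
    """Resample a list of frames from src_hz to dst_hz by linearly
    blending the two input frames nearest to each output instant."""
    duration = (len(frames) - 1) / src_hz      # time spanned by the input
    n_out = int(duration * dst_hz) + 1         # output frames that fit in it
    out = []
    for i in range(n_out):
        t = i / dst_hz * src_hz                # position in input-frame units
        f0 = int(t)                            # nearest earlier input frame
        f1 = min(f0 + 1, len(frames) - 1)      # nearest later input frame
        w = t - f0                             # temporal blend weight
        out.append([a * (1 - w) + b * w
                    for a, b in zip(frames[f0], frames[f1])])
    return out
```

Compare this with scaling a scanline: the structure is identical, only the sample axis has changed from horizontal position to time.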
- LOL...I just looked up upconversion here yesterday. (grin) It does redirect to the section on upconverting DVDs...is upconversion generally used in other contexts? It might be confusing if the redir didn't go specifically to that section. Doniago (talk) 18:36, 23 June 2010 (UTC)
video "scaling" in place of "scaler"
Blu-ray players are DVD upscalers too
I believe every Blu-ray player ever made is also a DVD upscaler (Are there any exceptions, however obscure?), so should the article mention this? What kind of upscaler are they? Does the quality of the upscaling vary across models? — Preceding unsigned comment added by 188.8.131.52 (talk) 01:11, 26 May 2011 (UTC)
All Blu-ray players were required by the Blu-ray standard to contain a scaler; as long as it took certain input resolutions and output certain resolutions, that was as tight as the standard got. Each vendor was able to buy their own parts or make their own algorithms (if they had the technical prowess to do that). Yes, the quality difference varied. Think of it like the difference between a Yugo and a Bugatti: they are both cars, have four wheels, and run on gasoline/petrol, but that's about where the similarities stop. You get what you pay for here, and not everyone needs a Bugatti to pick up the kids... tim292stro (talk) 21:53, 13 October 2015 (UTC)
The entire article is in desperate need of sourcing, but I'm moving some of the more troublesome sections here. Please feel free to properly reference and reincorporate into the article text. Doniago (talk) 13:28, 29 July 2011 (UTC)
Upscaling/upconverting DVD players contain a scaler, which allows the user to convert lower-resolution content into a signal that the display device will handle as high-definition content. Depending on the quality of the scaling done within the upscaling/upconverting DVD player, the resultant output quality of the video displayed may or may not be improved. The idea behind upconverting DVD players is that when a DVD player is connected to an HDTV, especially one of the fixed-pixel display types such as LCD, plasma display, or DLP and LCoS projection TV, scaling happens anyway, either inside the player or inside the TV. There exist independent benchmark tests verifying that some upconverting DVD players do produce better video quality. However, under no circumstances will an upscaling/upconverting DVD player provide "high-definition content", since video information can only be retained or lost in each successive conversion step, not created.

Companies such as Denon, Pioneer Electronics, Panasonic and OPPO Digital were among the first to make upconverting DVD players. Now, almost all consumer electronics brands have this product category. Computer software DVD-Video players like PowerDVD and WinDVD tap into a computer's video card in order to upscale a video frame from the DVD content to the user's set output resolution.

A properly designed upconverting DVD player should have these key parts, all of good quality: an MPEG decoder, a deinterlacing component, and a video scaler. Among those, the deinterlacing component is an important one if you are playing back interlaced content (most DVDs aren't). If the deinterlacer assembles the video frame in an incorrect manner, then no matter how good the video scaler is, it still cannot produce the correct video. On the other hand, some upconverting DVD players use a single chip that contains the MPEG decoder, deinterlacing component and video scaler. This type of chip is often called an SoC (System-on-a-Chip).

Low-cost upconverting DVD players usually feature the SoC design.
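The processing order stressed above (decode, then deinterlace, then scale) can be sketched with trivial stand-in stages. All of the names and data shapes below are invented for illustration; each function is the crudest possible version of its stage, just to show why the deinterlacer must run before the scaler:

```python
# Hypothetical pipeline sketch. A field is a list of scanlines; a
# frame is the interleaved result. These are toy stand-ins, not real
# decoder/deinterlacer/scaler implementations.

def weave_deinterlace(top_field, bottom_field):
    """Weave two fields into one progressive frame (correct for film
    content, where both fields were captured at the same instant)."""
    frame = []
    for t, b in zip(top_field, bottom_field):
        frame.append(t)
        frame.append(b)
    return frame

def scale_vertically(frame, factor):
    """Crude nearest-line vertical scaler (stand-in for a real filter)."""
    return [frame[int(y / factor)] for y in range(int(len(frame) * factor))]

def upconvert(top_field, bottom_field, factor):
    # Deinterlace FIRST, then scale. Scaling each field separately and
    # weaving afterwards would interleave lines from the wrong spatial
    # positions -- the "incorrect frame assembly" failure described above.
    return scale_vertically(weave_deinterlace(top_field, bottom_field), factor)
```

If the deinterlacing stage pairs the fields wrongly, every later stage faithfully enlarges the mistake, which is the passage's point that no scaler can repair a badly assembled frame.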
==Display limitations==
Placing a video scaler before a limited-capability display device will not remove the limitations of that display device. For instance, you can't make a 720p display take a 1080p signal and expect to see all 1920x1080 pixels on the 1280x720 display surface; instead, the display will either scale the signal down or possibly crop it. A common misconception among consumers is that if you upscale from a 720p source to 1080p and the TV then downscales to 854x480 internally (as within some plasma displays), you end up with a better image. Since the final display surface does not contain enough pixels to display the 720p content in its entirety, there is a loss of vertical and horizontal resolution in the final displayed image. It is preferable to send the display the exact resolution it needs to output a final image; even if the output device has more pixels than the source, there will have to be blending of pixels or some sort of upscaling, and this can cause problems. Some displays may have a further problem when displaying their native resolution, however: when sent the exact native-resolution image, the display may be programmed to assume that it is receiving a signal from a PC (which causes some displays from manufacturers like Pioneer to reduce output brightness to avoid phosphor burn-in from a still image) or to crop off the overscan and zoom the rest.
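The resolution-loss argument above reduces to simple arithmetic: detail surviving a chain of scaling steps is capped by the smallest resolution anywhere in the chain, no matter how many upscales come before or after. A back-of-the-envelope sketch (the function name and chain representation are invented for illustration):

```python
# Hedged sketch: the ceiling on recoverable detail through a chain of
# scaling stages is the minimum resolution at any stage, since scaling
# can only retain or lose information, never create it.

def effective_detail(chain):
    """chain: list of (width, height) at each stage, source first.
    Returns the (width, height) cap on detail reaching the viewer."""
    return (min(w for w, h in chain), min(h for w, h in chain))
```

So in the misconception above, the 720p source upscaled to 1080p and shown on an 854x480 panel can never deliver more than 854x480 worth of detail, and the extra scaling steps only add processing error along the way.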
I do feel that the last section here was one of the most important and got dumped because of me. Here I claim responsibility for doing original research: I stated a technical fact (you will never recover 1080p worth of high-frequency detail from a 720p or lower frequency-limited input signal), then I followed it up with what I had constantly heard from consumers. That said, I think at this point, with some digging, we can find some formal market research that has been done to substantiate the claims.
The section on the Pioneer Plasma was a direct pull from the manual, unfortunately I cannot recall the specific model, although it was a Black model (under the "Kuro" line name). Tim292stro (talk) 22:22, 13 October 2015 (UTC)