Talk:Exposing to the right

WikiProject Film (Rated Start-Class)
This article is within the scope of WikiProject Film. If you would like to participate, please visit the project page, where you can join the project and see lists of open tasks and regional and topical task forces. To use this banner, please refer to the documentation. To improve this article, please refer to the guidelines.
This article has been rated as Start-Class on the project's quality scale.
This article is supported by the Filmmaking task force.

ETTL?

Refer to http://en.wikipedia.org/wiki/Talk:Exposing_to_the_left — Preceding unsigned comment added by 2.30.163.217 (talk) 22:59, 29 August 2013 (UTC)

Criticism

The premise behind ETTR is the fallacy that each bit records a different zone (f-stop) of light, thus meaning the last bit of a 14-bit raw file records massive detail all in the highlights. This is of course an absolute absurdity to anyone who has even a basic understanding of how semiconductors and digitization work, but this article makes no mention of it. — Preceding unsigned comment added by 68.150.59.159 (talk) 01:49, 15 August 2011 (UTC)

You are wrong, or perhaps you have simply misunderstood. I'm trying to make the article clearer. Please let me know what you think. Regards, nagualdesign (talk) 00:19, 26 November 2011 (UTC)

The common understanding of ETTR (...highest (brightest) stop uses fully half of the discrete tonal values...) is just plain wrong and utter nonsense! Truth is: When you "shift the histogram to the right", you basically increase exposure time. The longer the exposure, the less noise; it's that simple. Sensors catch light, a basic physical principle and nothing else; they have no distribution of complex tonal values or such. — Preceding unsigned comment added by Ahinterl (talkcontribs) 09:25, 8 March 2013 (UTC)
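
A minimal numeric sketch of the "longer exposure, less noise" point (assuming idealised Poisson-distributed photon arrivals; the photon counts are made up purely for illustration):

# Photon shot noise follows Poisson statistics: for a mean of N photons,
# the noise (standard deviation) is sqrt(N), so SNR = N / sqrt(N) = sqrt(N).
import math

for photons in (100, 200, 400, 800):      # doubling the exposure each step (+1 EV)
    snr = photons / math.sqrt(photons)    # = sqrt(photons)
    print(f"{photons:4d} photons  ->  SNR = {snr:5.1f}")

# Each doubling of exposure improves SNR by a factor of sqrt(2), which is the
# physical basis of "more exposure, less (relative) noise".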

Histograms

Eagle-eyed experts may have spotted that the image histograms are non-linear, whereas ETTR deals entirely with in-camera, linear histograms that are subsequently processed in a RAW converter (also linear) before being exported to Photoshop (other non-linear editors are available). A more accurate image of an ETTR histogram would look almost exactly like the 'normal' histogram but shifted to the right, without being stretched out. The same mistake is made on the Luminous Landscape website, which I reproduced without thinking. Would anybody like me to change the image to avoid confusion? Conversely, does anybody think that changing the image would be more confusing? ...I realize that I may be talking to myself here. nagualdesign (talk) 00:19, 26 November 2011 (UTC)

Apologies for my impatience but I decided to swap the Photoshop-style, non-linear histograms for Canon EOS-style, linear histograms. It's more technically correct, even though it does not look like the ones on the Luminous Landscape website. nagualdesign (talk) 04:34, 26 November 2011 (UTC)
I'm not sure about those histograms divided into five sections (which I presume relate to the nominal 5-stop range of the camera discussed). Surely the point of ETTR is that those stop dividers should be drawn at 16, 32, 64, 128, illustrating that each higher stop uses twice the number space of its predecessor? GideonReade (talk) 15:26, 1 January 2012 (UTC)
The 5-segment histograms are copied off my EOS 600D. I also presumed that each line represented +/-1EV, and a quick test confirms this, yet the camera can take photos with more than 5 stops of range. Go figure. (Something to do with the histogram being based on a JPEG, rather than the actual raw data.) The only bit of BS is the gradients underneath, which are non-linear and inaccurate. They don't really belong there, strictly speaking, but they give a quick idea of how to read the histograms (that they run from dark to light). nagualdesign (talk) 09:24, 2 January 2012 (UTC)
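
For reference, a small sketch of why five in-camera segments can each span roughly 1 EV even though the raw data is linear (assuming the histogram is built from sRGB-encoded JPEG values; this is a simplification of what any particular camera's firmware actually does):

# Where one-stop steps land on a linear axis versus an sRGB-encoded (JPEG-style) axis.
def srgb_encode(linear):
    """Standard sRGB transfer curve."""
    return 12.92 * linear if linear <= 0.0031308 else 1.055 * linear ** (1 / 2.4) - 0.055

for stops_down in range(6):
    linear = 0.5 ** stops_down                  # 1, 1/2, 1/4, ... of full scale
    print(f"{stops_down} stops below saturation: "
          f"linear 8-bit value {round(linear * 255):3d}, "
          f"sRGB-encoded value {round(srgb_encode(linear) * 255):3d}")

# Linear values halve each stop (255, 128, 64, 32, 16, 8), crowding the shadows to the left;
# the sRGB-encoded values (255, 188, 137, 99, 71, 49) are spread far more evenly, which is
# roughly why a JPEG-based histogram can mark its EV segments at more or less even intervals.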

Misconceptions

I have amended the article as the reasons given for the disputed validity of this technique misrepresent the concept. I expect that this is due to misconceptions rather than a blatant straw man argument, so I'll attempt to explain it here and, hopefully, a better way of describing the technique may result, so that others do not have the same misconceptions.

1) "The basic principle, that the lighter tones in an image contain more information than the darker tones is not consistent with the physical realities of image sensors and their operation." Of course any tone may be considered to be a single piece of information - the tonal value (a number). What ETTR actually says is that the range of tonal values describing the lightest tones (the first 'stop') is larger than the range of tonal values that describe the darkest tones.

2) "A connection is made between the fact that a difference of 1 stop represents a doubling or halving of exposure, while the assumption is made that the same principle is applies to the image sensor and the way it records information." The sensor does not record information, it collects light. More properly, it converts light into electric charge. Double the light exposure and you double the charge (up to a limit). This charge is then digitized (converted to a number), such that a doubling of the number represents a doubling of the light exposure. In reality the accumulation of charge and the digitizing process may not precisely follow this rule due to physical limitations and the methods employed to mitigate these limitations, but the basic principle is valid. After digitization these values may be gamma corrected and then recorded (as in a JPEG), at which point we have gone beyond the scope of ETTR (which deals purely with RAW data).

3) "The image sensor will, however, (depending on its design and features) use between 8 and 16 bits of information to record the brightness of a given pixel in the image. If we assume an 8-bit sensor, and for the time being ignore the color of the image, the available brightness values are 0-255. Fully black will be recorded as 0000 0000(binary) = 0(decimal), white will be recorded as 1111 1111 (binary) = 255(decimal). In both cases we see that the image sensor uses 8 bits to record the information, irregardless of the actual value being recorded." As noted above a single tonal value (be it black, white or any shade in between) is indeed represented by a single number, however mid-gray (in this regime) would be represented not as the mid-value, 128(decimal), but by a much smaller number. This would leave many more tonal values available to describe the upper 'half' of the sensor's range of sensitivity as the lower 'half'.

4) "In ETTR, it is assumed that the part of the image with the highest light intensity uses half of the available recording values.." No it isn't. No 'part of the image' uses half of the available recording values. It's the range of tones which describe the highest stop of light sensitivity that use half of the available tonal values.

5) Inserting phrases such as "are said to", "it is assumed", etc. when describing the proposed technique, and making assertions such as "is not consistent with the physical realities.." when rebutting the technique smacks of POV.
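
To put some illustrative numbers on points 1 to 4, here is a minimal sketch, assuming an idealised noise-free sensor with a 14-bit linear ADC, an 8-bit linear encoding for the middle-grey example, and standard sRGB gamma for the JPEG stage; none of the figures come from any particular camera.

# Minimal numeric sketch: per-stop level counts, linear capture versus gamma-encoded
# JPEG values, and where middle grey lands on a linear scale. Idealised, noise-free.

FULL_SCALE = 2 ** 14            # 16384 discrete values in an idealised 14-bit linear raw

# Points 1 and 4: each stop down from saturation has half as many discrete values available.
for stop in range(5):
    upper = FULL_SCALE >> stop
    lower = FULL_SCALE >> (stop + 1)
    print(f"stop {stop} below saturation: {upper - lower:5d} raw levels ({lower}..{upper - 1})")

# Point 2: digitisation is linear, so doubling the exposure doubles the raw value,
# whereas the gamma-encoded JPEG value does not double.
def srgb_encode(linear):
    """Standard sRGB transfer curve (applied only after the raw stage)."""
    return 12.92 * linear if linear <= 0.0031308 else 1.055 * linear ** (1 / 2.4) - 0.055

for rel_exposure in (0.125, 0.25, 0.5):          # each step is +1 EV
    raw = round(rel_exposure * (FULL_SCALE - 1))
    jpeg = round(srgb_encode(rel_exposure) * 255)
    print(f"relative exposure {rel_exposure:5.3f} -> raw {raw:5d}, 8-bit JPEG {jpeg:3d}")

# Point 3: in a linear 8-bit encoding, middle grey (~18% of full scale) sits near 46,
# not 128, leaving far more values above it than below it.
middle_grey = round(0.18 * 255)
print(f"linear 8-bit middle grey ≈ {middle_grey}: "
      f"{255 - middle_grey} levels above, {middle_grey} below")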

Please comment below before making changes to undermine the article. Regards, nagualdesign (talk) 00:04, 6 March 2012 (UTC)

I would suggest that the x-axis of the histogram be labeled. My initial reaction to those histograms was that they didn't correctly show the scaling of the response due to the doubling of the exposure. I am coming from a sensor/FPA POV, where I would consider the photon-to-electron-to-'digital unit' (A/D output) chain to be the 'linear' response, so I would consider this histogram to be non-linear, since each vertical bar seems to represent a one-stop change (a doubling).

Considered from this POV, it seems clear that increasing the exposure or opening the aperture will increase the intensity resolution in 'digital units' throughout the intensity range while leaving much (but not all) of the noise constant thereby resulting in a better S/N ratio.

In color photography this process can have a negative effect on color fidelity and even contrast as the intensity increases, due to non-linearities in the FPA. There is, for any FPA, a 'sweet spot' in the output range that minimizes this latter effect. If one were to do ETTR with this in mind, the necessarily smaller difference between a non-ETTR exposure and this sweet-spot ETTR exposure (as opposed to a full-on ETTR exposure) would limit the benefit to S/N.

In monochrome FPAs with high linearity throughout the range, there is great benefit from an ETTR approach. Keep in mind that HDRI essentially uses an ETTR-like approach to capture low lights by allowing highlights to blow out, so to the extent that HDRI 'works' to capture low-light areas with a higher S/N, ETTR should be expected to 'work' as well. — Preceding unsigned comment added by Softboyled (talkcontribs) 20:16, 6 September 2012 (UTC)
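
A rough sketch of the S/N point above, using a simple noise model (shot noise plus a constant read noise, added in quadrature; the read-noise figure is made up for illustration):

import math

READ_NOISE = 5.0   # electrons RMS; a made-up but plausible figure

def snr(signal_electrons):
    """Shot noise (sqrt of the signal) plus constant read noise, added in quadrature."""
    return signal_electrons / math.sqrt(signal_electrons + READ_NOISE ** 2)

for signal in (25, 50, 400, 800):       # pairs of exposures one stop apart
    print(f"signal {signal:4d} e-  ->  SNR {snr(signal):5.1f}")

# Deep shadows (25 vs 50 e-): SNR improves from ~3.5 to ~5.8 (read noise dominates).
# Bright areas (400 vs 800 e-): SNR improves from ~19.4 to ~27.9 (shot-noise limited).
# More exposure helps everywhere, and proportionally most where read noise matters.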

Reply

I edited the article to clarify that this concept is in fact disputed.

Frankly, I believe that [1] proves that the concept is invalid, and thus the article should simply and briefly state that ETTR is a flawed concept. Out of respect for the time you have put into the article, I chose to merge the dispute into the text, to add balance. The phrases "are said to" etc. were added to clarify that the research we have shows that ETTR does not work, although "it is said" that it works.

However, until research surfaces to disprove [2], I strongly believe this encyclopedic article should reflect the fact that the concept is proven wrong, although it still has some following.

Best regards, Thoger — Preceding unsigned comment added by 84.48.118.251 (talk) 08:59, 7 March 2012 (UTC)

Thanks for the reply, Thoger. Unfortunately the author of Why "Expose to the Right" is just plain wrong also misrepresents ETTR in their blog. The tests that they performed were of little practical value as they used ISO gain to expose to the right. As others have pointed out in the comments section of that blog, only 2 of the examples laid out are even valid. And setting out to disprove something that the author already 'knows' to be wrong is not exactly scientific.
For a better-reasoned analysis of the technique you may wish to read Noise, Dynamic Range and Bit Depth in Digital SLRs, the first of the external links, published by the University of Chicago. Its author has tested the technique with some rigor and shows that it does indeed work, albeit for slightly different reasons than are often assumed. One thing it does not seem to deal with is selectively adding 'fill light' in post-production (a common technique), which a simple test confirms does indeed lead to posterization. However, this simple test would be considered Original Research and is therefore inadmissible. If I could find a reputable source that has published such tests I would add it to the refs.
It is certainly a fact that ETTR is disputed (there are many people online who get frustrated even discussing the concept), hence the need to have 'both sides of the argument' represented in the article. But it is far from factual to claim that its detractors are correct in their assertions.
If you'd like to reproduce the experiment yourself, simply photograph a range of dark tones, like the shadow under your settee (a smooth gradient is best), using a normal exposure, then take another with double the Time Value. Next take the second photograph and stop it down in ACR (other RAW converters are available), or better yet, take the first photograph and stop it up. Now open both files in Photoshop and count how many 'steps' there are in each shadow. You should see greater (or should I say worse) posterization in the 'normal' photograph. You can use a drastic curves or levels adjustment to see the banding more clearly, but do remember to treat both photographs in an identical manner and make sure that you're working in 16-bit mode. (Convert them to 8-bit and they will look nearly identical.)
Regards, nagualdesign (talk) 15:02, 7 March 2012 (UTC)
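
For anyone who would rather not dig out a camera, the shadow test above can be roughly simulated. A minimal sketch, assuming an idealised noise-free 14-bit linear capture and a made-up smooth shadow gradient; real files will behave less cleanly:

import numpy as np

# A smooth shadow gradient, expressed as a fraction of full scale (deep shadow territory).
gradient = np.linspace(0.002, 0.02, 10000)

RAW_BITS = 14                    # idealised linear capture
raw_scale = 2 ** RAW_BITS - 1

# 'Normal' exposure: quantise the gradient as-is.
normal = np.round(gradient * raw_scale)

# ETTR: expose one stop brighter (x2), quantise, then halve in the raw converter.
ettr = np.round(gradient * 2 * raw_scale) / 2

print("distinct levels, normal exposure:", len(np.unique(normal)))
print("distinct levels, ETTR then pulled down:", len(np.unique(ettr)))

# The ETTR version retains roughly twice as many distinct steps across the same shadow
# range, which is the posterization difference the test above is meant to show.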