Talk:High color


Useful demonstration images[edit]

For anyone who wishes to enhance the article visually, there's a series of images I've uploaded to the Commons:

The thing is, I don't know how to incorporate them into the article, because 1) I don't know how to fit the images, even as thumbnails, without wreaking havoc on the page layout, and 2) when presented as thumbnails, the scaling resamples the images, rendering them useless for the colour-depth demonstration unless the reader follows the links to the full-sized images. --Shlomi Tal 20:31, 24 June 2006 (UTC)

COLOR QUESTION[edit]

Hey, I realize that this is not exactly the forum for this question, but I'm looking for a relatively obscure answer that no one has been able to help me with so far. I'm trying to make a stimulus for a psych experiment using CIE space, so I made it in Photoshop with LAB color (a CIE space), but I can't save it as anything but a TIFF, which most other applications won't load. I can save it as an 8-bit LAB color TIFF, which will sort of load, or a 16-bit RGB color .PNG file, which will load. The problem is that the two look very different, and I don't know which is closer to the true 16-bit LAB color. Any ideas? Thanks, and sorry for posting a somewhat irrelevant question here.

Colour component packing[edit]

Does anyone know how the 8-bit colour values are packed into the 5/6 bits? The simplest method seems to be to just remove the 3/2 LSBs. --Dean Earley 12:54, 2 November 2006 (UTC)

Dean:

Our amps go to 11. 210.84.60.22 10:46, 3 February 2007 (UTC)

It probably depended what company / individual was making the hardware and/or software in question and how much they had to invest (financially, temporally, personnel-wise and in terms of marketing risk), and even how much they cared.
As you say, the simplest though maybe not most accurate way would be to just truncate the data to its 5 or 6 MSBs, which could cause various kinds of colour distortion on re-display. By that point, however, it probably wouldn't have been that computationally costly to take the input data - as 24-bit with values from 0 to 255, or 30-bit with 0 to 1023, or whatever - scale it to the output range (0 to 63, or 0 to 31; note this is NOT the same as scaling 256/1024 down to 64/32), and round it off using whatever convention ended up giving the best-looking results. I.e. in the case of reducing 24-bit down to 15-bit (or the R/B channels of 16-bit), multiply each channel value by 31/255 (roughly x0.1216), round to the nearest integer, and take the resulting 5-bit value as your output.
This matters because, as the multiplier is 0.1216 rather than 0.125, there is no straight mapping of a block of 8 input levels to a single output level (if you want to avoid clipping and/or loss of overall contrast, anyway); if you graph it out you'll see the block size wavers between 8 and 9. Because there are defined single values for "min" and "max", the effective loss of resolution when dropping a bit is slightly more than half, and the intermediate "grey" shades no longer exactly match up.
Heck, if it's something you're going to do often, as a particular feature of the program or chipset, you could always cheat and take the Yamaha OPL way out. The lowest frequency their digital synthesis chips can produce a full-temporal-resolution sine wave at is 43Hz (at CD sampling frequency; 47Hz for DAT/DVD), because the chip doesn't generate it mathematically or with analogue circuitry - it's just a quarter-wave, 256-sample lookup table in ROM, read forwards then backwards in ping-pong fashion, with the sign bit automatically inverted after every half-cycle to recreate a full 1024-sample wave. In the same way, you could have a simple precalculated lookup table as a couple of 256x8 ROMs (or a couple of 256-byte entries in RAM or the program's resource files) mapping 8-bit input to either 5- or 6-bit output, as the answer will never be different anyway, regardless of which way you do it. The system load is then vastly reduced, to a simple memory access that uses the channel intensity byte itself (or a larger figure, if it's 30-bit colour and we use 1024-entry ROMs/resource files) as the memory address. Whatever value is read out of the memory on the next cycle is the precalculated result... 193.63.174.211 (talk) 15:22, 5 June 2014 (UTC)
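The lookup-table idea above can be sketched in software terms: precompute all 256 possible 8-bit to 5-bit conversions once, so runtime conversion is a single table read per channel (a minimal illustration, assuming the scale-and-round mapping; names are my own):

```python
# Precomputed table: each 8-bit value (0-255) maps to its 5-bit equivalent.
SCALE_8_TO_5 = bytes((v * 31 + 127) // 255 for v in range(256))

def convert_pixel(r, g, b):
    """Pack 8-bit-per-channel RGB into a 15-bit 5-5-5 word via table reads."""
    return (SCALE_8_TO_5[r] << 10) | (SCALE_8_TO_5[g] << 5) | SCALE_8_TO_5[b]

print(hex(convert_pixel(255, 255, 255)))  # 0x7fff (white)
print(hex(convert_pixel(0, 0, 0)))        # 0x0 (black)
```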

Graphic??[edit]

What is the point of the graphic?

What relevance does the caption have to this article? ("Human eyes are more sensitive to green light. The greens are easier to see than the reds, and the blues are almost impossible to see.")

Assuming some relevance, what does "the blues are almost impossible to see" mean? I can see the blue in the graphic fine.

Or is all this simply "plausible vandalism"? —Preceding unsigned comment added by 65.192.31.130 (talk) 19:06, August 28, 2007 (UTC)

It's somebody being confused. For color perception, blue is most important. For brightness perception, green is most important. We perceive details best with a yellow-green color. Thus a better way to encode at 16 bits per pixel would be to pack 32 bits with 2 red pixels at 6 bits each, 2 green pixels at 6 bits each, and 1 wide blue pixel with 8 bits. The subsampling commonly used for JPEG and video takes advantage of this in a more complicated way. 24.110.145.32 21:19, 16 September 2007 (UTC)
I believe you're confused as well... typical video/photo encoding tends to prioritise monochrome (luma) and also green, with both red and blue suffering reduced resolution in both dimensions (though particularly the horizontal one - which, coincidentally, always used to be the problem with getting both high resolution and high colour out of old, slow computers, and is why most "raster" effects are horizontal rather than vertical).
The luma channel gets most of the bandwidth, and then there's a couple of lower-bandwidth channels encoding either red and blue "difference" signals, or the inexactly-named "red/green" and "yellow/blue" difference channels, based on the way analogue video signals use phase encoding for colour content. Usually this doesn't affect the channel bit depth, just the spatial clarity.
The case instead in 16-bit hi-color - and what the graphic is intended to show, but doesn't do very well as it has, ironically, too high a colour depth - is that the eye is most sensitive to greenish frequencies, and so can detect changes in intensity of green colours more precisely than for red and especially blue... which is why green generally seems a "brighter" colour than red, and blue a "darker" one, especially at the same actual intensity. So if your display hardware shows fewer intensity steps than the eye is capable of discerning, a viewer will more easily be able to pick out the "edge" between each individual band of colour in a gradient in the green channel than the blue, if they all have an equal number of steps.
Therefore, a common method of slightly improving overall display quality, in systems that didn't use the 16th bit for genlocking or all-or-nothing alpha, was to use it to double the bit depth of the green channel. This helped to reduce (but certainly not eradicate) visible banding of photographic material in hi-color mode, and could produce results almost indistinguishable from 24-bit true-color when using dithering on high-resolution displays. An unfortunate side effect was that it could impart a sort of colour fringing, because of the mismatch between the number of available steps on each channel: even though the total range is the same, fully-off and fully-on don't count for much when trying to align levels, and there were in fact 62 intermediate shades of green to 30 each for red and blue, numbers which don't divide exactly into each other. Relatively few pure grey shades were available, with most tinged with variously noticeable amounts of green or magenta. These were even more obvious in shallow greyscale or low-saturation gradients, because the green channel was "stepping" roughly twice as often as the others and so flip-flopping between hues as well as luminance levels. It still had an effect when showing more even-toned 15-bit (and indeed 12-, 9- or 6-bit) paletted images, as each source colour still had to be aliased to one within the 16-bit space, where there wasn't always a direct equivalent available.
Still, as a relatively minor annoyance that came along with a much improved colour depth for the most important channel, and reduced banding overall even if what remained was a bit strange and still rather noticeable, it wasn't ever really seen as a major problem.
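The 5-6-5 layout being discussed - the spare 16th bit doubling the green channel's depth - can be sketched as follows (a minimal illustration, not from the discussion; names are my own):

```python
def pack_565(r5, g6, b5):
    """Pack 5-bit red, 6-bit green and 5-bit blue into one 16-bit word."""
    return (r5 << 11) | (g6 << 5) | b5

def unpack_565(word):
    """Recover the (r, g, b) channel values from a 16-bit 5-6-5 word."""
    return (word >> 11) & 0x1F, (word >> 5) & 0x3F, word & 0x1F

# Green has 64 levels (62 intermediate) against 32 (30 intermediate) for
# red and blue - the source of the grey-balance issue described above.
print(hex(pack_565(31, 63, 31)))  # 0xffff (white)
```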
There were also variations on this used with other colour depths - for example, 256-colour direct-addressing modes using a 3-3-2 scheme (instead of adding a bit to green to take it from 15 to 16 bits, one is removed from blue to drop from 9 to 8 bits), yielding 8 levels each for red and green and just 4 for blue (with only two, far-from-pure intermediate grey shades available), where the decision wasn't instead taken to use an HSV-type system. It's also possible to set up an 8-bit indexed-colour palette with something akin to direct colour (with appropriate hardware to deconvolute the index numbers through base conversion), using 6 *levels* each for red and blue and 7 for green to give 252 colours, plus 4 spare entries that could e.g. be used to restore the intermediate greys that would otherwise be missing (an efficient extension of the typical evenly-spaced "web safe" 6x6x6-level, 216-colour cube, which leaves 40 entries free for use by the OS, browser software, etc.). The otherwise evenly-spaced 6-bit palette can be tweaked in the same way, with 5 levels of green (improved), 4 of red (no change) and 3 of blue (reduced - just fully-on, half-bright, and off/black) for 60 colour indices plus four intermediate greys (vs the mere 2 which would otherwise be available, thus giving 6 levels of luminance and prioritising it even over green). All of these improve the balance of data dedicated to the various colour channels and tune it more closely to the human visual system, improving the results available from otherwise limited hardware resources.
(...I've actually made the 5/4/3(+6), R/G/B(+I) palette as a custom one in Paintshop Pro and applied it to random photographs to judge the effect, with and without dithering, and it worked far better than it had any right to... certainly better than plain 4/4/4(+4) and closer to a fully customised 64-entry palette... The 7/6/6(+6) one worked quite nicely too, particularly vs regular 6/6/6 web-safe. You still get very obvious colour banding with these, when not dithering, but there's still less of it, and each block of flat colour appears a bit truer in tone to what it's attempting to represent, than there otherwise would have been.)
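Building the 6x7x6-level, 252-colour palette described above can be sketched like this (a hypothetical illustration; the four grey values chosen for the spare slots are my own assumption, not from the original experiment):

```python
def level(index, count):
    """Spread `count` levels evenly across the 8-bit 0-255 range."""
    return round(index * 255 / (count - 1))

# 6 red levels x 7 green levels x 6 blue levels = 252 entries.
palette = [(level(r, 6), level(g, 7), level(b, 6))
           for r in range(6) for g in range(7) for b in range(6)]

# Four spare slots filled with intermediate greys missing from the cube
# (these particular grey values are illustrative).
palette += [(v, v, v) for v in (32, 96, 160, 224)]

print(len(palette))  # 256
```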
The mixed bit-depth and colour-resolution reduction scheme you propose is quite interesting, incidentally... but I would propose an alteration. 7 bits for green, 6 for red, and 6 for the shared blue... the overall effect being one of approximately 19 bits (so a hue accuracy at least as good as 18-bit, with a luminance accuracy closer to 21-bit), within a 16-bit-per-pixel wrapper. Green is then saved from having no greater depth than in 16-bit mode, and being effectively as good as 24-bit with some very simple dithering, whilst red is as good as it would be in a native 18-bit mode, and blue's depth is reduced from a somewhat OTT (in this context) 256 levels (equal to 24-bit mode!) to a probably-still-indistinguishable 64 (same as red).
Whether the smearing would end up being too obvious, and if it would be better to have it sampled as nearest-neighbour or instead smoothed out between adjoining pixels would be a matter for experimentation. It'd probably have worked better on CRTs than modern LCDs, mind... and better for videographic material than synthetic images on a computer screen. Monochrome text, screen elements and line drawings, after all, would end up exhibiting rather annoying blue and yellow colour fringing (the sort of thing many people upgrade to S-Video or indeed RGB connections to get away from, with composite) ... Still, if it was maybe a latterday attempt at a "HAM-16" mode (complementing the HAM-6 and HAM-8 of the old Amigas), it could have been useful. — Preceding unsigned comment added by 193.63.174.211 (talk) 14:42, 5 June 2014 (UTC)
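The pixel-pair scheme proposed above - 7 bits of green and 6 of red per pixel, with one 6-bit blue value shared between two adjacent pixels, 32 bits per pair - could be packed like this (a sketch only; the exact bit layout is my own assumption):

```python
def pack_pair(g0, r0, g1, r1, b_shared):
    """Pack two pixels (7-bit green, 6-bit red each) plus a shared 6-bit blue
    into one 32-bit word: (7+6) + (7+6) + 6 = 32 bits for the pair."""
    return (g0 << 25) | (r0 << 19) | (g1 << 12) | (r1 << 6) | b_shared

word = pack_pair(127, 63, 127, 63, 63)  # both pixels at full intensity
print(hex(word))  # 0xffffffff
```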

You Are Right[edit]

I can see the blue in the graphic fine too. What is written in this article is incorrect: the blues are easy to see. That's why you're right. — Preceding unsigned comment added by Orhosh (talkcontribs) 17:29, 13 June 2013 (UTC)

Self-reference[edit]

Wikipedia:Avoid self-reference states that articles should avoid assuming that the article is being read on a screen, as this one does. I suggest being more vague and saying something like "this demonstration may not work if the colors in the image have not been correctly preserved". Dcoetzee 22:58, 9 January 2008 (UTC)

Important Note[edit]

Some people's eyes are more sensitive to blue light than average, so for them it is not next to impossible to see blue light. In fact, I think that's true for almost everyone. — Preceding unsigned comment added by Orhosh (talkcontribs) 16:59, 13 June 2013 (UTC)

RGBAX?[edit]

What the heck is this? I know RGB, and RGBA, and both of those are covered in the article behind the RGBAX hyperlink ... but not RGBAX itself. What's the X for? 193.63.174.211 (talk) 13:48, 5 June 2014 (UTC)

Came across the term in the RGBA discussion; it seems the described format either doesn't exist or is used in a very limited context. All I found so far was a Java implementation[1] of the type, but even there it just labels the X as unused (making it all the more confusing). The original sources were apparently all circularly referencing each other via backups of older page versions, and originated from the BMP file format page. Rainforce15 (talk) 14:26, 12 June 2014 (UTC)