Talk:Dots per inch

From Wikipedia, the free encyclopedia

This is an old revision of this page, as edited by Vbakke (talk | contribs) at 23:27, 1 June 2013. The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

DPI printout

Some of this seems very subjective. —Preceding unsigned comment added by 68.225.92.194 (talk) 00:36, 22 August 2009 (UTC)[reply]

DPI Requirements

It seems to me that a section on DPI requirements is sorely lacking in this article. Unfortunately, I don't know where to find the physiological data references to back up the claims in the following posting by Kelly Flanigan, which seems to be the most rational explanation I've found. Unfortunately, it's not a reputable enough source. —Preceding unsigned comment added by Jaxelrod (talkcontribs) 15:01, 17 October 2008 (UTC)[reply]

+1 CannibalSmith (talk) 09:22, 28 December 2009 (UTC)[reply]

British vs. American spelling

Since inches are now only an American unit* the spellings on this page should be in American English. Similarly, a page on the British monarch or Australian government would use British spellings, and a neutral article (e.g., one on molecular biology) could use either or both spellings. SteveSims 04:39, 5 January 2007 (UTC)[reply]

*Excluding a few non-English speaking countries.

I assure you, the inch is still widely used in the UK. A page on the BRITISH monarch or AUSTRALIAN government is a different matter, because it's actually about a particular country; dots per inch is universal. -OOPSIE- 14:07, 26 August 2007 (UTC)[reply]
OOPSIE, if the "inch" has no strong tie to any particular English-speaking country, then the spelling used should be the one initially used. The first version of the article used AE spelling ("Color images need...."). This should be corrected. JamesMLane t c 01:33, 14 April 2013 (UTC)[reply]
Indeed, it got flipped in 2006 in this uncommented edit. Let's put it back. Dicklyon (talk) 02:03, 14 April 2013 (UTC)[reply]

Merges

I hadn't noticed before that a separate article existed for DPI until Bobblewik helpfully pointed it out. I have arrogantly decided to simply redirect DPI to this article, wiping out the previous contents of that article. Here is why:

  • In a general sense, there was no information in that article that is not already in this one.
  • Several factoids from that article are either vague or misleading, and are more adequately explained in this article or in related articles Pixels per inch and samples per inch. Specifically:
    • DPI always refers to a physical representation of pixels per length unit. - Not true. DPI most correctly refers to printing resolution - dabs of ink on paper. The term is often used to specify what would more accurately be called pixels per inch; this distinction is explained in the current article.
    • Only when outputting this image to a physical medium with a certain size (say printing on to paper 20cm by 15cm) does the DPI get defined. - This is confusing, subtly wrong, and partially contradictory to the previously-mentioned sentence that DPI always refers to pixels.
    • The resulting DPI depend both on the resolution of the image... -- Not true. Even if DPI is used broadly, encompassing printer and monitor output, it still refers to a physical characteristic of an output device. The resolution of a particular image displayed on that output device has no bearing on that device's DPI capability. And DPI doesn't really make any sense in describing an image with such-and-such number of pixels; the "DPI" of an image only makes sense when given some number of inches. You can print a 5x5-pixel image on a 1200dpi printer; the output is 1200dpi. Printing a 1000x1000-pixel image does not change that.

Of course, the history is preserved, if anyone wants to extract any useful nuggets of information from it. -- Wapcaplet 20:57, 12 Aug 2004 (UTC)

Removal

From an editor: Draw a 1-inch black line on a sheet of paper and scan it. If the resulting image shows a black line with a width of, say, 300 pixels, then does the scanner not capture at 300 PPI? 128.83.144.239

I moved the above anon comment from the article to here. It was posted by way of explaining the sentence "A digital image captured by a scanner or digital camera has no inherent "DPI" resolution until it comes time to print the image..." I'm not sure how it helps to explain this point, however; indeed, it seems to stem from the very misunderstanding of DPI that is being explained (that DPI and PPI are not the same thing, and SPI is yet another thing entirely). -- Wapcaplet 23:42, 23 Sep 2004 (UTC)

DPI versus PPI

DPI is mostly used to tell how big a resolution an image should be printed at. Of course, this should really be "pixels per inch"! But since it has become a standard, I think it should be explained here.--Kasper Hviid 18:33, 9 Nov 2004 (UTC)

  • I am not sure what you mean; the details of what it means to print an image at a certain DPI are, I think, fairly well-explained in the article at present. DPI is all about printing; there is a separate article on the related but different term pixels per inch. In printing an image, three things influence the output quality: DPI (the physical capability of the printer), the number of pixels in the image, and the space in which it is to be printed. As far as I know, there is no term to describe "the number of pixels printed in a one-inch space on the paper," though pixels per inch is probably the most appropriate. The resolution of an image sent to the printer (that is, the number of pixels) is unrelated to DPI of the printer. Maybe this should be explained better in the article? -- Wapcaplet 00:12, 10 Nov 2004 (UTC)
  • I have always thought of DPI as "How many pixels should be printed per inch?" This is a wrong but common use of the word. As you said, there is no term to refer to "the number of pixels printed in a one-inch space on the paper", but this is probably why the word dpi has been used instead. For instance, www.lexmark.com says that "resolution is measured in dpi (dots per inch) which is the number of pixels a device can fit into an inch of space." And www.olympus-europa.com says that a 640 x 480 pixels image at 150 dpi will end up as 10.84 x 8.13 cm in the print. Since this has become a commonly used standard, it deserves to be accepted in the article as an official use of the word dpi, along with a note that this really is wrong use of the word. The "pixels per inch" article doesn't say anything about pixels per inch in print, but only about the screen's resolution, something I have never understood the point of. Kasper Hviid 09:54, 10 Nov 2004 (UTC)
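The Olympus figure quoted above is easy to check. A short sketch (the only assumption beyond the quoted numbers is the 2.54 cm-per-inch conversion):

```python
def print_size_cm(width_px, height_px, dpi):
    """Physical print size implied by pixel dimensions and a
    pixels-per-inch value (the 'dpi' of consumer documentation)."""
    CM_PER_INCH = 2.54
    return (width_px / dpi * CM_PER_INCH,
            height_px / dpi * CM_PER_INCH)

w, h = print_size_cm(640, 480, 150)
print(f"{w:.2f} x {h:.2f} cm")  # 10.84 x 8.13 cm, matching the quoted figure
```

The same division also runs in reverse: pick a target print size, and the image's pixel count divided by the inches gives the required pixels per inch.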

I wasn't able to find the references you gave on lexmark.com or olympus-europa.com, but it doesn't surprise me that DPI would be used in this broad way in documentation intended for the general consumer. I've seen scanner software that uses DPI to mean samples per inch. "Dots" is a fairly general term for most people; pixels, color samples, and ink spots could all be "dots" to most people. I don't think there's any call for distinguishing an "official" use of the word. It's official if people use it that way, and (I suppose) if it's defined that way in the dictionary, which it is. It's really only the more technical among us who care to differentiate DPI from PPI and SPI.

As for the purpose of describing screen resolution, I suppose it's probably useful in calibrating a computer display to a printing device. If a print shop needs to have things displayed on their computer monitor at the same size they will be printed, monitor PPI is useful. I thought about adding to the pixels per inch article to include the idea of pixel density on paper, but while it makes sense to me, I know of no other instances of PPI being used in that way. If you find such a usage, let me know! -- Wapcaplet 22:59, 10 Nov 2004 (UTC)

Since pixels are actually dots (PDF), it comes out that PPI and DPI are actually the same thing. --88.153.32.35 12:50, 24 June 2006 (UTC)[reply]

---

Yes, since dpi and ppi are obviously used interchangeably, this interchangeability should be explained here... so that novices looking here for definitions can understand what they read in the real world. It seems extremely arrogant to say "Wrong" and "misuse" when obviously pixels per inch has always been called dpi. And still is.

Do some few say ppi? Yes. Do the vast majority say dpi? Absolutely yes.

1. All scanner ratings are specified as dpi, obviously meaning pixels per inch. They don't say "samples per inch"; they all say dpi, which we all know means pixels per inch. Scanners create pixels, not ink dots. Who are you to call every scanner manufacturer wrong?

2. All continuous tone printers (dye subs, Fuji Frontier class, etc) print pixels, and call their ratings dpi too (colored dots also called pixels). Who are you to call all these manufacturers wrong?

3. The most current JPG image file format specification claims to store image resolution in "dots per inch". The most current TIF file format specification claims to store image resolution in "dots per inch". They are referring to pixels... there are no ink dots in image files. Who are you to call these authors of the most common file format specifications wrong?

http://www.w3.org/Graphics/JPEG/jfif3.pdf (page 5)

http://partners.adobe.com/public/developer/en/tiff/TIFF6.pdf (page 38)

4. Google searches on 7/14/06 for

"72 dpi" 17,200,000 links

"72 ppi" 124,000 links

(138 times greater use of dpi... a couple of magnitudes more usage)

You may be aware that 72 dpi topics are never about printer ink dots.

When calling everyone else wrong, a wise man would reevaluate his own position. The Wikipedia author who claims misuse of dpi is obviously dead wrong. It is probably only his wishful thinking that the world OUGHT to be as he wishes it to be, but it is just his imagination, and this Wiki definition is definitely WRONG.

The two terms are obviously interchangeable. Wake up, look around, where have you been? Pixels per inch has ALWAYS been dpi. Yes, dpi does also have another use. So what? Almost every English word has multiple meanings and uses. However which term is best is not important here - this is certainly not the place to decree it (as attempted). Both terms are obviously used with the same meaning (pixels per inch) and that matter is long settled. Say it yourself whichever way you prefer to say it, but we obviously must understand it both ways. Because we see it everywhere both ways. So this both-ways phenomenon needs to be explained in the definitions here. Without bias. About how the real world really is, not about how some author might dream it ought to be.

WHAT IS IMPORTANT is that beginners need to know the two terms are used interchangeably everywhere, with both terms meaning pixels per inch, simply so they can understand most of what they will find to read about the subject of imaging. There is no reason to confuse them even more by telling them everything they read is wrong. Wiki is wrong. The Wiki definition can only totally confuse them.

Beginners do need to know the two concept differences (your two definitions), but once the concepts are known, then the terms are almost arbitrary. We could call them "thingies per inch". The context determines what it means (like all English words), and if the context is about images, dpi can only mean pixels per inch (ppi can mean that too). If the context is about printer ratings, then dpi can only mean ink dots per inch. 71.240.166.27 03:20, 14 July 2006 (UTC)[reply]


DPI is the CORRECT term for the target resolution at which an image is to be printed or displayed. It is a value stored in a digital file which indicates the current target printing resolution of that file. To use it otherwise is to sow confusion out of some misguided ideology. —Preceding unsigned comment added by 24.128.156.64 (talk) 18:19, 10 October 2007 (UTC)[reply]

Printer advertisements

I see several printers advertised with "4800 x 1200 color dpi" and such. Is this some kind of industry conspiracy to redefine the term "dpi"? Or am I misunderstanding something? Example: [1] -- Anon

  • Nope, sounds like the most appropriate and correct possible usage of DPI to me, assuming those figures are what the printer is actually capable of. Now, if a scanner is advertised with some DPI, then Samples per inch is what is actually meant. Many times a scanner is advertised with its interpolated sampling resolution, since that number is often much higher than the actual optical resolution; a good consumer scanner may only be able to capture 1600 samples per inch, but the samples are often scaled (either in the hardware or in the scanning software) to much higher resolution, such as 19,200, and of course "19,200 DPI" looks better in an advertisement than "1600 DPI." Whether printer manufacturers use a similar strategy, I don't know; I do know that the reason the two DPI figures are often different is that one of them is a horizontal resolution, determined by how finely the printing heads can be controlled, while the other is a vertical resolution, determined by how finely the paper feed roller can be controlled. Finally, if you see a digital camera advertised with some DPI, buy a different brand, since DPI has no meaning in that context unless they are referring to the quality that might be achieved in printing a digital photo at a certain size (and then, pixels per inch is probably more appropriate). -- Wapcaplet 18:11, 28 Nov 2004 (UTC)
Yes, it's a mixture of morons being stupid and marketing people trying to pull the wool over people's eyes. They're conflating dots per inch with dots themselves, image size (48x12 kpixels across) with spatial image resolution (note that the "DPI" in the prior figure does NOT contain actual inches or any other real-world spatial measurement, a dead giveaway of moronity.) I'll put this back in the article. 76.126.134.152 (talk) 11:47, 2 June 2008 (UTC)[reply]


Color

I am concerned about the following statement:

This is due to the limited range of colors typically available on a printer: most color printers use only four colors of ink, while a video monitor can often produce several million colors. Each dot on a printer can be one of only four colors, while each pixel on a video monitor can be one of several million colors; printers must produce additional colors through a halftone or dithering process.

Computer displays work in a similar fashion to printers: they use a combination of different amounts of the primary colors (in this case, the additive primaries: red, green, and blue) to produce a wide range of visible colors. Most printers use the (subtractive) primaries and black in different combinations and patterns.—Kbolino 02:18, 10 February 2006 (UTC)[reply]

No they don't. Video displays can actually produce darker or lighter versions of the same color in each of their subpixels by altering the amount of light produced. Printers, on the other hand, cannot blend their ink with the color of the paper (i.e., white) to produce darker or lighter shades/tones due to the ink's nontransparency. Instead, they must place smaller blobs of a given color of ink in order to (on white paper) make a less saturated tone, or bigger blobs to make a more saturated shade. Admittedly, some printers actually CAN change the color of their ink by mixing all 4/5/6/8 colors together into one big blob, like the solid ink printers I've drooled over for years. 76.126.134.152 (talk) 11:47, 2 June 2008 (UTC)[reply]

What is DPI dependent on?

"The DPI measurement of a printer is dependent upon several factors, including the method by which ink is applied, the quality of the printer components, and the quality of the ink and paper used."

This is not true, or at least is confusing.

The DPI in the printing direction is dependent on the head firing frequency and the linear print speed. The DPI in the advance direction (perpendicular to the printing direction) is dependent on the spacing of actuators (e.g. nozzles for inkjet) on a head, and the angle of the heads. Each of these can be multiplied by use of interleaving/"weaving" using multiple passes and/or multiple heads.

What the sentence above may have been trying to get at is that different print modes can use different firing frequencies, linear speeds, interleaving factors, etc., and the effect of ink and media settings in print drivers is often to change the print mode (possibly in addition to other software settings that don't affect DPI). Also, different print head technologies may improve at different rates in terms of firing frequency, actuator spacing, etc.
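The two geometric relationships described above can be sketched numerically. The head parameters below are hypothetical round numbers chosen for illustration, not any particular printer's specs:

```python
def dpi_print_direction(firing_freq_hz, carriage_speed_ips):
    """DPI along the print direction: drops fired per second
    divided by inches of head travel per second."""
    return firing_freq_hz / carriage_speed_ips

def dpi_advance_direction(nozzles_per_inch, interleave_passes):
    """DPI across the advance direction: physical nozzle density,
    multiplied up by interleaving/'weaving' over several passes."""
    return nozzles_per_inch * interleave_passes

# Hypothetical head: 14.4 kHz firing at 12 in/s carriage speed,
# 300 nozzles per inch, 4-pass interleave.
print(dpi_print_direction(14_400, 12))   # 1200.0
print(dpi_advance_direction(300, 4))     # 1200
```

This also makes the point in the text concrete: a driver "ink/media" setting that halves the carriage speed while keeping the firing frequency doubles the print-direction DPI.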

On the subject of advertisements, I strongly suspect that some of the dpi figures quoted in printer adverts are inflated. You can inflate a dpi figure by:

  • counting different colors as more than one dot (this may be what "4800 x 1200 color dpi" means -- I expect it is really 1200 x 1200 in 4 colors)
  • counting each dot printed by a variable-dot head as more than one dot
  • saying "equivalent dpi" and making up a random number.
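The first inflation trick in the list above is plain arithmetic. A sketch of the commenter's guess (the 1200-dpi-times-four-colors reading is speculation, not a confirmed spec):

```python
true_dpi = 1200
ink_colors = 4  # C, M, Y, K

# Counting each ink color's dots separately turns a 1200 dpi head
# into a much larger advertised number.
advertised_dpi = true_dpi * ink_colors
print(f"{advertised_dpi} x {true_dpi} color dpi")  # 4800 x 1200 color dpi
```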

This kind of creative arithmetic is all the result of trying to munge various resolution and quality factors into a single number for marketing purposes. It's similar to how clock speed used to be used to indicate how "fast" a processor was. At its worst, it can lead to distorted technical decisions that maximize DPI with no improvement in, or even at the expense of, quality (just as the Pentium 4 design was distorted to maximize clock speed).

It's unlikely that someone could see a visible improvement in resolution above about 1000 dpi with the unaided eye at a normal reading distance. The extra quality that you can get from a higher dpi than that is not due to an increase in resolution; it's due to a reduction in "graininess" (and possibly better hiding of head defects) from using smaller drop volumes, which requires you to use more dots in a given space to achieve the same ink density.

To meaningfully compare printers, you need at the very least to know the volume of ink in a drop (for inkjet heads as of 2006, this can vary from about 1 to 80 picolitres), including what "subdrop" volumes are possible in the case of variable-dot heads, as well as the real dpi figure in each direction. The overall quality will also depend on halftoning algorithms, the gamut of the inks used, color management, positioning accuracy of the printer mechanism and any encoders, head defects and how well the print mode hides them, etc. The intended application is also significant: to give an extreme example, there's no point in achieving "photographic" resolution in a printer that will be used to print billboards -- although color gamut would still be very important for the latter.

DavidHopwood 00:16, 5 June 2006 (UTC) (working for, but not speaking for, a printer manufacturer)[reply]


Metric

"There are some ongoing efforts to abandon the dpi in favor of the dot size given in micrometres (µm). This is however hindered by leading companies located in the USA, one of the few remaining countries to not use the metric system exclusively."

I wouldn't blame US companies for this, even though I'm an enthusiastic S.I. advocate. Software interfaces to RIP packages and driver APIs require dpi, and there's no compelling reason to change them. Despite this, it is possible for a printer controller implementation to be internally almost S.I.-only. DavidHopwood 01:20, 5 June 2006 (UTC)[reply]

DPI or dpi

This is pretty trivial, but should the case (DPI or dpi) be standardized in this article? The last section uses dpi while the others use DPI. --MatthewBChambers 09:13, 2 October 2007 (UTC)[reply]

It should use DPI. It's an abbreviation. I've fixed it. --jacobolus (t) 10:55, 2 October 2007 (UTC)[reply]

Both of the external links are low quality links to pages by writers with an ideological axe to grind and a limited understanding of the topic. —Preceding unsigned comment added by 24.128.156.64 (talkcontribs)

So be WP:BOLD, add some better sources! --jacobolus (t) 04:29, 13 October 2007 (UTC)[reply]
I disagree, the "Myth" link was the first place where I have understood why and how to reset dpi without losing quality. RomaC (talk) 14:42, 31 March 2008 (UTC)[reply]

DPI for digital images “Meaningless”?

This sentence "Therefore it is meaningless to say that a digitally stored image has a resolution of 72 DPI." is just simply, clearly unequivocally false. It is also misleading in a way that exacerbates existing confusion among users. —Preceding unsigned comment added by 24.128.156.64 (talk) 16:44, 12 October 2007 (UTC)[reply]

Hmm, there seems to be some disagreement here about the origin of the term DPI. The position of the article is that it has its origins in printers, while 24.128.156.64 says that it had its origins in digital file formats. Does anyone have a reference to support either position? Personally, I think it's the printers. Rocketmagnet 17:10, 12 October 2007 (UTC)[reply]

I don't think it is only an issue of origin. It is, most importantly, an issue of use. Users of graphics software get confused on this issue as it is. To deny the fact that all professional graphics software and most amateur graphics software allows the editing of a value called DPI which gets stored with the file adds to that confusion. Here is an example of a page using DPI correctly: http://msdn2.microsoft.com/en-us/library/ms838191.aspx—Preceding unsigned comment added by 24.128.156.64 (talk) 17:23, 12 October 2007 (UTC)[reply]

I don't think anyone was denying that DPI can be stored in a digital image. And you're right, people do get confused about it all the time. People often ring me up wanting a picture, and they say "I want a picture, and I need it at 300 dpi" And I say, "Well, it depends how big you're going to print it." And they say "I don't know, just give it to me at 300 dpi". Rocketmagnet 17:59, 12 October 2007 (UTC)[reply]
Which is precisely the problem. People don't seem to realise that a digital image doesn't fundamentally have DPI, in the same way that it fundamentally has a resolution. The DPI is a tag that's added on by some software, that can be used or ignored as the user sees fit. Rocketmagnet 17:59, 12 October 2007 (UTC)[reply]
I'm having trouble reconciling "Therefore it is meaningless to say that a digitally stored image has a resolution of 72 DPI." with "I don't think anyone was denying that DPI can be stored in a digital image."
On the other hand it is now clear that we're both motivated by trying to correct the same confusion among our clients (and other people in similar positions) and disagreeing about how to do that. —Preceding unsigned comment added by 24.128.156.64 (talk) 18:43, 12 October 2007 (UTC)[reply]
I'm sure we agree that a digital image fundamentally has a resolution, since it is literally made out of pixels. It is impossible to change the resolution without fundamentally changing the content of the image. Now, a digital image might also have a filename. But the filename is not fundamental to the image. I can rename the file to be whatever I want, and it does nothing whatsoever to the content of the image. It could even have no filename if I haven't saved it yet. So I would say that digital images don't fundamentally have a filename. I would say the same thing of the DPI value in the image. The image might have a DPI value stored in it, but digital images in general do not fundamentally have DPI. I could change the DPI value to be whatever I want, and it will do nothing to the content of the image.
Another example: I could e-mail a photo of a cat to a local print shop. Then call them up and say "I know the image says it's 300 dpi, but I want it printed at a width of 2 feet". That makes sense, because the DPI is not fundamental to the image. The guy at the print shop probably wouldn't even bother editing the dpi value in the image, he would set the printer to print it to the size I want. Compare it with this example: I e-mail a photo of a cat to a local print shop, then call them up and say "I know it's a photo of a cat, but please print me a photo of a house". That would be insane.
Yet another example: I add a tag to my digital image which says "50 gsm" (gsm = grams per square meter). The idea is that, when the image is printed I want it printed on that weight of paper. But is it meaningful to say that this digital image has a weight of 50 grams per square meter? No. That's nonsense. Digital images do not have weight (even if there is a GSM tag in the image). In the same way, they do not really have DPI (even if there is a DPI tag in the image). Rocketmagnet 20:23, 12 October 2007 (UTC)[reply]
Er... any Wikipedia entry which said "Therefore it is meaningless to say that computer files have names." would get corrected.
Please do not misquote me. I did not say that computer files don't have names. I said that digital images do not fundamentally have filenames. A filename is not a fundamental part of what it means to be a digital image. For example, when I play Quake, I am seeing tens of digital images per second. Not a single one of those has a filename. Nor do any of them have a DPI.
The DPI in the file DOES affect the printed output unless it is changed or overridden. It is like the icc color profile in that regard. Would you say that it is meaningless to say that graphic image files in many formats can have color profiles? —Preceding unsigned comment added by 24.128.156.64 (talk) 21:58, 12 October 2007 (UTC)[reply]
Oh god. I think we're talking at cross purposes. Do you understand the difference between "some humans wear glasses" with "glasses are not a fundamental part of what it means to be human"? Glasses can be added to humans, and it may benefit them, but there are many humans without them, and they are still human. Likewise, a DPI value can be added to an image, and it may help people when printing the image, but many images have no DPI value added to them, and it makes them no less an image.
I think you are confusing "a digital image" with "a file on a disk which contains a digital image", which are two different things. A digital image is a much broader concept. Rocketmagnet 14:36, 13 October 2007 (UTC)[reply]
The DPI value is a part of a digital image in the same way that the color profile is; in the same way that vector data or text can be added to a bitmap in a photoshop image and become part of the digital image. This is the same way in which a file name is part of a computer file. I may have been a bit confusing in that a file name is not a part of a digital image in quite that way. The example of a file name being part of a file was a metaphor and may not have made my point clearer.
A digital image is composed of more than just a bitmap, it can include color profiles, DPI, vector data, text, positional offsets, filters, information specifying display of a variety of devices, compression specifications, and other elements.
To use your metaphor, I would object to an article that said "It is meaningless to say that a person has glasses." A person can have glasses. It would be very confusing to people to say that a person cannot have glasses if it wasn't a subject everyone is so familiar with. —Preceding unsigned comment added by 24.128.156.64 (talk) 16:20, 13 October 2007 (UTC)[reply]
Perhaps we are having a problem over the meaning of "has". You could say that a human has glasses. But this would be a different meaning of "has" to its use in: "a human has DNA". In the latter example, having DNA is fundamental to what it means to be a human (all humans have DNA). In the former example, we are talking about one example of a human, not all humans. The same applies to DPI. An image may or may not have a DPI value tagged onto it.
Look, imagine if I could tag an image with "5 ounces"; then would you say that the image has a weight? Really, does it weigh 5 ounces? No, digital images don't have weight. It would still be correct to say that "it is meaningless to say that a digital image has a weight".
Tagging an image with "100 dpi" doesn't mean that it really has one hundred dots per inch. A digital image cannot actually have one hundred dots per inch, because it doesn't have a size in inches. It only has inches if you actually print it out. Surely you can see that there is a difference here? Surely? Rocketmagnet 17:30, 13 October 2007 (UTC)[reply]
It's apparent that the word "meaningless" is insufficiently clear or direct; the result we are currently witnessing is an unproductive semantic squabble. You're both essentially correct, so instead of arguing, how about we try to come up with an alternative phrasing which all can be satisfied with? --jacobolus (t) 18:00, 13 October 2007 (UTC)[reply]
As an unrelated aside, 24.128.156.64, you might try signing your comments, like this: ~~~~. :) --jacobolus (t) 18:07, 13 October 2007 (UTC)[reply]
Thanks jacobolus, wise words. I'd considered re-writing the text in the article, but I thought it would be worth coming to an understanding, in case it caused an edit war. But, yes, this discussion doesn't seem to be getting anywhere. However, it does point strongly to the misunderstanding people seem to have with the concept of DPI relating to digital images. Rocketmagnet 18:26, 13 October 2007 (UTC)[reply]
If you think re-writing will cause an edit war, then put the proposed rewrite on the talk page first. :) --jacobolus (t) 20:14, 13 October 2007 (UTC)[reply]
Here's some proposed text. Perhaps a bit clunky, but I think its correct:
DPI refers to the physical size of an image when it is reproduced as a real physical entity, for example printed onto paper, or displayed on a monitor. A digitally stored image has no inherent physical dimensions, measured in inches or centimeters. Some digital file formats record a DPI value, which is to be used when printing the image. This number lets the printer know the intended size of the image, or in the case of scanned images, the size of the original scanned object. For example, a bitmap image may measure 1000×1000 pixels, a resolution of one megapixel. If it is labeled as 250 DPI, that is an instruction to the printer to print it at a size of 4×4 inches. Changing the DPI to 100 in an image editing program would tell the printer print it at a size of 10×10 inches. However, changing the DPI value would not change the size of the image in pixels which would still be 1000×1000. An image may also be resampled to change the number of pixels and therefore the size or resolution of the image, but this is quite different from simply setting a new DPI for the file.

24.128.156.64 22:45, 13 October 2007 (UTC)[reply]

I think that's pretty good. I've made a couple of small changes though. Rocketmagnet 22:48, 13 October 2007 (UTC)[reply]
Great. I took your version, changed "would tell the printer print it" to "would tell the printer to print it", put back in the page references that where in the original on the page, and put it into the article. 24.128.156.64 23:12, 13 October 2007 (UTC)[reply]

Just my 2 cents on the words on top: the number of pixels is a hard-coded value in any pixel format. The dpi value is sort of a scratch parameter in common image formats. People doing layouts and then prints first have to decide what width and height (measured in cm or inch) they want for the given image. Only after that decision can you do the math and divide the number of pixels by the number of inches... some folks prefer talking in the newspaper-typical columns, e.g. 3 columns width - and that's a fixed length for a specific type of paper. --Alexander.stohr (talk) 16:18, 5 August 2010 (UTC)[reply]
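The proposed wording above (1000×1000 pixels, DPI tag of 250 or 100) reduces to one division. A minimal sketch of that relationship, using the numbers from the proposal:

```python
pixels = (1000, 1000)  # fixed by the image data itself

def intended_print_inches(pixels, dpi_tag):
    """The stored DPI tag only rescales the *intended* print size;
    the pixel data is untouched by changing it."""
    return tuple(p / dpi_tag for p in pixels)

print(intended_print_inches(pixels, 250))  # (4.0, 4.0) inches
print(intended_print_inches(pixels, 100))  # (10.0, 10.0) inches
```

Resampling, by contrast, would change `pixels` itself, which is why the proposal treats it as a different operation from editing the DPI tag.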

Specification jargon

Can editors who understand these things (I don't) add explanations to the article that make the kind of jargon typically found in printer specs understandable to laypeople?

Examples (taken from HP and Brother):

  • "Up to 1200 rendered dpi black"
    What is the meaning of "up to" here?
    "Rendered" as opposed to unrendered and thus invisible?
  • "Up to 4800 x 1200 optimised dpi colour"
    What does X times Y mean here? 4800 x 1200 = 5,760,000, so is this 5,760,000 dpi?
    "Optimised"?
  • "1200 input dpi"
    What does input have to do with the specs of the printing system?
  • "Optical Resolution Up to 600 x 2,400 dpi"
    This refers to scanning; again the mysterious multiplication.

 --Lambiam 08:43, 24 April 2009 (UTC)[reply]

I think that "printer specs understandable to laypeople" is outside the realm of possibility, and should not be attempted; certainly not without a reliable source; otherwise we'll have a rathole. Dicklyon (talk) 15:32, 24 April 2009 (UTC)[reply]

--ADDED QUERY (2009-04-30): At the start of this article, whose length I've yet to scrutinize, comes an example that sadly I find far from enlightening--to wit:

| An example of misuse would be if an LCD monitor manufacturer claimed that | a 320x240 pixel 3" monitor (2.4"x1.8") actually had a resolution of 400 DPI, | (three times the pixels per inch).

 [NB:  delete this last comma--NOT wanted before parenthesis, delimited by parens.]

PLEASE "show your work": i.e., what is being multiplied/divided-into what? E.g., I multiply 320x240 = 76_800, and 2.4x1.8 = 4.32 and think that density should result from 76_800 / 4.32 (= 17_777.77...), but that's not close to 400! ?? Dividing 76_800 by "400" gets me a 192 which I can't figure how to map ... . .:. It would be helpful, at this introductory point, to show the calculation! Thanks. (-; 216.194.229.45 (talk) 13:18, 30 April 2009 (UTC)[reply]
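For what it's worth, the arithmetic behind that example can be shown directly. PPI is a linear measure, not an areal one: you divide pixels along one edge by inches along that same edge, rather than total pixels by total square inches. A quick Python sketch using the article's own figures:

```python
# Linear density, per edge (figures from the article's example):
h_ppi = 320 / 2.4   # ~133.3 pixels per inch horizontally
v_ppi = 240 / 1.8   # ~133.3 pixels per inch vertically

claimed_dpi = 400   # the manufacturer's claim, counting each RGB subpixel as a "dot"
print(h_ppi, claimed_dpi / h_ppi)  # ~133.3, and the claim is exactly 3x that
```

So the "400 DPI" figure is three times the true ~133 PPI, one "dot" per red, green and blue subpixel, which is exactly the misuse the article's example is describing.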

pc programs

quote from the article: Software programs render images to the virtual screen and then the operating system renders the virtual screen onto the physical screen; with a logical PPI of 96 PPI, older programs can still run properly regardless of the PPI provided by the physical screen. Usability and readability are heavily influenced by the technology (laser beamer, flat screen with/without contrast enhancements, cathode ray tube), by the viewer's distance, the viewer's individual vision capabilities and by some "crap" the operating system does, e.g. anti-aliasing fonts. even the environment (lit, dark, foggy, reflections, ...) might play a big role every now and then. 99.5% of all computer programs will run on any screen with any PPI value - they just don't care about it. there are pretty few programs that need to fulfill any exact measures; rather, sometimes the application offers a setup e.g. for the font used in editors and terminals. furthermore modern operating systems offer lots of tuning for e.g. border widths, menu fonts, window decoration and so on. if someone wants to use an 8x8 font he can, but he can use a 14x16 font as well - the user adapts to what he wants to see. if you connect a 12" tube or a 40" flat screen the operating system will rather respond with a desktop having more or less width than use the PPI value to adjust to the changed conditions. BTW for many legacy applications (e.g. a window of a C64 emulator) there are even zoom modes, and nearly all modern consumer flat screens have built-in zoom, even if folks like to use such devices in a 1:1 pixel match mode - forget about PPI and fix that statement in the article. --Alexander.stohr (talk) 16:27, 5 August 2010 (UTC)[reply]

I can't figure out what exactly you are trying to say. Are you suggesting that the article text should be changed? –jacobolus (t) 21:00, 5 August 2010 (UTC)[reply]

"dot pitch" or "dot trio pitch"

Are the common monitor "pitches" given in terms of "dot pitch" or "dot trio pitch"? Mfwitten (talk) 22:22, 1 October 2011 (UTC)[reply]

Trying to summarise

Just my luck to stumble, when looking for some clarification, on this article - one of those where the discussion is much longer than the article itself :-) There are lots of sentences in the article (and in the discussion) which I don't understand; others which seem confusing. But then again, I am far from a specialist in this field. Yet, maybe the following is helpful, if only to bring out the points of disagreement.

1. I think that in this article (and in the discussion) two objectives are intertwined, leading to confusion. I.m.o. an encyclopedia should a) describe the (various) common uses of a term; b) offer explanations and background knowledge. Therefore, I wholeheartedly agree that terms like 'wrong' or 'misuse of the word' or even 'misleading' should be avoided. However, an encyclopedia is not a dictionary; it should go one step further and EXPLAIN certain things. In fact, the various 'definitions' of dpi and ppi cannot even be comprehended without some background knowledge, and so would by themselves be of no use to any reader. Any explanation requires a strict definition of the terms used. Quite apart from any claim to 'correctness', when the explanation does not make a clear choice of words it becomes incomprehensible (which, i.m.o., it is right now). Both requirements need not be contradictory; one could very well describe the various ways in which a term is used, and nevertheless, when it comes to an explanation, use the terms in one specific well-defined sense.

2. So, what 'definitions' of DPI and PPI and other terms will the explanation use? In the discussion, it is not at all clear to me when the disagreement is about the use of a word and when it is about facts. Surely, these two kinds of disagreements should be separated as well as possible. In my opinion, following a historical line may be the most helpful to further comprehension.

- As far as I know the term DPI was first used for digital phototypesetting machines, like the Digiset (VideoComp in the USA), introduced in 1966. (I'm not sure about this; it would require verification) In the '80s, a digital phototypesetter could achieve resolutions up to 3000 dpi. (or was it 4000?) DPI, then, describes the number of circular black dots, of varying size up to completely overlapping, per inch of paper. (The density of coloured dots was, at the time, described by different units.) When, for a colour printer, just one figure is given for the "DPI", it pertains to the number of black dots per inch, and for a very good reason: to make it comparable to the DPI of a b/w laser printer, still widely in use.

I would propose to use the term "DPI", in the explanatory part of the article, in this sense only.

When two figures are given, as in "1200 x 4800 color dpi", I agree with Wapcaplet that the figures refer to black dots (used to print text and comparable to the figure for b/w printers) and 4-colour dots (to print coloured pictures) respectively. This use of 'DPI' seems to me a quite reasonable 'extension' of the term DPI, to adapt it to the colour-print era. (And frankly, I don't understand the objections raised by 76.126.134.152) The question remains, however, what to make of printer specifications where the second number does not equal 4x the first number (or vice versa, since there seems to be no rule on the order of the two numbers). Does anyone know what is meant by a printer specification like "1200x2400 dpi"? I suspect it means that text is printed at 1200 dpi and colour pictures are printed at 600 dpi (that is: 600 dots of one colour per inch). Finally, I'd like to know what the 'x' between the two figures means; it suggests something like area, like 'horizontal and vertical'. If what I suppose here is true, however, it has no meaning whatsoever and could just as well have been a dot or a comma or a semi-colon. This should be pointed out to the reader.

3. The term 'pixel' (and the related 'PPI') originates in an entirely different field: the 'digitisation' of pictures. To make up a digital picture from an image, the image is overlaid with a grid of (theoretically) squares and for each square an average colour and luminosity is determined, either by a scanner or a camera. The resulting 'bitmap', containing values for colour and luminosity of each pixel, is then saved in a picture file. I would propose to use the term 'pixel' in the explanatory part of the article, in this sense only.

Many file formats (but I don't know which ones exactly) give the option of writing a value for PPI in the meta-section of the file, thus making it possible to determine the real size of the scanned original. Evidently, this value has no meaning for a picture of a landscape taken with a digital camera. (To my knowledge, cameras do not save this value when saving in, for instance, jpeg, but most scanning software does fill this value.)

4. The term PPI is, I think, not as clear as DPI. For scanners, the meaning would seem quite clear to me: it describes the size of the grid used to scan the picture. PPI, used in that sense, is a useful measure for the 'resolution' of a scanner. In practice, however, I rarely see PPI in the specs of scanners; manufacturers seem to prefer the more widely known term 'DPI' instead, and it is, I think, anybody's guess what they mean by that. Some may simply use 'DPI' when they mean 'PPI' (thus giving useful information on the quality of the scanner); others may use 'DPI' to simply refer to the measurements of the scanned picture as laid down in the meta-section of the resulting JPG or TIFF file, which has nothing whatsoever to do with the quality of the scan. (In practice, I have met both)

For cameras, the use of the term 'PPI' seems less common, and for good reason: it is not clear what it refers to. When used, however, PPI does indeed seem to refer to the size of the sensor: given a total number of pixels, the bigger the chip (and thus the lower the PPI value), the better the quality.

I would propose to use the term PPI, in the explanatory section of the article, primarily to denote the resolution of a scanner. (The meaning when used in reference to cameras is somewhat confusing, as a lower figure is associated with better quality, which is quite the reverse of its meaning when used with respect to scanners.)

5. - The concept of dpi could conceivably be extended to monitors, as these, too, are output devices and the original CRTs used coloured dots. However, the meaning is not as clear, as a monitor does not produce 'black dots'. The term 'dot pitch' has always been more popular for monitors. (Although I don't know whether this referred to the distance between individual dots, or between the dots of one colour, or between groups of 3 coloured dots.) In fact, most modern LCD monitors do not produce dots at all; instead, they work with squares built up from 3 coloured stripes, usually denoted as 'pixels'. (I'd propose to use the term screen-pixels to avoid confusion.) Most often, the resolution is described as "X*Y pixels", while the physical size is described by the length of a diagonal across the screen.

Thus, for most people the terms DPI and PPI in connection with a monitor don't add much, except confusion. In graphic design, though, you may want to make the size on-screen exactly the same as the size in print. In practice, you need to know the PPI of your screen to achieve this. One can calculate it as described in the article, or simply take the vertical number of screen-pixels and divide it by the screen height in inches.
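The two calculations mentioned here can be sketched as follows (the 1920x1080 panel with a 23.5-inch diagonal is a made-up example, not a figure from the article):

```python
import math

def ppi_from_edge(pixels_along_edge, inches_along_edge):
    """PPI from the pixel count and physical length of one screen edge."""
    return pixels_along_edge / inches_along_edge

def ppi_from_diagonal(pixels_w, pixels_h, diagonal_inches):
    """PPI from the pixel dimensions and the advertised diagonal size."""
    return math.hypot(pixels_w, pixels_h) / diagonal_inches

# e.g. a hypothetical 1920x1080 panel with a 23.5" diagonal:
print(ppi_from_diagonal(1920, 1080, 23.5))  # roughly 94 PPI
```

Both methods agree for square pixels; the diagonal form is usually handier because spec sheets quote the diagonal rather than the edge lengths.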

6. Some remarks

- Digital printers produce dots in a certain colour. These dots can vary in size, and are so arranged that they can overlap, ultimately, when fully overlapping, producing black. (I'd put an asterisk here, as this is very subtle stuff, concerning the way our eyes perceive colours etc., but I think this description may do in this context.) Screens, on the other hand, from the CRTs of old to modern LCD or plasma, do NOT vary the dot size, but they CAN vary the luminosity of the dots (or stripes or whatever). The dots can be circular, but some printers can produce elliptical or even semi-square dots; the printing software uses this option to produce the best-looking output. I quote from http://www.prepressure.com/printing-dictionary/d "The dot shape is varied to minimize the dot gain at the point where dots join one another. Elliptical dots minimize the sudden dot gain where corners of dots connect; they may connect in their short direction at 40% dot area and in their long direction at 60% dot area. Round dots, often used for newsprint, may not connect until 70% dot area." (Here, I'm out of my depth again: hopefully someone knows more about this. Although, on the other hand, it doesn't seem very relevant for this article.)

- It should be noted (and I sorely miss this in the article) that the term DPI when referring to dots of one colour (notably black) is still highly relevant when used to describe a printer. Text, as opposed to pictures, is often, if not always, delivered to the "printing engine" as a vector format, which is translated to a dot pattern according to the specifications of the printer. In other words: a printer with a higher dpi specification will give you crisper text on your print. The same is of course true for pictures delivered to the 'printing engine' in vector format. (Ensuring that a vector drawing does indeed profit from the maximum dpi of the printer seems to me an art in itself when you're not using Adobe software. Yet, this seems to me beyond the scope of this article)

- Obviously, to print a bitmap (that is: a file describing pixels and resulting from a scanner or camera), the pixel pattern has to be 'translated' to the dot pattern of the printer. A "1:1 print" simply translates each pixel into the appropriate sizes of the dots in one 'colour group' (which may consist of 4, 5, 7 or even 12 different colours - but that's another article) Obviously, the better the printer, the smaller, and crisper, the "1:1" print. When enlarging a picture in print beyond "1:1", one pixel is spread over a number of groups of dots. Thus, the picture gets vaguer, up to the point where the individual pixels become visible. When reducing the size below "1:1", more than one pixel is available to determine the dot sizes for the printer, resulting in a print as good as the printer can achieve. (Theoretically, one would expect there to be certain favourable proportions, e.g. "4 pixels to one colour group", which is, theoretically, much easier to render than "1.7 pixels to one colour group"). However, in practice the rendering software is very sophisticated and there seems to be hardly any gain in using such simple proportions. (Here again, I'm out of my depth; but maybe this is well beyond the scope of this article anyway)

7. My comments on the article

In view of the above, I have a number of comments on and questions about this article.

- "The DPI value tends to correlate with image resolution, but is related only indirectly." This seems to me unclear, if only because 'correlation' is not a term many people understand. But apart from that: DPI and PPI are either synonymous or (as I propose to use the terms) they have no correlation whatsoever, not even 'indirectly'.

- The article starts by explaining 'monitor resolution', which is the most problematic use of the term dpi. Bad idea, I think.

- "A less misleading term, therefore, is pixels per inch." I don't see what is misleading and I don't see the 'therefore'.

- "the measurement of the distance between the centers of adjacent groups of three dots/rectangles/squares on the CRT screen." 1. To my knowledge, there are no CRT screens with squares or rectangles. 2. To my knowledge, there are no LCD screens with 'groups of squares'. 3. Is this true? In other words, could the text be amended to "the measurement of the distance between the centers of adjacent groups of three dots on a CRT screen, or between the centers of two squares (each consisting of 3 coloured rectangles) on an LCD screen"?

- "DPI is used to describe the resolution number of dots per inch in a digital print and the printing resolution of a hard copy print dot gain; the increase in the size of the halftone dots during printing. This is caused by the spreading of ink on the surface of the media." This sentence forms the heart of the article in the sense that it defines the term that is the title of the article. Yet it contains so many unclear points that I'll have to take them one by one: "DPI is used to describe the resolution number of dots per inch" Should this not be either "to describe the resolution" or "to describe the number of dots per inch"? "and the printing resolution of a hard copy print dot gain" I fail to see why this should be added; I have no idea what the difference is between a 'digital print', as described in the first part of the sentence, and a "hard copy print" as described in this second part. I have no idea what a 'dot gain' is. "the increase in the size of the halftone dots during printing" I don't know how this is connected to the statement before the semi-colon; I don't know what to make of 'halftone dots', nor what dpi has to do with "spreading of ink on the surface of the media".

In summary: this definition is totally unclear to me.

- "Up to a point, printers with higher DPI produce clearer and more detailed output." Up to which point?

- "A printer does not necessarily have a single DPI measurement; it is dependent on print mode, which is usually influenced by driver settings." This seems clumsily put; printers always have a maximum dpi (and this is not a measurement but a value). What dpi is effectively used depends on user's choices. (notably choosing 'economy mode')

- "An inkjet printer sprays ink through tiny nozzles, and is typically capable of 300-600 DPI.[1] A laser printer applies toner through a controlled electrostatic charge, and may be in the range of 600 to 1,800 DPI." As the definition of dpi is unclear, so are these statements. Yet, quoting higher values for laser printers than for inkjet printers seems to me doubtful in whatever sense the words are taken.

- "The DPI measurement of a printer often needs to be considerably higher than the pixels per inch (PPI) measurement of a video display in order to produce similar-quality output. " Is this so? Why? Why is this Often? How often?

- "This is due to the limited range of colours for each dot typically available on a printer. " Yes, very limited indeed: just one.

- "At each dot position, the simplest type of colour printer can print no dot, or a dot consisting of a fixed volume of ink in each of four colour channels" This is, I think, not true. Even allowing for the fact that the word 'dot' is used here (very confusingly) to denote a 'dot group' consisting of dots in all colours the printer is capable of, this is still unnecessarily opaque; I wouldn't know what a 'colour channel' is, for instance. Nor can I see how 'the simplest type of colour printer' works differently, in this respect, from the 'most advanced type of colour printer'. Nor is there any 'ink' in my laser printer. Finally, the 'fixed volume of ink' is to my knowledge simply untrue; the whole point of colour printing is that the size of the dots (and thus the volume of the ink applied, in the case of an inkjet) varies.

- "typically CMYK with cyan, magenta, yellow and black ink) or 2^4 = 16 colours" This, too, is to me incomprehensible. Prints are made with dots of varying size (not with a 'fixed volume of ink'). The principle of colour printing is partly based on the fact that ink and toner colours, like normal paint, can produce mixed colours when they overlap and also when they are spaced apart - a phenomenon well known to painters, notably the impressionists. Our eyes being able to discern just 3 colours, all possible colours can be achieved by printing dots of varying size in 3 colours (in practice 4 or more are used). I have no idea why the number '16' would be relevant here, nor where the formula comes from. In fact, I have no idea why all this is relevant to the article.

- "Higher-end inkjet printers can offer 5, 6 or 7 ink colours giving 32, 64 or 128 possible tones per dot location." Incomprehensible - see above.

- "Contrast this to a standard sRGB monitor where each pixel produces 256 intensities of light in each of three channels (RGB)." I have no idea what it is I am supposed to be contrasting here, but it DOES throw up a point: surely, the variation in size of a printer dot comes in discrete steps. I have, however, never seen this in the specification of a printer. Does anyone know more about this?

- "While some colour printers can produce variable drop volumes at each dot position, " Apart from the fact that this is, again, ink-jet-talk and that it is not the volume produced, but the dot-size produced which is relevant, I would like to know what colour printer is NOT able to do this.

- "the number of colours is still typically less than on a monitor." Why would that be? I can't follow the explanation. Nor can I see the relevance with respect to the explanation of the term 'dots per inch'.

- "if a 100×100-pixel image is to be printed inside a one-inch square, the printer must be capable of 400 to 600 dots per inch in order to accurately reproduce the image." This, I think, is true. But: First, the term 'accurately' seems unnecessarily vague. Hereabove, I used printing "1:1" to denote printing a picture with one pixel translating to one 'colour group' on the printer. (why not be precise instead of talking about 'accurately' or 'faithfully' and such) Second, the explanation leading up to this fact makes it seem like it's rocket science, while in fact it's quite trivial: To print 100 pixels "1:1", the printer uses 100 colour groups; in the case of a four-colour printer, this means 400 coloured dots. In the case of a 6-colour printer, this means 600 dots.
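The trivial arithmetic described here can be spelled out in a couple of lines (a sketch using this comment's own one-colour-group-per-pixel simplification, not a formula from the article):

```python
# One full colour group (one dot per ink) laid down per image pixel:
def dots_per_inch_needed(pixels_per_inch, inks_per_colour_group):
    """Dots per inch a printer needs to render one colour group per pixel."""
    return pixels_per_inch * inks_per_colour_group

print(dots_per_inch_needed(100, 4))  # 400 DPI for a 4-ink (CMYK) printer
print(dots_per_inch_needed(100, 6))  # 600 DPI for a 6-ink printer
```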

- Section "DPI or PPI in digital image files" No factual quibbles here; just the observation that the wording is imprecise, which doesn't help i.m.o. to further comprehension, and that some descriptions use difficult words unnecessarily. For example: "Some digital file formats record a DPI value, or more commonly a PPI (pixels per inch) value". Comment: Formats don't record anything, but computer programs do. Some formats offer the possibility of recording PPI - I do not know of ANY format offering the option of recording DPI. MANY computer programs confuse dpi and ppi and represent the ppi value as dpi. A PPI value in the file only has meaning when recorded by a scanner program; cameras often record some value (Nikon gives 300 ppi, Canon gives 180 ppi) but these values are entirely without meaning. "If it is labeled as 250 PPI, " What does 'labeled' mean? Let's be precise. "An image may also be resampled to change the number of pixels". Incomprehensible given the level of the article. Moreover: what has this to do with the explanation of "Dots per inch"?

- Precise wording can, I think, help the understanding. For example, instead of: "Changing the PPI to 100 in an image editing program would tell the printer to print it at a size of 10×10 inches. However, changing the PPI value would not change the size of the image in pixels which would still be 1,000 × 1,000." I would say: "Changing the PPI setting in the 'description' part of an image file (which can be done with an image editing program) would tell the printer to print it at a size of 10×10 inches. This, of course, does not change the pixels in the image file: the picture still consists of 1,000 × 1,000 pixels."

- Section "Computer monitor DPI standards" I.m.o. this is a good piece, drawing attention to what is indeed a major source of confusion. But the first time I read it I couldn't make head or tail of it. While reading, I was waiting to learn what 'the problem' is and what 'confusion' was sown by Microsoft's choice. I didn't get that, and I still don't get it. Furthermore, it doesn't help that the use of terms is a bit 'loose', while some odd expressions ("a resolution of 1 megapixels", "the intended size of the image") may put an unsuspecting reader on the wrong foot (as it did me). Also, to introduce the word 'vector image' the first time in the article with the sentence "For vector images, there is no equivalent of resampling an image when it is resized" seems quite inadequate. (there's not even a reference to some other article here) The core of the matter is not at all hard to describe: like I said hereabove: to make the screen image the same size as printed output, you need to know the ppi value of your monitor, and instruct the software accordingly. (here, now: I said it in one sentence) Overrating the PPI of the monitor leads, at least in serious graphics applications, to a 'larger-than-life' picture on-screen. (This, by the way, could be explained in some more detail) However, I still don't see any 'problem' or 'source of confusion'.

I suspect (but is this true?) that a temporary problem has been that Apple software, being written for Apple monitors, did not allow for the software being instructed this way: it simply supposed the monitor was 72 PPI, Apple's 'standard value'. However, by now all Apple graphics software can be so adjusted. (can't it?) Windows, of course, had no say whatsoever over the PPI of the monitor used, and thus Windows software always went with the value used by the OS, which could be adjusted by the user in accordance with the specs of his monitor. (To appease the Apple fans: the Windows graphical software was, at the time, years behind Apple graphics software.)

All in all, the only possible 'sources of confusion' I can see are these: - Microsoft consistently uses DPI for PPI (where 'PPI' stands for 'screen-pixels per inch') and the dialogue box used to adjust the value insists that one sets "The DPI value of the screen". This, of course, is nonsensical, since the "DPI value of the screen" is a fixed given, not a user's choice. I bet this wording has spread lots of confusion; it should have been something like "What is the PPI value of your monitor?", with an explanation about how to determine this value, and the consequences of making the software believe it is higher or lower than it actually is. - Second possible source of confusion: the many, many articles on the web making a big fuss about "Apple's 72 dpi" vs "Microsoft's 96 dpi", where in fact all this is of interest to IT historians only, or so it would seem to me.

In summary: I think this section, as it stands, is long and doesn't add to comprehending the term "Dots per inch". Propose to strike, or else to re-write in such a way that it deals better with common misunderstandings about "72 dpi" and "96 dpi". — Preceding unsigned comment added by Mabel2 (talkcontribs) 17:20, 9 January 2012 (UTC)[reply]

72DPI (/ 96DPI)

Section "Computer monitor DPI standards" is confusing and wrong, better to remove it. — Preceding unsigned comment added by 2.34.179.118 (talk) 10:46, 14 March 2013 (UTC)[reply]

I agree. The section just adds to the 96 dpi computer screen myth. (Or the 72 dpi myth.)
Draw a line, 960 pixels long. Measure it with a physical ruler. Is it 10 inches? No..?
Connect your laptop to your TV. Is the line 10 inches now? Still not?
Show the line on your iPhone. Still same size? Why not?
The section may explain a possible source for this myth, however. But should then reflect that.--Vbakke (talk) 23:27, 1 June 2013 (UTC)[reply]