From Wikipedia, the free encyclopedia

Explain why it was named as such

You sound like you're trying to say "megabyte" with a cold when saying "mebibyte", so where did the word come from? Who decided to name them as such? And why does wiki use it so much on certain articles? None of my college professors have ever mentioned this term to me, and likewise the most recent version of the official A+ certification book states that a "megabyte" has both the 1000x1000 and 1024x1024 meanings (and it's up to a clarification to state how many bytes it really is). What this basically means is that mebibyte is by no means widespread, and I don't think it's up to Wikipedia authors to slap it into articles as if it were standard usage. The floppy article, for example, alternates between the terms all the time and it gives it an unprofessional look.

See binary prefix for the origin of the term.
We use it in Wikipedia because it is a standard, it's unambiguous, and we use binary units in many fields; not just computer science. See the Manual of Style and this discussion. — Omegatron 16:23, 30 January 2006 (UTC)
Unfortunately, the Manual of Style has been modified since your comment and binary prefixes are currently not welcome anymore. --NotSarenne 01:52, 4 November 2007 (UTC)
"Standard"? What did you smoke? —Preceding unsigned comment added by (talk) 20:21, 13 October 2007 (UTC)
From the article: "On March 19, 2005 the IEEE standard IEEE 1541-2002 (Prefixes for Binary Multiples) was elevated to a full-use standard by the IEEE Standards Association after a two-year trial period". I guess that's what he smoked. Disclaimer: Smoking may cause lung cancer. --NotSarenne 01:52, 4 November 2007 (UTC)

The use of mebibyte is being discussed at [1] --Thax 8 July 2005 02:56 (UTC)

A vote has been started on whether Wikipedia should use these prefixes all the time, only in highly technical contexts, or never. - Omegatron 14:49, July 12, 2005 (UTC)

Vote over, here's the Manual of Style on the subject. - Trevyn 04:42, 7 September 2006 (UTC)

I'd love to see the archived vote and discussion, but good luck finding it in the 62 pages of archives of the discussion page. Wikipedia needs the ability to search for simple text "on all pages directly linked from the current page" or something. Google is of no help. 20:55, 18 January 2007 (UTC)
Google works, but you have to use the right search terms.  :-/ [2] A direct link is here. — Omegatron 21:47, 18 January 2007 (UTC)

I think they should, as base 10 is really for you humans, we computers are base 2 freaks... 10 + 10 = 100

THANK YOU FOR YOUR INPUT - Omegatron 13:56, July 18, 2005 (UTC)

Omegatron, please seek help. - Anonymous

How can I help you? — Omegatron 01:45, 9 October 2005 (UTC)

I think it would be useful to include a pronunciation key -- I would like to know if the "i" in Mebibyte should be short (as in "think") or long (as in "time").

It is suggested that in English, the first syllable of the name of the binary-multiple prefix should be pronounced in the same way as the first syllable of the name of the corresponding SI prefix, and that the second syllable should be pronounced as "bee." [3]Omegatron 00:33, 19 January 2006 (UTC)
Thank you! I added this to the main article on prefixes.

I fought the law and the law won

I refer you to this article for some history on this definition and some legal cases that have been influenced by it. Nothing has actually made it in front of a judge yet, though.

I am working in a corporate environment and we are looking for a way of differentiating the two amounts (mega and mebi) for consumers, so this is actually a useful term. It's a real pain to be specific with customers when they don't understand the difference and need explanations of basic terms (and their contexts.) —The preceding unsigned comment was added by Craigwbrown (talkcontribs) 04:33, 31 January 2007 (UTC).

Laymen are confused by the terms that computer experts have been using for years, so what do they do? They rename the widely accepted value. MiB makes more sense as being short for Million Bytes than it does for Megabytes. BiB for Billion Bytes, etc. This is a standard that won't stick even if a standards body is behind it. Jimberg98 22:44, 2 April 2007 (UTC)
Another thing about MiB: it is an abbreviation for mebibyte, which is an abbreviation for "mega binary byte". That's pretty lame in itself, but it's also lame because it is redundant. Byte is an abbreviation for "binary term". So MiB all expanded is "mega binary binary term". Jimberg98 16:39, 3 April 2007 (UTC)
Hee hee - some time ago I posted negative comments here about the term. But just the other day I was using a linux disk partitioning utility, and it was giving numbers in "MiB" ... and I knew exactly what it was saying. No confusion. It was ... actually nice. And if I pronounce it "mib" (just like an snmp mib) it actually doesn't sound half bad. Not as nice as "meg" and "gig", "gib" and "tib" sound especially silly.
Of course now we've got this clash with all the old documentation/software that has KB/MB/GB. Huge numbers of us are ALWAYS going to think MB=MiB, GB=GiB, and we're just going to continue thinking poorly of the disk manufacturers. I think the main negative connotation that makes us all so mad about this is that it's clearly a bunch of greedy corporations, violating the commonly accepted usage of the language/terms, who have forced this through solely for their benefit, as a "way out". 14:23, 27 April 2007 (UTC)
But hard disks have been measured in decimal since they were first invented, way back to the days when drums only held "60K" (= 60,000 words).[4]
The hard drive manufacturer size inflation conspiracy theory is bogus. Microsoft and MacOS are the culprits here for showing disk size in the wrong units. — Omegatron 16:18, 27 April 2007 (UTC)Reply[reply]
The reference quoted by Omegatron is a 1957 manual for a 1954 system, the IBM 705. This machine pre-dated the now-ubiquitous 8-bit byte, and used 7-bit characters and 35-bit words [5]. It's no great surprise that it uses decimal to count words, and it's totally irrelevant to this discussion. (talk) 21:00, 29 June 2011 (UTC)

Vote for purging this heresy from Wikipedia

Should we start a vote for purging this heresy from Wikipedia? And enforce it by bots. 16:36, 9 July 2007 (UTC)

Great idea. Yesterday I also added an article I found that gives a Greasemonkey script for all those mebibyte haters to get rid of all this crap web-wide. Someone did not like my addition. I think Wikipedia should give a voice to adversaries as well. Therefore I believe the Greasemonkey option is a completely viable addition to this article. —Preceding unsigned comment added by (talk) 15:42, 22 March 2008 (UTC)


That table is wrong for a start... 1KB = 1024B, not 1000B. Whoever wrote this article is stupid. —Preceding unsigned comment added by Benyr (talkcontribs) 08:28, 7 September 2007 (UTC)

You're wrong too: KB = kelvin byte, a completely meaningless unit. (talk) 12:38, 27 April 2013 (UTC)

Leave this article here, but...

...get rid of this stupid notation all over Wikipedia. I just stumbled on it in the German version (where I am writing as "TheBug") and I really have to say I have seldom seen anything as useless as this. 22:02, 9 September 2007 (UTC)

The use of these terms should be immediately banned from Wikipedia. If you look up Mebibyte or Gibibyte on Google you will find that more than 1/3 of the references are found on Wikipedia and the rest seem to be other sites that define the terms. This is a self-referencing act here. Nobody in the industry uses this stuff but Wikipedia tries to push it. Again TheBug from Germany. 13:51, 10 September 2007 (UTC)

I agree! With 20 years in the computer industry under my belt, I can say with a straight face I have never seen the "*ebi-" prefixes outside Wikipedia. I never even knew about them before coming to WP. Everyone knows "gigabyte" (for example) means "1,000,000,000 bytes" as used by hard disk manufacturers to rip off consumers, but in every other instance (including every OS's directory feature) "gigabyte" means "2^30 bytes". Manufacturers' spec sheets and webpages even have footnotes saying as much... "One gigabyte, or GB, equals one billion bytes..." Sound familiar? I don't see why we need to bother with an artificial and phonemically awkward system which is not used anywhere. It just discredits Wikipedia as a whole and makes us all look silly. --Jquarry 06:17, 26 September 2007 (UTC)

On the German Wikipedia we have initiated a ballot about this issue. It is amazing how distorted the view is of those who are in favour of the binary prefixes. They do not even admit that the fact that about 40-50% of Google search hits for "mebibyte" are generated by Wikipedia is a proper indication that nobody uses this standard. There are lots of standards out there that nobody uses. If Wikipedia picks them all up just because some bored students find them in the dumpster, we will be the silliest-looking encyclopedia in the world. (TheBug from Germany). -- 22:17, 11 October 2007 (UTC)

No one is denying that the term megabyte is used by the computer industry to mean one mebibyte. The problem is that it is also used by the computer industry to mean one million bytes. In other words, the computer industry has created a situation in which it was impossible to make unambiguous statements about computer memory without introducing a new term, either for 2^20 or for 10^6. That is why the mebibyte was introduced by the IEC. The purpose of this article is to define the mebibyte and to describe how it is used. There is nothing silly about that. Thunderbird2 09:23, 12 October 2007 (UTC)

No problem with defining what mebibyte means, but I do have a severe problem with how its importance in the industry is portrayed. We have the same problem in the German Wikipedia, which is why I have looked at what is going on here. Wikipedia is the only significant user of the binary prefixes, and since it tries to be an encyclopedia this is definitely wrong, as it deviates from reality. 20:58, 12 October 2007 (UTC) (TheBug)

In what way does the article deviate from reality? Thunderbird2 07:17, 13 October 2007 (UTC)
Not the article, but the fact that mebibytes are all over the computer-related articles here on wiki, as if it were a generally accepted standard, which it's not. —Preceding unsigned comment added by (talk) 20:51, 13 October 2007 (UTC)
I don't usually encounter this unit myself, but I can see its value to someone wishing to make an unambiguous statement about computer storage capacity. Can you suggest a better way? Thunderbird2 20:59, 13 October 2007 (UTC)
Sure - leave things as they were for the past few decades. Some n00b doesn't know why things are that way and not the other? FAQ him until he does. —Preceding unsigned comment added by (talk) 11:06, 14 October 2007 (UTC)
You are missing the point. The discussion is not about why there is a mess, but about how to fix it. An encyclopaedia needs to be unambiguous, and you have not explained how the megabyte can be used without ambiguity. (As I understand it, the term is used by the computer industry to mean three different quantities of information). Thunderbird2 11:18, 14 October 2007 (UTC)
Very simple. Wikipedia is really great because of the easy linking you can do. Just use MB, GB etc. in the binary form that is used by the majority of the industry (except for the marketing departments of the storage manufacturers). Link "MB" to an article about storage units and explain the problem as well as the fact that Wikipedia does use the binary interpretation. No need to try to introduce a notation that close to nobody in the industry uses.
The confusion about megabytes comes from misuse of the units that was triggered by storage manufacturers to be able to claim higher capacities. In a technical context it makes no sense to use a decimal format, since memory is built and used in binary multiples. 21:41, 15 October 2007 (UTC) (TheBug)
I see that you agree with me that the confusion is created by the computer industry. You also claim that the megabyte is used (unambiguously) to mean 2^20 bytes. That would mean that a 1.4 MB floppy disk contains 1.4 * 2^20 bytes, right? Thunderbird2 06:33, 16 October 2007 (UTC)
No, I don't claim that only intelligent people walk on this planet; there are marketing departments. The 1.4M disk is the worst example of all, since its capacity really is 1440KB, a mix of decimal and binary. What I do claim is that close to nobody in the industry is using the IEC binary prefixes, and Wikipedia should not either. Defining and enforcing that Wikipedia uses the binary interpretation (while documenting the existence of the IEC prefixes) would be the right way. Wikipedia has to be precise but also has to stay with the terminology in use in the real world. 07:49, 16 October 2007 (UTC) (TheBug)
First you tell me that a megabyte is always 2^20 bytes. Now you say it is sometimes 1,024,000 bytes. That is a contradiction, and it also proves that the megabyte is an ambiguous unit. Thunderbird2 08:10, 16 October 2007 (UTC)
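The floppy arithmetic in this exchange can be checked with a short sketch (Python; purely illustrative, the figures follow from the three competing definitions of "megabyte" discussed above and not from any vendor document):

```python
# Three competing meanings of "megabyte" (illustrative only).
DECIMAL_MB = 10**6        # SI megabyte: 1,000,000 bytes
BINARY_MB = 2**20         # binary "megabyte" (mebibyte): 1,048,576 bytes
FLOPPY_MB = 1000 * 1024   # mixed unit used on floppy disk labels

capacity = round(1.44 * FLOPPY_MB)  # the "1.44 MB" floppy
print(capacity)                     # 1474560 bytes
print(capacity / DECIMAL_MB)        # 1.47456 decimal megabytes
print(capacity / BINARY_MB)         # 1.40625 mebibytes
```

So the same disk is roughly 1.47, 1.44, or 1.41 "megabytes" depending on which definition is meant, which is exactly the ambiguity being argued about here.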
Interesting, the same sort of arguments as used in the German Wikipedia. No, I did not say that, I only said that there are people who use units in the wrong way. Are you going to redefine each unit which gets misused? But the main point is that Wikipedia is not documenting here, it is trying to change reality. 16:01, 16 October 2007 (UTC) (TheBug)
In that case I misunderstood you. You say that a megabyte is always defined as 2^20 bytes, and those who use this unit in a different way do so incorrectly. Is that your position? Thunderbird2 16:21, 16 October 2007 (UTC)
Correct. Until the IEC spec there was an IEEE spec that defined the megabyte and the other multiples of bytes as binary. But it seems like the hard disk manufacturers have succeeded in getting the IEC to redefine units which have been in use in computer science for a long time. While this is about the most stupid thing you could do (redefining a unit in use), the industry has completely ignored it: manufacturers of rotating media keep defining the units however they see fit, and the rest of the world is using binary megabytes. And that is what Wikipedia should document, instead of using units that nobody knows or uses. 23:30, 16 October 2007 (UTC) (TheBug)
I see. That's the most sensible reasoning I've ever heard from a "mebibyte basher" [I hope you don't mind me calling you that - it's not meant to be impolite, just a statement of fact :)]. Let us accept, for the sake of argument, that there is one and only one (correct) definition of the megabyte, namely the binary one. My next question would be, how many megabits are there in a megabyte? I would also appreciate a precise reference to the IEEE standard to which you refer. Thanks Thunderbird2 06:00, 17 October 2007 (UTC)
IEEE 100 ("The Authoritative Dictionary of IEEE Standards Terms") is unfortunately not available online; it has to be paid for. It has been some time since there were more or fewer than 8 bits in a byte; current use of byte equals 8 bits. But in any case the major point is that Wikipedia's task is to document, not to try to change things. 13:45, 17 October 2007 (UTC) (TheBug)

That's not really what I meant. Rather, how does IEEE 100 define the megabit? Is it 2^20 bits (so that 1 MB = 8 Mbit) or 10^6 bits (so that 1 MB = (2^23/10^6) Mbit)? A separate question is whether the standard is now out of date. When was IEEE 100 published? If there are more recent IEEE standards, shouldn't they take precedence? Thunderbird2 17:21, 17 October 2007 (UTC)

IEEE 100 was published in 1986 and it defines the megabyte in binary terms: 1 MB = 8 Mbit = 8 * 2^20 bits. The current IEC standard has no binding status. It would not be the first IEC standard that gets ignored by the industry (actually I know of standardisation efforts in the IEC that are opposed by the industry). Redefining a unit is about the most stupid thing you can do; it will make it impossible to find out which interpretation has been used for writing documents. Right now we know that rotating memories are specified in decimal by their manufacturers, and the rest of the industry uses binary.

But the main point still is that almost nobody except some overeager Wikipedia users uses the IEC spec. 08:04, 19 October 2007 (UTC) (TheBug)
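Thunderbird2's megabit question above can be made concrete with a small sketch (Python; illustrative only, not taken from IEEE 100 itself): under the binary reading, one megabyte converts to a round number of binary megabits, but to an awkward number of decimal megabits.

```python
BITS_PER_BYTE = 8
MB_BINARY = 2**20                 # bytes, the binary megabyte

bits = MB_BINARY * BITS_PER_BYTE  # 8,388,608 bits
print(bits / 2**20)               # 8.0 -> exactly 8 binary megabits
print(bits / 10**6)               # 8.388608 decimal megabits
```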

If you want other editors to take your main point seriously, you need to give them a credible alternative to IEC. The main weakness I see in your proposal to adopt IEEE 100 instead is that it is superseded (or contradicted) by later IEEE standards. Why should WP adopt an IEEE standard that the IEEE itself does not recognise? Thunderbird2 19:54, 19 October 2007 (UTC)

Wikipedia should stick with reality. This IEC standard is more than 8 years old and some part of wikipedia is the only user. 08:38, 20 October 2007 (UTC) (TheBug)Reply[reply]

The IEC and its proposals are irrelevant to this line of reasoning, which was an attempt to explore the viability of the alternative you propose (IEEE100). Please answer the question in my previous post. Thunderbird2 10:31, 20 October 2007 (UTC)
Wikipedia should use the common notation as used in the computer industry (read this as technical, not the marketing department), and that is 1MB = 2^20 bytes. All physical implementation and organisation of memory is binary based. The interpretation 1MB = 10^6 bytes is used by marketing because it yields higher numbers, but technically this is irrelevant. Articles in Wikipedia should use 1MB = 2^20 bytes and, where necessary, explain the difference from the manufacturer's number and link to relevant articles (including this one here). 21:13, 22 October 2007 (UTC) (TheBug)
You know very well that common is not equivalent to correct. Also, you had better back up claims like "used by marketing because it yields higher numbers". You know very well that this is nonsense, because every hard disk vendor specifies capacities in the same way and always has. Why exactly should Wikipedia stick to MB as 1024^2 and then explain in each single case which of the two (or even three) thinkable meanings applies? Isn't it much easier and more efficient to use MiB whenever possible? Your proposal means we have to edit each and every article which uses kilo/mega/giga or the SI prefixes, even for articles that are unrelated to IT. Or how should someone who has read everything about computer hardware know that in physics SI prefixes are always decimal? You're really just looking at it from your IT expert corner. You're not trying to look at it from the perspective of a reader who knows little to nothing about computer hardware. I hope you understand that such people may be very intelligent and highly educated. --NotSarenne 01:26, 2 November 2007 (UTC)
Articles correctly use 1MB = 2^20 bytes. From the perspective of a reader who knows little to nothing about computer hardware it is not better to use MiB because the IEC prefixes are from a failed standard. QuinellaAlethea 05:30, 2 November 2007 (UTC)
Please, define "failed standard". It is a current standard. It has even been adopted by and integrated into other standards. It is not failed until the same organizations decide to scrap it. It is also a fact that articles do not universally use 1MB = 2^20. A so-called "1.44 MB floppy" does not hold 1.44 x 1024^2 bytes, and similar applies to DVDs, HDDs etc. --NotSarenne 15:12, 2 November 2007 (UTC)
It is obvious. The "standard" you prefer has been around for many years and is not being used by anything like the majority of the industry. Doing a quick Google comparison of kilobyte/kibibyte, and the same for megabyte and gigabyte, as a rough guess I would say the "standard" you prefer is currently being used by less than 5% of sources. As such it is a failed "standard". Also, the JEDEC standards body defines KB, MB, GB etc. in terms of binary values. Your other point about usage not being universal and floppy drive sizes is irrelevant, because the sources and context relevant to articles remove ambiguity. Fnagaton 13:48, 3 November 2007 (UTC)
"Don't count your chickens before they're hatched." This standard has been adopted, and the resistance in Europe is definitely smaller than in the USA. This almost correlates with IPv6, which is already widely deployed, especially in Asia, and still many people are claiming it will never "come" when in fact several deadlines are approaching and IPv6 will definitely and finally have its breakthrough. The USA may very well be the last to accept this - again for hysterical raisins. You are ignoring the fact that everyone who conforms to MB = 1,000,000 bytes is already using this IEC standard, whether or not they also use units like MiB. So, please, look again and realize that HDD/DVD/HD-DVD/BluRay/Flash/network capacity is already specified in accordance with the current IEC/IEEE standards. I'm almost certain that's more than 5% of the industry. JEDEC applies only to the USA and semiconductors, and if you compare the number of members, you'll see that the IEEE is much larger and covers a lot more areas of IT/computing. --NotSarenne 17:13, 3 November 2007 (UTC)
I'm not ignoring anything, so do not try to claim otherwise. Your "argument" is to keep on throwing up red herrings; however, the fact is that the "standard" is not common usage and therefore it has no place being forced into Wikipedia articles. That is consensus. Fnagaton 18:33, 3 November 2007 (UTC)
"Common usage" and "standard" are different concepts. The IEC standard for binary prefixes is even older than Wikipedia itself, so careful editors would have used it right from the start. I don't blame them and I fully understand that this standard is not yet widely known and also heavily suppressed. Fortunately, the recent lawsuit settlement involving Seagate is helping to spread the knowledge. So every day that passes there are fewer people who can claim they are not aware of this standard. I dare say any IT/computer expert who still has never heard of it is seriously out of the loop. Using the word "to force" is highly non-neutral. One could as well claim that the standard is being forced out of Wikipedia. Basically, it's not useful to argue this way. It only confirms that some editors are highly emotional and subjective about this issue. A consensus decided by a minority of all involved editors deep down in the labyrinth of Wikipedia is simply invalid, especially if it goes against the current international standards, and even more so if you consider that the previous consensus stated the exact opposite. If you see any kind of consensus it is at best a highly disputed one, thus fairly useless. I'm also certain that most people do not have the time and energy to discuss this endlessly, running in circles, especially as the "consensus" can be reverted at any time in any way, letting all previous effort go to waste. --NotSarenne 18:58, 3 November 2007 (UTC)
Since Wikipedia is descriptive and not prescriptive, your points are irrelevant red herrings. For the majority of articles here the correct binary units to use are KB, MB, GB, kilobyte, megabyte and gigabyte. Fnagaton 20:19, 3 November 2007 (UTC)

Definition of "failed standard": a standard which fails to get accepted by industry on any significant scale. This is in contrast to something like IPv6, which is a standard that is accepted but takes time to get implemented due to infrastructural changes. Another example of a failed standard is IEC 60617-12, interestingly initiated by the very same institute and also widely adopted by international standardization bodies including the German DIN. But for some reason industry did not like it, maybe due to the fact that it tried to solve a non-problem. 00:59, 4 November 2007 (UTC) (TheBug)

In what sense is the use of an ambiguous unit a "non-problem"? Thunderbird2 10:21, 4 November 2007 (UTC)
That's a rhetorical stunt. "Non-problem" refers to IEC 60617-12, in an attempt to discredit the IEC and DIN. It's about how to draw OR- and AND-gates in diagrams (or similar), which was potentially not any real problem (I'm not really familiar with the details of that case). I admit, it was a nice try. Too bad I've seen it being used before. :) --NotSarenne 12:18, 4 November 2007 (UTC)
Oh, I see. In that case I misunderstood. I mistakenly thought the discussion was about the mebibyte (and its ambiguous, more widely used counterpart, the megabyte). Thunderbird2 12:29, 4 November 2007 (UTC)

You asked what a failed standard is, I gave an example. We ARE on the topic of the binary prefixes which are heading to the dumpster, except here in Wikipedia. 22:40, 9 November 2007 (UTC) (TheBug)Reply[reply]

It was not I who asked about the failed standard (I think that was NotSarenne). The question I asked, which you have so far not addressed, was why anyone should adopt a standard (IEEE 100) that the IEEE itself no longer uses. Thunderbird2 08:02, 10 November 2007 (UTC)
This is not about adopting the IEEE 100 standard; this is about whether to follow the standardisation institutes in dropping it, or to do the same as >99% of the world does: ignore the mebibyte stuff. 00:32, 11 November 2007 (UTC) (TheBug)
That's all very well, but an encyclopaedia should strive to be unambiguous. Further, if 99% of the world causes confusion through ambiguity, it is the job of an encyclopaedia to explain the confusion and its cause. The relevance of IEEE 100 is that this is the closest you have come to offering an unambiguous alternative to IEC. Should I conclude that you prefer ambiguity to clarity? Thunderbird2 11:38, 11 November 2007 (UTC)
You should conclude that a standard that redefines (rather than replaces) a unit which is in widespread use should be ignored. At least >99% of the world has decided to do so. 00:52, 15 November 2007 (UTC) (TheBug)
Do you mean 99% of the world apart from the entire telecommunications industry and computer disk manufacturers? Thunderbird2 07:43, 15 November 2007 (UTC)
No, he means, he's right no matter what your arguments are. -- 22:41, 15 November 2007 (UTC)

IEC 60617-12 is a fairly ridiculous example. IPv6, just like megabytes, concerns experts and non-experts alike. Technical diagrams do not concern any consumers in the least. Just like with IPv6, though, the problem gets more and more severe the longer you wait. I apologize for bringing in an analogy (IPv6), because discussions based on comparisons almost always diverge from the topic and cause red herrings. Sorry for that. Regarding the failure, you might realize that more and more software and documentation is being updated to conform with the IEC standard for binary prefixes. At some point a critical mass will be reached and then almost everyone will quickly adopt it. It also seems you're just pissed off that the German Wikipedia has widely adopted them, based on the observation that you frequently resort to profanity regarding this issue. I'm pretty sure the IEC binary prefixes are never going to win the Google race, because nobody is going to update a 10-year-old home page and shouldn't have to. Just like we don't burn old books because they use outdated language or units, or contain politically incorrect content. Anyway, it's not like you have to use KiB, MiB, GiB in order to conform to the new standard. --NotSarenne 01:42, 4 November 2007 (UTC)
I've yet to see a single example of any software or documentation that uses these new prefixes. Maybe one or two really do exist as you claim, but I am most definitely not seeing any momentum building at all.  —CobraA1 06:09, 23 February 2008 (UTC)
IEC 60617-12 was just an example of what happens if standardization institutes try to force something on the industry. Binary prefixes are more than 8 years old and Wikipedia is about the only user. This is a failed standard. 22:35, 5 November 2007 (UTC) (TheBug)

Wow! I've been a computer programmer for almost two decades, and I never heard of this MiB business until this week when I saw what looked like a misspelling in a Wikipedia article. I thought that the writer wrote Gigabyte as GiB and then kept going to MiB for "Migabyte" in a funny accent. I've always gotten a bit disappointed each time I've been reminded that my spinning media didn't contain all the bytes that the package claimed to, and I kept realizing intuitively that it was a marketing ploy, but I never really sat and thought about it and realized that it had spread to all the spinning media sellers. Now they want to make it a "standard"? Well, if it ever becomes standard then fine, but meanwhile Wikipedia looks pretty silly using it. (There's something plain cutesy and icky about it, actually.) Eliezerh 03:03, 7 November 2007 (UTC)Reply[reply]

Unfortunately IEC charges $140 for their standard. I would love to take a look at the list of contributors, I have some suspicions about the involved parties. 00:36, 11 November 2007 (UTC) (TheBug)Reply[reply]

So, the Wikipedia is backing up a standard that is not available publicly, but rather requires payment for? No wonder I've never seen this standard outside the Wikipedia. Personally, I'd rather not encourage the use of a standard that most people won't bother to access.  —CobraA1 03:54, 19 March 2008 (UTC)
I'm an unregistered user so it will be hard for me to defend a point of view about MiB, MB, KiB, KB and the like. To be honest, I'm tempted to revert MiB to MB in every article I find...but it's pointless because I'm an unregistered user and there are others here who wish to push this MiB thing. This new standard will not solve any problem because the layman would still be confused and for those initiated in the field, it isn't hard to understand what MB means for a hard drive or for a memory chip.
As a matter of fact, the layman would still see on his hard drive something like 128GB... and in Windows (assuming MS will stick to this standard) a file size like 4KiB. And what would the layman think? Oh... the KiB thing is a spelling mistake... it looks funny!
And my hard drive has about 128 gigs. But for him, KiB and KB or MiB and MB would still be the same.
What kind of gigs? It doesn't matter to him. Indeed, the error between MiB and MB is only about 5%... The error between TiB and TB is about 10%. So where does this go for the layman? His drive still has about 128 gigs whether there are 128GB or 119.21GiB. In fact, only for drives in the vicinity of 1000TB will there be about 20% difference between TiB and TB and it will start to be really noticeable. 1000TB?? I mean, that's huge!
A "minor" correction, I've done the math wrong... the error will be around 20% for drives around 1 tera-TB (a thousand billion terabytes). For a 1000TB drive, the error is around 12.5%... not that much. And still, 1000TB for a drive is huge!!
The only units that should be redefined here are those used for measuring spinning disks/discs capacities and not the already de-facto standard binary powers!
It is strange that Wikipedia wants to endorse the use of this standard. Why isn't Wikipedia interested in imposing a standard only 4 years older than this one? Like the IEEE 1284 standard from 1994? A standard which recommends the use of 36-pin, half-pitch Centronics connectors for newer devices? Or that the SPP mode should be called Compatibility mode? I'm yet to see a new computer fitted with that mini-Centronics connector and a BIOS option that lets me select the Compatibility mode for my parallel port!
If a standard fails (because the industry doesn't care about it), then Wikipedia should not consider its use in its pages/articles! (Just as it doesn't care about the mini-Centronics connector and Compatibility mode.)
So please, leave this article here, but do not endorse the use of this standard! Apass (talk) 13:42, 10 April 2008 (UTC)
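The percentage gaps quoted in this thread are easy to check. The sketch below is purely illustrative (it assumes nothing beyond the standard prefix values, 2^(10n) bytes for the binary prefixes versus 10^(3n) for the decimal ones):

```python
# Relative size of each binary prefix over its decimal counterpart.
# The gap grows with each step: ~2.4% at kilo, ~4.9% at mega,
# ~10% at tera, ~12.6% at peta (i.e. the 1000TB range above).
for n, name in enumerate(["kilo", "mega", "giga", "tera", "peta", "exa"], 1):
    binary = 2 ** (10 * n)   # KiB, MiB, ... in bytes
    decimal = 10 ** (3 * n)  # kB, MB, ... in bytes
    print(f"{name}: +{(binary - decimal) / decimal:.1%}")
```

This bears out the correction above: the discrepancy stays under 5% at the megabyte level and only reaches roughly 12.6% around a petabyte (1000TB).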

The German Wikipedia just held a vote on the use of the binary prefixes. The result was 2 to 1 for using the traditional notation and applying it consistently in the binary meaning when amounts of data are addressed (data rates, e.g. MB/s, are decimal). In addition we designed a set of tooltips and footnotes that can be used to provide further information. --TheBug (talk) 08:48, 2 June 2008 (UTC)

This has to be resolved. It's embarrassing for the computer industry to have an essential quantity un-standardized. I certainly could do the job, but I don't have the authority. I say don't mix binary and decimal. What's wrong with powers of two? Nehmo (talk) 00:30, 4 March 2011 (UTC)


Okay, I get it, some of you are fanatically behind this, but PLEASE stop editing stuff all around Wikipedia to meet your agenda of forcing this mebibyte stuff on everyone. It's annoying.

I gotta agree; it's getting irritating seeing people mix _iB in with _B in an article, and most of the people around are not going to understand what _iB is, because so much of the standard is in rounded bytes. Kyprosサマ (talk) 20:31, 30 August 2008 (UTC)

Looks like a proposed standard

The intro says the unit is "standards based", and is "still" not widely accepted. I sense a contradiction there, as well as some (well-intentioned) POV-pushing.

I'm just as frustrated as anyone else with the discrepancy between 1,000,000 bytes vs. 1,048,576 bytes. But we're supposed to report the facts about the real world, not wishful thinking. And we shouldn't pretend that something is an industry standard when it hasn't caught on yet. --Uncle Ed (talk) 21:01, 6 November 2009 (UTC)

In the real world there is a difference between base-10 (decimal) and base-2 (binary) numbering. That is not wishful thinking. Some sizes are reported in base-2, and some in base-10, and some with a mixture. Drive capacities are mostly reported in base-10, whilst RAM was mostly reported in base-2. This is not POV pushing. That is how manufacturers of hardware and software are reporting sizes in the real world. You can never escape the difference between base-10 and base-2 (which is magnified as the numbers grow), and the nomenclature clarifies. 'Mega' in every other context except computing unambiguously refers to 10^6. 'Mebi', to me, is an efficient way of removing that ambiguity and un-polluting the 'Mega' namespace in cases where the reported size is binary. --Jargonautica (talk) 04:52, 1 August 2011 (UTC)

The only reason "decimal" megabytes/gigabytes have ever existed is so that they could put a higher number on disk capacities and connection bandwidths, to rip off consumers. I'm also pretty sure old disk drives in the '90s used true megabytes (powers of two) before someone had the dirty idea of using powers of ten to virtually increase their sizes. It's a scandal that Wikipedia and this so-called IEC standard, which means absolutely nothing, support this dirty practice. (talk) —Preceding undated comment added 11:53, 25 August 2011 (UTC).

WP is not the place to push your stupid ideas about sales rip-off conspiracies. Kbrose (talk) 18:58, 25 August 2011 (UTC)

"Mebibytes" are stupid

I have been a software engineer for well over ten years and have never heard the term "mebibyte" or any of these silly names except in things derived from Wikipedia. They are STUPID sounding. The answer is really simple: my preferred terms are decimal megabyte and binary megabyte when the distinction is important. A megabyte can be either 10^6 or 2^20. A decimal megabyte and a binary megabyte are... what they sound like, and may as well be abbreviated MB₁₀ and MB₂ respectively. Nothing ambiguous, and nothing inconsistent with the way these terms are used in real life. If I am reading "MiB" out loud, I say "binary megabyte". In situations where the subscript formatting is not possible (such as in a Linux command-line disk partitioning utility), then I believe it's reasonable to use "MiB" as an abbreviation for binary megabyte (sort of how "lbs" is read as "pounds" instead of "loobs"). Reswobslc (talk) 17:35, 17 April 2010 (UTC)

Perhaps you need to reeducate yourself once in a while on the job and keep your personal opinions elsewhere. Kbrose (talk) 19:01, 25 August 2011 (UTC)
Boy, talk about drive-by arrogance. Practice what you preach. What Reswobslc said, he stated concisely, with a proper level of detail. His "personal opinion", as you label it, is VALID and based on real-world experience, something that you perhaps do not understand. I assert that if a global survey were taken, less than 5% of engineering professionals would admit to pronouncing or saying "Mebi..." or "Gibi..." at any time, anywhere. WickWax (talk) 23:56, 2 July 2012 (UTC)
I have to agree on the whole "Mibby" garbage. It was a ploy by the hard drive manufacturers to get themselves out of trouble when consumers realised that they weren't getting what they paid for. We have no business using this crap. Dr. Robert Sedgewick, the founding chair of the Department of Computer Science at Princeton University, and the author of the nearly universal textbook Algorithms, in a lecture video, defines a KILOBYTE as "one thousand twenty-four bytes". He had nothing to say about "kibibytes" or other such nonsense. The lecture video is on Coursera in the class "Algorithms II", but I'm not sure whether or not you need a (free) login to get to the video. At any rate, I think Dr. Sedgewick knows what he's talking about. I'll trust him over a bunch of shady hard drive manufacturers any day. That's my $0.02 worth. — Preceding unsigned comment added by (talk) 13:11, 14 April 2013 (UTC)
I insist on NOT using these bibble-words. The directly stated unit is the BYTE, a binary form, so any inference of decimal is wrong. No matter how we indicate this, it's up to us to do the maths and not be confused by claims from the sellers of hard drives. As to the prefix K being confused with Kelvin, a clearer way out is to use an uppercase prefix for greater than unity, lowercase for less, never mind Kelvin. He won't mind; he's bigger than this, and he's dead. The context makes it clear. If there is any ambiguity in reading KB as 'Kelvin Byte', which one writer showed to be meaningless, the ambiguity would be removed if maths started to be written K*B where such a thing had any meaning at all. After all, maths writes + and - and /; the archaic obsession of omitting the multiplication operator and writing expressions in florid ways that cannot easily be converted to inline computer code is itself a far bigger risk of ambiguity. If people really want to mess with accepted standards, they'd do better to address realistic problems and ease the transitions between engineering fields instead of obfuscating them with garbage like MiB, which was inflicted on us by an effort to clear up a mess made by marketeers and others who would deliberately confuse themselves and other people rather than make it easier to get at the truth. This nonsense is a symptom of a major decline in the standards of computer and internet engineering. We need fewer arbitrary conventions, not more! Why else did Einstein seek to express so much in so simple a way? We should all be trying to do that. (talk) 16:58, 5 December 2015 (UTC)

How about this idea

While the MiB is not in common usage, and I expect never will be, it is worth considering for a moment:

(Using the usual approximations)

  • 1 KB = 1.024×10^3 bytes
  • 1 MB = 1.049×10^6 bytes
  • 1 GB = 1.074×10^9 bytes
  • 1 TB = 1.100×10^12 bytes

so the percentage errors are increasing as memory sizes increase. While a couple of percent could be ignored, the bigger errors are larger than the margins on some hardware devices, and disk drives already quote capacities in 10^12 bytes. The sizes are reverting to standard scientific notation. So this WP:neologism will probably be forgotten before long. Stephen B Streater (talk) 23:08, 19 April 2010 (UTC)
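The approximations in the list above can be reproduced mechanically. This is again just an illustrative sketch, assuming only the standard prefix values:

```python
# Express each binary size (2^(10n) bytes) in the decimal scientific
# notation used in the list above.
for n, unit in enumerate(["KB", "MB", "GB", "TB"], 1):
    mantissa = 2 ** (10 * n) / 10 ** (3 * n)
    print(f"1 {unit} = {mantissa:.3f}x10^{3 * n} bytes")
```

The mantissa itself shows the growing gap: 1.024 at the kilo level rising to 1.100 at the tera level.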

So let's get this straight: 1 MiB = 1 MB, correct?

The difference is just the name, right? -- Alexey Topol (talk) 13:47, 15 August 2010 (UTC)

"MB" is ambiguous. It can = either 1000 bytes or 1024 bytes. MiB always = 1024 bytes. --Cybercobra (talk) 19:11, 15 August 2010 (UTC)
No it's not. Which part of BYTE means BINARY do you not understand? MB always means 1*1024*1024 bytes, whatever the sellers of hard drives want you to think. (talk) 17:04, 5 December 2015 (UTC)
Cybercobra, shame on ya. MiB = 2^20, not 2^10... (talk) 19:51, 17 January 2012 (UTC)

Mentioned in TAOCP or just on Knuth's website?

In the article, it says that Knuth discusses the mebibyte in TAOCP. There is a reference provided to one of Knuth's "recent news" pages where he discusses the mebibyte, but I don't think he ever mentions it in TAOCP. Does anybody know where in TAOCP he discusses this? — Preceding unsigned comment added by (talk) 01:11, 26 March 2012 (UTC)

Weasel Words

The article currently states: "Disk drive manufacturers generally use megabyte correctly to mean 1,000,000 bytes". Correctly? Re-defining a unit that has been understood clearly for 40+ years does not make the new use "correct". This should be re-phrased as "...use megabyte according to the IEC definition of 1,000,000 bytes". Mrstonky (talk) 23:29, 11 March 2014 (UTC)

Not Weasel Words

"Re-defining a unit"? 1st, mega is a count and byte is a count, so megabyte is a count. 2nd, mega means 10 to the 6th power, not 2 to the 20th power. A megawatt is 1 million watts, not 1 048 576 watts. You are claiming that "megabyte" is a unitary thing, but it ain't so. Calling 1 048 576 bytes a megabyte was a big mistake made by thoughtless nerds. --MarkFilipak (talk) 05:21, 5 June 2014 (UTC)

The prefix "mega" does represent 10^6 in the SI system of units; however, in the context of computing, the term refers to 2^20. Note that the byte is not an SI unit, and that SI is entirely unrelated to the measurement of data sizes: the entire conversation here is outside the scope of SI. The names of the SI prefixes were re-purposed many years ago and applied, with different meanings, to a non-SI context, which is why we also have prefixes such as "kilo", "mega", etc., with different meanings, in the field of computing. The use of these prefixes to denote binary quantities has become established usage in computing, and is therefore the definitionally correct usage: contrary to what others have said above, established patterns of common usage do indeed determine correctness in questions of convention and notation. When software - and Wikipedia articles - attempt to deviate from established standard usage (irrespective of what formal documents are published by standards bodies), it is they who generate confusion and difficulty. In computing, the empirically-observable standard is K=1024, M=1024^2, etc., and deviation from this usage is incorrect. (talk) 05:48, 28 June 2014 (UTC)
"The horror, the horror..." How can self-proclaimed "intelligent" human beings be so infinitely ignorant? Citing kilo: "The prefix kilo is derived from the Greek word χίλιοι (chilioi), meaning "thousand". It was originally adopted by Antoine Lavoisier's research group in 1795, and introduced into the metric system in France with its establishment in 1799."

The word or prefix "kilo" means exactly 1000 decimally in every single context and for every single technical unit there is. What's wrong with people that they cannot admit that using kilo as short for any number but 1000 is an absolute mistake? How dare you belittle, offend and even threaten anyone telling the simple, obvious, undeniable truth? In fact, the people who complain the loudest about the most sensible unit prefixes Kibi, Mebi, Gibi are clearly those who know the least about computing, computing history and especially anything beyond that. They don't realize that even transfer rates have always been specified using decimal notation, e.g. a throughput of 800 Mbit/s has always meant 800 million bits per second no matter what kind of memory (if any) was involved. Just because powers of 1024 are common and useful in computing doesn't mean anyone ever had the right to abuse and redefine common, widely understood, known and accepted prefixes. The bad guys aren't actually those who started this in the 1960s with a harmless K as short for 1024 in casual, informal use. The actual culprits are those millions of pseudo-experts and programming wannabes who still to this day are unwilling to put things right. Obviously, it's exactly this kind of lousy attitude that leads to the sorry state of software quality - or rather the lack thereof. It's all because the responsible people are unwilling to learn from mistakes - their own as well as others' - and fix them. --2001:5C0:1400:A:0:0:0:CB (talk) 02:49, 17 January 2015 (UTC)

Good reply! You're right and we need to relearn some things that have caused confusion. The misunderstanding has been wide, culturally. However, I have to admit that "kibi", "mebi" and "gibi" sound stupid. We need them but I wish they had come up with better terms. Also, I think manufacturers took advantage of the misunderstanding (or informal use of the terms). (talk) 22:27, 19 July 2015 (UTC)
Why do kibi, mebi, et al. sound more stupid than giga, yotta, or others? They are good choices, because they immediately convey the equivalent level of exponentiation as the metric versions, in addition to identifying the binary nature. If one knows the decimal prefixes, one automatically knows the binary ones. This idea about manufacturers is another misconception; no one has to gain from confusion. Kbrose (talk) 22:44, 19 July 2015 (UTC)
For once I find myself disagreeing with Kbrose. I do agree with the first part of his (or her) message: There is nothing silly about wishing to be precise, and the main reason why people think "mebi" & co sound silly is that those who wish to ridicule them use silly voices when they say them. Where I take issue with Kbrose is with the assertion "no one has to gain from confusion". The big winners from the confusion are those who shroud their products in 'MB mystery' in order to sound superior: their goal seems to be to perpetuate the confusion so that consumers continue to rely on their magic. So far they have been (very) successful. (Though in the long term they will make themselves look foolish). Dondervogel 2 (talk) 05:52, 20 July 2015 (UTC)Reply[reply]
Well, if you call that a gain, then I agree with you. Kbrose (talk) 13:08, 20 July 2015 (UTC)

Unit symbol in lead

Per WP:BRD, I'm bringing a recent revert by Kbrose to the floor. Typically, we place common abbreviations in the opening sentence per MOS:LEAD (specifically MOS:BOLDSYN). The edit summary provided does not seem to justify the reversion. Furthermore, I checked WP:WikiProject Measurement for any style guidelines that might have been provided by the WikiProject. I found nothing pertaining to Kbrose's justification. --GoneIn60 (talk) 04:29, 22 July 2015 (UTC)

For me the issue is one of harmony with kibibyte, gibibyte, etc. Most (possibly all) of those have a separate sentence at the end, and on those grounds I prefer the lede as it reads now, also with a separate sentence at the end. If those other articles had all included "(unit symbol KiB)", "(unit symbol GiB)", etc., parenthesised in the opening sentence, my preference would be reversed. Dondervogel 2 (talk) 10:03, 22 July 2015 (UTC)
I appreciate the feedback. I can certainly understand where you're coming from. Uniformity is nice. The problem with that view, however, is that it's not exactly uniform across the scope of the WikiProject. There are other articles, such as kilogram, pound, ampere, and watt (just to name a few) that are formatted differently. In fact, they all agree with the example at MOS:BOLDSYN. The kilogram article has even been peer-reviewed on its way to achieving good article status. While your point makes sense, I think it only takes a smaller subset of measurement articles into account. The project as a whole seems to disagree, as well as other articles that have abbreviations across all of Wikipedia. There are situations when it's necessary to mention the abbreviation later in the lead, but those should be rare exceptions. --GoneIn60 (talk) 12:45, 22 July 2015 (UTC)Reply[reply]
Unfortunately, you are right in your observation about the poor style of many articles. Many "good" articles aren't. Looking at kilobyte, it is frankly atrocious, dominated by a few sockpuppet editors. Just try to actually improve that article, you shall find out. Kbrose (talk) 12:59, 22 July 2015 (UTC)Reply[reply]

The lede of an article should provide a clear and uncluttered summary of the article without the ballast of details. Your version introduces a style of clustering, overloading the first sentence with information that is irrelevant in the first moments of introducing a topic. Here you introduced the notion of a unit symbol before it was even established that the topic is a unit of measurement. Articles, and therefore the lede, should not be a random accumulation of stuff, but follow some logical or educational construction. Too often I find articles where I have to search for the end of all the qualifications, alternate forms, pronunciations, translations, and what not before I can understand what the sentence actually tells me. WP is not a dictionary or short-form encyclopedia, but uses full-article format. There is plenty of space for development of a topic. Kbrose (talk) 12:55, 22 July 2015 (UTC)

@Kbrose: Thank you, but I'm well aware of the lead's purpose and structure. The view that placing abbreviations in parentheses in the opening line adds unnecessary clutter is at odds with Wikipedia's Manual of Style. You are certainly entitled to your opinion, but this is not the place to take a stand against it. If you wish to state your case and have the Manual of Style changed, start a discussion at its talk page. My efforts here adhere to the guidelines that have already been established. --GoneIn60 (talk) 14:42, 22 July 2015 (UTC)
Where in the MOS does it require a unit's symbol to appear in the opening line? Dondervogel 2 (talk) 17:40, 22 July 2015 (UTC)
Including just an abbreviation of the topic name, without greater discourse in prose, appears appropriate when commonly recognized, and I have included that in many articles of technical nature myself to avoid repetition of technical terms in the lede and article. However, a unit symbol is not an abbreviation, and should not be used self-standing. Kbrose (talk) 18:00, 22 July 2015 (UTC)Reply[reply]
A unit symbol is not an abbreviation? Really? That sounds like a game of semantics. I would certainly contend that it is in fact a form of abbreviation, which is defined as "a shortened form of a word or name that is used in place of the full word or name" by Merriam-Webster. Another source for the definition clearly lists lb as an example of an abbreviation. So I'm not sure what kind of point you're trying to make with that comment, but it's not helping your case. Furthermore, your comments seem to be ignoring my intentions. I am not trying to have an abbreviation be "self-standing". I am simply trying to insert it in either parentheses or prose in the opening line. These are my proposals:

1. The mebibyte, represented by the unit symbol MiB, is a multiple of the unit byte for digital information.

2. The mebibyte (MiB) is a multiple of the unit byte for digital information.

Neither proposal is putting the unit symbol out of context or making it appear self-standing. Both are perfectly acceptable according to the examples listed at MOS:BOLDTITLE and MOS:BOLDSYN. In fact, the example for sodium hydroxide at those links clearly shows the compound's abbreviation appearing in the opening line in parentheses: "Sodium hydroxide (NaOH) also known as lye and caustic soda, is an inorganic compound."
Given all the examples I've provided, I just don't understand why this is being so adamantly opposed. The explanations provided so far are only causing more confusion. --GoneIn60 (talk) 19:26, 22 July 2015 (UTC)Reply[reply]
I agree with Kbrose that a symbol is not an abbreviation. The issue, therefore, is whether a symbol belongs at the beginning or the end of the lede. It seems that the MOS is silent on the matter, and that is a weakness of the MOS. For this reason, wouldn't it be better to hold this discussion (about where a unit symbol belongs) at the MOS? Dondervogel 2 (talk) 19:46, 22 July 2015 (UTC)Reply[reply]
Any chance you'll explain why you agree? Also, you've ignored the sodium hydroxide example as if it was never provided from the MOS. It is truly a game of semantics if you are implying that abbreviations and unit symbols are unalike, and therefore should be treated differently. I'll wait to see how Kbrose feels before deciding what the next step will be. Hopefully other editors that frequent articles like this will weigh in as well. --GoneIn60 (talk) 19:56, 22 July 2015 (UTC)Reply[reply]
Yes it is semantics. No, it is definitely not a game. My reason for agreeing with Kbrose is that the mebibyte, along with many other units, is part of the International System of Quantities (ISQ), as defined by ISO/IEC 80000. The ISQ makes a clear distinction between symbols and abbreviations, and for good reason (for example, to enable a clear and unambiguous expression of the nature and value of physical quantities). I do not mean to imply that the existing MOS rule should not apply to symbols - only that it does not apply as presently worded. The solution? Write not so that you can be understood. Write so that you cannot be misunderstood. Dondervogel 2 (talk) 20:09, 22 July 2015 (UTC)Reply[reply]
It's fine if there's a technical reason to avoid calling unit symbols abbreviations. I can understand that in some cases the symbol is not a shortened form of the word. It is merely a symbol. I get that. However, I don't think there's any reason to treat them differently from other symbols. If we accept for a moment that there's a significant difference between symbols and abbreviations, then we can agree that NaOH is clearly a symbol, not an abbreviation. The cited example in the MOS allows the symbol to appear in parentheses in the opening line. Therefore, it would be reasonable to conclude that the MOS supports at least one of the proposals. --GoneIn60 (talk) 20:37, 22 July 2015 (UTC)
I agree that NaOH is a symbol. If that example is given one can conclude, as you say, that including a symbol in the opening line is permitted, but not that it is required. I still think that the solution is to clarify the intended meaning of the MOS and reword it accordingly. Dondervogel 2 (talk) 21:24, 22 July 2015 (UTC)Reply[reply]

We're making progress. Part of the argument above seemed to claim that it wasn't acceptable, but at least now we've recognized that it is. If we look at WP:MOS#Units of measurement, it states, "Potentially unfamiliar unit symbols should be introduced parenthetically at their first occurrence in the article, with the full name given first." Why should we treat it any differently here? I think it's a stretch to say this doesn't apply to articles that are about the unit of measurement in question. --GoneIn60 (talk) 14:33, 23 July 2015 (UTC)Reply[reply]

This is a completely different circumstance. This is for the situation, as quoted, when a quantity value is cited with a potentially unfamiliar unit symbol in running text. Here, the lede develops the concept of the unit, not merely uses it. What exactly is your objection to the present text and the logical development of the topic?
BTW, NaOH is not a symbol. It is a chemical formula. The elements have chemical symbols (Na, O, H), but combinations of them are formulas. Kbrose (talk) 16:06, 23 July 2015 (UTC)
(ec)@GoneIn60: I think it is reasonable to argue exactly that. The reason it is needed when faced with an unfamiliar symbol is to avoid a WTF reaction, which I have recently heard referred to as the "principle of least unfamiliarity" or some such. The purpose of the MOSNUM requirement you mention is to avoid the reader encountering an unfamiliar unit symbol without first being introduced to its name. There is absolutely no risk of that in an article about the unit, because the first thing the reader encounters is the name, and therefore no need to follow the unit name with its symbol (though there is also no need not to, if you see what I mean).
@Kbrose:OK, so I stand corrected. (Is it only the elements that have symbols then, and not compounds?)
@both: Either way, I believe this thread exposes an ambiguity in MOSNUM. Dondervogel 2 (talk) 16:18, 23 July 2015 (UTC)Reply[reply]
Whether or not it is an acronym, abbreviation, contraction, initialism, symbol, chemical formula, etc., really doesn't matter. It is practically instinct in writing to place it in parenthesis next to the first occurrence of its expanded form. The failure to do so is a deviation from the norm, even in an article that is covering the term itself. While it's true that it's not a requirement and reads perfectly fine the way it is now, I think you'll find future visitors to this article (and others like it) making similar attempts to place it in the opening line. It is a natural tendency to move the bolded terms close to each other, evidenced by the way such terms are handled in so many other articles. In the grand scheme of things, it's not a large enough concern for me to pursue any further, so with that, I'll gracefully bow out. If one of you decide to address this later at a MOS talk page, feel free to ping me if you'd like my input. I will do the same. --GoneIn60 (talk) 19:40, 23 July 2015 (UTC)Reply[reply]
@GoneIn60: So far I have avoided expressing a preference between the various options, other than for harmony across related articles. The main reason for that reluctance is that I had not really thought about the issue before you raised it, and don't really feel strongly about it either way. Now that I have thought about it, I am leaning towards the form "The mebibyte (symbol MiB) is ...". Dondervogel 2 (talk) 23:00, 23 July 2015 (UTC)
I'm not sure the disambiguating term symbol is needed, but it would be a welcome improvement from my perspective. --GoneIn60 (talk) 17:01, 24 July 2015 (UTC)
@GoneIn60: To be acceptable, the change would need to be applied uniformly to kibibyte and all the other mebi-cousins.
@Kbrose: Would you object to this change, assuming it were applied uniformly? The benefit that I see is that it gives the same message in fewer words, which I think makes it clearer. Dondervogel 2 (talk) 17:13, 24 July 2015 (UTC)Reply[reply]
Kbrose, looks like we're still waiting for your closing thoughts on the matter. --GoneIn60 (talk) 19:27, 10 December 2015 (UTC)

Do any Linux distributions not use the standard?

It might be more than "many" distros that use it. I'd be interested to know about the ones that don't. Also interesting is where the trend began and what made it catch on within the Linux community. There is perhaps some correspondence to Firefox's support for web standards in the face of IE's disregard for them, where adhering to standards becomes a point of principle, pride, and identity in the open source movement (except, for some reason, the recent ANSI C standards such as C99 and C11).  Card Zero  (talk) 17:08, 25 January 2016 (UTC)

To me it seems more a question of principle, pride, and identity by Microsoft in flouting the standard, just because they can. In terms of the history of when, where and why, have you tried the timeline article? Dondervogel 2 (talk) 17:21, 25 January 2016 (UTC)
For a Microsoft perspective, Raymond Chen says "If Explorer were to switch to the term kibibyte, it would merely be showing users information in a form they cannot understand, and for what purpose?"
Regarding the timeline (fun article):
  • In the 80s and 90s (up to '98), MB is used in a universally binary sense except by hard drive manufacturers, for obvious marketing reasons.
  • A computer science paper in 1999 uses "mebibytes". Another one in 2001 uses KiB, MiB, and GiB.
  • The Linux kernel in 2001 begins reporting some sizes in KiB. Why? This is the context of a comment from Eric S. Raymond. After noting that "kibibyte" is an ugly word reminiscent of kibble, he says "best practice in editing a technical or standards document is to (a) avoid ambiguous usages, seek clarity and precision; and (b) to use, follow and reference international standards" and that his duty is clear, although he doesn't like it.
Now, what exactly is going on in that decision? It positions Linux as technical, precise, and responsible while making other operating systems (mainly MS at this point in time) look ad-hoc and erratic. (I'm not sure any system, faced with the problem of cat herding, can realistically aspire to avoid being ad-hoc in practical reality.) Did it solve any problem? ESR mentions that "it's a pretty quiet week [...] otherwise people would have better things to do than argue over KB vs. KiB". If the standard becomes popular, hard drive manufacturers who previously misled us by measuring exactly one thousand kilobytes as a megabyte will switch to misleading us by doing the exact same thing under the authority of an international standard. Computer scientists might be made happier by the unambiguous precision of "MiB", but exactly why it's important to them that sizes are reported this way within their operating system, rather than within scientific papers, is unclear. One might think computer scientists are the most likely to be aware of the ambiguity hazard, if it even matters.
  • 2000s: MiB is used by some computer scientists, MathML, and a BitTorrent client (open-source). Then in 2009 by Apple to "avoid conflict with hard drive manufacturers' specifications". Since OSX is at this point in time Unix-based, and glamorously expensive, the decision may be influential.
  • 2010: Ubuntu adopts it. Then, rather surprisingly, Hewlett-Packard.
  • Subsequently: the IBM style guide says "explain to the user your adopted system". Toshiba uses footnotes to clarify. An online computer dictionary defines the megabyte as one million bytes, but normal dictionaries such as Merriam-Webster and Chambers 21st Century define it as 1,048,576.
It looks like the change has increased ambiguity. Drive manufacturers are now technically correct in their unchanged specifications, which must have made Apple's lawyers happier. Everybody else has to disambiguate. Those who want to make a display of being very technical have a means of doing so, but everybody else has to join in with this technical precision by using new words that sound like "kibble", which might be misunderstood, or risk being misunderstood the other way because the new words exist, where prior to the standard the customary 2^N meanings were well known and only drive manufacturers used different meanings, on purpose. So perhaps we could blame the drive manufacturers for being disingenuous, computer scientists for being anal, Linux for being pretentious, or Apple for being afraid of lawsuits, but I don't think MS were complicit through inaction; I think Chen's claim that Explorer is "just following existing practice" is valid.  Card Zero  (talk) 19:48, 25 January 2016 (UTC)