Talk:Data rate units

From Wikipedia, the free encyclopedia

What is this?

Seriously, what. Who has the authority to retroactively redefine a standard that's been in place for 50+ years? A Kilobyte has ALWAYS been 1024 bytes and a Kilobit has ALWAYS been 1000 bits. Adding a third corrupt (and ridiculous sounding) standard to the mix just because hard drive manufacturers are getting sued because they wrongly define their drives in Megabytes instead of Megabits isn't a reason to change the entire basis of space measurement in computing. Hard drive manufacturers need to change, the world doesn't revolve around them.

Most operating systems use 1 kbyte = 1024 bytes and 1 kbit = 1000 bits, and have been doing so forever. I surely hope that people don't start catering to drive manufacturers just because they try to corrupt a standard. Ggigabitem (talk) 09:56, 21 September 2009 (UTC)

Here is another perspective: "kilo" has always meant 1,000. Then some computer people start using "kilo" for 1,024 and that is just incorrect and they should not use it incorrectly, the world does not revolve around them. - (talk) 22:15, 24 July 2011 (UTC)

This article is utterly WRONG. It's a shame that no serious computer scientist or mathematician comes in to put things straight. Esquierman (talk) 12:43, 21 May 2010 (UTC)

Take a Pause

The whole world knows that there are two predominant counting systems we have to contend with: the decimal and the binary systems. Everyone also knows that disk drive manufacturers cheat; these manufacturers know that binary bytes do not fit into 'decimal' drive space. But this is turning into a silly debate! Some people are even turning it into a techno-jargon discussion of 'start bits/stop bits/parity/bits per byte'... come on! Stop all this silly debate. Surely, when we talk of TRANSMITTING or STORING DATA, we should be using the binary system? And whether we computer guys like it or not, 'they' (the International Electrotechnical Commission (IEC)), a standards organisation which is supported by just about every country in the world that matters, renamed our "50-year-old" computer jargon from Mega to Mebi, etc. Ggigabitem, you will be surprised (and maybe saddened, like I was) to find out that a Kilobyte (1024 bytes) is now called a Kibibyte. Go look it up. That is a good start to this article. The standards bodies have spoken! Fix your terminology and there will be no confusion! -- (talk) 06:54, 30 November 2010 (UTC)

The IEC is not in charge of setting the standards; that task is the responsibility of the IAB and the IESG.

Regardless of how we personally view the debate, there is not a current official standard. Popular convention is to use a small 'b' for bit, and a large 'B' for byte, to refer to storage capacity in binary and transfer rates in decimal. And despite being "adopted" in '99, I have yet to see anybody in any industry using the new terminology.

As for which is the 'better' way to go, that's not a debate for Wikipedia to resolve. The section on the 'new' versions should be amended to reflect reality: that there is no formally adopted standard, and that this alternate version is not in widespread use regardless of what the Standards say.

If someone cares to supply an RFC showing otherwise, or provides a valid reason why the IEC should preempt the normal standards adoption process, then I'll withdraw my request. (talk) 13:23, 3 January 2011 (UTC)

Page creation

This page was created to merge all the articles like kilobit per second and mebibit per second. A merge was discussed a few times on pages like Talk:Bit_rate#Merge_of_bit.2Fs_articles and Talk:Kilobit_per_second#Merge_into_Bit_rate.3F and was supported, but no one got around to it.

I think it's important that, no matter how we arrange this page, clicking a link like kilobit per second should take you directly to a definition of the unit (through a redirect to an anchor link in the merged page). Whether that anchor link is a section header or part of a table, though, is up for discussion. — Omegatron 02:07, 22 December 2007 (UTC)


The page currently seems to suggest using b/s as an abbreviation for bit/s. That is a serious mistake, as it invites confusion with Byte/s. The case difference is not enough in practice. The most usual way to give data rates (bit rates) is bit/s, kbit/s, Mbit/s, etc. (or bps, kbps, Mbps, for those who would still rather write cps instead of Hz). This avoids any confusion with bytes and makes it clear that k, M, G are SI prefixes and not multiples of 1024.

This needs to be fixed.

MahatmaWatcher (talk) 19:24, 6 July 2009 (UTC)

SI vs. CS unit confusion

I think this is a great start. I've avoided past discussion because there were too many articles involved.

As someone with a TRANSMISSION background, I have always seen transmission system rates discussed in some form of "decimal prefix bits" / second - never bytes, never binary prefixes. Also, rates always referred to the "line rate" as opposed to the "data transfer rate", when talking about the transmission system. When talking about COMPUTER SYSTEM data transfer rates, I understand and agree with the use of bytes and binary prefixes. This perspective makes comments like "Bytes are typically used in modern systems" and "1536k T1 — 1,536,000 bit/s (1.536 Mbit/s)" wrong. In the former case, bits are still predominantly (exclusively?) used in modern systems and in the latter case, the correct T1 rate is 1.544 Mbit/s. Does anyone else share this line of thought? Bellhead (talk) 02:32, 24 December 2007 (UTC)

Yes. "Mbit/s" always means 1,000,000 bits per second. It is not subject to the computer science ambiguity.
File transfer, though, is often in bytes; KB/s or whatever, which is ambiguous and means different things in different contexts.
To my knowledge, no one ever uses Kibit/s (or "binary kilobits per second") for anything. Only byte rates are expressed as binary multiples. — Omegatron 01:20, 29 December 2007 (UTC)
I noticed that the article says 'when a "1 Meg" connection is advertised, it usually means 1 Mib/s', but it seems to me that this would usually mean 1 Mbit/s (1000x1000 bits/sec), based on the comments above. -- (talk) 18:31, 1 April 2009 (UTC)

I noticed that the VHS/DVD/HDTV data rates in the megabit per second section were in Mbyte/s, different to the Mbit/s used in the surrounding text. I checked the data rates for DVD here [1], and found DVD to be 8 Mbit/s as opposed to the 8 Mbyte/s in the text. Also, for HDTV I found a data rate of 27 Mbit/s here [2]. With VHS I wasn't sure; isn't it an analog format? —Preceding unsigned comment added by (talk) 05:00, 7 January 2008 (UTC)

I suspect that the purpose of this page is to clarify terminology regarding data rate units. Now, it's a nice little theoretical explanation, which I would be happy to support by advocating for it, if I were entitled to. But as long as this theory is not applied/enforced by whomever it may concern (software developers, hardware manufacturers, vendors etc.) then the information on this page only creates confusion. It's actually telling people something that is not true.
An example: let's say we use a download client that tells us it's currently downloading at 653 kB/s. I bet that most people who have read this page believe the client is referring to 653,000 bytes per second. My suspicion is that most developers are actually still using the binary definitions. This should be clarified. I'm thinking the first part of the page should be called something like "transition period", explaining how some software developers are still using the binary definitions, and maybe a section below with a small list of famous companies that have adopted the new standard? Thoughts? — MahatmaWatcher (talk) 09:49, 7 July 2009 (UTC)

I really dislike having the first subheading read "Avoiding confusion." This seems "preachy" and unhelpful in describing the contents of the section. I say "preachy" because it can be construed to label dissenters as "confused." Because there is disagreement on the meanings of the various terms, this section should be prefaced with a statement attributing these definitions to some standards body (as well as real citations): "According to ANSI/IEEE/whatever a kilobit is...". A section discussing controversy or ambiguity should be pushed further down the article so that we can begin with the most factual, undisputed information. --Ilikeimac (talk) 17:32, 13 January 2010 (UTC)


Instead of listing examples here, should we just link to List of device bandwidths? —Preceding unsigned comment added by Omegatron test account (talkcontribs) 06:44, 17 January 2008 (UTC)

I agree; in fact, this whole article is in serious need of cleanup. This article really needs to be simplified. I wish I had more experience editing wikis, or I'd do it myself. —Preceding unsigned comment added by (talk) 18:41, 2 September 2008 (UTC)

I did a cleanup; I hope the current version is okay. I agree that the List of device bandwidths should be the primary example list, but that is for physical devices only, while this page is about 'abstract' units. Data rates are also used in computing, e.g. MS SQL Server reports the processing speed of a query in MB/s [3], and MD5 hashing performance is also measured in MB/s [4]. These would definitely not fit into List of device bandwidths, and I think we should add a few of these examples here in the future. --Kuteni (talk) 15:47, 18 May 2009 (UTC)

Megabit per second

This section currently describes 8 Megabits per second as equal to 8 Megabytes per second. I'm wondering if that should be so, because this other page says:

1 MBps (megabyte per second) = 8 mbps (megabits per second)

Which is a ratio of 8 Megabits (Mb) to 1 Megabyte (MB), which I would expect, since 8 bits = 1 byte. In other words, whatever a Megabyte per second is, a Megabit per second should equal a Megabyte divided by 8, shouldn't it? If that's so, 1 Megabyte being 1,024,000 bytes, that divided by 8 should equal 128,000 bytes, so that 1 Megabit per second = 128,000 bytes per second. Yes?

An admitted novice here, but unless I'm missing something, the figures in the article don't line up. --Narfnarfsillywilly (talk) 16:34, 16 September 2008 (UTC)

1 Megabyte is 1,000,000 bytes; divided by 8, that is 125,000 bytes. TechControl (talk) 01:58, 23 January 2009 (UTC)

TechControl, what are you doing? This is obviously wrong. A megabyte is 1024 * 1024 bytes (megabyte is binary, not decimal, no matter what this page says!), which equals 1,048,576; multiplied by 8, that equals 8,388,608 bits. NarfNarf, you got the number of bytes per megabyte wrong, and divided by 8 to find bits, where you should be multiplying by 8, as each byte has 8 bits; it's not 8 bytes summing to make 1 bit! (talk) 11:04, 7 November 2009 (UTC)
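The arithmetic being argued over in this thread can be checked with a short Python sketch. This is only an illustration of the two conventions the commenters are using (SI decimal vs. binary); the variable names are mine, not from the article:

```python
BITS_PER_BYTE = 8

# Decimal (SI) convention: 1 megabyte = 10**6 bytes
MB = 10**6
# Binary convention: 1 mebibyte = 2**20 bytes
MiB = 2**20

# 1 MB/s expressed as a bit count divided by 8, per TechControl's reply:
print(MB // BITS_PER_BYTE)       # 125000 bytes in 1 Mbit
# The binary megabyte the anonymous reply insists on:
print(MiB)                       # 1048576 bytes
print(MiB * BITS_PER_BYTE)       # 8388608 bits
```

Under the SI convention 1 Mbit/s is 125,000 B/s; under the binary convention a "megabyte" carries 8,388,608 bits. Both numbers in the thread are internally consistent; the disagreement is only over which definition of "mega" applies.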

Conversion formulas

There seems to be a mix-up in the conversion formulas, in "kbps -> MBpm". "MBpm" isn't previously mentioned, but "MBps" is, so I assume it should differ only by a factor of 60. The page says: "A kilobit per second (kbit/s or kb/s or kbps) is a unit of data transfer rate equal to 1,000 bits per second", and "A Megabyte per second (MB/s or MBps) is a unit of data transfer rate equal to 1,000,000 bytes per second".

But then in the formula, it says: To convert between common denotations: kbps -> MBpm ((((n * 1000) / 8) / 1024) / 1024) * 60 = m

To be consistent with the definitions above, the two 1024's should be 1000, right? - (talk) 17:30, 29 October 2008 (UTC)


Raw data for a conversion table:

bit per second;b/s;1;1/8;
byte per second;B/s;8;1;
kilobit per second;kb/s;1000 = 10^3;125 = 10^3/8;
Kibibit per second;Kib/s;1024 = 2^10;128 = 2^10/8;
kilobyte per second;kB/s;8000 = 8*10^3;1000 = 10^3;
Kibibyte per second;KiB/s;8192 = 8*2^10;1024 = 2^10;

TechControl (talk) 02:03, 23 January 2009 (UTC)
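TechControl's raw data can be sanity-checked mechanically. A minimal Python sketch (the dictionary layout and names are mine, for illustration only), listing each unit's value in bits per second and deriving the bytes-per-second column by dividing by 8:

```python
# bits per second represented by one of each unit
units = {
    "bit/s":   1,
    "B/s":     8,
    "kbit/s":  1000,   # 10**3, SI prefix
    "Kibit/s": 1024,   # 2**10, binary prefix
    "kB/s":    8000,   # 8 * 10**3
    "KiB/s":   8192,   # 8 * 2**10
}

for name, bits in units.items():
    # the second column of the raw data is simply bits / 8
    print(f"{name:8} = {bits:5} bit/s = {bits / 8:7} B/s")
```

Note that the kilo- rows work out only with 10^3, which is why the exponents in the table read 10^3 rather than 10^2.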

Conversion formulas

I removed these sections from the page. The expanded conversion table covers this first table, IMO. However, adding some practical "how long will my media download take over my broadband" example calculations might be useful to readers (something like this second table, with proper formatting). --Kuteni (talk) 13:17, 16 May 2009 (UTC)

To convert between common denotations, the following formulas are used.

kb/s → KiB/s ((n * 1000) / 8) / 1024 = m
kb/s → MB/m ((((n * 1000) / 8) / 1024) / 1024) * 60 = m
kb/s → MB/h (((((n * 1000) / 8) / 1024) / 1024) * 60) * 60 = m

The following table shows how much data would theoretically be downloaded when running such a stream in some common denotations.

kb/s    50.00   150.00   139.81
KiB/s    6.10    18.31    17.07
MB/m     0.36     1.07     1.00
MB/h    21.46    64.37    60.00
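The removed formulas can be expressed as a short Python sketch, which reproduces the table values above. The function names are mine; note that, as discussed elsewhere on this page, the "MB" outputs actually divide by 1024 twice and so are really mebibyte quantities despite the label:

```python
def kbit_to_kib_per_s(n):
    """kb/s -> KiB/s: SI kilobits in, binary kibibytes out."""
    return (n * 1000) / 8 / 1024

def kbit_to_mb_per_min(n):
    """kb/s -> 'MB'/min (binary megabytes, per the original formula)."""
    return (n * 1000) / 8 / 1024 / 1024 * 60

def kbit_to_mb_per_h(n):
    """kb/s -> 'MB'/hour."""
    return kbit_to_mb_per_min(n) * 60

# reproduce the three example columns
for rate in (50, 150, 139.81):
    print(f"{rate} kb/s -> "
          f"{kbit_to_kib_per_s(rate):.2f} KiB/s, "
          f"{kbit_to_mb_per_min(rate):.2f} MB/m, "
          f"{kbit_to_mb_per_h(rate):.2f} MB/h")
```

Running this gives the 6.10/0.36/21.46, 18.31/1.07/64.37 and 17.07/1.00/60.00 columns shown in the table.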

Factual accuracy

This page was tagged with:

But there's no explanation as to what is wrong and it was added by a user who only made this one edit so I've removed it to here. Feel free to put it back and explain here why if there is something to be disputed. Smartse (talk) 13:49, 17 May 2009 (UTC)

Well, as said at the top of this page, when has a kilobyte been 1,000 and not 1,024 bytes? And where did the kibi-, mebi-, etc. prefixes come from? I believe this article is incorrect, and if you were to search 'how many bytes to a kilobyte' in Google, all the results on the first page would agree with me. (talk) 10:53, 7 November 2009 (UTC)

None of this makes any sense to me. Wiki contributors, you have failed. —Preceding unsigned comment added by (talk) 04:27, 7 December 2009 (UTC)

This page is listed as a page related to data flow rates. Since the technicality of the article is beyond a general discussion of data flow rates, it probably should be redefined. I think that most readers/users do not need/use these data flow rates, and we should try to discuss the major flow rates that most users want to understand before we go into greater detail. (talk) 08:00, 9 January 2010 (UTC)

Hi all

Fixed factual accuracy (see "Problems"); are all other parts OK?

Kory Burton PhD —Preceding unsigned comment added by (talk) 04:20, 22 November 2010 (UTC)

Removed tag from article (again). --Kvng (talk) 18:24, 7 December 2010 (UTC)

Unintelligible sentence

This sentence does not make sense:

So "256 b/s" will be less than "256 B/s", not the opposite from what we might expect.

Bits per byte

Is it worth noting that the conversion between bits and bytes isn't always 8 to 1? For example a serial link using one start bit and one stop bit needs 10 bits to transmit each 8-bit byte of data. —Preceding unsigned comment added by Jkball (talkcontribs) 08:51, 16 September 2010 (UTC)
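The point about framing overhead can be made concrete with a small Python sketch. This assumes the common 8N1 asynchronous framing the comment describes (1 start bit + 8 data bits + 1 stop bit); the function name and the 115200 bit/s example rate are illustrative, not from the article:

```python
# 8N1 framing: 1 start bit + 8 data bits + 1 stop bit = 10 bits on the wire per byte
FRAME_BITS_8N1 = 10

def effective_byte_rate(line_bits_per_s, frame_bits=FRAME_BITS_8N1):
    """Payload bytes per second actually delivered over a framed serial link."""
    return line_bits_per_s / frame_bits

# A 115200 bit/s serial link delivers 11520 bytes/s,
# not the 14400 bytes/s a naive divide-by-8 would suggest.
print(effective_byte_rate(115200))  # 11520.0
```

So on such a link the bit-to-byte conversion factor is 10:1 rather than 8:1, which is exactly the caveat the comment raises.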

Corrected some grammatical errors + last paragraph of section...

In the PROBLEMS section, I fixed a couple of grammatical errors, tried to make the paragraphs flow better and sound more up to date (although the outcome of this lawsuit probably needs to be investigated and included), and I struck the entire last paragraph because it seemed to be completely opinion, and not well written either.

- Screaming Monkey - (I really should sign-up) —Preceding unsigned comment added by (talk) 06:50, 25 December 2010 (UTC)

I fixed a few grammatical errors in one paragraph Elcidia (talk) 16:05, 9 February 2011 (UTC)


What's going on here? It's like the article is confused about the confusion. Also, there are no inline citations, and the refs are broken. Binary prefix covers it in depth. Instead of all the hullabaloo, couldn't there just be a main-article link to Binary prefix under what is now "Avoiding confusion", or a see-also link under "Problems"? The SI prefixes are vaguely contextual, but does the (now) nonstandard usage really apply so explicitly to data rates? There's also an elaborate conversion formula table, but all the examples below it use the standard bit/s convention. A hat-tip is warranted, but the article jumps through a lot of hoops to make room for exceptions that are actually pretty irrelevant to the actual topic. Radiodef (talk) 00:11, 9 November 2012 (UTC)

I don't think anyone is confused about what the units mean except hard drive buyers and file sharers. Radiodef (talk) 00:48, 9 November 2012 (UTC)

Human Retina Transmission Rate for Comparison

Hi Guys!

Can we put a row in the "Examples of bit rates" table with human eye stats? Research suggests that the human retina transmits data to the brain at the rate of 1E+7 bits/sec, which is close to the [Gigabit Ethernet] speed. This is an interesting comparison to make, and if the article was less confusing, it would add value. But if not here, then where?

dbabbitt (talk) 15:29, 22 February 2013 (UTC)

10^7 bit/s is the speed of classic Ethernet (10BASE2, 10BASE5, 10BASE-T), two orders of magnitude less than Gigabit Ethernet. Zac67 (talk) 18:21, 22 February 2013 (UTC)
That's not really correct, in my opinion, and a wrong comparison. A network cable has ONE core for data (maybe two), but you have between 770,000 and 1.7 million nerve fibers. Optic nerve --Lastwebpage (talk) 12:17, 21 July 2013 (UTC)
I've added a dubious tag to that particular claim. First of all, the linked source is a writer's interpretation of the study, not the study itself (even the key point, "We quantify the patterns and work out how much information they convey, measured in bits per second" is not explained in a credible / verifiable fashion in the linked source). Second, even at a cursory glance, it's apparent that the analogy is dubious at best (neurons are massively parallel, essentially analog as timings between firings are what counts, etc.). Third, the credibility of said "researchers" is not addressed, for all we know it's just some poorly conducted / misguided PhD dissertation by somebody strapped for ideas and in a rush to graduate. I'm going to remove that line at some point in the future if these issues aren't addressed and nobody objects; it's not particularly useful anyways. -- (talk) 13:55, 13 June 2015 (UTC)

what does "MBps" mean

If used in this article, the abbreviation "MBps" needs to be explained. Does it mean 1000^2 bytes per second or 1024^2 bytes per second? Dondervogel 2 (talk) 00:47, 24 June 2013 (UTC)

Prior to the capitalization of the B in the "mega" section, this question was not raised here, nor in the "megabit" section. Why do you feel it becomes important when it is capitalized, but not lower-cased? (talk) 00:56, 24 June 2013 (UTC)
I see MBps used in the lede but nowhere else. I do not see Mbps anywhere. What do they mean? Dondervogel 2 (talk) 01:08, 24 June 2013 (UTC)
Please check out What Is the Difference Between Mbps and MBps?. (talk) 01:37, 24 June 2013 (UTC)
My interpretation of the link you provide is 1 MBps = 1 MB/s = 1 million B/s and 1 Mbps = 1 Mbit/s = 1 million bit/s. Is this correct? Dondervogel 2 (talk) 19:24, 24 June 2013 (UTC)
Exactly. Thanks, (talk) 19:52, 24 June 2013 (UTC)

One of the principal purposes of Wikipedia is to orient the reader to the subject matter of the article

On that account, this article is one of the poorest I've ever had the misfortune of reading. QuintBy (talk) 10:15, 5 August 2013 (UTC)

PCI 133?

PCI defines 33 and 66 MHz operation. PCI-X raised the bar to 133 MHz (and subsequently 266 & 533 MHz) while at the same time doubling the bus width to 64 bit. So what's PCI 133? It's got to be PCI-X 133 MHz which transports up to 1067 MB/s (64 bit). Zac67 (talk) 19:27, 7 August 2013 (UTC)

Well, of course you can feel free to WP:BEBOLD and change it yourself if you find it incorrect. The table does already have PCI-X examples, so if original PCI was 33/66 MHz then feel free to change it to that. PCI isn't something I personally know a lot about, but I do know that errors like this can surface just as an editing artifact, as small changes are constantly made all over. I would suggest taking a look at List of device bit rates#Computer buses ("main article") and seeing if that confirms your suspicion. Radiodef (talk) 21:23, 9 August 2013 (UTC)


This article seems to be about 50% bigger than it needs to be. Is there a reason for that? I doubt that even those not "technically knowledgeable" need the examples to be expanded for each possible combination. When I get bored later I'll write up a draft for a condensed version.

BTW, my input on the decimal and binary powers debacle... Not that I or the conventions really care for the other's authority, but if one thinks of the prefixes as powers of the base, with kilo- being ×10^3, a binary-based system really should reflect the change of base when considering names for powers/orders of magnitude: 1000 is not a power of 2. What I'm saying, IMHO, is that the whole argument is meaningless and counter-productive, so to speak.

ahem. Sorry if I sound elitist. JamesEG (talk) 19:55, 14 November 2013 (UTC)

The issue of Practice vs theory

This article should not be this complicated. It goes too much into theory (which can be an unlimited area) when it should concentrate more on practice. The list of examples toward the end of the article (provided that it is accurate) shows this more clearly than the whole rest of the article: in the all-important practice, when there is talk of data transfer rates (e.g. kilobits per second (kbit/s) or megabits per second (Mbit/s)), units are almost always in the decimal (the standard) system of measurement (e.g. 1 kbit/s = 1000 bit/s exactly, not 1024 bit/s; 1 Mbit/s = 1,000,000 bit/s exactly, not 1,048,576 bit/s). That information is what most people come here for, so it would be useful to put it at the very beginning of the article: data transfer rates are almost always stated in the decimal system of measurement (units are multiplied by 1000, not 1024). Actually: can a byte of transferred data be fragmented or not? If not, then all the stated data transfer values should be divisible by 8, because there are eight bits in a byte.

Additionally, for clarification, it may also state that, on the other hand, in that same all-important practice, the data capacity of memory media is generally, for practical reasons, reported in the binary system of measurement (e.g. 1 kB = 1024 bytes (2^10); 1 MB = 1,048,576 bytes (that's 1024×1024; 2^20); etc.), but there apparently exists a strong enough movement insisting that where the binary system is used, the old MB should from now on be written as MiB, kB as KiB, etc. Also, the introduction should hint at what this boils down to: forcing such a change actually implies that the data capacity of media should (just like data rates) be interpreted in the decimal system of measurement, which would give false specifications for past devices (at the least), which is problematic and riddled with issues.

(There is an inherent practical problem with the decimal system in (binary) computer science: memory chips and memory media are binary matrices by their nature, and even by physical structure and construction, which complicates matters and makes it impractical (and even comedic). For example: physically there are exactly 67,108,864 bytes in 64 binary megabytes (64 MiB), and if a manufacturer were to decimally state that his device has 64 MB, that would mean there are only 64,000,000 bytes on the memory media, which is false. On the other hand, they could say that their device has 67.108864 MB of memory, down to the last decimal, but that would sound comedic in a commercial and be incorrect if truncated to 67.1 MB; if someone were to interpret 67.108864 MB as 70,368,744.177664 bytes, that is both comedic and incorrect; and if they say, e.g., a 500 MB HDD and there are somehow actually only 476.837158203125 MiB, then it's fraudulent, comedic and incorrect. Confusing in any case.)
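The specific figures in the parenthetical above can be verified with a short Python sketch (purely illustrative; the variable names are mine):

```python
MB = 10**6   # decimal megabyte
MiB = 2**20  # binary megabyte (mebibyte)

# 64 MiB really is 67,108,864 bytes...
print(64 * MiB)            # 67108864
# ...while a decimal "64 MB" would be only
print(64 * MB)             # 64000000

# a 500 (decimal) MB drive expressed in MiB:
print(500 * MB / MiB)      # 476.837158203125

# 67.108864 decimal MB misread as binary megabytes
# gives the fractional byte count quoted above:
print(67.108864 * MiB)
```

Every number in the comment checks out; the comedy is entirely in the units, not the arithmetic.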

— Preceding unsigned comment added by (talk) 23:17, 23 September 2014 (UTC)

Data Rate Conversion Table

The table itself seems dubiously useful, but I could see it being useful to some. However, the "formula" columns seem entirely redundant, unhelpful, and unnecessary. Can we remove them? (And, really, can we remove the whole section? There are a lot of overly detailed tables and examples here.) -- (talk) 14:02, 13 June 2015 (UTC)