Talk:Relationship between latency and throughput


Opening comments[edit]

Some related concepts should be included in this same discussion, but I'm still not sure how to write about them, or how to translate them into English. The first is throughput, the sustained speed of the communications channel; another (still unnamed) is the ability to sustain a continuous stream of communications over a long period of time. An example will help make my point clear.

A truck filled with magnetic tapes has a bigger throughput than a 2.4 Gbps optical link; it is able to transfer more information in a given amount of time. It does not make much sense to talk about bandwidth in this case, and throughput seems to describe its transfer capacity better. It also has a much bigger latency. However, if you have only one truck, you have both high latency and low availability; if you have an entire fleet of trucks, you still have high latency, but the communications channel is available much more of the time, which means that you will be able to achieve a much higher throughput than with one truck alone.
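To make the truck comparison concrete, here is a rough back-of-the-envelope sketch in Python; the tape capacity, truck load, and trip time are illustrative assumptions, not figures anyone has cited:

 # All figures below are assumptions chosen only to illustrate the point.
 TAPE_CAPACITY_BYTES = 200e9   # assume 200 GB per tape
 TAPES_PER_TRUCK = 10_000      # assume one truck carries 10,000 tapes
 TRIP_SECONDS = 24 * 3600      # assume a one-day drive
 truck_bps = TAPES_PER_TRUCK * TAPE_CAPACITY_BYTES * 8 / TRIP_SECONDS
 link_bps = 2.4e9              # the 2.4 Gbps optical link
 print(f"truck: {truck_bps / 1e9:.0f} Gbps sustained, ~1 day latency")
 print(f"link:  {link_bps / 1e9:.1f} Gbps sustained, ~ms latency")
 # The truck sustains ~185 Gbps, far above the link, yet every bit
 # arrives a day late: huge throughput, terrible latency.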

That's what I mean to say that something is missing. I'll let it rest a little, think about it, and if I come up with a good writeup, I'll post it here.

CarlosRibeiro 12:22, 2 Aug 2004 (UTC)



Some people include packet-transmission time in the latency, while others do not. A few try to do it both ways: Sean Breheny: "for protocols which need to receive the whole packet before making any of it available to the destination (i.e., to check the FCS to see if it was received correctly) it [latency] includes the time it takes to transfer the data over the RF link."

I thought that packet protocols *always* receive the whole packet before treating any of it as valid. Does there really exist any kind of protocol that immediately uses some of the data in the packet (to reduce latency) before the packet finishes transmission? (ZMODEM doesn't count -- the receiver still receives a whole packet (some people call it a "sub-packet" or "block") before treating any of it as valid, right?) -- DavidCary

I don't know of any protocol that allows a higher layer to use packet data before that data has been validated (via the checksum) - you may be thinking of packet ordering, as in not worrying whether all packets are received. I think some applications using UDP ignore packet order, where it's more important to get some of the data in a timely fashion than it is to get all of the data eventually (e.g. voice communication; Skype is an example) - but don't take my word for it, I'm not a protocol expert; I only have a limited knowledge of the intricacies of various protocols.
Besides, latency should include actual transmission time, because even if a system were able to use packet data before it had been validated (or the entire packet received), the data still needs to be transmitted before the destination can use it at all - Lee Carré 17:42, 3 October 2006 (UTC)[reply]
No protocol that I know of hands data off to a higher layer before it has received the entire packet. However, devices such as Ethernet switches can start to forward a packet after receiving just the header. So sometimes work is done with a packet before the entire thing is received. Danoelke (talk) 14:49, 27 February 2008 (UTC)[reply]
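For what it's worth, the two conventions debated above differ only in whether the packet's serialization (transmission) time is counted. A minimal sketch; the distance, packet size, and link speed are illustrative numbers of my own choosing:

 # One-way latency under the two conventions discussed above.
 def one_way_latency(distance_m, packet_bits, link_bps,
                     include_transmission=True,
                     propagation_mps=2e8):  # ~2/3 of c in fiber/copper
     propagation = distance_m / propagation_mps
     transmission = packet_bits / link_bps if include_transmission else 0.0
     return propagation + transmission
 pkt = 1500 * 8  # one 1500-byte Ethernet frame
 print(one_way_latency(100e3, pkt, 10e6))        # ~1.7 ms, both terms
 print(one_way_latency(100e3, pkt, 10e6, False)) # ~0.5 ms, propagation only

On a slow link the 1.2 ms transmission time dominates the 0.5 ms propagation delay, which is why the two conventions give noticeably different numbers in practice.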

I propose that this article be moved to "relationship between latency and throughput" to better describe the content - because this article doesn't really compare the two factors - Lee Carré 17:42, 3 October 2006 (UTC)[reply]

I concur. I will reorganize this article with that in mind and see if consensus likes it before I make the move official. HatlessAtless (talk) 16:31, 29 April 2008 (UTC)[reply]

Maybe we should explain how each property can be measured: latency is measured using *ping*, and throughput is basically what your download manager reports as the speed of the connection.

'ping' does not measure latency. It measures round trip delay, which is a different thing altogether. Latency is altogether more difficult to measure as it generally requires well synchronised, accurate clocks in two places. Latency is one-way, round-trip delay is two-way, as measured by 'ping'. WLD 15:50, 25 October 2006 (UTC)[reply]
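To illustrate WLD's point, one can observe round-trip delay (not one-way latency) without raw ICMP sockets by timing a TCP handshake; the host and port below are placeholders, and the whole thing is only a rough sketch:

 # Times a TCP three-way handshake as a crude round-trip-delay probe.
 # connect() returns once the SYN-ACK arrives, i.e. after ~one RTT.
 # Halving this gives one-way latency only if the path is symmetric,
 # which is exactly the assumption 'ping' cannot verify.
 import socket, time
 def tcp_rtt(host="example.com", port=80):  # placeholder host/port
     start = time.perf_counter()
     with socket.create_connection((host, port), timeout=5):
         pass
     return time.perf_counter() - start
 print(f"round-trip delay ~ {tcp_rtt() * 1000:.1f} ms")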

It would also be practical to explain that the ADSL protocol provides two transmission modes. One uses interleaving to spread out packets; this protects them better against burst errors, because an error hits many packets a little, and these small errors can be corrected by the error-correcting codes. The other mode is called fastpath; it simply does not interleave. When an error occurs, the damaged packets usually have to be resent. Because interleaving spreads each packet over a longer period of time, the packet takes longer to arrive completely: the latency is high. Fastpath has lower latency, but on a marginal connection where many errors occur, the resending of packets takes up a large share of the available bandwidth, which reduces throughput. Typical latencies would be 20 ms for fastpath and 80 ms for interleaved connections.

I am not sure how to source this; googling for *adsl fastpath interleaving* turns up some sources. (mendel)
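I can't source it either, but the trade-off in the paragraph above can at least be sketched numerically. The loss rates and line speed here are assumptions; only the 20 ms / 80 ms figures come from the paragraph:

 # Toy model of the fastpath vs. interleaving trade-off.
 def goodput(raw_bps, retransmit_fraction):
     # Retransmitted packets consume bandwidth without delivering new data.
     return raw_bps * (1.0 - retransmit_fraction)
 raw = 8e6  # assume an 8 Mbps ADSL line
 print(goodput(raw, 0.01) / 1e6, "Mbps  fastpath, clean line,    ~20 ms")
 print(goodput(raw, 0.30) / 1e6, "Mbps  fastpath, marginal line, ~20 ms")
 print(goodput(raw, 0.02) / 1e6, "Mbps  interleaved, marginal,   ~80 ms")
 # Interleaving trades ~60 ms of extra latency for far fewer resends.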

Bandwidth-delay product[edit]

I added some info on the bandwidth-delay product. This entire article could possibly be merged into the bandwidth-delay product article, since there's really very little useful information here. Gigs 10:13, 6 July 2007 (UTC)
Actually, let's merge with just "latency". Gigs[reply]
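For readers landing here, the product itself is a one-line calculation; the link speed and RTT below are illustrative assumptions:

 # Bandwidth-delay product: the data "in flight" on a path, and the
 # minimum window a sender needs to keep the pipe full.
 bandwidth_bps = 100e6  # assume a 100 Mbps path
 rtt_s = 0.080          # assume an 80 ms round-trip time
 bdp_bytes = bandwidth_bps * rtt_s / 8
 print(f"BDP = {bdp_bytes / 1024:.0f} KiB")  # ~977 KiB
 # A send window smaller than the BDP caps throughput at window / RTT,
 # no matter how fast the link itself is.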

Complete article reconstruct, with additional citations.[edit]

I just rebuilt the article, and I believe that it should be moved to "Relationship between Latency and Throughput". See what you think. Since this rewrite was pretty much a brain dump, it will probably require cleanup. —Preceding unsigned comment added by HatlessAtlas (talkcontribs) 19:02, 29 April 2008 (UTC)[reply]

Limited bandwidth causing latency[edit]

In my work I've tried improving the throughput of servers sending data to a tape drive. A drive will have a rated top speed of X MB/s, but it needs a steady stream of data. (Drive-based data compression complicates this further, so I will disregard it here.) Oversimplifying a little, tape drives have two speeds - "fast" and "stop". If the data provided is not sufficient, the drive will stop, wait for a buffer to fill, then resume. Each time this happens it imposes a time penalty. If the server and its related communication elements (bus speed, network speed, source data rate, etc.) are just fast enough to provide sufficient data for this tape drive, then upgrading the drive can slow down overall throughput, because the new drive may have a higher minimum speed that exceeds the server's capability. It was difficult in our scenario to prove that the server (network card) was the bottleneck, because the server/card does not use 100% of its capacity when the tape drive spends so much time stopping and starting.
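Here is a toy model of the stop/start behaviour I described (sometimes called "shoe-shining"); the buffer size, reposition penalty, and rates are assumptions, not measurements from our setup:

 # Effective tape-drive rate when the feed cannot keep the drive streaming.
 # All values are illustrative assumptions.
 def effective_rate(feed_mb_s, drive_mb_s, buffer_mb=256, reposition_s=3.0):
     if feed_mb_s >= drive_mb_s:
         return drive_mb_s  # drive streams continuously
     # Buffer drains, drive stops and repositions, buffer refills, repeat.
     drain_s = buffer_mb / (drive_mb_s - feed_mb_s)
     refill_s = buffer_mb / feed_mb_s
     written_mb = drive_mb_s * drain_s
     return written_mb / (drain_s + reposition_s + refill_s)
 print(effective_rate(40, 60))  # "faster" drive: ~35 MB/s effective
 print(effective_rate(40, 45))  # slower drive, better matched: ~38 MB/s

The faster drive does worse because it drains its buffer sooner and pays the reposition penalty more often - the "upgrade slows you down" effect described above.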

A potential reference: http://storagemagazine.techtarget.com/ and search for "How faster tape drives can slow down your backups"

I'd love to see an animation that might illustrate this or similar phenomena. Can someone think of other examples of this relationship between throughput and latency?

The idea could use some more work. Feel free to point me somewhere more appropriate if there is another label for this concept.

Walkingstick3 (talk) 02:09, 26 July 2008 (UTC)[reply]

Essay-like and hardly relevant article. Rename or delete?[edit]

What makes this article relevant to Wikipedia? Only one other wp article links to it. I believe it originally aimed at promoting two white papers by Stuart Cheshire: It's the Latency, Stupid and Latency and the Quest for Interactivity. There are a few other Wikipedia articles of the character "relationship between a and b" or "a vs b", but those are typically well-known debates and have numerous references. Parts of this article, for example the lead, suffer from an essay style rather than an encyclopedic one.

Would it be possible to make the article more encyclopedic if it were renamed, e.g. to perceived network speed (see related academic papers and books), or focused upon web response time (see academic papers and books)? Would it help if it relied upon more academic papers related to latency and network throughput? Or is it better to simply delete it? Mange01 (talk) 22:06, 9 February 2009 (UTC)[reply]

Requested move[edit]

The following discussion is an archived discussion of a requested move. Please do not modify it. Subsequent comments should be made in a new section on the talk page. No further edits should be made to this section.

The result of the move request was merge to Network performance. This isn't conclusive, though, since merge discussions are traditionally carried out on the talk page of the target. I'll start the discussion at Talk:Network performance and add the merge tags to the articles, so knowledgeable authors there can perform the merge. Aervanath (talk) 07:19, 9 June 2009 (UTC)[reply]


Relationship between latency and throughput → perceived network speed — Maybe the above discussion has something to do with this? —harej (talk) 03:33, 30 May 2009 (UTC)[reply]

Really? The proposed title sounds all right to me. Dekimasuよ! 11:04, 31 May 2009 (UTC)[reply]
"Network speed perceptions" gives 0 hits in scholar.google.com and books.google.com. Perceived network speed gives a handful hits. "Web response time" gives a couple of hundreds of hits, but is only one aspect of the article topic. "Perceived speed"+network gives several hundreds of hits. Mange01 (talk) 20:27, 31 May 2009 (UTC)[reply]
The above discussion is preserved as an archive of a requested move. Please do not modify it. Subsequent comments should be made in a new section on this talk page. No further edits should be made to this section.