Round-trip delay time
In telecommunications, the round-trip delay time (RTD) or round-trip time (RTT) is the length of time it takes for a signal to be sent plus the length of time it takes for an acknowledgment of that signal to be received. This time delay therefore includes the propagation times for the paths between the two communication endpoints.
Network links with both a high bandwidth and a high RTT can have a very large amount of data (the bandwidth-delay product) "in flight" at any given time. Such "long fat pipes" require a special protocol design. One example is the TCP window scale option.
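The bandwidth-delay product mentioned above is simply the link bandwidth multiplied by the RTT. A minimal sketch of the arithmetic (the example figures of 1 Gbit/s and 100 ms are illustrative, not from the text):

```python
def bandwidth_delay_product(bandwidth_bps: float, rtt_s: float) -> float:
    """Return the bandwidth-delay product in bytes: the amount of data
    that can be 'in flight' on the link at any given time."""
    return bandwidth_bps * rtt_s / 8  # divide by 8 to convert bits to bytes

# Example: a 1 Gbit/s link with a 100 ms RTT can hold 12.5 MB in flight,
# far more than the 64 KB addressable by TCP's unscaled 16-bit window,
# which is why such "long fat pipes" need the window scale option.
print(bandwidth_delay_product(1e9, 0.100))  # → 12500000.0
```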
The RTT was originally estimated in TCP by:
- RTT = (α · Old_RTT) + ((1 − α) · New_Round_Trip_Sample)
where α is a constant weighting factor (0 ≤ α < 1). Choosing a value of α close to 1 makes the weighted average immune to short-lived changes (e.g., a single segment that encounters a long delay). Choosing a value of α close to 0 makes the weighted average respond to changes in delay very quickly.
This was improved by the Jacobson/Karels algorithm, which takes standard deviation into account as well.
Once a new RTT is calculated, it is entered into the equation above to obtain an average RTT for that connection, and the procedure continues for every new calculation.
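The two estimators above can be sketched as follows. The first function is the classic exponentially weighted moving average from the formula above; the second follows the Jacobson/Karels scheme as standardized in RFC 6298, which also tracks the deviation of the samples to derive a retransmission timeout (RTO). Note that in RFC 6298 the weight α is applied to the new sample rather than the old estimate, so its conventional default (1/8) corresponds to α = 7/8 in the formula above; the parameter defaults here are the RFC's, not taken from this article.

```python
def ewma_rtt(old_rtt: float, sample: float, alpha: float = 0.875) -> float:
    """Original TCP estimator: RTT = alpha * Old_RTT + (1 - alpha) * sample."""
    return alpha * old_rtt + (1 - alpha) * sample

def jacobson_karels(srtt: float, rttvar: float, sample: float,
                    alpha: float = 0.125, beta: float = 0.25):
    """One update step of the Jacobson/Karels estimator (RFC 6298 form).

    Tracks the smoothed RTT and the mean deviation of the samples,
    and returns the retransmission timeout derived from both.
    """
    rttvar = (1 - beta) * rttvar + beta * abs(srtt - sample)  # deviation estimate
    srtt = (1 - alpha) * srtt + alpha * sample                # smoothed RTT
    rto = srtt + 4 * rttvar                                   # timeout with safety margin
    return srtt, rttvar, rto

# Each new sample is fed back into the estimator, as described above:
print(ewma_rtt(100.0, 200.0))            # → 112.5 (a delay spike moves the average only slightly)
print(jacobson_karels(100.0, 10.0, 100.0))
```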
- 200 ms RTT for a connection using UDT (UDP-based Data Transfer Protocol) equates to a 12,000 mile round trip path length.
- Comer, Douglas (2000). Internetworking with TCP/IP. Upper Saddle River, N.J.: Prentice Hall. p. 226. Print.
Definition by the National Telecommunications and Information Administration's Institute for Telecommunication Sciences in Boulder, Colorado