This is the talk page for discussing improvements to the ALOHAnet article.
- 1 Major errors and a factual basis - August 2010
- 2 In the technical context of radio communications
- 3 Not the original network?
- 4 Two reasons?
- 5 Menehune
- 6 The ALOHAnet did not have CS!
- 7 The ALOHA protocol
- 8 Ouch
- 9 Updated
- 10 possible mistake
- 11 Miscellaneous Mistakes
- 12 Wireless
- 13 I was there
Major errors and a factual basis - August 2010
The following is offered to this article's writers/editors by Norman Abramson and Richard Binder, two of the ALOHAnet developers, to provide a factual basis for the article and also to point out some of its major errors. This and more information is available online in the following two references:
- a detailed description of the implemented system is given in "ALOHA packet broadcasting - A retrospect" by Binder, Abramson, Kuo, Okinaka, and Wax, 1975 Proceedings of the National Computer Conference.
- a less technical article which discusses ALOHAnet evolution and later commercial ALOHA channel applications may be found in "The ALOHAnet - Surfing for Wireless Data" by N. Abramson, IEEE Communications Magazine, December 2009.
Some major errors
- in the article opening, "... It was first deployed in 1970 by Bruce Rights ..." [fixed]
- this person had nothing to do with ALOHAnet, also the year is wrong - this line should be removed
- in the overview and other places, "...Data received was immediately re-sent, allowing clients to determine whether or not their data had been received properly. Any machine noticing corrupted data would wait a short time and then re-send the packet. This mechanism was also used to detect and correct for "collisions" created when two client machines both attempted to send a packet at the same time...." [fixed]
- this is incorrect - ALOHAnet used positive acknowledgment packets and automatic retransmissions as described in the "Operation" section below
- also in the overview: "...The idea was to use low-cost amateur radio-like systems..." [fixed]
- this is misleading and should be deleted; low-cost commercial radios were used
- 3rd para of the overview states "...Because data was sent via a teletype the data rate usually did not go beyond 80 characters per second" [fixed]
- also incorrect: characters typed by a user were buffered by the TCU or PCU (see below) and sent in a single packet at 9600 bps (although an early ALOHA paper stated that 24,000 bps was planned, to simplify the implementation a data rate of 9600 bps was actually used, which proved more than sufficient for the user loading experienced throughout the project's lifetime)
- the discussion at times implies CSMA could have been used in the two-channel ALOHAnet system [fixed]
- this was not possible because of the star configuration (again, see below). Also, while slotted ALOHA *could* have been introduced later, it wasn't - ALOHAnet used 'pure' (unslotted) ALOHA random accessing. Also, note that Ethernet was able to introduce CSMA/CD because it used both a single common channel and wired connections
- in the last para of the 'description' section, e.g. "Historical details of the original wireless network are now rather difficult to come by ... "
- historical details are available - see the references above
- ALOHAnet designed its own packet formats in 1969 - Internet packets didn't appear until later
- also best not to confuse the radio relays introduced late in the project for communication between Oahu and other islands with the basic network configuration described below for operation within the Oahu campus, which was the main focus of the project [all fixed]
More generally, here are the basic facts.
ALOHAnet Rationale and Operation
The ALOHAnet project began in 1968, and the experimental UHF network was operational from June, 1971 to late 1974.
Two fundamental choices which dictated much of the ALOHAnet design were the two-channel star configuration of the original network and the use of random accessing for user transmissions. The random accessing principle using radio channels turned out to be the primary technical innovation of ALOHAnet, while the two-channel configuration was primarily chosen to allow the efficient transmission of the relatively dense total traffic stream being returned to users by the central time-sharing computer. An additional reason for the star configuration was the desire to centralize as many communication functions as possible at the central network node (the MENEHUNE), minimizing the cost of the original all-hardware terminal control unit (TCU) at each user node.
The random access channel for communication between users and the MENEHUNE was designed specifically for the traffic characteristics of interactive computing. In a conventional communication system a user might be assigned a portion of the channel on either an FDMA or TDMA basis. Since it is well known that in time-sharing systems [circa 1970], computer and user data systems are bursty, such fixed assignments are generally wasteful of bandwidth because of the high peak-to-average data rates that characterize the traffic. The multiplexing technique that was utilized by the ALOHAnet is a purely random access packet switching method that has come to be known as an ALOHA channel.
Two 100 kHz channels in the UHF band were used: a random access channel for user-to-computer communication and a broadcast channel for computer-to-user packets. The system was configured as a star network, allowing only a central node [the MENEHUNE] to receive transmissions in the random access channel; all users received each transmission made by the central node in the broadcast channel. All transmissions were made in bursts at 9600 bps, with data and control information encapsulated in packets.
Each packet consisted of a header (32 bits) and a header parity check word (16 bits), followed by up to 80 bytes of data and a 16-bit parity check word. The header contained information identifying the particular user, so that when the MENEHUNE broadcast a packet only the intended user's node would accept it. Under a pure ALOHA mode of operation, packets were sent by the user nodes to the MENEHUNE in a completely unsynchronized manner; when a node was idle it used none of the channel resources.
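As a rough sketch of the sizes involved, the field widths above can be turned into a small calculation. The field widths (32-bit header, 16-bit header check, up to 80 data bytes, 16-bit data check) come from the paragraph above; the function names and the airtime arithmetic are illustrative, not part of the documented design:

```python
# Sketch of the ALOHAnet packet sizes described above.
# Field widths come from the text; everything else is an assumption.

HEADER_BITS = 32
HEADER_CHECK_BITS = 16
DATA_CHECK_BITS = 16
MAX_DATA_BYTES = 80

def packet_bits(data_bytes: int) -> int:
    """Total size in bits of a packet carrying `data_bytes` bytes."""
    if not 0 <= data_bytes <= MAX_DATA_BYTES:
        raise ValueError("ALOHAnet packets carried at most 80 data bytes")
    return HEADER_BITS + HEADER_CHECK_BITS + data_bytes * 8 + DATA_CHECK_BITS

def airtime_ms(data_bytes: int, rate_bps: int = 9600) -> float:
    """Transmission time at the 9600 bps burst rate used in the system."""
    return packet_bits(data_bytes) * 1000 / rate_bps

# A full 80-byte packet is 704 bits, about 73 ms on the air at 9600 bps.
```

So a maximum-length packet occupied the random-access channel for well under a tenth of a second, which is what made unsynchronized bursts from idle interactive users practical.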
Each active user node was in contention with all other active users for use of the MENEHUNE's receiver. If two nodes transmitted packets at the same time, a collision occurred and neither packet was acknowledged. In the ALOHAnet, a positive acknowledgment protocol was used for packets sent on the random-access channel. Whenever a node sent a packet it had to receive an acknowledgment packet from the MENEHUNE within a certain time-out period. If the ACK was not received within this interval the node automatically retransmitted the packet after a randomized delay to avoid further collisions. These collisions limit the number of users and the amount of data which can be transmitted over the channel as loading is increased, up to about 18% of the maximum channel data rate under the simplest channel scheme. (More complicated schemes can increase the throughput by a variety of techniques such as slotting, carrier sense, and by employing a traffic model with a variety of different user packet transmission rates.)
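The ~18% figure for the simplest scheme follows from the classical pure-ALOHA throughput formula S = G·e^(-2G) (offered load G in packets per packet time, under the standard Poisson-traffic model), which peaks at 1/(2e) ≈ 18.4%. A minimal check:

```python
import math

def pure_aloha_throughput(G: float) -> float:
    """Expected successful traffic S for offered load G (pure ALOHA).
    A packet succeeds only if no other packet starts within one packet
    time before or after it, hence the 2G in the exponent."""
    return G * math.exp(-2 * G)

# The peak occurs at G = 0.5, giving S = 1/(2e), roughly 0.184,
# i.e. the "about 18% of the maximum channel data rate" above.
peak = pure_aloha_throughput(0.5)
```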
ALOHAnet Remote Units
The original user interface developed for the system was an all-hardware unit called an ALOHAnet Terminal Control Unit (TCU), and was the sole piece of equipment necessary to connect any terminal or minicomputer into the ALOHA channel. The TCU was composed of a UHF antenna, transceiver, modem, buffer and control unit. The buffer was designed for a full line length of 80 characters, which allowed handling of both the 40 and 80 character fixed-length packets defined for the system. The typical user terminal in the original system consisted of a Model 33 Teletype or a dumb CRT user terminal connected to the TCU using a standard RS-232C interface. Shortly after the original ALOHA network went into operation in June, 1971, the TCU was redesigned with one of the first Intel microprocessors, and the resulting upgrade was called the PCU (Programmable Control Unit).
Additional basic functions performed by the TCUs and PCUs were generation of a cyclic-parity-check code vector and decoding of received packets for packet error-detection purposes, and generation of packet retransmissions using a simple random interval generator. If an acknowledgment was not received from the MENEHUNE after the prescribed number of automatic retransmissions, a flashing light was used as an indicator to the human user. Also, since the TCUs and PCUs did not send acknowledgments to the MENEHUNE, a steady warning light was displayed to the human user when an error was detected in a received packet. Thus considerable simplification was incorporated into the initial design of both the TCU and the PCU, making use of the fact that each was interfacing a human user into the network.
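The "cyclic-parity-check code vector" mentioned above is what would today be called a CRC. The actual ALOHAnet polynomial is not given in this discussion, so the following is an illustration only: a generic bitwise 16-bit CRC of the kind a hardware shift register computes, with an arbitrarily chosen polynomial (0x8005):

```python
def crc16(data: bytes, poly: int = 0x8005, init: int = 0x0000) -> int:
    """Bitwise CRC-16, mimicking a hardware shift-register
    implementation. The polynomial and initial value here are
    illustrative choices, not the ALOHAnet code vector."""
    crc = init
    for byte in data:
        crc ^= byte << 8              # shift the next byte into the register
        for _ in range(8):
            if crc & 0x8000:          # top bit set: shift and apply polynomial
                crc = ((crc << 1) ^ poly) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

# The receiver recomputes the check over the received bits and, per the
# text above, lights a steady warning lamp when it does not match.
```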
The central node communications processor was an HP 2100 minicomputer called the MENEHUNE. MENEHUNE is the Hawaiian word for “imp”, or dwarf people, and was named for its similar role to the original ARPANET IMP (Interface Message Processor) which was being deployed at about the same time. In the original system, the MENEHUNE forwarded correctly-received user data to the UH central computer, an IBM System 360/65 time-sharing system. Outgoing messages from the 360 were converted into packets by the MENEHUNE, which were queued and broadcast to the remote users at a data rate of 9600 bps. Unlike the half-duplex radios at the user TCUs, the MENEHUNE was interfaced to the radio channels with full-duplex radio equipment.
[In later versions of the system, MENEHUNE routing capabilities were expanded to allow user nodes to also exchange packets with other user nodes, the ARPANET, and an experimental satellite network. More details are available in the references.]
In the technical context of radio communications
In the technical context of radio communications the word "ALOHA" is always to be written in ALL-CAPS writing like this: ALOHA. There are no exceptions such as, "I didn't feel like it, today." It is just like typing "U.S.A.": all-capitals and never otherwise.
In a similar vein, I have fussed at people who could not capitalize "English", "British", "American", "Canadian", or NATO, each and every time, without ANY exceptions. People who do not do so simply have worms in their brains! 22.214.171.124 (talk) 18:36, 6 April 2010 (UTC)
Not the original network?
Surely the protocol in which a sender listens to the "network" IS NOT the original ALOHA protocol? I believe that such listening was not a feature of the original protocol, and this is why it had such a low bandwidth utilization - around 18-19%.
Later modifications included using clock pulses (slotted ALOHA), and very probably listening before transmitting, as suggested here. I have seen it written that Metcalfe's modifications brought the efficiency of the system up to around 90% channel utilisation, though how many tweaks were needed to get this I'm not sure.
It's difficult to get all the details - there is much which is not easily accessible, or interpretable, at the current time. David Martland 14:56, 22 Oct 2003 (UTC)
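For what it's worth, the 18-19% figure and the effect of the "clock pulses" mentioned above both fall out of the standard Poisson-traffic analysis: slotting the channel halves the vulnerable period, doubling the peak throughput from 1/(2e) ≈ 18.4% to 1/e ≈ 36.8%. A quick sketch (the function name is mine, the formula is the textbook one):

```python
import math

def slotted_aloha_throughput(G: float) -> float:
    """S = G * e^-G for slotted ALOHA: a packet succeeds when no other
    packet is offered in the same slot, so the vulnerable period is one
    slot rather than the two packet times of pure (unslotted) ALOHA."""
    return G * math.exp(-G)

# Peak at G = 1: S = 1/e, about 0.368 -- double pure ALOHA's 1/(2e).
# Getting anywhere near 90% requires different techniques entirely
# (carrier sense, reservations, scheduling), not just slotting.
```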
There are other details which it would be good to know about for the original implementation. For example, was the transmitter on Oahu (and indeed those at the other islands) "on" all the time? Obviously the receivers would have been on permanently. It might have been possible to somehow idle the transmitter, and then only apply power when there was a packet to send. This might have meant that there was not always a carrier to detect. With that sort of technology, the presence or absence of an FM carrier could have been used to determine whether the "channel" was active or not. However, if the carrier was always on, then what distinguished packet data from idling? Was there some form of idling bit sequence - alternating 0s and 1s perhaps, so that detecting "silence" would have been done by detecting a sufficient number of bits from this bit pattern? Where most data was character data, it would probably only have been necessary to detect a couple of bytes worth to be reasonably sure that it wasn't data. In the case where this pattern actually did occur within a packet, it would simply register as a collision if another station tried to transmit over it.
The situation with inbound (towards Menehune/Oahu) signals would also have been different, as there would have been several transmitters all capable of transmitting on the same frequency. For that situation it would seem necessary to reduce power, or switch off each transmitter when not sending a packet, in order to not mask out the other stations. Perhaps it was this realisation which led the developers eventually to suggest that their decision to use two frequencies - one for outbound and one for inbound data, was in fact the wrong decision - and that a single frequency network should have been developed.
What was the effect of capture ratio on the signals? Since FM was used, and since the stations were quite widely separated, it would actually have been possible for two stations to try to communicate simultaneously, and for only one to fail, due to one having a stronger signal at the receiver. A few dB difference in signal might have rendered this quite feasible. If indeed the acknowledgements were done by echoing the message, then the stronger message could have been echoed back, and then the other station could retry. Was this significant at all? It would clearly have improved the overall capacity, though it could also have meant that some stations tended to mask out others - perhaps consistently, and hence unfairly.
Does anyone know? David Martland 18:43, 22 Oct 2003 (UTC)
Further question: It would also be good to know something about how ALOHA was "really" developed - if anyone knows, and/or is willing to spill the beans. Was the system really carefully worked out, or was it put together by a "go down to Radio Shack, buy it, and try it" approach, discovering the problems as they arose? I suspect that the real development was a combination of "discovery" and predictive design - nothing wrong with that really - lots of systems get developed this way. Most people would probably be concerned to get wireless communication between remote locations working first, and then worry about problems with collisions etc. later. Is that what happened? David Martland 18:50, 22 Oct 2003 (UTC)
Two reasons?
First section said it was important for two reasons, but only listed one. Was the other removed? Removed "two reasons". Tualha 16:30, 30 Nov 2003 (UTC)
I know the answers to lots of the questions here, after doing a lot of digging. I am writing a paper/talk about this, and will write up a summary for this place, but please be a bit patient. [Ignatios Souvatzis]
Menehune
I figure the menehune needs a mention (and some explanation). The page at http://research.microsoft.com/~gbell/Computer_Structures_Principles_and_Examples/csp0432.htm explains it a bit (with a uselessly small diagram). And I think we should say something like "...this network concentrator was named the MENEHUNE, after a mischievous type of Polynesian fairy (see Menehune)". I'd add it in myself, but I can't really figure out where it belongs in the article. -- Finlay McWalter | Talk 21:37, 16 Feb 2004 (UTC)
The ALOHAnet did not have CS!
It seems clear to me that the article is incorrect in stating that the ALOHAnet network was using CSMA.
With one frequency being used for "multiple access" and the other for the acknowledgements "broadcast" it is clear that this cannot be the case.
Stations would not be able to "listen" (detect carrier) since they just transmitted on MA channel and listened only to the "broadcast" channel for the acknowledgements of their messages.
The system would therefore be best described as MA/CD, but even the "CD" is with a twist. The stations did not really detect collisions; rather, they "knew" there had been a collision (or some other problem) when they did not get their acknowledgement on the broadcast channel.
CS was only "invented" by Metcalfe around 1976 and he also made CD a feature of every station ...
I'm going to rewrite this later... Notanotheridiot 18:26, 24 April 2007 (UTC)
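The behaviour described in this section (no carrier sense; a collision is only inferred from a missing acknowledgement) amounts to a stop-and-wait loop with a timeout and randomized backoff. A hypothetical sketch, with all names and parameters invented for illustration (the real TCU was hardware, not software):

```python
import random
import time

def send_with_retries(transmit, ack_received, max_retries=5,
                      max_backoff_s=0.05, sleep=time.sleep):
    """Stop-and-wait loop as described in the discussion above:
    transmit on the random-access channel, then wait for the
    MENEHUNE's acknowledgment on the broadcast channel. A missing
    ACK is the only way a node learns of a collision (or loss).
    `transmit` and `ack_received` are caller-supplied callbacks;
    everything here is illustrative, not the real TCU design."""
    for _ in range(max_retries + 1):
        transmit()
        if ack_received():             # ACK arrived within the timeout
            return True
        # No ACK: retransmit after a random delay (the TCU used a
        # hardware random-interval generator) to avoid a repeat collision.
        sleep(random.uniform(0, max_backoff_s))
    # After the prescribed number of retries the TCU gave up and
    # flashed a light at the human user.
    return False
```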
The ALOHA protocol
I removed: (like a grade school classroom at recess) from the end of: This means that 81.6% of the total available bandwidth is basically being wasted due to stations trying to talk at the same time.
People, this article is getting worse with every edit. It's now completely disorganized, filled with jargon, and has mixed tenses and styles. There's several sections on how the protocol works, two on the history, and an intro that is much too short. I'm a little overwhelmed with other articles right now to jump in though. Maury 00:51, 23 January 2007 (UTC)
I have made a few changes. I removed the erroneous statement that stations listen before sending, which is not the case for vanilla ALOHA. I'm not sure if any variant tried that. It might work on short-distance wireless networks, and does of course work on short wired LANs. Hidden station problems for wireless operation make this problematic.
I also added more about the historical information, such as we have it. Metcalfe did apparently get the system working much more effectively, but this very likely used protocols more advanced than either ALOHA or slotted ALOHA. It may have included packet reservation, or packet scheduling. It may also have used additional knowledge of traffic patterns, as for quite a while I think it was a centralised system. I am unaware of the details. Initially the network would have had only a few internal nodes and connected units, so traffic would have been relatively light. Did the number of nodes and connected units ever increase, say to 100 or more? Did they ever change the signaling rate to use higher data rates?
Also, was all communication on ALOHA node to node, or did they ever use broadcast modes, which might be useful for software updates etc.?
How long was the basic network running before it was substantially overhauled? Was it ever replaced, or did it just gradually evolve into a significantly different system?
Indeed, what system are they using in Hawaii now? David Martland 09:48, 15 October 2007 (UTC)
In addition - quoting from the article:
"This shared transmission medium system generated interest by others. ALOHA's scheme was very simple. Because data was sent via a teletype the data rate usually did not go beyond 80 characters per second. When two stations tried to send at the same time, both transmissions were garbled."
Even if teletypes were used, this might only have restricted the mean transfer rate. The data transfer rate across the radio links could have been higher, as surely the data was buffered up into frames before sending. In those days it was common for local echo to be used with transmission only on pressing the RETURN key.
In addition, the University of Hawaii was relatively well-equipped with modern equipment for the time, and it has been documented that they used VDUs, which were probably operating at 9600 baud - approx one kbyte/second. The return rate for outbound content from Oahu, for example if requests for Unix help files were made across the network, could have been close to 1000 characters/second. Then, as now, it is possible that the network was used in an asymmetric fashion. David Martland 21:09, 15 October 2007 (UTC)
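The arithmetic behind "9600 baud - approx one kbyte/second" checks out under the usual assumption of 10 bits per asynchronous character (start bit + 8 data bits + stop bit); the helper name here is mine:

```python
def chars_per_second(baud: int, bits_per_char: int = 10) -> float:
    """Character throughput of an asynchronous serial link.
    10 bits per character (start + 8 data + stop) is the
    conventional assumption for async serial framing."""
    return baud / bits_per_char

# 9600 baud -> 960 characters/second, i.e. "close to 1000 chars/sec";
# 24000 baud (the figure debated below) -> 2400 characters/second.
```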
possible mistake
I have never contributed to wikipedia before, so I won't change the article itself, since I don't really know how. But, as I am currently doing a project concerning ALOHANET, while searching for the actual bitrate of ALOHANET I found this document: http://ethernethistory.typepad.com/papers/ALOHAnet.pdf On page 12 it says that ALOHANET was run on two 24000 baud channels, not 9600 as it is written in this article. Someone visiting this place - please verify my information.
Miscellaneous Mistakes
Reading the original ALOHANET paper is helpful. Errors:
- The data rate on the radio channels was 24,000 baud, not 2400 or 9600 baud. That's consistent with the allocation of 100 kilohertz of spectrum for each channel. By modern standards, that's terrible bandwidth utilization, but modems were very primitive then.
- The remote terminals were not "teletypes". The article uses that word, but the paper does not. The paper describes a typical load as "each user sending one message every 30 seconds". So this was a store-and-transmit terminal, like some predecessor of an IBM 3270. That makes sense: the central machine was an IBM 360/65, which was designed to work with such terminals, not with teletypewriters. It also meant that response delays on the order of seconds weren't a problem.
- Note that there's no contention on the outbound traffic from the MENEHUNE, and that most of the traffic is outbound (computer to user).
Incidentally, when I first saw an Ethernet on a tour of Xerox PARC in 1975, it was described to me by Alan Kay as "an ALOHAnet with a captive ether". --John Nagle (talk) 06:33, 24 October 2008 (UTC)
Wireless
That it was the world's first wireless packet-switched network is one of the most interesting details about ALOHAnet. I think this should be considered for inclusion in the introduction. Lloydbudd (talk) 05:22, 6 July 2009 (UTC)
Is that true? At least the ARPANET article says: "In 1968, two satellite links, traversing the Pacific and Atlantic oceans, to Hawaii and Norway, one, the Norwegian Seismic Array (NORSAR), were connected to the ARPANET. Moreover, from Norway, a terrestrial circuit added a London IMP to the network in 1973." Dokaspar (talk) 17:13, 25 March 2010 (UTC)
I was there
I notice that there are several places in this article where people have questions about how the original ALOHANet worked. I will be happy to answer any questions you may have about ALOHA.