Ethernet

Ethernet is a large and diverse family of frame-based computer networking technologies for local area networks (LANs). The name comes from the physical concept of the ether. It defines a number of wiring and signaling standards for the physical layer, two means of network access at the Media Access Control (MAC)/data link layer, and a common addressing format.

Ethernet has been standardized as IEEE 802.3. Its star-topology, twisted-pair wiring form became the most widespread LAN technology in use from the 1990s to the present, largely replacing earlier coaxial-cable Ethernet and competing LAN standards such as Token Ring, FDDI, and ARCNET. Ethernet's primary competitor in today's local area network market is Wi-Fi, the wireless LAN standardized by IEEE 802.11.

General description

A 1990s Ethernet network interface card. This is a combo card that supports both coaxial-based 10BASE2 (BNC connector, left) and Twisted-pair-based 10BASE-T (RJ-45 connector, right).

Ethernet was originally based on the idea of computers communicating over a shared coaxial cable acting as a broadcast transmission medium. The methods used show some similarities to radio systems (though there are major differences, like the fact that it is much easier to detect collisions in a cable broadcast system than a radio broadcast). The common cable providing the communication channel was likened to the ether (a reference to the luminiferous ether) and it was from this reference that the name 'Ethernet' was derived.

From this early and comparatively simple concept Ethernet evolved into the complex networking technology that today powers the vast majority of local computer networks. The coaxial cable was later replaced with point-to-point links connected together by hubs and/or switches in order to reduce installation costs, increase reliability, and enable point-to-point management and troubleshooting. StarLAN was the first step in the evolution of Ethernet from a coaxial cable bus to a hub-managed, twisted pair network. The advent of twisted-pair wiring enabled Ethernet to become a commercial success.

Above the physical layer, Ethernet stations communicate by sending each other data packets: small blocks of data that are individually sent and delivered. As with other IEEE 802 LANs, each Ethernet station is given a single 48-bit MAC address, which is used to specify both the destination and the source of each data packet. Network interface cards (NICs) or chips normally do not accept packets addressed to other Ethernet stations. Adapters generally come programmed with a globally unique address, but this can be overridden, either to avoid an address change when an adapter is replaced or to use locally administered addresses.
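
To make the addressing details above concrete, here is a minimal Python sketch (not part of the original article) that parses a 48-bit MAC address and inspects the two flag bits defined for IEEE 802 MAC addresses: the individual/group bit and the universal/locally-administered bit. The example address is hypothetical.

    # Minimal sketch: inspect the flag bits of a 48-bit IEEE 802 MAC address.
    def parse_mac(mac: str) -> dict:
        octets = [int(part, 16) for part in mac.split(":")]
        assert len(octets) == 6, "a MAC address is 48 bits (6 octets)"
        first = octets[0]
        return {
            "multicast": bool(first & 0x01),             # I/G bit: group (multicast/broadcast) address
            "locally_administered": bool(first & 0x02),  # U/L bit: address has been overridden locally
            "octets": octets,
        }

    # Hypothetical example address with the locally administered bit set.
    print(parse_mac("02:00:5e:10:00:01"))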

Despite the huge changes in Ethernet from a thick coaxial cable bus running at 10 Mbit/s to point-to-point links running at 1 Gbit/s and beyond, the different variants remain essentially the same from the programmer's point of view and are easily interconnected using readily available inexpensive hardware. This is because the frame format remains the same, even though network access procedures are radically different.

Due to the ubiquity of Ethernet, the ever-decreasing cost of the hardware needed to support it, and the reduced panel space needed by twisted-pair Ethernet, most manufacturers now build the functionality of an Ethernet card directly into PC motherboards, obviating the need to install a separate network card.

Physical layer

Ethernet evolved over a considerable time span and encompasses quite a few physical media interfaces. Commonly installed Gigabit Ethernet uses a PAM-5 modulation scheme over copper wiring and 8B/10B encoding over fiber.

Dealing with multiple users

CSMA/CD shared medium Ethernet

Ethernet originally used a shared coaxial cable (the shared medium) winding around a building or campus to every attached machine. A scheme known as carrier sense multiple access with collision detection (CSMA/CD) governed the way the computers shared the channel. The scheme was relatively simple compared to competing technologies such as Token Ring or Token Bus. When one computer wanted to send some information, it used the following algorithm:

Main procedure

  1. Frame ready for transmission
  2. Is medium idle? If not, wait until it becomes idle, then wait the interframe gap period (9.6 µs in 10 Mbit/s Ethernet).
  3. Start transmitting
  4. Does a collision occur? If so, go to collision detected procedure.
  5. End successful transmission

Collision detected procedure

  1. Continue transmission until minimum packet time is reached (jam signal) to ensure that all receivers detect the collision
  2. Is maximum number of transmission attempts reached? If so, abort transmission.
  3. Calculate and wait random backoff period
  4. Re-enter main procedure at stage 1
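
The two procedures above amount to a simple control loop. The following Python sketch is only an illustration of that flow, assuming hypothetical medium_idle, transmit, collision_detected and send_jam callbacks standing in for the physical-layer hardware; the constants are the standard 10 Mbit/s values (9.6 µs interframe gap, 51.2 µs slot time, 16 attempts, backoff exponent truncated at 10).

    import random
    import time

    IFG_SECONDS = 9.6e-6    # interframe gap for 10 Mbit/s Ethernet
    SLOT_TIME = 51.2e-6     # 512 bit times at 10 Mbit/s
    MAX_ATTEMPTS = 16       # attempts before the transmission is aborted
    BACKOFF_LIMIT = 10      # exponent is capped ("truncated") at 10

    def csma_cd_send(frame, medium_idle, transmit, collision_detected, send_jam):
        """Control-flow sketch of the main and collision procedures above.

        medium_idle / transmit / collision_detected / send_jam are hypothetical
        callbacks standing in for the physical-layer hardware."""
        for attempt in range(1, MAX_ATTEMPTS + 1):
            while not medium_idle():               # step 2: wait for an idle medium
                pass
            time.sleep(IFG_SECONDS)                # then wait the interframe gap
            transmit(frame)                        # step 3: start transmitting
            if not collision_detected():           # step 4: any collision?
                return True                        # step 5: successful transmission
            send_jam()                             # ensure all receivers detect the collision
            k = min(attempt, BACKOFF_LIMIT)
            slots = random.randint(0, 2 ** k - 1)  # truncated binary exponential backoff
            time.sleep(slots * SLOT_TIME)          # wait the random backoff, then retry
        return False                               # maximum attempts reached: abort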

This works something like a dinner party, where all the guests talk to each other through a common medium (the air). Before speaking, each guest politely waits for the current guest to finish. If two guests start speaking at the same time, both stop and wait for short, random periods of time (measured in microseconds). The hope is that by each choosing a random period of time, both guests will not choose the same time to try to speak again, thus avoiding another collision. Exponentially increasing back-off times (determined using the truncated binary exponential backoff algorithm) are used when there is more than one failed attempt to transmit.

Computers were connected to an Attachment Unit Interface (AUI) transceiver, which in turn connected to the cable. While a simple passive wire was highly reliable for small Ethernets, it was not reliable for large extended networks, where damage to the wire in a single place, or a single bad connector could make the whole Ethernet segment unusable. Multipoint systems are also prone to very strange failure modes when an electrical discontinuity reflects the signal in such a manner that some nodes would work just fine while others would work slowly due to excessive retries or not at all (see standing wave for an explanation of why); these could be much more painful to diagnose than a complete failure of the segment. Debugging such failures often involved several people crawling around wiggling connectors while others watched the displays of computers running ping and shouted out reports as performance changed.

Since all communications happen on the same wire, any information sent by one computer is received by all, even if that information was intended for just one destination. The network interface card filters out information not addressed to it, interrupting the CPU only when applicable packets are received unless the card is put into "promiscuous mode". This "one speaks, all listen" property is a security weakness of shared-medium Ethernet, since a node on an Ethernet network can eavesdrop on all traffic on the wire if it so chooses. Use of a single cable also means that the bandwidth is shared, so that network traffic can slow to a crawl when, for example, the network and nodes restart after a power failure.

Ethernet repeaters and hubs

For signal degradation and timing reasons, coaxial Ethernet segments had a restricted size which depended on the medium used. For example, 10BASE5 coax cables had a maximum length of 500 metres (1,640 feet). Also, as was the case with most other high-speed buses, Ethernet segments had to be terminated with a resistor at both ends. For coaxial cable based Ethernet, each end of the cable had a 50-ohm resistor and heat sink attached. Typically this was built into a male BNC or N connector and attached to the last device on the bus (or if vampire taps were in use to a socket mounted on the end of the cable just past the last device). If this was not done or if there was a break in the cable the AC signal on the bus was reflected, rather than dissipated, when it reached the end. This reflected signal was indistinguishable from a collision, and so no communication could take place.

A greater length could be obtained by an Ethernet repeater, which took the signal from one Ethernet cable and repeated it onto another cable. If a collision was detected, the repeater transmitted a jam signal onto all ports to ensure collision detection. Repeaters could be used to connect segments such that there were up to five Ethernet segments between any two hosts, three of which could have attached devices. Repeaters could detect an improperly terminated link from the continuous collisions and stop forwarding data from it. Hence they alleviated the problem of cable breakages: when an Ethernet coax segment broke, while all devices on that segment were unable to communicate, repeaters allowed the other segments to continue working (though depending on which segment was broken and the layout of the network the partitioning that resulted may have made other segments unable to reach important servers and thus effectively useless).

People recognized the advantages of cabling in a star topology (primarily that only faults at the star point will result in a badly partitioned network), and network vendors started creating repeaters having multiple ports, thus reducing the number of repeaters required at the star point; multiport Ethernet repeaters became known as "hubs". Network vendors such as DEC and SynOptics sold hubs that connected many 10BASE2 thin coaxial segments. There were also "multi-port transceivers" or "fan-outs". These could be connected to each other and/or a coax backbone. The best-known early example was DEC's DELNI. These devices allowed multiple hosts with AUI connections to share a single transceiver. They also allowed creation of a small standalone Ethernet segment without using a coaxial cable.

A twisted-pair cable used for 10BASE-T Ethernet

Ethernet on unshielded twisted-pair cables (UTP), beginning with StarLAN and continuing with 10BASE-T, was designed for point-to-point links only, and all termination was built into the device. This changed hubs from a specialist device used at the center of large networks into a device that every twisted-pair-based network with more than two machines had to use. This structure made Ethernet networks more reliable by preventing faults with (but not deliberate misbehaviour of) one peer or its associated cable from affecting other devices on the network (though a failure of a hub or an inter-hub link could still affect many users). Also, because twisted-pair Ethernet is point-to-point and terminated inside the hardware, the total empty panel space required around a port is much reduced, making it easier to design hubs with many ports and to integrate Ethernet onto computer motherboards.

Despite the physical star topology, hubbed Ethernet networks use half-duplex and CSMA/CD, with only minimal activity by the hub, primarily the Collision Enforcement signal, in dealing with packet collisions. Every packet is sent to every port on the hub, so bandwidth and security problems aren't addressed. The total throughput of the hub is limited to that of a single link and all links must operate at the same speed.

Collisions reduce throughput by their very nature. In the worst case, when there are lots of hosts with long cables that attempt to transmit many short frames, excessive collisions can reduce throughput dramatically. However, a Xerox report in 1980 summarized the results of having 20 fast nodes attempting to transmit packets of various sizes as quickly as possible on the same Ethernet segment.[citation needed] The results showed that, even for minimal Ethernet frames (64B), 90% throughput on the LAN was the norm. This is in comparison with token passing LANs (Token Ring, Token Bus), all of which suffer throughput degradation as each new node comes into the LAN, due to token waits.
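
As a back-of-the-envelope check on those numbers (not taken from the Xerox report itself), the following Python snippet computes the collision-free maximum frame rate for minimal 64-byte frames on 10 Mbit/s Ethernet, using the standard 8-byte preamble (including the start frame delimiter) and 12-byte-time interframe gap, and what 90% of that rate amounts to.

    BITRATE = 10_000_000    # 10 Mbit/s
    PREAMBLE = 8            # preamble + start frame delimiter, in bytes
    MIN_FRAME = 64          # minimum Ethernet frame, in bytes
    IFG = 12                # interframe gap, in byte times (9.6 us at 10 Mbit/s)

    bits_per_frame = (PREAMBLE + MIN_FRAME + IFG) * 8
    max_frames_per_second = BITRATE / bits_per_frame
    print(f"ideal rate:  {max_frames_per_second:,.0f} minimal frames/s")   # prints roughly 14,881
    print(f"90% loading: {0.9 * max_frames_per_second:,.0f} frames/s")     # prints roughly 13,393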

This report was wildly controversial, as modeling showed that collision-based networks became unstable under loads as low as 40% of nominal capacity.

Bridging and Switching

While repeaters could isolate some aspects of Ethernet segments, such as cable breakages, they still forwarded all traffic to all Ethernet devices. This created significant limits on how many machines could communicate on an Ethernet network. To alleviate this, bridging was created to communicate at the data link layer while isolating the physical layer. With bridging, only well-formed packets are forwarded from one Ethernet segment to another; collisions and packet errors are isolated. Bridges learn where devices are, by watching MAC addresses, and do not forward packets across segments when they know the destination address is not located in that direction.
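
A hedged sketch of the learning behaviour just described: the Python fragment below keeps a table mapping MAC addresses to ports and floods a frame only when the destination is unknown or is the broadcast address. The frame and port representation is a deliberately simplified model, not an actual bridge implementation.

    BROADCAST = "ff:ff:ff:ff:ff:ff"

    class LearningBridge:
        """Toy model of a transparent bridge's forwarding decision."""
        def __init__(self, ports):
            self.ports = ports       # e.g. ["port1", "port2", "port3"]
            self.table = {}          # learned MAC address -> port

        def handle_frame(self, in_port, src_mac, dst_mac):
            self.table[src_mac] = in_port                  # learn which port the source lives on
            if dst_mac != BROADCAST and dst_mac in self.table:
                out = self.table[dst_mac]
                return [] if out == in_port else [out]     # destination is on the same segment: drop
            # unknown or broadcast destination: flood to all other ports
            return [p for p in self.ports if p != in_port]

    bridge = LearningBridge(["port1", "port2"])
    print(bridge.handle_frame("port1", "aa:aa:aa:aa:aa:01", BROADCAST))                 # flooded: ['port2']
    print(bridge.handle_frame("port2", "aa:aa:aa:aa:aa:02", "aa:aa:aa:aa:aa:01"))       # learned: ['port1']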

Early bridges examined each packet one by one, and were significantly slower than hubs (repeaters) at forwarding traffic, especially when handling many ports at the same time. In 1989 the networking company Kalpana introduced their EtherSwitch, the first Ethernet switch. An Ethernet switch does bridging in hardware, allowing it to forward packets at full wire speed. Bridges and switches also allow mixing of speeds, an important feature when equipment of mixed age is in use. Even more importantly (especially with Fast Ethernet), they overcome the cascading limits of hubs, because a collision does not have to be detected by equipment on the other side of the switch or bridge.

Initially, Ethernet bridges and switches work somewhat like Ethernet hubs, with all traffic being echoed to all ports. However, as the switch "learns" the end-points associated with each port, it ceases to send non-broadcast traffic to ports other than the intended destination. In this way, Ethernet switching can allow the full wire speed of Ethernet to be used by any given pair of ports on a single switch.

Since packets are typically only delivered to the port they are intended for, traffic on a switched Ethernet is slightly less public than on shared-medium Ethernet. Despite this, switched Ethernet should still be regarded as an insecure network technology, because it is easy to subvert switched Ethernet systems by means such as ARP spoofing and MAC flooding. The bandwidth advantages, the slightly better isolation of devices from each other and the elimination of the chaining limits inherent in hubbed Ethernet have made switched Ethernet the dominant network technology.

When only a single device (anything but a hub) is connected to a switch port, full-duplex Ethernet becomes possible. In full duplex mode both devices can transmit to each other at the same time and there is no collision domain. This doubles the aggregate bandwidth of the link and was sometimes advertised as double the link speed (e.g. 200 Mbit/s) to account for this. However, this is misleading as performance will only double if traffic patterns are symmetrical (which in reality they rarely are). The elimination of the collision domain also means that all the link's bandwidth can be used (collisions can occupy a lot of bandwidth as links get busy) and that segment length is not limited by the need for correct collision detection (this is most significant with some of the fiber variants of Ethernet).

Dual speed hubs

In the early days of Fast Ethernet, Fast Ethernet switches were relatively expensive devices. Hubs, however, suffered from the problem that if any 10BASE-T devices were connected then the whole system had to run at 10 Mbit/s. A compromise between a hub and a switch therefore appeared, known as a dual-speed hub. These devices effectively split the network into two sections, each acting like a hubbed network at its respective speed, with the device acting as a two-port switch between those two sections. This allowed the two speeds to be mixed without the cost of a Fast Ethernet switch.

More advanced networks

Simple switched Ethernet networks still suffer from a number of issues:

  • They suffer from single points of failure; e.g., if one link or switch goes down in the wrong place the network ends up partitioned.
  • It is possible to trick switches or hosts into sending data to your machine even if it's not intended for it, as indicated above.
  • It is possible for any host to flood the network with broadcast traffic, forming a denial of service attack against any hosts that run at the same or a lower speed than the attacking device.
  • They suffer from bandwidth choke points where a lot of traffic is forced down a single link.

Some managed switches offer a variety of tools to combat these issues, including:

  • spanning-tree protocol to maintain the active links of the network as a tree while allowing physical loops for redundancy.
  • Various port protection features (as it is far more likely an attacker will be on an end system port than on a switch-switch link)
  • VLANs to keep different classes of users separate while using the same physical infrastructure.
  • fast routing at higher levels (to route between those VLANs).
  • Link aggregation to add bandwidth to overloaded links and to provide some measure of redundancy, although the links won't protect against switch failure because they connect the same pair of switches.

Autonegotiation and Duplex mismatch

It is essential that both the switch port and the device connected to it use the same speed and duplex settings. To that end, autonegotiation was introduced in 1995 as an option for 100BASE-TX devices (802.3u). Although it worked correctly in many applications, it had two problems. The first was that implementation was optional, which left some devices incapable of autonegotiation. The second was that a portion of the specification was not tightly written: although most manufacturers implemented it one way, some, including network giant Cisco, implemented it the other way. This unfortunately gave autonegotiation a bad name and led Cisco to recommend to its customers and administrators that they not use it.

The debatable portions of the autonegotiation specification were eliminated by the 1998 release of 802.3z (1000BASE-X), and the negotiation protocol over twisted pair was significantly enhanced for 802.3ab (1000BASE-T). More notably, the new standard required autonegotiation to be enabled in order to achieve gigabit speed over copper wiring. Now, all network equipment manufacturers—including Cisco[1]—recommend using autonegotiation whenever possible.

Note that some switch operating systems, as well as some card drivers, still offer the option to disable autonegotiation and force a twisted-pair connection to 1000Full or 1000Half, but doing so is against the specification and should never be used, as none of the other parameters will be properly negotiated. Instead, the proper way to force gigabit Ethernet over a Cat 5 connection, for example, is to leave autonegotiation enabled but limit the advertised capabilities to 1000BASE-T only[2].

Due to the early interoperability issues with autonegotiation, some well-intentioned people got into the bad habit of automatically locking all ports (on 10/100 Mbit/s equipment) to 100 Mbit/s and full duplex, to ensure maximum performance. While this works if both ends of the connection are locked to the same settings, it is very difficult to maintain such a network and guarantee consistency, especially if the settings are not applied universally. Since autonegotiation is generally the manufacturer's default setting, sooner or later a connection is bound to be locked at full duplex on one end while the other end attempts to autonegotiate.

The effects of this situation are subtle and pernicious. Autonegotiation will fail, and the autonegotiating end of the connection must use half duplex, as required by the standard. (Without autonegotiation, it has no way to know that the peer is configured for full duplex.) The correct speed can still be selected, even without autonegotiation, because the networking hardware can sense the Ethernet carrier speed directly. Therefore, the autonegotiating end of the connection selects 100 Mbps, half duplex while its peer is locked at 100 Mbps, full duplex.
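
The outcome described above can be captured in a small model. The Python sketch below assumes, as the paragraph explains, that an autonegotiating port can sense a forced peer's speed but not its duplex setting and must therefore fall back to half duplex; the data representation is invented purely for illustration.

    def link_outcome(side_a, side_b):
        """side = ("auto", None) for autonegotiation, or ("forced", (speed, duplex)).

        Simplified model of the behaviour described above: a forced port's speed
        can still be sensed by the peer, but its duplex cannot, so the
        autonegotiating peer must fall back to half duplex."""
        results = []
        for me, peer in ((side_a, side_b), (side_b, side_a)):
            if me[0] == "forced":
                results.append(me[1])                # uses whatever it was locked to
            elif peer[0] == "forced":
                speed, _ = peer[1]
                results.append((speed, "half"))      # standard-mandated half-duplex fallback
            else:
                results.append((100, "full"))        # both autonegotiate: assume 100 full is the best common mode
        return results

    # One end locked to 100 Mbit/s full duplex, the other autonegotiating:
    print(link_outcome(("forced", (100, "full")), ("auto", None)))
    # -> [(100, 'full'), (100, 'half')] ... a duplex mismatch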

Despite the duplex mismatch, communication is possible over the connection. Single packets can be sent and acknowledged without a problem, which is why a simple ping command will fail to highlight a duplex mismatch -- single packets (with accompanying acknowledgements) at 1-second intervals work fine. A terminal session which sends data slowly (in very short bursts) can also communicate successfully. However, as soon as either end of the connection attempts to send any significant amount of data, the problem becomes obvious, even if the cause is not so readily apparent.

A large data transfer sent over a TCP connection will stream the data in multiple packets, each of which will trigger an acknowledgement packet back to the sender. The full-duplex end of the connection will merrily send its packets while receiving other packets -- that's the whole point of a full-duplex connection, after all. Meanwhile, the half-duplex end cannot accept the incoming data while it is sending -- it will either ignore the incoming data or sense it as a collision. As a result, almost all of the packets sent by the full-duplex end will be lost, because the half-duplex end is streaming either data packets or acknowledgements at the time.

The lost packets will force the TCP protocol to perform error recovery, but the initial (streamlined) recovery attempts will fail because the retransmitted packets will be lost in exactly the same way as the original packets. Eventually, the TCP transmission window will become full and the TCP protocol will refuse to transmit any further data until the previously-transmitted data is acknowledged. This, in turn, will quiesce the new traffic over the connection, leaving only the retransmissions and acknowledgements. Since the retransmission timer grows progressively longer between attempts, eventually a retransmission will occur when there is no reverse traffic on the connection, and the acknowledgement will finally be received correctly. This will restart the TCP traffic, which in turn immediately causes lost packets as streaming resumes. Repeat ad nauseam.

The end result is a connection that does work (sort of), but performs extremely poorly (think modem speeds) because of the pathological behavior caused by the duplex mismatch. Symptoms to watch for are connections that seem to work fine with a ping command, but "lock up" easily with pathetic throughput on data transfers. (The effective data transfer rate is likely to be asymmetrical, performing much worse in one direction than the other.)

Once a duplex mismatch is found, it can be fixed either by enabling autonegotiation on both ends (working autonegotiation permitting) or by forcing the same settings on both ends (availability of a configuration interface permitting). If there is no option but to have a locked setting on one end and autonegotiation on the other (say, old equipment with broken autonegotiation connected to an unmanaged switch), half duplex must be used.

Ethernet frame types and the EtherType field

Frames are the format of data packets on the wire.

Note that a frame viewed on the actual physical hardware would show start bits (sometimes called the preamble) and the trailing Frame Check Sequence. These are required by all physical hardware and are present in all four of the frame types described below. They are not shown by packet sniffing software because these bits are removed by the NIC before being passed on to the network stack software.

There are four types of Ethernet frame: Ethernet version 2 ("Ethernet II" or DIX) framing, Novell's "raw" IEEE 802.3 framing, IEEE 802.3 framing with an IEEE 802.2 LLC header, and IEEE 802.3 framing with LLC and SNAP headers. Each is described below.

In addition, Ethernet frames may optionally contain an IEEE 802.1Q tag to identify which VLAN the frame belongs to and its IEEE 802.1p priority (quality of service). This doubles the potential number of frame types.

The different frame types have different formats and MTU values, but can coexist on the same physical medium.

Ethernet Type II Frame format

The most common Ethernet frame format, Type II

It is claimed that some older (Xerox?) Ethernet specification had a 16-bit length field, although the maximum length of a packet was 1500 bytes. Versions 1.0 and 2.0 of the Digital/Intel/Xerox (DIX) Ethernet specification, however, have a 16-bit sub-protocol label field called the EtherType, with the convention that values between 0 and 1500 indicated the use of the original Ethernet format with a length field, while values of 1536 decimal (0600 hexadecimal) and greater indicated the use of the new frame format with an EtherType sub-protocol identifier.

IEEE 802.3 defined the 16-bit field after the MAC addresses as a length field again, with the MAC header followed by an IEEE 802.2 LLC header. The convention described earlier allows software to determine whether a frame is an Ethernet II frame or an IEEE 802.3 frame, allowing the coexistence of both standards on the same physical medium. All 802.3 frames have an IEEE 802.2 logical link control (LLC) header. By examining this header, it is possible to determine whether it is followed by a SNAP (subnetwork access protocol) header. (Some protocols, particularly those designed for the OSI networking stack, operate directly on top of 802.2 LLC, which provides both datagram and connection-oriented network services.) The LLC header includes two additional eight-bit address fields (called service access points or SAPs in OSI terminology); when both source and destination SAP are set to the value 0xAA, the SNAP service is requested. The SNAP header allows EtherType values to be used with all IEEE 802 protocols, as well as supporting private protocol ID spaces. In IEEE 802.3x-1997, the IEEE Ethernet standard was changed to explicitly allow the use of the 16-bit field after the MAC addresses to be used as a length field or a type field.
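
The length-versus-EtherType convention described above is easy to express in code. The following Python sketch classifies the 16-bit field that follows the two six-byte MAC addresses; the frame bytes in the example are made up for illustration.

    import struct

    def classify_type_field(frame: bytes) -> str:
        """Interpret the 16-bit field after the two 6-byte MAC addresses."""
        (value,) = struct.unpack("!H", frame[12:14])
        if value >= 0x0600:        # 1536 and above: Ethernet II EtherType
            return f"Ethernet II, EtherType 0x{value:04x}"
        if value <= 1500:          # 1500 and below: IEEE 802.3 length field
            return f"IEEE 802.3, length {value} (followed by an IEEE 802.2 LLC header)"
        return "undefined (reserved range between 1500 and 1536)"

    # Hypothetical IPv4-over-Ethernet-II header: dst MAC, src MAC, EtherType 0x0800
    header = bytes.fromhex("ffffffffffff" + "020000000001" + "0800")
    print(classify_type_field(header))    # Ethernet II, EtherType 0x0800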

Novell's "raw" 802.3 frame format was based on early IEEE 802.3 work. Novell used this as a starting point to create the first implementation of its own IPX Network Protocol over Ethernet. They did not use any LLC header but started the IPX packet directly after the length field. In principle this is not interoperable with the other later variants of 802.x Ethernet, but since IPX has always FF at the first byte (while LLC has not), this mostly coexists on the wire with other Ethernet implementations (with the notable exception of some early forms of DECnet which got confused by this).

Novell NetWare used this frame type by default until the mid-nineties, and since NetWare was very widespread back then (while IP was not), at one point in time most of the world's Ethernet traffic ran over "raw" 802.3 carrying IPX. Since NetWare 4.10, NetWare defaults to IEEE 802.2 with LLC (NetWare frame type Ethernet_802.2) when using IPX. (See "Ethernet Framing" in References for details.)

Mac OS uses 802.2/SNAP framing for the AppleTalk protocol suite on Ethernet ("EtherTalk") and Ethernet II framing for TCP/IP.

The 802.2 variants of Ethernet are not in widespread use on common networks today, with the exception of large corporate NetWare installations that have not yet migrated to NetWare over IP. In the past, many corporate networks supported 802.2 Ethernet to support transparent translating bridges between Ethernet and IEEE 802.5 Token Ring or FDDI networks. The most common framing type used today is Ethernet version 2, as it is used by most Internet Protocol-based networks, with its EtherType set to 0x0800 for IPv4 and 0x86DD for IPv6.

There exists an Internet standard for encapsulating IP version 4 traffic in IEEE 802.2 frames with LLC/SNAP headers.[3] It is almost never implemented on Ethernet (although it is used on FDDI and on Token ring, IEEE 802.11, and other IEEE 802 networks). IP traffic can not be encapsulated in IEEE 802.2 LLC frames without SNAP because, although there is an LLC protocol type for IP, there is no LLC protocol type for ARP. IP Version 6 can also be transmitted over Ethernet using IEEE 802.2 with LLC/SNAP, but, again, that's almost never used (although LLC/SNAP encapsulation of IPv6 is used on IEEE 802 networks).

The IEEE 802.1Q tag, if present, is placed between the Source Address and the EtherType or Length fields. The first two bytes of the tag are the Tag Protocol Identifier (TPID) value of 0x8100. This is located in the same place as the EtherType/Length field in untagged frames, so an EtherType value of 0x8100 means the frame is tagged, and the true EtherType/Length is located after the tag. The TPID is followed by two bytes containing the Tag Control Information (TCI) (the IEEE 802.1p priority (quality of service) and VLAN id). The tag is followed by the rest of the frame, using one of the types described above.
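
Following the layout just described, a tagged frame can be recognised by its TPID, and the priority and VLAN ID can be extracted from the Tag Control Information. A minimal Python sketch, assuming the buffer starts at the destination MAC address (the example values are hypothetical):

    import struct

    def parse_8021q(frame: bytes):
        """Return (priority, vlan_id, ethertype) for an 802.1Q-tagged frame, else None."""
        tpid, = struct.unpack("!H", frame[12:14])
        if tpid != 0x8100:             # not an 802.1Q-tagged frame
            return None
        tci, ethertype = struct.unpack("!HH", frame[14:18])
        priority = tci >> 13           # 3-bit IEEE 802.1p priority
        vlan_id = tci & 0x0FFF         # 12-bit VLAN identifier
        return priority, vlan_id, ethertype

    # Hypothetical tagged frame: TPID 0x8100, priority 5, VLAN 42, IPv4 payload
    tag = struct.pack("!HHH", 0x8100, (5 << 13) | 42, 0x0800)
    frame = bytes(12) + tag            # 12 zero bytes stand in for the two MAC addresses
    print(parse_8021q(frame))          # (5, 42, 2048)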

Varieties of Ethernet

The first Ethernet networks, 10BASE5, used thick yellow cable with vampire taps as a shared medium (using CSMA/CD). Later, 10BASE2 Ethernet used thinner coaxial cable (with BNC connectors) as the shared CSMA/CD medium. The later StarLAN 1BASE5 and 10BASE-T used twisted pair connected to Ethernet hubs with RJ-45 connectors.

Currently Ethernet has many varieties that vary both in speed and in the physical medium used. Perhaps the most common forms are 10BASE-T, 100BASE-TX, and 1000BASE-T. All three utilize twisted-pair cables and RJ-45 connectors. They run at 10 Mbit/s, 100 Mbit/s, and 1 Gbit/s, respectively. However, each version has become steadily more selective about the cable it runs on, and some installers have avoided 1000BASE-T for everything except short connections to servers.

Fiber optic variants of Ethernet are commonly seen connecting buildings or network cabinets in different parts of a building but are rarely connected to end systems, for cost reasons. Their advantages lie in performance (fiber versions of a new speed almost invariably come out before copper), distance (up to tens of kilometers with some versions), and electrical isolation. 10-gigabit Ethernet is becoming more popular in both enterprise and carrier networks, with discussions starting on 40G and 100G Ethernet.

Through Ethernet's history there have also been RF versions of Ethernet, both wireline and wireless. However, the current major wireless standards are based on 802.11, which is not a version of Ethernet, though often it is bridged to an Ethernet backbone network.

History

Ethernet was originally developed as one of the many pioneering projects at Xerox PARC. A common story states that Ethernet was invented in 1972, when Robert Metcalfe wrote a memo to his bosses at PARC about Ethernet's potential. But Metcalfe claims Ethernet was actually invented over a period of several years. In 1976, Metcalfe and his assistant David Boggs published a paper titled Ethernet: Distributed Packet-Switching For Local Computer Networks.

The experimental Ethernet described in that paper ran at 3 Mbit/s, and had 8-bit destination and source address fields, so Ethernet addresses weren't the global addresses they are today. By software convention, the 16 bits after the destination and source address fields were a packet type field, but, as the paper says, "different protocols use disjoint sets of packet types", so those were packet types within a given protocol, rather than the packet type in current Ethernet, which specifies the protocol being used.

Metcalfe left Xerox in 1979 to promote the use of personal computers and local area networks (LANs), forming 3Com. He convinced DEC, Intel, and Xerox to work together to promote Ethernet as a standard, the so-called "DIX" standard, for "Digital/Intel/Xerox"; it standardized 10 Mbit/s Ethernet, with 48-bit destination and source addresses and a global 16-bit type field. The standard was first published on September 30, 1980. It competed with two largely proprietary systems, Token Ring and ARCNET, but those soon found themselves buried under a tidal wave of Ethernet products. In the process, 3Com became a major company.

Metcalfe sometimes jokingly credits Jerry Saltzer for 3Com's success. Saltzer cowrote an influential paper suggesting that token-ring architectures were theoretically superior to Ethernet-style technologies. This result, the story goes, left enough doubt in the minds of computer manufacturers that they decided not to make Ethernet a standard feature, which allowed 3Com to build a business around selling add-in Ethernet network cards. This also led to the saying "Ethernet works better in practice than in theory," which, though a joke, actually makes a valid technical point: the characteristics of typical traffic on actual networks differ from what had been expected before LANs became common in ways that favor the simple design of Ethernet. Add to this the real speed/cost advantage Ethernet products have continually enjoyed over other (Token, FDDI, ATM, etc.) LAN implementations and we see why today's result is that "connect the PC to the network" means connect it via Ethernet.

Metcalfe and Saltzer worked on the same floor at MIT's Project MAC while Metcalfe was doing his Harvard dissertation, in which he worked out the theoretical foundations of Ethernet.

Related standards

  • Networking standards that are not part of the IEEE 802.3 Ethernet standard, but support the Ethernet frame format, and are capable of interoperating with it.
    • LattisNet — A SynOptics pre-standard twisted-pair 10 Mbit/s variant.
    • 100BaseVG — An early contender for 100 Mbit/s Ethernet. It runs over four pairs of Category 3 cabling. A commercial failure.
    • TIA 100BASE-SX — Promoted by the Telecommunications Industry Association. 100BASE-SX is an alternative implementation of 100 Mbit/s Ethernet over fiber; it is incompatible with the official 100BASE-FX standard. Its main feature is interoperability with 10BASE-FL, supporting autonegotiation between 10 Mbit/s and 100 Mbit/s operation -- a feature lacking in the official standards due to the use of differing LED wavelengths. It is targeted at the installed base of 10 Mbit/s fiber network installations.
    • TIA 1000BASE-TX — Promoted by the Telecommunications Industry Association, it was a commercial failure, and no products exist. 1000BASE-TX uses a simpler protocol than the official 1000BASE-T standard so the electronics can be cheaper, but requires Category 6 cabling.
  • Networking standards that do not use the Ethernet frame format but can still be connected to Ethernet using MAC-based bridging.
    • 802.11 — A standard for wireless networking, often known as wireless Ethernet and usually operated with an Ethernet backbone.
  • Long Reach Ethernet
  • Avionics Full-Duplex Switched Ethernet

See also

Implementations

References

  • Metcalfe, Robert M.; Boggs, David R. (1976). "Ethernet: Distributed Packet Switching for Local Computer Networks". Communications of the ACM 19 (5): 395–405 - the original Metcalfe and Boggs paper on Ethernet
  • Digital Equipment Corporation, Intel Corporation, Xerox Corporation (September 1980). "The Ethernet: A Local Area Network" - Version 1.0 of the DIX specification
  • Boggs, David R.; Mogul, Jeffrey C.; Kent, Christopher A. (1988). "Measured capacity of an Ethernet: myths and reality". SIGCOMM '88: Symposium Proceedings on Communications Architectures and Protocols, pp. 222–234 - on the issue of Ethernet bandwidth collapse (full text available from DEC research)
  • IEEE 802.3 2002 standard
  • Don Provan (1993-09-17). "Ethernet Framing". Usenet newsgroup comp.sys.novell, message ID 1993Sep17.190654.13335@novell.com - a classic series of Usenet postings by Novell's Don Provan that have found their way into numerous FAQs and are widely considered the definitive answer to the Novell Frame Type jungle