Systems communicating over Ethernet divide a stream of data into shorter pieces called frames. Each frame contains source and destination addresses, and error-checking data so that damaged frames can be detected and discarded; most often, higher-layer protocols trigger retransmission of lost frames. Per the OSI model, Ethernet provides services up to and including the data link layer.[3] The 48-bit MAC address was adopted by other IEEE 802 networking standards, including IEEE 802.11 (Wi-Fi), as well as by FDDI. EtherType values are also used in Subnetwork Access Protocol (SNAP) headers.
Ethernet is widely used in homes and industry, and interworks well with wireless Wi-Fi technologies. The Internet Protocol is commonly carried over Ethernet and so it is considered one of the key technologies that make up the Internet.
The original forms of Ethernet used a shared communications channel. This concept originated in ALOHAnet, designed in the late 1960s by Norman Abramson. ALOHAnet was a 4800 bps radio network used by the University of Hawaii. When a sender detected that its message had not been received, it would resend the message after waiting for a randomly selected period of time.[4]: 3–4
In 1972, Robert Metcalfe and David Boggs adapted the ALOHAnet approach to transmission over a shared coaxial cable at the Xerox Palo Alto Research Center (Xerox PARC). This network connected Alto computers using a coaxial cable. It first ran on May 22, 1973 with a bit rate of 2.94 Mbps. In a memo written at that time, Metcalfe named the concept "Ethernet". The name was inspired by the former idea that the universe was filled with a "luminiferous aether" that carried electromagnetic waves, and calling it Ethernet emphasized its ability to run over any transmission medium.[5] Ethernet improved on the original ALOHAnet design because a sender would first listen to the channel to determine whether it was already in use. The combination of the new idea of carrier sense with multiple access and collision detection from ALOHAnet became carrier-sense multiple access with collision detection, or CSMA/CD.[4]: 6–7
In 1975, Metcalfe, Boggs and their colleagues Charles Thacker and Butler Lampson filed for a patent on Ethernet, which was granted in 1977.[6] By 1976, 100 Altos at Xerox PARC were connected using Ethernet. In July 1976, Metcalfe and Boggs published the seminal paper Ethernet: Distributed Packet Switching for Local Computer Networks in Communications of the ACM (CACM).[4]: 7 [7] Subsequently, between 1976 and 1978, Ron Crane, Bob Garner, Hal Murray, and Roy Ogus designed a 10 Mbps version of Ethernet running over coaxial cable.[8]
There were multiple local area network technologies in the 1970s. These included IBM's token ring, Network Systems Corporation's HYPERchannel and Datapoint's ARCnet. All were proprietary at the time. Metcalfe and David Liddle developed a strategy of standardizing Ethernet rather than keeping it vendor-specific, and convinced Digital Equipment Corporation (DEC), Intel, and Xerox to work together on a standard, subsequently known as the DIX standard, based on the 10 Mbps version of Ethernet and published in 1980 as the Ethernet Blue Book.[9] Version 2 was published in November 1982.[10][4]: 7–8
In June 1981, the Institute of Electrical and Electronics Engineers (IEEE) Project 802 (for local area network standards) created an 802.3 subcommittee to produce an Ethernet standard based on DIX. In 1983, a standard was published for 10 Mbps Ethernet over a coaxial cable of up to 500 meters (10BASE5). It differed only in some details from the DIX standard. As part of the standardization process, Xerox turned over all its Ethernet patents to the IEEE, and anyone can implement 802.3. IEEE 802.3 is now generally treated as synonymous with Ethernet.[4]: 8
In June 1979, Metcalfe left Xerox to found the Computer, Communication, and Compatibility Corporation, better known as 3Com, along with Howard Charney, Ron Crane, Greg Shaw, and Bill Kraus. Metcalfe's vision was to sell Ethernet adapters for all personal computers. Apple quickly agreed, but IBM was committed to its own LAN protocol, Token Ring. Nonetheless, 3Com developed the EtherLink ISA adapter and started shipping it with DOS driver software, making it usable on IBM PCs.[4]: 9
The EtherLink adapter had several advantages over competitors. It was the first network interface card (NIC) to use VLSI semiconductor technology (developed in partnership with Seeq Technologies). This meant most of the functions, including the transceiver, could be contained on a single chip, so the price of the EtherLink ($950) was significantly lower than that of its competitors. 3Com introduced a new, thinner coaxial cable for the card, called Thin Ethernet, making it more convenient to install and use. Finally, the EtherLink was the first Ethernet adapter for the IBM PC.[4]: 9–10
Because both businesses and home users adopted the IBM PC, its market expanded rapidly, and by 1982, IBM was shipping 200,000 units a month. Since IBM had not anticipated that businesses would want the computers connected by a network, EtherLink sales filled the vacuum, and in 1984 3Com was able to file for a public stock offering. The EtherLink approach was standardized by IEEE as 10BASE2 in 1984.[4]: 11
Also in the early 1980s, Novell began selling network interface cards (NICs) to go with its NetWare operating system. These NE2000 NICs were all Ethernet, and because NetWare became an important application for businesses, this increased the demand for Ethernet adapters. Then in 1989, Novell sold its NIC business and licensed the NE2000 card, creating a highly competitive market and driving the price of Ethernet cards down, while cards for other technologies such as IBM's token ring remained expensive.[4]: 16–17
Starting in late 1983, AT&T and NCR promoted a star configuration using unshielded twisted pair cabling (UTP), or regular telephone wire. This became StarLAN, running at 1 Mbps over cables up to 500 meters, and was standardized as 1BASE5 by IEEE 802.3,[4]: 12–13 but on August 17, 1987, SynOptics introduced LATTISNET, with 10 Mbps Ethernet also over regular telephone wire (UTP).[4]: 14 In the fall of 1990, the IEEE issued the 802.3i standard for 10BASE-T, Ethernet over twisted pairs, and the following year, Ethernet sales nearly doubled.[4]: 15–16 By 1992, Ethernet was the de facto standard for LANs.[4]: 17
An Intel 82574L Gigabit Ethernet NIC, PCI Express ×1 card
In February 1980, the Institute of Electrical and Electronics Engineers (IEEE) started Project 802 to standardize local area networks (LAN).[11][12] The DIX group with Gary Robinson (DEC), Phil Arst (Intel), and Bob Printis (Xerox)[13] submitted the so-called Blue Book CSMA/CD specification as a candidate for the LAN specification.[14] In addition to CSMA/CD, Token Ring (supported by IBM) and Token Bus (selected and henceforward supported by General Motors) were also considered as candidates for a LAN standard. Competing proposals and broad interest in the initiative led to strong disagreement over which technology to standardize. In December 1980, the group was split into three subgroups, and standardization proceeded separately for each proposal.[15][11]
Delays in the standards process put at risk the market introduction of the Xerox Star workstation and 3Com's Ethernet LAN products. With such business implications in mind, David Liddle (General Manager, Xerox Office Systems) and Metcalfe (3Com) strongly supported a proposal of Fritz Röscheisen (Siemens Private Networks) for an alliance in the emerging office communication market, including Siemens' support for the international standardization of Ethernet (April 10, 1981). Ingrid Fromm, Siemens' representative to IEEE 802, quickly achieved broader support for Ethernet beyond IEEE by the establishment of a competing Task Group "Local Networks" within the European standards body ECMA TC24.[16] In March 1982, ECMA TC24 with its corporate members reached an agreement on a standard for CSMA/CD based on the IEEE 802 draft,[11]: 364 and the International Organization for Standardization adopted ISO 8802-3 for Ethernet.[4]: 8 Because the DIX proposal was most technically complete, and because of the speedy action taken by ECMA, which decisively contributed to the conciliation of opinions within IEEE, the IEEE 802.3 CSMA/CD standard was approved in December 1982.[11] IEEE published the 802.3 standard as a draft in 1983 and as a standard in 1985.[17]
Ethernet has evolved to include higher bandwidth, improved medium access control methods, and different physical media. The multidrop coaxial cable was replaced with physical point-to-point links connected by Ethernet repeaters or switches.[19]
Ethernet stations communicate by sending each other data packets: blocks of data individually sent and delivered. As with other IEEE 802 LANs, adapters come programmed with a globally unique 48-bit MAC address so that each Ethernet station has a unique address.[a] The MAC addresses are used to specify both the destination and the source of each data packet. Ethernet establishes link-level connections, which can be defined using both the destination and source addresses. On reception of a transmission, the receiver uses the destination address to determine whether the transmission is relevant to the station or should be ignored. A network interface normally does not accept packets addressed to other Ethernet stations.[b][c]
An EtherType field in each frame is used by the operating system on the receiving station to select the appropriate protocol module (e.g., an Internet Protocol version such as IPv4). Ethernet frames are said to be self-identifying, because of the EtherType field. Self-identifying frames make it possible to intermix multiple protocols on the same physical network and allow a single computer to use multiple protocols together.[20] Despite the evolution of Ethernet technology, all generations of Ethernet (excluding early experimental versions) use the same frame formats.[21] Mixed-speed networks can be built using Ethernet switches and repeaters supporting the desired Ethernet variants.[22]
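As an informal illustration of this addressing and EtherType demultiplexing, the following Python sketch parses a frame header and dispatches the payload by EtherType; the station address, handler table, and function names are assumptions made for the example, not part of any standard API.

    import struct

    MY_MAC = bytes.fromhex("001122334455")   # assumed address of this station
    BROADCAST = b"\xff" * 6

    # Hypothetical protocol modules keyed by EtherType value.
    HANDLERS = {
        0x0800: lambda payload: print("IPv4 payload of", len(payload), "bytes"),
        0x0806: lambda payload: print("ARP payload of", len(payload), "bytes"),
        0x86DD: lambda payload: print("IPv6 payload of", len(payload), "bytes"),
    }

    def receive(frame: bytes) -> None:
        dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
        if dst not in (MY_MAC, BROADCAST):
            return                      # filtering normally done by the interface itself
        handler = HANDLERS.get(ethertype)
        if handler is not None:
            handler(frame[14:])         # the EtherType field selects the protocol module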
Due to the ubiquity of Ethernet, and the ever-decreasing cost of the hardware needed to support it, by 2004 most manufacturers built Ethernet interfaces directly into PC motherboards, eliminating the need for a separate network card.[23]
Older Ethernet equipment. Clockwise from top-left: an Ethernet transceiver with an in-line 10BASE2 adapter, a similar model transceiver with a 10BASE5 adapter, an AUI cable, a different style of transceiver with 10BASE2 BNC T-connector, two 10BASE5 end fittings (N connectors), an orange vampire tap installation tool (which includes a specialized drill bit at one end and a socket wrench at the other), and an early model 10BASE5 transceiver (h4000) manufactured by DEC. The short length of yellow 10BASE5 cable has one end fitted with an N connector and the other end prepared to have an N connector shell installed; the half-black, half-grey rectangular object through which the cable passes is an installed vampire tap.
Ethernet was originally based on the idea of computers communicating over a shared coaxial cable acting as a broadcast transmission medium. The method used was similar to those used in radio systems,[d] with the common cable providing the communication channel likened to the luminiferous aether in 19th-century physics, and it was from this reference that the name Ethernet was derived.[24]
The original Ethernet's shared coaxial cable (the shared medium) traversed a building or campus to connect every attached machine. A scheme known as carrier-sense multiple access with collision detection (CSMA/CD) governed the way the computers shared the channel. This scheme was simpler than competing Token Ring or Token Bus technologies.[e] Computers are connected to an Attachment Unit Interface (AUI) transceiver, which is in turn connected to the cable (with thin Ethernet the transceiver is usually integrated into the network adapter). While a simple passive wire is highly reliable for small networks, it is not reliable for large extended networks, where damage to the wire in a single place, or a single bad connector, can make the whole Ethernet segment unusable.[f]
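The CSMA/CD procedure can be summarized in code. The following Python sketch is illustrative only, assuming a hypothetical channel object whose idle(), send(), collision_detected() and send_jam() methods stand in for the transceiver hardware; the backoff shown is the truncated binary exponential backoff used by classic Ethernet.

    import random
    import time

    SLOT_TIME = 51.2e-6      # seconds; the 512-bit slot time of 10 Mbit/s Ethernet
    MAX_ATTEMPTS = 16

    def transmit(frame, channel):
        for attempt in range(1, MAX_ATTEMPTS + 1):
            while not channel.idle():               # carrier sense: listen before sending
                time.sleep(SLOT_TIME)
            channel.send(frame)
            if not channel.collision_detected():    # collision detection while sending
                return True
            channel.send_jam()                      # ensure every station notices the collision
            k = min(attempt, 10)                    # truncated binary exponential backoff
            time.sleep(random.randint(0, 2 ** k - 1) * SLOT_TIME)
        return False                                # excessive collisions: give up

    class IdleChannel:
        # Trivial stand-in used only to exercise the sketch.
        def idle(self): return True
        def send(self, frame): pass
        def collision_detected(self): return False
        def send_jam(self): pass

    transmit(b"example frame", IdleChannel())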
Through the first half of the 1980s, Ethernet's 10BASE5 implementation used a coaxial cable 0.375 inches (9.5 mm) in diameter, later referred to as thick Ethernet or thicknet. Its successor, 10BASE2, called thin Ethernet or thinnet, used the RG-58 coaxial cable. The emphasis was on making installation of the cable easier and less costly.[25]: 57
Since all communication happens on the same wire, any information sent by one computer is received by all, even if that information is intended for just one destination.[g] The network interface card interrupts the CPU only when applicable packets are received: the card ignores information not addressed to it.[b] Use of a single cable also means that the data bandwidth is shared, such that, for example, available data bandwidth to each device is halved when two stations are simultaneously active.[26]
A collision happens when two stations attempt to transmit at the same time. Collisions corrupt the transmitted data and require the stations to retransmit; the lost data and retransmissions reduce throughput. In the worst case, where multiple active hosts connected with the maximum allowed cable length attempt to transmit many short frames, excessive collisions can reduce throughput dramatically. However, a Xerox report in 1980, published in Communications of the ACM, studied the performance of an existing Ethernet installation under both normal and artificially generated heavy load and reported an observed throughput of 98% on the LAN.[27] This is in contrast with token passing LANs (Token Ring, Token Bus), all of which suffer throughput degradation as each new node comes into the LAN, due to token waits. The report was controversial, as modeling had shown that collision-based networks theoretically became unstable under loads as low as 37% of nominal capacity. Many early researchers failed to understand these results; performance on real networks is significantly better.[28]
In a modern Ethernet, the stations do not all share one channel through a shared cable or a simple repeater hub; instead, each station communicates with a switch, which in turn forwards that traffic to the destination station. In this topology, collisions are only possible if the station and switch attempt to communicate with each other at the same time, and collisions are limited to this link. Furthermore, the 10BASE-T standard introduced a full-duplex mode of operation, which became common with Fast Ethernet and the de facto standard with Gigabit Ethernet. In full duplex, a switch and a station can send and receive simultaneously, and therefore modern Ethernet networks are completely collision-free.
Comparison between original Ethernet and modern Ethernet
The original Ethernet implementation: shared medium, collision-prone. All computers trying to communicate share the same cable, and so compete with each other.
Modern Ethernet implementation: switched connection, collision-free. Each computer communicates only with its own switch, without competition for the cable with others.
For signal degradation and timing reasons, coaxial Ethernet segments have a restricted size.[29] Somewhat larger networks can be built by using an Ethernet repeater. Early repeaters had only two ports, allowing, at most, a doubling of network size. Once repeaters with more than two ports became available, it was possible to wire the network in a star topology. Early experiments with star topologies (called Fibernet) using optical fiber were published by 1978.[30]
Shared cable Ethernet was always hard to install in offices because its bus topology conflicts with the star topology cable plans designed into buildings for telephony. Modifying Ethernet to conform to twisted-pair telephone wiring already installed in commercial buildings provided another opportunity to lower costs, expand the installed base, and leverage building design, and, thus, twisted-pair Ethernet was the next logical development in the mid-1980s.
Ethernet on unshielded twisted-pair cables (UTP) began with StarLAN at 1 Mbit/s in the mid-1980s.[4]: 12–13 In 1987, SynOptics introduced the first twisted-pair Ethernet at 10 Mbit/s in a star-wired cabling topology with a central hub, later called LattisNet.[11][24]: 29 [31] These evolved into 10BASE-T, which was designed for point-to-point links only, and all termination was built into the device. This changed repeaters from a specialist device used at the center of large networks to a device that every twisted pair-based network with more than two machines had to use. The tree structure that resulted from this made Ethernet networks easier to maintain by preventing most faults with one peer or its associated cable from affecting other devices on the network.[citation needed]
Despite the physical star topology and the presence of separate transmit and receive channels in the twisted pair and fiber media, repeater-based Ethernet networks still use half-duplex and CSMA/CD, with only minimal activity by the repeater, primarily the generation of the jam signal in dealing with packet collisions. Every packet is sent to every other port on the repeater, so bandwidth and security problems are not addressed. The total throughput of the repeater is limited to that of a single link, and all links must operate at the same speed.[24]: 278
While repeaters can isolate some aspects of Ethernet segments, such as cable breakages, they still forward all traffic to all Ethernet devices. The entire network is one collision domain, and all hosts have to be able to detect collisions anywhere on the network. This limits the number of repeaters between the farthest nodes and creates practical limits on how many machines can communicate on an Ethernet network. Segments joined by repeaters have to all operate at the same speed, making phased-in upgrades impossible.[citation needed]
To alleviate these problems, bridging was created to communicate at the data link layer while isolating the physical layer. With bridging, only well-formed Ethernet packets are forwarded from one Ethernet segment to another; collisions and packet errors are isolated. At initial startup, Ethernet bridges work somewhat like Ethernet repeaters, passing all traffic between segments. By observing the source addresses of incoming frames, the bridge then builds an address table associating addresses to segments. Once an address is learned, the bridge forwards network traffic destined for that address only to the associated segment, improving overall performance. Broadcast traffic is still forwarded to all network segments. Bridges also overcome the limits on total segments between two hosts and allow the mixing of speeds, both of which are critical to the incremental deployment of faster Ethernet variants.[citation needed]
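The learning and forwarding behaviour described above can be sketched in a few lines of Python; the frame representation and port numbering here are assumptions for illustration, not the behaviour of any particular product.

    class LearningBridge:
        BROADCAST = b"\xff" * 6

        def __init__(self, num_ports):
            self.num_ports = num_ports
            self.table = {}                          # MAC address -> port it was last seen on

        def handle(self, dst, src, in_port):
            self.table[src] = in_port                # learn: associate source address with segment
            if dst == self.BROADCAST or dst not in self.table:
                # Broadcast or unknown destination: flood to every other segment.
                return [p for p in range(self.num_ports) if p != in_port]
            out_port = self.table[dst]
            return [] if out_port == in_port else [out_port]

    bridge = LearningBridge(num_ports=4)
    a, b = bytes.fromhex("0a0000000001"), bytes.fromhex("0a0000000002")
    print(bridge.handle(dst=b, src=a, in_port=0))    # b not yet learned: flooded to ports 1, 2, 3
    print(bridge.handle(dst=a, src=b, in_port=2))    # a was learned on port 0: forwarded there only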
In 1989, Motorola Codex introduced its 6310 EtherSpan and Kalpana introduced its EtherSwitch; these were examples of the first commercial Ethernet switches.[h] Early switches such as these used cut-through switching, where only the header of the incoming packet is examined before it is either dropped or forwarded to another segment.[32] This reduces the forwarding latency. One drawback of this method is that it does not readily allow a mixture of different link speeds. Another is that packets that have been corrupted are still propagated through the network. The eventual remedy for this was a return to the original store-and-forward approach of bridging, where the packet is read into a buffer on the switch in its entirety, its frame check sequence verified, and only then is the packet forwarded.[32] In modern network equipment, this process is typically done using application-specific integrated circuits allowing packets to be forwarded at wire speed.[citation needed]
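The difference between the two strategies can be pictured as follows. This Python sketch assumes a trailing 4-byte frame check sequence computed with the same CRC-32 that zlib.crc32 implements and stored in little-endian byte order; the lookup function standing in for the switch's forwarding table is hypothetical.

    import zlib

    FCS_LEN = 4              # 32-bit frame check sequence at the end of the frame

    def cut_through(frame, lookup):
        # Forward as soon as the destination address has been read; a corrupted
        # frame is still propagated because the FCS is never examined here.
        return lookup(frame[:6])

    def store_and_forward(frame, lookup):
        # Buffer the entire frame and verify the FCS before forwarding.
        body, fcs = frame[:-FCS_LEN], frame[-FCS_LEN:]
        if zlib.crc32(body) != int.from_bytes(fcs, "little"):
            return None                              # bad frame check sequence: drop the frame
        return lookup(frame[:6])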
When a twisted pair or fiber link segment is used and neither end is connected to a repeater, full-duplex Ethernet becomes possible over that segment. In full-duplex mode, both devices can transmit and receive to and from each other at the same time, and there is no collision domain.[33] This doubles the aggregate bandwidth of the link and is sometimes advertised as double the link speed (for example, 200 Mbit/s for Fast Ethernet).[i] The elimination of the collision domain for these connections also means that all the link's bandwidth can be used by the two devices on that segment and that segment length is not limited by the constraints of collision detection.
Since packets are typically delivered only to the port they are intended for, traffic on a switched Ethernet is less public than on shared-medium Ethernet. Despite this, switched Ethernet should still be regarded as an insecure network technology, because it is easy to subvert switched Ethernet systems by means such as ARP spoofing and MAC flooding.[34]
The bandwidth advantages, the improved isolation of devices from each other, the ability to easily mix different speeds of devices and the elimination of the chaining limits inherent in non-switched Ethernet have made switched Ethernet the dominant network technology.[35]
Simple switched Ethernet networks, while a great improvement over repeater-based Ethernet, suffer from single points of failure, attacks that trick switches or hosts into sending data to a machine even if it is not intended for it, scalability and security issues with regard to switching loops, broadcast radiation, and multicast traffic.[citation needed]
Advanced networking features in switches use Shortest Path Bridging (SPB) or the Spanning Tree Protocol (STP) to maintain a loop-free, meshed network, allowing physical loops for redundancy (STP) or load-balancing (SPB). Shortest Path Bridging includes the use of the link-state routing protocol IS-IS to allow larger networks with shortest path routes between devices.
Advanced networking features also ensure port security, provide protection features such as MAC lockdown[36] and broadcast radiation filtering, use VLANs to keep different classes of users separate while using the same physical infrastructure,[37] and use link aggregation to add bandwidth to overloaded links and to provide some redundancy.[38]
In 2016, Ethernet replaced InfiniBand as the most popular system interconnect of TOP500 supercomputers.[39]
The Ethernet physical layer evolved over a considerable time span and encompasses coaxial, twisted pair and fiber-optic physical media interfaces, with speeds from 1 Mbit/s to 400 Gbit/s.[40] The first introduction of twisted-pair CSMA/CD was StarLAN, standardized as 802.3 1BASE5.[41] While 1BASE5 had little market penetration, it defined the physical apparatus (wire, plug/jack, pin-out, and wiring plan) that would be carried over to 10BASE-T through 10GBASE-T.
Fiber optic variants of Ethernet (that commonly use SFP modules) are also very popular in larger networks, offering high performance, better electrical isolation and longer distance (tens of kilometers with some versions). In general, network protocol stack software will work similarly on all varieties.[45]
In IEEE 802.3, a datagram is called a packet or frame. Packet is used to describe the overall transmission unit and includes the preamble, start frame delimiter (SFD) and carrier extension (if present).[j] The frame begins after the start frame delimiter with a frame header featuring source and destination MAC addresses and the EtherType field giving either the protocol type for the payload protocol or the length of the payload. The middle section of the frame consists of payload data including any headers for other protocols (for example, Internet Protocol) carried in the frame. The frame ends with a 32-bit cyclic redundancy check, which is used to detect corruption of data in transit.[46]: sections 3.1.1 and 3.2 Notably, Ethernet packets have no time-to-live field, leading to possible problems in the presence of a switching loop.
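A minimal Python sketch of this layout follows, covering only the frame (the preamble and SFD that precede it on the wire are omitted); it assumes the FCS is the IEEE CRC-32, which Python's zlib.crc32 also computes, appended in little-endian byte order.

    import struct
    import zlib

    def build_frame(dst, src, ethertype, payload):
        header = struct.pack("!6s6sH", dst, src, ethertype)       # MAC addresses and EtherType
        fcs = zlib.crc32(header + payload).to_bytes(4, "little")  # 32-bit cyclic redundancy check
        return header + payload + fcs

    def frame_is_intact(frame):
        return zlib.crc32(frame[:-4]) == int.from_bytes(frame[-4:], "little")

    frame = build_frame(b"\xff" * 6, bytes.fromhex("001122334455"), 0x0800, b"example payload")
    assert frame_is_intact(frame)
    assert not frame_is_intact(frame[:20] + b"\x00" + frame[21:])  # corruption in transit is detected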
Autonegotiation is the procedure by which two connected devices choose common transmission parameters, e.g. speed and duplex mode. Autonegotiation was initially an optional feature, first introduced with 100BASE-TX (1995 IEEE 802.3u Fast Ethernet standard), and is backward compatible with 10BASE-T. The specification was improved in the 1998 release of IEEE 802.3. Autonegotiation is mandatory for 1000BASE-T and faster.
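The resolution step can be pictured as choosing the highest-priority ability advertised by both ends. The following Python sketch is a simplification using an illustrative subset of the priority order; it is not the register-level protocol defined in the standard.

    PRIORITY = [                          # higher entries are preferred
        "1000BASE-T full duplex",
        "1000BASE-T half duplex",
        "100BASE-TX full duplex",
        "100BASE-TX half duplex",
        "10BASE-T full duplex",
        "10BASE-T half duplex",
    ]

    def resolve(local_abilities, partner_abilities):
        common = set(local_abilities) & set(partner_abilities)
        for mode in PRIORITY:             # highest common ability wins
            if mode in common:
                return mode
        return None                       # no common mode

    print(resolve({"100BASE-TX full duplex", "10BASE-T full duplex"},
                  {"1000BASE-T full duplex", "100BASE-TX full duplex"}))
    # prints: 100BASE-TX full duplex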
A switching loop or bridge loop occurs in computer networks when there is more than one Layer 2 (OSI model) path between two endpoints (e.g. multiple connections between two network switches or two ports on the same switch connected to each other). The loop creates broadcast storms: as broadcasts and multicasts are forwarded by switches out every port, the switch or switches repeatedly rebroadcast the broadcast messages, flooding the network. Since the Layer 2 header does not support a time to live (TTL) value, if a frame is sent into a looped topology, it can loop forever.[47]
A physical topology that contains switching or bridge loops is attractive for redundancy reasons, yet a switched network must not have loops. The solution is to allow physical loops, but create a loop-free logical topology using the SPB protocol or the older STP on the network switches.[48]
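The underlying idea can be illustrated with a toy computation (this is not the actual STP or SPB machinery): elect the bridge with the lowest identifier as the root and keep only a tree of links reachable from it, leaving the remaining links blocked.

    from collections import deque

    links = {1: {2, 3}, 2: {1, 3, 4}, 3: {1, 2, 4}, 4: {2, 3}}   # physical topology with loops

    root = min(links)                              # lowest bridge ID becomes the root
    forwarding, seen, queue = set(), {root}, deque([root])
    while queue:
        bridge = queue.popleft()
        for neighbor in sorted(links[bridge]):
            if neighbor not in seen:
                seen.add(neighbor)
                forwarding.add((min(bridge, neighbor), max(bridge, neighbor)))
                queue.append(neighbor)

    all_links = {(a, b) for a in links for b in links[a] if a < b}
    print("forwarding:", sorted(forwarding))       # loop-free logical topology
    print("blocked:", sorted(all_links - forwarding))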
A node that is sending longer than the maximum transmission window for an Ethernet packet is considered to be jabbering. Depending on the physical topology, jabber detection and remedy differ somewhat.
An MAU is required to detect and stop abnormally long transmission from the DTE (longer than 20–150 ms) in order to prevent permanent network disruption.[49]
On an electrically shared medium (10BASE5, 10BASE2, 1BASE5), jabber can only be detected by each end node, stopping reception. No further remedy is possible.[50]
A repeater/repeater hub uses a jabber timer that ends retransmission to the other ports when it expires. The timer runs for 25,000 to 50,000 bit times for 1 Mbit/s,[51] 40,000 to 75,000 bit times for 10 and 100 Mbit/s,[52][53] and 80,000 to 150,000 bit times for 1 Gbit/s.[54] Jabbering ports are partitioned off the network until a carrier is no longer detected.[55]
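For a sense of scale, these windows convert to wall-clock time as follows, since one bit time is the reciprocal of the bit rate; a quick Python check of the arithmetic:

    RANGES = {                         # bit rate in bit/s -> (min, max) jabber timer in bit times
        1_000_000: (25_000, 50_000),
        10_000_000: (40_000, 75_000),
        100_000_000: (40_000, 75_000),
        1_000_000_000: (80_000, 150_000),
    }

    for rate, (low, high) in RANGES.items():
        to_ms = lambda bits: 1000 * bits / rate    # one bit time = 1 / rate seconds
        print(f"{rate // 1_000_000} Mbit/s: {to_ms(low):g} to {to_ms(high):g} ms")

    # 1 Mbit/s: 25 to 50 ms; 10 Mbit/s: 4 to 7.5 ms;
    # 100 Mbit/s: 0.4 to 0.75 ms; 1000 Mbit/s: 0.08 to 0.15 ms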
End nodes utilizing a MAC layer will usually detect an oversized Ethernet frame and cease receiving. A bridge/switch will not forward the frame.[56]
A non-uniform frame size configuration in the network using jumbo frames may be detected as jabber by end nodes.[citation needed] Jumbo frames are not part of the official IEEE 802.3 Ethernet standard.
A packet detected as jabber by an upstream repeater and subsequently cut off has an invalid frame check sequence and is dropped.[57]
^In some cases, the factory-assigned address can be overridden, either to avoid an address change when an adapter is replaced or to use locally administered addresses.
^Of course bridges and switches will accept other addresses for forwarding the packet.
^There are fundamental differences between wireless and wired shared-medium communication, such as the fact that it is much easier to detect collisions in a wired system than a wireless system.
^In a CSMA/CD system packets must be large enough to guarantee that the leading edge of the propagating wave of a message gets to all parts of the medium and back again before the transmitter stops transmitting, guaranteeing that collisions (two or more packets initiated within a window of time that forced them to overlap) are discovered. As a result, the minimum packet size and the physical medium's total length are closely linked.
^Multipoint systems are also prone to strange failure modes when an electrical discontinuity reflects the signal in such a manner that some nodes would work properly, while others work slowly because of excessive retries or not at all. See standing wave for an explanation. These could be much more difficult to diagnose than a complete failure of the segment.
^This one speaks, all listen property is a security weakness of shared-medium Ethernet, since a node on an Ethernet network can eavesdrop on all traffic on the wire if it so chooses.
^The term switch was invented by device manufacturers and does not appear in the IEEE 802.3 standard.
^This is misleading, as performance will double only if traffic patterns are symmetrical.
^The carrier extension is defined to assist collision detection on shared-media gigabit Ethernet.
^Charles M. Kozierok (September 20, 2005). "Data Link Layer (Layer 2)". tcpipguide.com. Archived from the original on May 20, 2019. Retrieved January 9, 2016.
^Vic Hayes (August 27, 2001). "Letter to FCC" (PDF). Archived from the original (PDF) on July 27, 2011. Retrieved October 22, 2010. IEEE 802 has the basic charter to develop and maintain networking standards... IEEE 802 was formed in February 1980...
^Froehlich, Fritz E.; Kent, Allen, eds. (1990). The Froehlich/Kent Encyclopedia of Telecommunications. Vol. 9. IEEE 802.3 and Ethernet Standards to Interrelationship of the SS7 Protocol Architecture and the OSI Reference Model and Protocols. New York, Basel, Hong Kong. pp. 1–2. ISBN 0-8247-2907-2.
^Liddle, David (October 11, 1988). "Oral History of David Liddle" (PDF) (Interview). Interviewed by James L. Pelkey. Mountain View, California: Computer History Museum. Retrieved November 18, 2025.
^Douglas E. Comer (2000). Internetworking with TCP/IP – Principles, Protocols and Architecture (4th ed.). Prentice Hall. ISBN 0-13-018380-6. 2.4.9 – Ethernet Hardware Addresses, p. 29, explains the filtering.
^Geetaj Channana (November 1, 2004). "Motherboard Chipsets Roundup". PCQuest. Archived from the original on July 8, 2011. Retrieved October 22, 2010. While comparing motherboards in the last issue we found that all motherboards support Ethernet connection on board.
^"Token Ring-to-Ethernet Migration". Cisco. Archived from the original on July 8, 2011. Retrieved October 22, 2010. Respondents were first asked about their current and planned desktop LAN attachment standards. The results were clear—switched Fast Ethernet is the dominant choice for desktop connectivity to the network.
^Tholeti, Bhanu Prakash Reddy (2013). "Hypervisors, Virtualization, and Networking". Handbook of Fiber Optic Data Communication. pp. 387–416. doi:10.1016/B978-0-12-401673-6.00016-7. ISBN 978-0-12-401673-6. A link aggregation, or EtherChannel, device is a network port-aggregation technology that allows several Ethernet adapters to be aggregated. The adapters can then act as a single Ethernet device. Link aggregation helps to provide more throughput over a single IP address than would be possible with a single Ethernet adapter.
^"HIGHLIGHTS – JUNE 2016". June 2016. Archived from the original on January 30, 2021. Retrieved February 19, 2021. InfiniBand technology is now found on 205 systems, down from 235 systems, and is now the second most-used internal system interconnect technology. Gigabit Ethernet has risen to 218 systems up from 182 systems, in large part thanks to 176 systems now using 10G interfaces.