The nodes of a computer network can include personal computers, servers, networking hardware, or other specialized or general-purpose hosts. They are identified by network addresses and may have hostnames. Hostnames serve as memorable labels for the nodes and are rarely changed after initial assignment. Network addresses serve for locating and identifying the nodes by communication protocols such as the Internet Protocol.
Computer networking may be considered a branch ofcomputer science,computer engineering, andtelecommunications, since it relies on the theoretical and practical application of the related disciplines. Computer networking was influenced by a wide array of technological developments and historical milestones.
In the late 1950s, a network of computers was built for the U.S. military Semi-Automatic Ground Environment (SAGE) radar system[1][2][3] using the Bell 101 modem. It was the first commercial modem for computers, released by AT&T Corporation in 1958. The modem allowed digital data to be transmitted over regular unconditioned telephone lines at a speed of 110 bits per second (bit/s).
In 1959, Anatoly Kitov proposed to the Central Committee of the Communist Party of the Soviet Union a detailed plan for the re-organization of the control of the Soviet armed forces and of the Soviet economy on the basis of a network of computing centers.[9] Kitov's proposal was rejected, as later was the 1962 OGAS economy management network project.[10]
In 1963, J. C. R. Licklider sent a memorandum to office colleagues discussing the concept of the "Intergalactic Computer Network", a computer network intended to allow general communications among computer users.
In 1965, Western Electric introduced the first widely used telephone switch that implemented computer control in the switching fabric.
In 1969, the first four nodes of the ARPANET were connected using 50 kbit/s circuits between the University of California at Los Angeles, the Stanford Research Institute, the University of California at Santa Barbara, and the University of Utah.[31] Designed principally by Bob Kahn, the network's routing, flow control, software design and network control were developed by the IMP team working for Bolt Beranek & Newman.[32][33][34] In the early 1970s, Leonard Kleinrock carried out mathematical work to model the performance of packet-switched networks, which underpinned the development of the ARPANET.[35][36] His theoretical work on hierarchical routing in the late 1970s with student Farouk Kamoun remains critical to the operation of the Internet today.[37][38]
In 1973, the French CYCLADES network, directed by Louis Pouzin, was the first to make the hosts responsible for the reliable delivery of data, rather than this being a centralized service of the network itself.[41]
In 1974, Vint Cerf and Bob Kahn published their seminal paper on internetworking, A Protocol for Packet Network Intercommunication.[49] Later that year, Cerf, Yogen Dalal, and Carl Sunshine wrote the first Transmission Control Protocol (TCP) specification, RFC 675, coining the term Internet as a shorthand for internetworking.[50]
In July 1976, Metcalfe and Boggs published their paper "Ethernet: Distributed Packet Switching for Local Computer Networks"[51] and in December 1977, together with Butler Lampson and Charles P. Thacker, they received U.S. patent 4,063,220 for their invention.[52][53]
Public data networks in Europe, North America and Japan began using X.25 in the late 1970s and interconnected with X.75.[14] This underlying infrastructure was used for expanding TCP/IP networks in the 1980s.[54]
In 1976, John Murphy of Datapoint Corporation created ARCNET, a token-passing network first used to share storage devices.
In 1977, the first long-distance fiber network was deployed by GTE in Long Beach, California.
In 1979, Robert Metcalfe pursued making Ethernet an open standard.[55]
In 1980, Ethernet was upgraded from the original 2.94 Mbit/s protocol to the 10 Mbit/s protocol, which was developed by Ron Crane, Bob Garner, Roy Ogus,[56] and Yogen Dalal.[57]
In 1995, the transmission speed capacity for Ethernet increased from 10 Mbit/s to 100 Mbit/s. By 1998, Ethernet supported transmission speeds of 1 Gbit/s. Subsequently, higher speeds of up to 400 Gbit/s were added (as of 2018[update]). The scaling of Ethernet has been a contributing factor to its continued use.[55]
Computer networks enhance how users communicate with each other by using various electronic methods like email, instant messaging, online chat, voice and video calls, and video conferencing. Networks also enable the sharing of computing resources. For example, a user can print a document on a shared printer or use shared storage devices. Additionally, networks allow for the sharing of files and information, giving authorized users access to data stored on other computers. Distributed computing leverages resources from multiple computers across a network to perform tasks collaboratively.
Packets consist of two types of data: control information and user data (payload). The control information provides data the network needs to deliver the user data, for example, source and destination network addresses, error detection codes, and sequencing information. Typically, control information is found in packet headers and trailers, with payload data in between.
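To make the header/payload/trailer split concrete, here is a minimal sketch of building and parsing a toy packet in Python; the field layout and the CRC-32 trailer are invented for illustration and do not correspond to any real protocol's wire format.

```python
# A toy packet: a header carrying control information (addresses, sequence
# number), a payload, and an error-detection trailer. Field sizes are invented.
import struct
import zlib

def build_packet(src: int, dst: int, seq: int, payload: bytes) -> bytes:
    header = struct.pack("!IIH", src, dst, seq)                 # source, destination, sequence number
    trailer = struct.pack("!I", zlib.crc32(header + payload))   # error-detection code
    return header + payload + trailer

def parse_packet(packet: bytes):
    header, payload, trailer = packet[:10], packet[10:-4], packet[-4:]
    src, dst, seq = struct.unpack("!IIH", header)
    (crc,) = struct.unpack("!I", trailer)
    assert crc == zlib.crc32(header + payload), "corrupted packet"
    return src, dst, seq, payload

pkt = build_packet(src=1, dst=2, seq=7, payload=b"hello")
print(parse_packet(pkt))  # (1, 2, 7, b'hello')
```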
With packets, the bandwidth of the transmission medium can be better shared among users than if the network were circuit switched. When one user is not sending packets, the link can be filled with packets from other users, and so the cost can be shared, with relatively little interference, provided the link is not overused. Often the route a packet needs to take through a network is not immediately available. In that case, the packet is queued and waits until a link is free.
The physical link technologies of packet networks typically limit the size of packets to a certain maximum transmission unit (MTU). A longer message may be fragmented before it is transferred and once the packets arrive, they are reassembled to construct the original message.
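A minimal sketch of that fragmentation-and-reassembly step, assuming the classic Ethernet MTU of 1500 bytes; the framing is deliberately simplified (real protocols also carry fragment offsets and identifiers).

```python
# Split a message into MTU-sized fragments and reassemble it on arrival.
MTU = 1500  # bytes, as in classic Ethernet

def fragment(message: bytes, mtu: int = MTU) -> list[bytes]:
    return [message[i:i + mtu] for i in range(0, len(message), mtu)]

def reassemble(fragments: list[bytes]) -> bytes:
    return b"".join(fragments)

message = b"x" * 4000
fragments = fragment(message)
print([len(f) for f in fragments])      # [1500, 1500, 1000]
assert reassemble(fragments) == message
```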
The physical or geographic locations of network nodes and links generally have relatively little effect on a network, but the topology of interconnections of a network can significantly affect its throughput and reliability. With many technologies, such as bus or star networks, a single failure can cause the network to fail entirely. In general, the more interconnections there are, the more robust the network is, but the more expensive it is to install. Therefore, most network diagrams are arranged by their network topology, which is the map of logical interconnections of network hosts.
Star network: all nodes are connected to a special central node. This is the typical layout found in a small switched Ethernet LAN, where each client connects to a central network switch, and logically in a wireless LAN, where each wireless client associates with the central wireless access point.
Ring network: each node is connected to its left and right neighbor node, such that all nodes are connected and each node can reach every other node by traversing nodes left- or rightwards. Token ring networks, and the Fiber Distributed Data Interface (FDDI), made use of such a topology.
Mesh network: each node is connected to an arbitrary number of neighbors in such a way that there is at least one traversal from any node to any other.
Tree network: nodes are arranged hierarchically. This is the natural topology for a larger Ethernet network with multiple switches and without redundant meshing.
The physical layout of the nodes in a network may not necessarily reflect the network topology. As an example, with FDDI, the network topology is a ring, but the physical topology is often a star, because all neighboring connections can be routed via a central physical location. Physical layout is not completely irrelevant, however, as common ducting and equipment locations can represent single points of failure due to issues like fires, power failures and flooding.
An overlay network is a virtual network that is built on top of another network. Nodes in the overlay network are connected by virtual or logical links. Each link corresponds to a path, perhaps through many physical links, in the underlying network. The topology of the overlay network may (and often does) differ from that of the underlying one. For example, many peer-to-peer networks are overlay networks. They are organized as nodes of a virtual system of links that run on top of the Internet.[58]
Overlay networks have been used since the early days of networking, back when computers were connected via telephone lines using modems, even before data networks were developed.
The most striking example of an overlay network is the Internet itself, which was initially built as an overlay on the telephone network.[58] Even today, each Internet node can communicate with virtually any other through an underlying mesh of sub-networks of wildly different topologies and technologies. Address resolution and routing are the means that allow mapping of a fully connected IP overlay network to its underlying network.
Another example of an overlay network is a distributed hash table, which maps keys to nodes in the network. In this case, the underlying network is an IP network, and the overlay network is a table (actually a map) indexed by keys.
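As a rough illustration of the idea, the sketch below hashes both node names and keys onto a ring and assigns each key to the next node clockwise, a simplified form of consistent hashing; the node names are hypothetical.

```python
# A toy key-to-node mapping in the spirit of a distributed hash table.
import hashlib
from bisect import bisect_right

def ring_position(value: str) -> int:
    return int(hashlib.sha1(value.encode()).hexdigest(), 16)

nodes = ["node-a.example", "node-b.example", "node-c.example"]  # hypothetical peers
ring = sorted((ring_position(n), n) for n in nodes)

def node_for(key: str) -> str:
    pos = ring_position(key)
    idx = bisect_right([p for p, _ in ring], pos) % len(ring)   # next node clockwise
    return ring[idx][1]

print(node_for("some-file.txt"))  # one of the three nodes, chosen by hash
```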
Overlay networks have also been proposed as a way to improve Internet routing, such as through quality of service guarantees to achieve higher-quality streaming media. Previous proposals such as IntServ, DiffServ, and IP multicast have not seen wide acceptance, largely because they require modification of all routers in the network.[citation needed] On the other hand, an overlay network can be incrementally deployed on end-hosts running the overlay protocol software, without cooperation from Internet service providers. The overlay network has no control over how packets are routed in the underlying network between two overlay nodes, but it can control, for example, the sequence of overlay nodes that a message traverses before it reaches its destination.[citation needed]
For example, Akamai Technologies manages an overlay network that provides reliable, efficient content delivery (a kind of multicast). Academic research includes end system multicast,[59] resilient routing and quality of service studies, among others.
The transmission media (often referred to in the literature as the physical medium) used to link devices to form a computer network include electrical cable, optical fiber, and free space. In the OSI model, the software to handle the media is defined at layers 1 and 2, the physical layer and the data link layer.
A widely adopted family of technologies that uses copper and fiber media in local area network (LAN) technology is collectively known as Ethernet. The media and protocol standards that enable communication between networked devices over Ethernet are defined by IEEE 802.3. Wireless LAN standards use radio waves; others use infrared signals as a transmission medium. Power line communication uses a building's power cabling to transmit data.
Fiber-optic cables are used to transmit light from one computer/network node to another.
The following classes of wired technologies are used in computer networking.
Coaxial cable is widely used for cable television systems, office buildings, and other work-sites for local area networks. Transmission speed ranges from 200 million bits per second to more than 500 million bits per second.[citation needed]
Twisted pair cabling is used for wired Ethernet and other standards. It typically consists of 4 pairs of copper cabling that can be utilized for both voice and data transmission. The use of two wires twisted together helps to reduce crosstalk and electromagnetic induction. The transmission speed ranges from 2 Mbit/s to 10 Gbit/s. Twisted pair cabling comes in two forms: unshielded twisted pair (UTP) and shielded twisted-pair (STP). Each form comes in several category ratings, designed for use in various scenarios.
2007 map showing submarine optical fiber telecommunication cables around the world
An optical fiber is a glass fiber. It carries pulses of light that represent data via lasers and optical amplifiers. Some advantages of optical fibers over metal wires are very low transmission loss and immunity to electrical interference. Using dense wave division multiplexing, optical fibers can simultaneously carry multiple streams of data on different wavelengths of light, which greatly increases the rate that data can be sent, up to trillions of bits per second. Optical fibers can be used for long cable runs carrying very high data rates, and are used for undersea communications cables to interconnect continents. There are two basic types of fiber optics: single-mode optical fiber (SMF) and multi-mode optical fiber (MMF). Single-mode fiber has the advantage of being able to sustain a coherent signal for dozens or even a hundred kilometers. Multimode fiber is cheaper to terminate but is limited to a few hundred or even only a few dozen meters, depending on the data rate and cable grade.[60]
Network connections can be established wirelessly using radio or other electromagnetic means of communication.
Terrestrial microwave – Terrestrial microwave communication uses Earth-based transmitters and receivers resembling satellite dishes. Terrestrial microwaves are in the low gigahertz range, which limits all communications to line-of-sight. Relay stations are spaced approximately 40 miles (64 km) apart.
Communications satellites – Satellites also communicate via microwave. The satellites are stationed in space, typically in geosynchronous orbit 35,400 km (22,000 mi) above the equator. These Earth-orbiting systems are capable of receiving and relaying voice, data, and TV signals.
Cellular networks use several radio communications technologies. The systems divide the region covered into multiple geographic areas. Each area is served by a low-power transceiver.
Radio and spread spectrum technologies – Wireless LANs use a high-frequency radio technology similar to digital cellular. Wireless LANs use spread spectrum technology to enable communication between multiple devices in a limited area. IEEE 802.11 defines a common flavor of open-standards wireless radio-wave technology known as Wi-Fi.
The last two cases have a large round-trip delay time, which gives slow two-way communication but does not prevent sending large amounts of information (they can have high throughput).
Apart from any physical transmission media, networks are built from additional basic system building blocks, such as network interface controllers, repeaters, hubs, bridges, switches, routers, modems, and firewalls. Any particular piece of equipment will frequently contain multiple building blocks and so may perform multiple functions.
An ATM network interface in the form of an accessory card. Many network interfaces are built in.
A network interface controller (NIC) is computer hardware that connects the computer to the network media and has the ability to process low-level network information. For example, the NIC may have a connector for plugging in a cable, or an aerial for wireless transmission and reception, and the associated circuitry.
In Ethernet networks, each NIC has a unique Media Access Control (MAC) address, usually stored in the controller's permanent memory. To avoid address conflicts between network devices, the Institute of Electrical and Electronics Engineers (IEEE) maintains and administers MAC address uniqueness. The size of an Ethernet MAC address is six octets. The three most significant octets are reserved to identify NIC manufacturers. These manufacturers, using only their assigned prefixes, uniquely assign the three least-significant octets of every Ethernet interface they produce.
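The octet structure described above can be illustrated with a small helper that splits a MAC address into its IEEE-assigned manufacturer prefix (OUI) and the device-specific part; the example address is arbitrary.

```python
# Split a 48-bit Ethernet MAC address into manufacturer prefix and device part.
def split_mac(mac: str) -> tuple[str, str]:
    octets = mac.lower().split(":")
    assert len(octets) == 6, "an Ethernet MAC address has six octets"
    oui, device = octets[:3], octets[3:]   # first three octets identify the manufacturer
    return ":".join(oui), ":".join(device)

print(split_mac("00:1A:2B:3C:4D:5E"))  # ('00:1a:2b', '3c:4d:5e')
```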
A repeater is an electronic device that receives a network signal, cleans it of unnecessary noise, and regenerates it. The signal is retransmitted at a higher power level, or to the other side of an obstruction, so that the signal can cover longer distances without degradation. In most twisted-pair Ethernet configurations, repeaters are required for cable that runs longer than 100 meters. With fiber optics, repeaters can be tens or even hundreds of kilometers apart.
Repeaters work on the physical layer of the OSI model but still require a small amount of time to regenerate the signal. This can cause a propagation delay that affects network performance and may affect proper function. As a result, many network architectures limit the number of repeaters used in a network, e.g., the Ethernet 5-4-3 rule.
An Ethernet repeater with multiple ports is known as an Ethernet hub. In addition to reconditioning and distributing network signals, a repeater hub assists with collision detection and fault isolation for the network. Hubs and repeaters in LANs have been largely obsoleted by modern network switches.
Network bridges and network switches are distinct from a hub in that they only forward frames to the ports involved in the communication, whereas a hub forwards to all ports.[63] A bridge has only two ports, but a switch can be thought of as a multi-port bridge. Switches normally have numerous ports, facilitating a star topology for devices and the cascading of additional switches.
Bridges and switches operate at the data link layer (layer 2) of the OSI model and bridge traffic between two or more network segments to form a single local network. Both are devices that forward frames of data between ports based on the destination MAC address in each frame.[64] They learn the association of physical ports to MAC addresses by examining the source addresses of received frames and only forward the frame when necessary. If an unknown destination MAC is targeted, the device broadcasts the request to all ports except the source, and discovers the location from the reply.
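A minimal sketch of that learning-and-forwarding behavior, with invented port numbers and MAC addresses:

```python
# A toy learning switch: learn source MACs, forward to a known port,
# otherwise flood to every port except the one the frame arrived on.
class LearningSwitch:
    def __init__(self, num_ports: int):
        self.num_ports = num_ports
        self.mac_table: dict[str, int] = {}   # MAC address -> port

    def receive(self, in_port: int, src_mac: str, dst_mac: str) -> list[int]:
        self.mac_table[src_mac] = in_port      # learn where the sender lives
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]   # forward out a single port
        return [p for p in range(self.num_ports) if p != in_port]  # flood

sw = LearningSwitch(num_ports=4)
print(sw.receive(0, "aa:aa:aa:aa:aa:aa", "bb:bb:bb:bb:bb:bb"))  # unknown dst: [1, 2, 3]
print(sw.receive(1, "bb:bb:bb:bb:bb:bb", "aa:aa:aa:aa:aa:aa"))  # learned: [0]
```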
Bridges and switches divide the network's collision domain but maintain a single broadcast domain. Network segmentation through bridging and switching helps break down a large, congested network into an aggregation of smaller, more efficient networks.
A typical home or small office router showing the ADSL telephone line and Ethernet network cable connections
A router is an internetworking device that forwards packets between networks by processing the addressing or routing information included in the packet. The routing information is often processed in conjunction with the routing table. A router uses its routing table to determine where to forward packets and does not rely on broadcasting packets, which is inefficient for very big networks.
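A simplified illustration of a routing-table lookup using longest-prefix match, built on Python's standard ipaddress module; the prefixes and next hops are invented for this example.

```python
# Toy routing table: pick the most specific (longest) matching prefix.
import ipaddress

routing_table = {
    ipaddress.ip_network("10.0.0.0/8"): "next-hop A",
    ipaddress.ip_network("10.1.0.0/16"): "next-hop B",
    ipaddress.ip_network("0.0.0.0/0"): "default gateway",
}

def lookup(destination: str) -> str:
    addr = ipaddress.ip_address(destination)
    matches = [net for net in routing_table if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)  # longest prefix wins
    return routing_table[best]

print(lookup("10.1.2.3"))   # next-hop B (the /16 is more specific than the /8)
print(lookup("192.0.2.1"))  # default gateway
```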
Modems (modulator-demodulators) are used to connect network nodes via wire not originally designed for digital network traffic, or for wireless. To do this, one or more carrier signals are modulated by the digital signal to produce an analog signal that can be tailored to give the required properties for transmission. Early modems modulated audio signals sent over a standard voice telephone line. Modems are still commonly used for telephone lines, using digital subscriber line technology, and for cable television systems using DOCSIS technology.
A firewall separating a private network from a public network
A firewall is a network device or software for controlling network security and access rules. Firewalls are inserted in connections between secure internal networks and potentially insecure external networks such as the Internet. Firewalls are typically configured to reject access requests from unrecognized sources while allowing actions from recognized ones. The vital role firewalls play in network security grows in parallel with the constant increase in cyber attacks.
The TCP/IP model and its relation to common protocols used at different layers of the model
Message flows between two devices (A-B) at the four layers of the TCP/IP model in the presence of a router (R). Red flows are effective communication paths, black paths are across the actual network links.
A communication protocol is a set of rules for exchanging information over a network. Communication protocols have various characteristics. They may be connection-oriented or connectionless, they may use circuit mode or packet switching, and they may use hierarchical addressing or flat addressing.
In a protocol stack, often constructed per the OSI model, communications functions are divided into protocol layers, where each layer leverages the services of the layer below it until the lowest layer controls the hardware that sends information across the media. The use of protocol layering is ubiquitous across the field of computer networking. An important example of a protocol stack is HTTP (the World Wide Web protocol) running over TCP over IP (the Internet protocols) over IEEE 802.11 (the Wi-Fi protocol). This stack is used between the wireless router and the home user's personal computer when the user is surfing the web.
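The HTTP-over-TCP-over-IP layering can be seen in a few lines of code: the application writes an HTTP request, while the operating system's TCP/IP stack and the underlying link layer (Ethernet or Wi-Fi) handle the layers below. This is only an illustrative sketch; example.com is a placeholder host.

```python
# Send a plain HTTP request over a TCP connection; IP and the link layer
# are handled transparently by the operating system.
import socket

with socket.create_connection(("example.com", 80)) as sock:   # TCP over IP
    sock.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
    response = b""
    while chunk := sock.recv(4096):
        response += chunk

print(response.split(b"\r\n", 1)[0])  # e.g. b'HTTP/1.1 200 OK'
```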
There are many communication protocols, a few of which are described below.
The Internet protocol suite, also called TCP/IP, is the foundation of all modern networking. It offers connectionless and connection-oriented services over an inherently unreliable network traversed by datagram transmission using the Internet Protocol (IP). At its core, the protocol suite defines the addressing, identification, and routing specifications for Internet Protocol Version 4 (IPv4) and for IPv6, the next generation of the protocol with a much enlarged addressing capability. The Internet protocol suite is the defining set of protocols for the Internet.[65]
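A small illustration of IPv6's enlarged addressing capability relative to IPv4, using Python's standard ipaddress module:

```python
# Compare the sizes of the full IPv4 and IPv6 address spaces.
import ipaddress

print(ipaddress.ip_network("0.0.0.0/0").num_addresses)   # 2**32 IPv4 addresses
print(ipaddress.ip_network("::/0").num_addresses)        # 2**128 IPv6 addresses
```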
IEEE 802 is a family of IEEE standards dealing with local area networks and metropolitan area networks. The complete IEEE 802 protocol suite provides a diverse set of networking capabilities. The protocols have a flat addressing scheme. They operate mostly at layers 1 and 2 of the OSI model.
Ethernet is a family of technologies used in wired LANs. It is described by a set of standards together called IEEE 802.3, published by the Institute of Electrical and Electronics Engineers.
Wireless LAN based on the IEEE 802.11 standards, also widely known as WLAN or WiFi, is probably the most well-known member of the IEEE 802 protocol family for home users today. IEEE 802.11 shares many properties with wired Ethernet.
Synchronous optical networking (SONET) and Synchronous Digital Hierarchy (SDH) are standardized multiplexing protocols that transfer multiple digital bit streams over optical fiber using lasers. They were originally designed to transport circuit mode communications from a variety of different sources, primarily to support circuit-switched digital telephony. However, due to their protocol neutrality and transport-oriented features, SONET/SDH were also the obvious choice for transporting Asynchronous Transfer Mode (ATM) frames.
Asynchronous Transfer Mode (ATM) is a switching technique for telecommunication networks. It uses asynchronous time-division multiplexing and encodes data into small, fixed-sized cells. This differs from other protocols such as the Internet protocol suite or Ethernet that use variable-sized packets or frames. ATM has similarities with both circuit and packet switched networking. This makes it a good choice for a network that must handle both traditional high-throughput data traffic, and real-time, low-latency content such as voice and video. ATM uses a connection-oriented model in which a virtual circuit must be established between two endpoints before the actual data exchange begins.
ATM still plays a role in the last mile, which is the connection between an Internet service provider and the home user.[68][needs update]
Routing calculates good paths through a network for information to take. For example, from node 1 to node 6 the best routes are likely to be 1-8-7-6, 1-8-10-6 or 1-9-10-6, as these are the shortest routes.
Routing is the process of selecting network paths to carry network traffic. Routing is performed for many kinds of networks, including circuit switching networks and packet switched networks.
In packet-switched networks, routing protocols direct packet forwarding through intermediate nodes. Intermediate nodes are typically network hardware devices such as routers, bridges, gateways, firewalls, or switches. General-purpose computers can also forward packets and perform routing, though, because they lack specialized hardware, they may offer limited performance. The routing process directs forwarding on the basis of routing tables, which maintain a record of the routes to various network destinations. Most routing algorithms use only one network path at a time. Multipath routing techniques enable the use of multiple alternative paths.
Routing can be contrasted with bridging in its assumption that network addresses are structured and that similar addresses imply proximity within the network. Structured addresses allow a single routing table entry to represent the route to a group of devices. In large networks, the structured addressing used by routers outperforms the unstructured addressing used by bridging. Structured IP addresses are used on the Internet. Unstructured MAC addresses are used for bridging on Ethernet and similar local area networks.
Networks may be characterized by many properties or features, such as physical capacity, organizational purpose, user authorization, access rights, and others. Another distinct classification method is that of the physical extent or geographic scale.
A nanoscale network has key components implemented at the nanoscale, including message carriers, and leverages physical principles that differ from macroscale communication mechanisms. Nanoscale communication extends communication to very small sensors and actuators such as those found in biological systems and also tends to operate in environments that would be too harsh for other communication techniques.[70]
A personal area network (PAN) is a computer network used for communication among computers and different information technological devices close to one person. Some examples of devices that are used in a PAN are personal computers, printers, fax machines, telephones, PDAs, scanners, and video game consoles. A PAN may include wired and wireless devices. The reach of a PAN typically extends to 10 meters.[71] A wired PAN is usually constructed with USB and FireWire connections, while technologies such as Bluetooth and infrared communication typically form a wireless PAN.
A local area network (LAN) is a network that connects computers and devices in a limited geographical area such as a home, school, office building, or closely positioned group of buildings. Wired LANs are most commonly based on Ethernet technology. Other networking technologies such as ITU-T G.hn also provide a way to create a wired LAN using existing wiring, such as coaxial cables, telephone lines, and power lines.[72]
A LAN can be connected to a wide area network (WAN) using a router. The defining characteristics of a LAN, in contrast to a WAN, include higher data transfer rates, limited geographic range, and lack of reliance on leased lines to provide connectivity.[citation needed] Current Ethernet or other IEEE 802.3 LAN technologies operate at data transfer rates up to and in excess of 100 Gbit/s,[73] standardized by IEEE in 2010.
A home area network (HAN) is a residential LAN used for communication between digital devices typically deployed in the home, usually a small number of personal computers and accessories, such as printers and mobile computing devices. An important function is the sharing of Internet access, often a broadband service through a cable Internet or digital subscriber line (DSL) provider.
A storage area network (SAN) is a dedicated network that provides access to consolidated, block-level data storage. SANs are primarily used to make storage devices, such as disk arrays, tape libraries, and optical jukeboxes, accessible to servers so that the storage appears as locally attached devices to the operating system. A SAN typically has its own network of storage devices that are generally not accessible through the local area network by other devices. The cost and complexity of SANs dropped in the early 2000s to levels allowing wider adoption across both enterprise and small to medium-sized business environments.[citation needed]
A campus area network (CAN) is made up of an interconnection of LANs within a limited geographical area. The networking equipment (switches, routers) and transmission media (optical fiber, Cat5 cabling, etc.) are almost entirely owned by the campus tenant or owner (an enterprise, university, government, etc.).
For example, a university campus network is likely to link a variety of campus buildings to connect academic colleges or departments, the library, and student residence halls.
A backbone network is part of a computer network infrastructure that provides a path for the exchange of information between different LANs or subnetworks. A backbone can tie together diverse networks within the same building, across different buildings, or over a wide area. When designing a network backbone, network performance and network congestion are critical factors to take into account. Normally, the backbone network's capacity is greater than that of the individual networks connected to it.
For example, a large company might implement a backbone network to connect departments that are located around the world. The equipment that ties together the departmental networks constitutes the network backbone. Another example of a backbone network is the Internet backbone, which is a massive, global system of fiber-optic cable and optical networking that carries the bulk of data between wide area networks (WANs), metro, regional, national and transoceanic networks.
A wide area network (WAN) is a computer network that covers a large geographic area such as a city or country, or even spans intercontinental distances. A WAN uses a communications channel that combines many types of media such as telephone lines, cables, and airwaves. A WAN often makes use of transmission facilities provided by common carriers, such as telephone companies. WAN technologies generally function at the lower three layers of the OSI model: the physical layer, the data link layer, and the network layer.
An enterprise private network is a network that a single organization builds to interconnect its office locations (e.g., production sites, head offices, remote offices, shops) so they can share computer resources.
A virtual private network (VPN) is an overlay network in which some of the links between nodes are carried by open connections or virtual circuits in some larger network (e.g., the Internet) instead of by physical wires. The data link layer protocols of the virtual network are said to be tunneled through the larger network. One common application is secure communications through the public Internet, but a VPN need not have explicit security features, such as authentication or content encryption. VPNs, for example, can be used to separate the traffic of different user communities over an underlying network with strong security features.
A VPN may have best-effort performance or may have a defined service level agreement (SLA) between the VPN customer and the VPN service provider.
A global area network (GAN) is a network used for supporting mobile users across an arbitrary number of wireless LANs, satellite coverage areas, etc. The key challenge in mobile communications is handing off communications from one local coverage area to the next. In IEEE Project 802, this involves a succession of terrestrial wireless LANs.[74]
Networks are typically managed by the organizations that own them. Private enterprise networks may use a combination of intranets and extranets. They may also provide network access to the Internet, which has no single owner and permits virtually unlimited global connectivity.
An intranet is a set of networks that are under the control of a single administrative entity. An intranet typically uses the Internet Protocol and IP-based tools such as web browsers and file transfer applications. The administrative entity limits the use of the intranet to its authorized users. Most commonly, an intranet is the internal LAN of an organization. A large intranet typically has at least one web server to provide users with organizational information.
An extranet is a network that is under the administrative control of a single organization but supports a limited connection to a specific external network. For example, an organization may provide access to some aspects of its intranet to share data with its business partners or customers. These other entities are not necessarily trusted from a security standpoint. The network connection to an extranet is often, but not always, implemented via WAN technology.
Partial map of the Internet based on 2005 data.[75] Each line is drawn between two nodes, representing two IP addresses. The length of the lines indicates the delay between those two nodes.
An internetwork is the connection of multiple different types of computer networks to form a single computer network using higher-layer network protocols and connecting them together using routers.
The Internet is the largest example of an internetwork. It is a global system of interconnected governmental, academic, corporate, public, and private computer networks. It is based on the networking technologies of the Internet protocol suite. It is the successor of the Advanced Research Projects Agency Network (ARPANET) developed by DARPA of the United States Department of Defense. The Internet utilizes copper communications and an optical networking backbone to enable the World Wide Web (WWW), the Internet of things, video transfer, and a broad range of information services.
Participants on the Internet use a diverse array of several hundred documented, and often standardized, protocols compatible with the Internet protocol suite and the IP addressing system administered by the Internet Assigned Numbers Authority and address registries. Service providers and large enterprises exchange information about the reachability of their address spaces through the Border Gateway Protocol (BGP), forming a redundant worldwide mesh of transmission paths.
A darknet is an overlay network, typically running on the Internet, that is only accessible through specialized software. It is an anonymizing network where connections are made only between trusted peers, sometimes called friends (F2F),[76] using non-standard protocols and ports.
Darknets are distinct from other distributed peer-to-peer networks as sharing is anonymous (that is, IP addresses are not publicly shared), and therefore users can communicate with little fear of governmental or corporate interference.[77]
Network services are applications hosted by servers on a computer network, to provide some functionality for members or users of the network, or to help the network itself to operate.
Services are usually based on a service protocol that defines the format and sequencing of messages between clients and servers of that network service.
Network delay is a design and performance characteristic of a telecommunications network. It specifies the latency for a bit of data to travel across the network from one communication endpoint to another. Delay may differ slightly, depending on the location of the specific pair of communicating endpoints. Engineers usually report both the maximum and average delay, and they divide the delay into several components, the sum of which is the total delay:
Processing delay – time it takes a router to process the packet header
Queuing delay – time the packet spends in routing queues
Transmission delay – time it takes to push the packet's bits onto the link
Propagation delay – time for a signal to propagate through the media
A certain minimum level of delay is experienced by signals due to the time it takes to transmit a packet serially through a link. This delay is extended by more variable levels of delay due to network congestion. IP network delays can range from less than a microsecond to several hundred milliseconds.
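A worked example, with invented but plausible numbers, of summing the four components listed above for a single 1500-byte packet sent over a 100 Mbit/s link spanning 200 km:

```python
# Total one-way delay = processing + queuing + transmission + propagation.
packet_bits = 1500 * 8
link_rate = 100e6            # bits per second
distance = 200e3             # metres
propagation_speed = 2e8      # metres per second, typical for fibre or copper

processing_delay = 10e-6                          # router header processing (assumed)
queuing_delay = 500e-6                            # time spent in router queues (assumed)
transmission_delay = packet_bits / link_rate      # 120 microseconds
propagation_delay = distance / propagation_speed  # 1 millisecond

total = processing_delay + queuing_delay + transmission_delay + propagation_delay
print(f"total one-way delay ≈ {total * 1e3:.2f} ms")  # ≈ 1.63 ms
```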
In circuit-switched networks, network performance is synonymous with the grade of service. The number of rejected calls is a measure of how well the network is performing under heavy traffic loads.[81] Other types of performance measures can include the level of noise and echo.
There are many ways to measure the performance of a network, as each network is different in nature and design. Performance can also be modeled instead of measured. For example, state transition diagrams are often used to model queuing performance in a circuit-switched network. The network planner uses these diagrams to analyze how the network performs in each state, ensuring that the network is optimally designed.[83]
Network congestion occurs when a link or node is subjected to a greater data load than it is rated for, resulting in a deterioration of its quality of service. When networks are congested and queues become too full, packets have to be discarded, and participants must rely on retransmission to maintain reliable communications. Typical effects of congestion include queueing delay, packet loss or the blocking of new connections. A consequence of these latter two is that incremental increases in offered load lead either to only a small increase in the network throughput or to a potential reduction in network throughput.
Network protocols that use aggressive retransmissions to compensate for packet loss tend to keep systems in a state of network congestion even after the initial load is reduced to a level that would not normally induce network congestion. Thus, networks using these protocols can exhibit two stable states under the same level of load. The stable state with low throughput is known as congestive collapse.
Modern networks use congestion control, congestion avoidance and traffic control techniques where endpoints typically slow down or sometimes even stop transmission entirely when the network is congested to try to avoid congestive collapse. Specific techniques include: exponential backoff in protocols such as 802.11's CSMA/CA and the original Ethernet, window reduction in TCP, and fair queueing in devices such as routers.
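A minimal sketch of the exponential backoff idea used by CSMA/CA and classic Ethernet: after each collision the sender waits a random number of slot times drawn from a range that doubles with every attempt. The slot time and retry cap here are illustrative, not values taken from any particular standard.

```python
# Exponential backoff: after the n-th collision, wait 0..(2**n - 1) slot times.
import random

SLOT_TIME = 50e-6   # seconds, illustrative
MAX_EXPONENT = 10   # cap on the doubling, illustrative

def backoff_delay(attempt: int) -> float:
    slots = random.randint(0, 2 ** min(attempt, MAX_EXPONENT) - 1)
    return slots * SLOT_TIME

for attempt in range(1, 5):
    print(f"collision {attempt}: wait {backoff_delay(attempt) * 1e6:.0f} µs")
```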
Another method to avoid the negative effects of network congestion is implementing quality of service priority schemes allowing selected traffic to bypass congestion. Priority schemes do not solve network congestion by themselves, but they help to alleviate the effects of congestion for critical services. A third method to avoid network congestion is the explicit allocation of network resources to specific flows. One example of this is the use of Contention-Free Transmission Opportunities (CFTXOPs) in the ITU-T G.hn home networking standard.
For the Internet, RFC 2914 addresses the subject of congestion control in detail.
Network security consists of provisions and policies adopted by the network administrator to prevent and monitor unauthorized access, misuse, modification, or denial of the computer network and its network-accessible resources.[85] Network security is used on a variety of computer networks, both public and private, to secure daily transactions and communications among businesses, government agencies, and individuals.
Network surveillance is the monitoring of data being transferred over computer networks such as the Internet. The monitoring is often done surreptitiously and may be done by or at the behest of governments, by corporations, criminal organizations, or individuals. It may or may not be legal and may or may not require authorization from a court or other independent agency.
Computer and network surveillance programs are widespread today, and almost all Internet traffic is or could potentially be monitored for clues to illegal activity.
End-to-end encryption (E2EE) is a digital communications paradigm of uninterrupted protection of data traveling between two communicating parties. It involves the originating party encrypting data so only the intended recipient can decrypt it, with no dependency on third parties. End-to-end encryption prevents intermediaries, such as Internet service providers or application service providers, from reading or tampering with communications. End-to-end encryption generally protects both confidentiality and integrity.
Typical server-based communications systems do not include end-to-end encryption. These systems can only guarantee the protection of communications between clients and servers, not between the communicating parties themselves. Examples of non-E2EE systems are Google Talk, Yahoo Messenger, Facebook, and Dropbox.
The end-to-end encryption paradigm does not directly address risks at the endpoints of the communication themselves, such as the technical exploitation of clients, poor quality random number generators, or key escrow. E2EE also does not address traffic analysis, which relates to things such as the identities of the endpoints and the times and quantities of messages that are sent.
The introduction and rapid growth of e-commerce on the World Wide Web in the mid-1990s made it obvious that some form of authentication and encryption was needed. Netscape took the first shot at a new standard. At the time, the dominant web browser was Netscape Navigator. Netscape created a standard called secure socket layer (SSL). SSL requires a server with a certificate. When a client requests access to an SSL-secured server, the server sends a copy of the certificate to the client. The SSL client checks this certificate (all web browsers come with an exhaustive list of root certificates preloaded), and if the certificate checks out, the server is authenticated and the client negotiates a symmetric-key cipher for use in the session. The session is now in a very secure encrypted tunnel between the SSL server and the SSL client.[60]
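The handshake described above survives today in TLS, the successor to SSL. A minimal sketch using Python's ssl module: the client verifies the server's certificate against its preloaded root certificates, after which the connection is protected by a negotiated symmetric cipher; example.com is a placeholder host.

```python
# Establish a TLS connection, verifying the server certificate against the
# system's trusted root certificates, then inspect the negotiated parameters.
import socket
import ssl

context = ssl.create_default_context()               # loads trusted root certificates
with socket.create_connection(("example.com", 443)) as tcp:
    with context.wrap_socket(tcp, server_hostname="example.com") as tls:
        print(tls.version())                          # e.g. 'TLSv1.3'
        print(tls.cipher())                           # negotiated symmetric cipher suite
```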
Users and network administrators typically have different views of their networks. Users can share printers and some servers from a workgroup, which usually means they are in the same geographic location and are on the same LAN, whereas a network administrator is responsible for keeping that network up and running. A community of interest has less of a connection to a local area and should be thought of as a set of arbitrarily located users who share a set of servers, and possibly also communicate via peer-to-peer technologies.
Network administrators can see networks from both physical and logical perspectives. The physical perspective involves geographic locations, physical cabling, and the network elements (e.g., routers, bridges and application-layer gateways) that interconnect via the transmission media. Logical networks, called subnets in the TCP/IP architecture, map onto one or more transmission media. For example, a common practice in a campus of buildings is to make a set of LAN cables in each building appear to be a common subnet, using VLANs.
Users and administrators are aware, to varying extents, of a network's trust and scope characteristics. Again using TCP/IP architectural terminology, an intranet is a community of interest under private administration, usually by an enterprise, and is only accessible by authorized users (e.g. employees).[90] Intranets do not have to be connected to the Internet, but generally have a limited connection. An extranet is an extension of an intranet that allows secure communications to users outside of the intranet (e.g. business partners, customers).[90]
Unofficially, the Internet is the set of users, enterprises, and content providers that are interconnected by Internet Service Providers (ISP). From an engineering viewpoint, the Internet is the set of subnets, and aggregates of subnets, that share the registered IP address space and exchange information about the reachability of those IP addresses using the Border Gateway Protocol. Typically, the human-readable names of servers are translated to IP addresses, transparently to users, via the directory function of the Domain Name System (DNS).
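A minimal illustration of that DNS directory function, resolving a human-readable name to IP addresses with Python's standard library; example.com is a placeholder.

```python
# Resolve a hostname to the set of IP addresses it maps to via DNS.
import socket

addresses = {info[4][0] for info in socket.getaddrinfo("example.com", None)}
print(addresses)   # the IPv4/IPv6 addresses the name resolves to
```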
Over the Internet, there can be business-to-business, business-to-consumer and consumer-to-consumer communications. When money or sensitive information is exchanged, the communications are apt to be protected by some form of communications security mechanism. Intranets and extranets can be securely superimposed onto the Internet, without any access by general Internet users and administrators, using secure VPN technology.
^ Corbató, F. J.; et al. (1963). The Compatible Time-Sharing System: A Programmer's Guide (PDF). MIT Press. ISBN 978-0-262-03008-3. Archived (PDF) from the original on 2012-05-27. Retrieved 2020-05-26. Shortly after the first paper on time-shared computers by C. Strachey at the June 1959 UNESCO Information Processing conference, H. M. Teager and J. McCarthy at MIT delivered an unpublished paper "Time-shared Program Testing" at the August 1959 ACM Meeting.
^ Kleinrock, L. (1978). "Principles and lessons in packet communications". Proceedings of the IEEE. 66 (11): 1320–1329. doi:10.1109/PROC.1978.11143. ISSN 0018-9219. Paul Baran ... focused on the routing procedures and on the survivability of distributed communication systems in a hostile environment, but did not concentrate on the need for resource sharing in its form as we now understand it; indeed, the concept of a software switch was not present in his work.
^ Pelkey, James L. "6.1 The Communications Subnet: BBN 1969". Entrepreneurial Capitalism and Innovation: A History of Computer Communications 1968–1988. As Kahn recalls: ... Paul Baran's contributions ... I also think Paul was motivated almost entirely by voice considerations. If you look at what he wrote, he was talking about switches that were low-cost electronics. The idea of putting powerful computers in these locations hadn't quite occurred to him as being cost effective. So the idea of computer switches was missing. The whole notion of protocols didn't exist at that time. And the idea of computer-to-computer communications was really a secondary concern.
^ Waldrop, M. Mitchell (2018). The Dream Machine. Stripe Press. p. 286. ISBN 978-1-953953-36-0. Baran had put more emphasis on digital voice communications than on computer communications.
^ Campbell-Kelly, Martin (1987). "Data Communications at the National Physical Laboratory (1965-1975)". Annals of the History of Computing. 9 (3/4): 221–247. doi:10.1109/MAHC.1987.10023. S2CID 8172150. the first occurrence in print of the term protocol in a data communications context ... the next hardware tasks were the detailed design of the interface between the terminal devices and the switching computer, and the arrangements to secure reliable transmission of packets of data over the high-speed lines
^ Guardian Staff (2013-06-25). "Internet pioneers airbrushed from history". The Guardian. ISSN 0261-3077. Archived from the original on 2020-01-01. Retrieved 2020-07-31. This was the first digital local network in the world to use packet switching and high-speed links.
^ Roberts, Lawrence G. (November 1978). "The Evolution of Packet Switching" (PDF). IEEE Invited Paper. Archived from the original (PDF) on 31 December 2018. Retrieved September 10, 2017. In nearly all respects, Davies' original proposal, developed in late 1965, was similar to the actual networks being built today.
^ Norberg, Arthur L.; O'Neill, Judy E. (1996). Transforming Computer Technology: Information Processing for the Pentagon, 1962-1986. Johns Hopkins Studies in the History of Technology, New Series. Baltimore: Johns Hopkins Univ. Press. pp. 153–196. ISBN 978-0-8018-5152-0. Prominently cites Baran and Davies as sources of inspiration.
^ A History of the ARPANET: The First Decade (PDF) (Report). Bolt, Beranek & Newman Inc. 1 April 1981. pp. 13, 53 of 183 (III-11 on the printed copy). Archived from the original on 1 December 2012. Aside from the technical problems of interconnecting computers with communications circuits, the notion of computer networks had been considered in a number of places from a theoretical point of view. Of particular note was work done by Paul Baran and others at the Rand Corporation in a study "On Distributed Communications" in the early 1960's. Also of note was work done by Donald Davies and others at the National Physical Laboratory in England in the mid-1960's. ... Another early major network development which affected development of the ARPANET was undertaken at the National Physical Laboratory in Middlesex, England, under the leadership of D. W. Davies.
^ Roberts, Lawrence G. (November 1978). "The evolution of packet switching" (PDF). Proceedings of the IEEE. 66 (11): 1307–13. doi:10.1109/PROC.1978.11141. S2CID 26876676. Significant aspects of the network's internal operation, such as routing, flow control, software design, and network control were developed by a BBN team consisting of Frank Heart, Robert Kahn, Severo Omstein, William Crowther, and David Walden
^ F. E. Froehlich, A. Kent (1990). The Froehlich/Kent Encyclopedia of Telecommunications: Volume 1 - Access Charges in the U.S.A. to Basics of Digital Communications. CRC Press. p. 344. ISBN 0824729005. Although there was considerable technical interchange between the NPL group and those who designed and implemented the ARPANET, the NPL Data Network effort appears to have had little fundamental impact on the design of ARPANET. Such major aspects of the NPL Data Network design as the standard network interface, the routing algorithm, and the software structure of the switching node were largely ignored by the ARPANET designers. There is no doubt, however, that in many less fundamental ways the NPL Data Network had an effect on the design and evolution of the ARPANET.
^ Heart, F.; McKenzie, A.; McQuillian, J.; Walden, D. (January 4, 1978). Arpanet Completion Report (PDF) (Technical report). Burlington, MA: Bolt, Beranek and Newman. Archived from the original (PDF) on 2023-05-27.
^ Clarke, Peter (1982). Packet and circuit-switched data networks (PDF) (PhD thesis). Department of Electrical Engineering, Imperial College of Science and Technology, University of London. "Many of the theoretical studies of the performance and design of the ARPA Network were developments of earlier work by Kleinrock ... Although these works concerned message switching networks, they were the basis for a lot of the ARPA network investigations ... The intention of the work of Kleinrock [in 1961] was to analyse the performance of store and forward networks, using as the primary performance measure the average message delay. ... Kleinrock [in 1970] extended the theoretical approaches of [his 1961 work] to the early ARPA network."
^ Davies, Donald Watts (1979). Computer networks and their protocols. Internet Archive. Wiley. pp. See page refs highlighted at url. ISBN 978-0-471-99750-4. In mathematical modelling use is made of the theories of queueing processes and of flows in networks, describing the performance of the network in a set of equations. ... The analytic method has been used with success by Kleinrock and others, but only if important simplifying assumptions are made. ... It is heartening in Kleinrock's work to see the good correspondence achieved between the results of analytic methods and those of simulation.
^ Davies, Donald Watts (1979). Computer networks and their protocols. Internet Archive. Wiley. pp. 110–111. ISBN 978-0-471-99750-4. Hierarchical addressing systems for network routing have been proposed by Fultz and, in greater detail, by McQuillan. A recent very full analysis may be found in Kleinrock and Kamoun.
^ Feldmann, Anja; Cittadini, Luca; Mühlbauer, Wolfgang; Bush, Randy; Maennel, Olaf (2009). "HAIR: Hierarchical architecture for internet routing" (PDF). Proceedings of the 2009 Workshop on Re-architecting the Internet. ReArch '09. New York, NY, USA: Association for Computing Machinery. pp. 43–48. doi:10.1145/1658978.1658990. ISBN 978-1-60558-749-3. S2CID 2930578. The hierarchical approach is further motivated by theoretical results (e.g., [16]) which show that, by optimally placing separators, i.e., elements that connect levels in the hierarchy, tremendous gain can be achieved in terms of both routing table size and update message churn. ... [16] KLEINROCK, L., AND KAMOUN, F. Hierarchical routing for large networks: Performance evaluation and optimization. Computer Networks (1977).
^ Derek Barber. "The Origins of Packet Switching". Computer Resurrection Issue 5. Retrieved 2024-06-05. The Spanish, dark horses, were the first people to have a public network. They'd got a bank network which they craftily turned into a public network overnight, and beat everybody to the post.
^ Kirstein, P. T. (1999). "Early experiences with the Arpanet and Internet in the United Kingdom". IEEE Annals of the History of Computing. 21 (1): 38–44. doi:10.1109/85.759368. S2CID 1558618.
^ "Xerox Researcher Proposes 'Ethernet'". computerhistory.org. Retrieved 2025-03-08. Robert Metcalfe, a researcher at the Xerox Palo Alto Research Center in California, writes his original memo proposing an 'Ethernet', a means of connecting computers together.
^ Hsu, Hansen; McJones, Paul. "Xerox PARC file system archive". xeroxparcarchive.computerhistory.org. Pup (PARC Universal Packet) was a set of internetworking protocols and packet format designed and first implemented (in BCPL) by David R. Boggs, John F. Shoch, Edward A. Taft, and Robert M. Metcalfe. It became a key influence on the later design of TCP/IP.
^ Cerf, V.; Kahn, R. (1974). "A Protocol for Packet Network Intercommunication" (PDF). IEEE Transactions on Communications. 22 (5): 637–648. doi:10.1109/TCOM.1974.1092259. ISSN 1558-0857. The authors wish to thank a number of colleagues for helpful comments during early discussions of international network protocols, especially R. Metcalfe, R. Scantlebury, D. Walden, and H. Zimmerman; D. Davies and L. Pouzin who constructively commented on the fragmentation and accounting issues; and S. Crocker who commented on the creation and destruction of associations.
^ "Ethernet and Robert Metcalfe and Xerox PARC 1971-1975 | History of Computer Communications". historyofcomputercommunications.info. Retrieved 2025-03-08. Once successful, Xerox filed for patents covering the Ethernet technology under the names of Metcalfe, Boggs, Butler Lampson and Chuck Thacker. (Metcalfe insisted Lampson, the 'intellectual guru under whom we all had the privilege to work' and Thacker 'the guy who designed the Altos' names were on the patent.)
^ National Research Council; Division on Engineering and Physical Sciences; Computer Science and Telecommunications Board; Commission on Physical Sciences, Mathematics, and Applications; NII 2000 Steering Committee (1998-02-05). The Unpredictable Certainty: White Papers. National Academies Press. ISBN 978-0-309-17414-5. Archived from the original on 2023-02-04. Retrieved 2021-03-08.
^ Pelkey, James L. (2007). "Yogen Dalal". Entrepreneurial Capitalism and Innovation: A History of Computer Communications, 1968-1988. Retrieved 2023-05-07.
^ Meyers, Mike (2012). CompTIA Network+ Exam Guide (Exam N10-005) (5th ed.). New York: McGraw-Hill. ISBN 9780071789226. OCLC 748332969.
^ A. Hooke (September 2000), Interplanetary Internet (PDF), Third Annual International Symposium on Advanced Radio Technologies, archived from the original (PDF) on 2012-01-13, retrieved 2011-11-12.
^ Paetsch, Michael (1993). The evolution of mobile communications in the US and Europe: Regulation, technology, and markets. Boston, London: Artech House. ISBN 978-0-8900-6688-1.
^ Bush, S. F. (2010). Nanoscale Communication Networks. Artech House. ISBN 978-1-60807-003-9.