The history of the Internet originated in the efforts of scientists and engineers to build and interconnect computer networks. The Internet Protocol Suite, the set of rules used to communicate between networks and devices on the Internet, arose from research and development in the United States and involved international collaboration, particularly with researchers in the United Kingdom and France.[1][2][3]
ARPA awarded contracts in 1969 for the development of the ARPANET project, directed by Robert Taylor and managed by Lawrence Roberts. ARPANET adopted the packet switching technology proposed by Davies and Baran. The network of Interface Message Processors (IMPs) was built by a team at Bolt, Beranek, and Newman, with the design and specification led by Bob Kahn. The host-to-host protocol was specified by a group of graduate students at UCLA, led by Steve Crocker, along with Jon Postel and others. The ARPANET expanded rapidly across the United States, with connections to the United Kingdom and Norway.
In the late 1970s, national and international public data networks emerged based on the X.25 protocol, designed by Rémi Després and others. In the United States, the National Science Foundation (NSF) funded national supercomputing centers at several universities and provided interconnectivity in 1986 with the NSFNET project, creating network access to these supercomputer sites for research and academic organizations. International connections to NSFNET, the emergence of architecture such as the Domain Name System, and the adoption of TCP/IP on existing networks in the United States and around the world marked the beginnings of the Internet.[4][5][6] Commercial Internet service providers (ISPs) emerged in 1989 in the United States and Australia.[7] Limited private connections to parts of the Internet by officially commercial entities emerged in several American cities by late 1989 and 1990.[8] The optical backbone of the NSFNET was decommissioned in 1995, removing the last restrictions on the use of the Internet to carry commercial traffic, as traffic transitioned to optical networks managed by Sprint, MCI and AT&T in the United States.
Research at CERN in Switzerland by the British computer scientist Tim Berners-Lee in 1989–90 resulted in the World Wide Web, linking hypertext documents into an information system accessible from any node on the network.[9] The dramatic expansion of the capacity of the Internet, enabled by the advent of wave division multiplexing (WDM) and the rollout of fiber optic cables in the mid-1990s, had a revolutionary impact on culture, commerce, and technology. This made possible the rise of near-instant communication by electronic mail, instant messaging, voice over Internet Protocol (VoIP) telephone calls, video chat, and the World Wide Web with its discussion forums, blogs, social networking services, and online shopping sites. Increasing amounts of data are transmitted at higher and higher speeds over fiber-optic networks operating at 1 Gbit/s, 10 Gbit/s, and 800 Gbit/s by 2019.[10] The Internet's takeover of the global communication landscape was rapid in historical terms: it carried only 1% of the information flowing through two-way telecommunications networks in 1993, 51% by 2000, and more than 97% of the telecommunicated information by 2007.[11] The Internet continues to grow, driven by ever greater amounts of online information, commerce, entertainment, and social networking services. However, the future of the global network may be shaped by regional differences.[12]
Foundations
Precursors
Telegraphy
The practice of transmitting messages between two different places through an electromagnetic medium dates back to the electrical telegraph in the late 19th century, which was the first fully digital communication system. Radiotelegraphy began to be used commercially in the early 20th century. Telex became an operational teleprinter service in the 1930s. Such systems were limited to point-to-point communication between two end devices.
Early fixed-program computers in the 1940s were operated manually by entering small programs via switches in order to load and run a series of programs. As transistor technology evolved in the 1950s, central processing units and user terminals came into use by 1955. The mainframe computer model was devised, and modems, such as the Bell 101, allowed digital data to be transmitted over regular unconditioned telephone lines at low speeds by the late 1950s. These technologies made it possible to exchange data between remote computers. However, a fixed-line link was still necessary; the point-to-point communication model did not allow for direct communication between any two arbitrary systems. In addition, the applications were specific and not general purpose. Examples included SAGE (1958) and SABRE (1960).
J. C. R. Licklider, while working at BBN, proposed a computer network in his March 1960 paper Man-Computer Symbiosis:[18]
A network of such centers, connected to one another by wide-band communication lines [...] the functions of present-day libraries together with anticipated advances in information storage and retrieval and symbiotic functions suggested earlier in this paper
In August 1962, Licklider and Welden Clark published the paper "On-Line Man-Computer Communication",[19] which was one of the first descriptions of a networked future.
In October 1962, Licklider was hired by Jack Ruina as director of the newly established Information Processing Techniques Office (IPTO) within ARPA, with a mandate to interconnect the United States Department of Defense's main computers at Cheyenne Mountain, the Pentagon, and SAC HQ. There he formed an informal group within DARPA to further computer research. He began by writing memos in 1963 describing a distributed network to the IPTO staff, whom he called "Members and Affiliates of the Intergalactic Computer Network".[20]
Although he left the IPTO in 1964, five years before the ARPANET went live, it was his vision of universal networking that provided the impetus for one of his successors, Robert Taylor, to initiate the ARPANET development. Licklider later returned to lead the IPTO in 1973 for two years.[21]
Packet switching
The "message block", designed byPaul Baran in 1962 and refined in 1964, is the first proposal of adata packet.[22][23]
The infrastructure for telephone systems at the time was based on circuit switching, which requires pre-allocation of a dedicated communication line for the duration of the call. Telegram services had developed store and forward telecommunication techniques. Western Union's Automatic Telegraph Switching System Plan 55-A was based on message switching. The U.S. military's AUTODIN network became operational in 1962. These systems, like SAGE and SABRE, still required rigid routing structures that were prone to a single point of failure.[24]
The technology was considered vulnerable for strategic and military use because there were no alternative paths for the communication in case of a broken link. In the early 1960s, Paul Baran of the RAND Corporation produced a study of survivable networks for the U.S. military in the event of nuclear war.[25][26] Information would be transmitted across a "distributed" network, divided into what he called "message blocks".[27][28][29][30] Baran's design was not implemented.[31]
In addition to being prone to a single point of failure, existing telegraphic techniques were inefficient and inflexible. Beginning in 1965, Donald Davies, at the National Physical Laboratory in the United Kingdom, independently developed a more advanced proposal of the concept, designed for high-speed computer networking, which he called packet switching, the term that would ultimately be adopted.[32][33][34][35]
Packet switching is a technique for transmitting computer data by splitting it into very short, standardized chunks, attaching routing information to each of these chunks, and transmitting them independently through a computer network. It provides better bandwidth utilization than the traditional circuit switching used for telephony, and enables the connection of computers with different transmission and reception rates. It is a concept distinct from message switching.[36]
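A minimal sketch in Python can make this concrete (the field names and packet size here are illustrative assumptions, not any historical packet format): a message is split into short chunks, each chunk is tagged with addressing and sequencing information, and the receiver can reassemble the message even if the packets arrive out of order.

```python
# Minimal sketch of packet switching (illustrative only; field names hypothetical).
# A message is split into short, fixed-size chunks; each chunk carries addressing
# and sequencing information so it can travel independently and be reassembled.

from dataclasses import dataclass

PAYLOAD_SIZE = 8  # bytes per packet; real networks use far larger payloads

@dataclass
class Packet:
    src: str        # source address
    dst: str        # destination address
    seq: int        # sequence number, used to reorder on arrival
    total: int      # total number of packets in the message
    payload: bytes

def packetize(src: str, dst: str, message: bytes) -> list[Packet]:
    chunks = [message[i:i + PAYLOAD_SIZE] for i in range(0, len(message), PAYLOAD_SIZE)]
    return [Packet(src, dst, seq, len(chunks), chunk) for seq, chunk in enumerate(chunks)]

def reassemble(packets: list[Packet]) -> bytes:
    # Packets may arrive in any order; sequence numbers restore the original order.
    return b"".join(p.payload for p in sorted(packets, key=lambda p: p.seq))

packets = packetize("host-a", "host-b", b"This message crosses the network in pieces.")
packets.reverse()  # simulate out-of-order arrival
assert reassemble(packets) == b"This message crosses the network in pieces."
```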
Following discussions with J. C. R. Licklider in 1965, Donald Davies became interested in data communications for computer networks.[37][38] Later that year, at the National Physical Laboratory (NPL) in the United Kingdom, Davies designed and proposed a national commercial data network based on packet switching.[39] The following year, he described the use of "switching nodes" to act as routers in a digital communication network.[40][41] The proposal was not taken up nationally, but he produced a design for a local network to serve the needs of the NPL and prove the feasibility of packet switching using high-speed data transmission.[42][43] To deal with packet permutations (due to dynamically updated route preferences) and with datagram losses (unavoidable when fast sources send to a slow destination), he assumed that "all users of the network will provide themselves with some kind of error control",[44] thus inventing what came to be known as the end-to-end principle. In 1967, he and his team were the first to use the term 'protocol' in a modern data communication context.[45]
In 1968,[46] Davies began building the Mark I packet-switched network to meet the needs of his multidisciplinary laboratory and prove the technology under operational conditions.[47][48] The network's development was described at a 1968 conference.[49][50] Elements of the network became operational in early 1969,[47][51] the first implementation of packet switching,[52][53] and the NPL network was the first to use high-speed links.[54] Many other packet switching networks built in the 1970s were similar "in nearly all respects" to Davies' original 1965 design.[37] The Mark II version, which operated from 1973, used a layered protocol architecture.[54] In 1977, there were roughly 30 computers, 30 peripherals and 100 VDU terminals, all able to interact through the NPL Network.[55] The NPL team carried out simulation work on wide-area packet networks, including datagrams and congestion, and research into internetworking and secure communications.[47][56][57] The network was replaced in 1986.[54]
Robert Taylor, who had three terminals in his office connected to three separate remote systems, recalled how the inconvenience motivated the idea of the ARPANET:

For each of these three terminals, I had three different sets of user commands. So if I was talking online with someone at S.D.C. and I wanted to talk to someone I knew at Berkeley or M.I.T. about this, I had to get up from the S.D.C. terminal, go over and log into the other terminal and get in touch with them.... I said, oh man, it's obvious what to do: If you have these three terminals, there ought to be one terminal that goes anywhere you want to go where you have interactive computing. That idea is the ARPAnet.[59]
Steve Crocker formed the "Network Working Group" in 1969 at UCLA. Working with Jon Postel and others,[73] he initiated and managed the Request for Comments (RFC) process, which is still used today for proposing and distributing contributions. RFC 1, entitled "Host Software", was written by Steve Crocker and published on April 7, 1969. The protocol for establishing links between network sites in the ARPANET, the Network Control Program (NCP), was completed in 1970. These early years were documented in the 1972 film Computer Networks: The Heralds of Resource Sharing.
Roberts presented the idea of packet switching to communication professionals and faced anger and hostility. Before the ARPANET was operating, they argued that the router buffers would quickly run out. After the ARPANET was operating, they argued that packet switching would never be economic without government subsidy. Baran faced the same rejection and thus failed to convince the military to construct a packet switching network.[74][75]
Early international collaborations via the ARPANET were sparse. Connections were made in 1973 to the Norwegian Seismic Array (NORSAR),[76] via a satellite link at the Tanum Earth Station in Sweden, and to Peter Kirstein's research group at University College London, which provided a gateway to British academic networks, the first international heterogeneous resource sharing network.[77] Throughout the 1970s, Leonard Kleinrock developed the mathematical theory to model and measure the performance of packet-switching technology, building on his earlier work on the application of queueing theory to message switching systems.[78] By 1981, the number of hosts had grown to 213.[79] The ARPANET became the technical core of what would become the Internet, and a primary tool in developing the technologies used.
The Merit Network[80] was formed in 1966 as the Michigan Educational Research Information Triad to explore computer networking between three of Michigan's public universities as a means to help the state's educational and economic development.[81] With initial support from the State of Michigan and the National Science Foundation (NSF), the packet-switched network was first demonstrated in December 1971, when an interactive host-to-host connection was made between the IBM mainframe computer systems at the University of Michigan in Ann Arbor and Wayne State University in Detroit.[82] In October 1972, connections to the CDC mainframe at Michigan State University in East Lansing completed the triad. Over the next several years, in addition to host-to-host interactive connections, the network was enhanced to support terminal-to-host connections, host-to-host batch connections (remote job submission, remote printing, batch file transfer), interactive file transfer, gateways to the Tymnet and Telenet public data networks, X.25 host attachments, gateways to X.25 data networks, and Ethernet-attached hosts; eventually TCP/IP was adopted and additional public universities in Michigan joined the network.[82][83] All of this set the stage for Merit's role in the NSFNET project starting in the mid-1980s.
The CYCLADES packet switching network was a French research network designed and directed by Louis Pouzin. In 1972, he began planning the network to explore alternatives to the early ARPANET design and to support internetworking research. First demonstrated in 1973, it was the first network to implement the end-to-end principle conceived by Donald Davies and make the hosts responsible for reliable delivery of data, rather than the network itself, using unreliable datagrams.[84][85] Concepts implemented in this network influenced TCP/IP architecture.[86][87]
Based on international research initiatives, particularly the contributions of Rémi Després, packet switching network standards were developed by the International Telegraph and Telephone Consultative Committee (CCITT, now ITU-T) in the form of X.25 and related standards.[88][89] X.25 is built on the concept of virtual circuits emulating traditional telephone connections. In 1974, X.25 formed the basis for the SERCnet network between British academic and research sites, which later became JANET, the United Kingdom's high-speed national research and education network (NREN). The initial ITU Standard on X.25 was approved in March 1976.[90] Existing networks, such as Telenet in the United States, adopted X.25, as did new public data networks, such as DATAPAC in Canada and TRANSPAC in France.[88][89] X.25 was supplemented by the X.75 protocol, which enabled internetworking between national PTT networks in Europe and commercial networks in North America.[91][92][93]
Unlike ARPANET, X.25 was commonly available for business use. Telenet offered its Telemail electronic mail service, which was also targeted to enterprise use rather than the general email system of the ARPANET.
The first public dial-in networks used asynchronous teleprinter (TTY) terminal protocols to reach a concentrator operated in the public network. Some networks, such as Telenet and CompuServe, used X.25 to multiplex the terminal sessions into their packet-switched backbones, while others, such as Tymnet, used proprietary protocols. In 1979, CompuServe became the first service to offer electronic mail capabilities and technical support to personal computer users. The company broke new ground again in 1980 as the first to offer real-time chat with its CB Simulator. Other major dial-in networks were America Online (AOL) and Prodigy, which also provided communications, content, and entertainment features.[95] Many bulletin board system (BBS) networks also provided on-line access, such as FidoNet, which was popular amongst hobbyist computer users, many of them hackers and amateur radio operators.[citation needed]
In 1979, two students at Duke University, Tom Truscott and Jim Ellis, originated the idea of using Bourne shell scripts to transfer news and messages on a serial line UUCP connection with nearby University of North Carolina at Chapel Hill. Following public release of the software in 1980, the mesh of UUCP hosts forwarding Usenet news rapidly expanded. UUCPnet, as it would later be named, also created gateways and links between FidoNet and dial-up BBS hosts. UUCP networks spread quickly due to the lower costs involved, the ability to use existing leased lines, X.25 links or even ARPANET connections, and the lack of strict use policies compared to later networks like CSNET and BITNET. All connections were local. By 1981, the number of UUCP hosts had grown to 550, nearly doubling to 940 in 1984.[96]
Sublink Network, operating since 1987 and officially founded in Italy in 1989, based its interconnectivity upon UUCP to redistribute mail and newsgroup messages throughout its Italian nodes (about 100 at the time), owned both by private individuals and small companies. Sublink Network evolved into one of the first examples of Internet technology coming into use through popular diffusion.
1973–1989: Merging the networks and creating the Internet
Cerf and Kahn published their ideas in May 1974,[103] which incorporated concepts implemented by Louis Pouzin and Hubert Zimmermann in the CYCLADES network.[84][104] The specification of the resulting protocol, the Transmission Control Program, was published as RFC 675 by the Network Working Group in December 1974.[105] It contains the first attested use of the term internet, as a shorthand for internetwork. This software was monolithic in design, using two simplex communication channels for each user session.
With the role of the network reduced to a core of functionality, it became possible to exchange traffic with other networks independently from their detailed characteristics, thereby solving the fundamental problems of internetworking. DARPA agreed to fund the development of prototype software, work on which was documented in the Internet Experiment Notes. Testing began in 1975 through concurrent implementations at Stanford, BBN and University College London (UCL).[3] After several years of work, the first demonstration of a gateway between the Packet Radio network (PRNET) in the SF Bay area and the ARPANET was conducted by the Stanford Research Institute. On November 22, 1977, a three-network demonstration was conducted, including the ARPANET, SRI's Packet Radio Van on the Packet Radio Network and the Atlantic Packet Satellite Network (SATNET), including a node at UCL.[106][107]
The software was redesigned as a modular protocol stack, using full-duplex channels; between 1976 and 1977, Yogen Dalal and Robert Metcalfe, among others, proposed separating TCP's routing and transmission control functions into two discrete layers,[108][109] which led to the splitting of the Transmission Control Program into the Transmission Control Protocol (TCP) and the Internet Protocol (IP) in version 3 in 1978.[109][110] Version 4 was described in IETF publications RFC 791 (September 1981), RFC 792 and RFC 793. It was installed on SATNET in 1982 and the ARPANET in January 1983, after the DoD made it standard for all military computer networking.[111][112] This resulted in a networking model that became known informally as TCP/IP. It was also referred to as the Department of Defense (DoD) model or DARPA model.[113] Cerf credits his graduate students Yogen Dalal, Carl Sunshine, Judy Estrin, Richard A. Karp, and Gérard Le Lann with important work on the design and testing.[114] DARPA sponsored or encouraged the development of TCP/IP implementations for many operating systems.
Decomposition of the quad-dotted IPv4 address representation to its binary value
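This decomposition is easy to reproduce. The short Python sketch below (the address is a documentation example, not tied to any particular figure) shows how each decimal octet maps to eight bits of the underlying 32-bit value:

```python
# Decompose a quad-dotted IPv4 address into its 32-bit binary value (illustrative).
def ipv4_to_binary(address: str) -> str:
    octets = [int(part) for part in address.split(".")]
    assert all(0 <= o <= 255 for o in octets), "each octet is one byte"
    return ".".join(f"{o:08b}" for o in octets)  # eight bits per octet

print(ipv4_to_binary("192.0.2.1"))
# -> 11000000.00000000.00000010.00000001
```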
After the ARPANET had been up and running for several years, ARPA looked for another agency to hand off the network to; ARPA's primary mission was funding cutting-edge research and development, not running a communications utility. In July 1975, the network was turned over to the Defense Communications Agency, also part of the Department of Defense. In 1983, the U.S. military portion of the ARPANET was broken off as a separate network, the MILNET. MILNET subsequently became the unclassified but military-only NIPRNET, in parallel with the SECRET-level SIPRNET and JWICS for TOP SECRET and above. NIPRNET does have controlled security gateways to the public Internet.
The networks based on the ARPANET were government funded and therefore restricted to noncommercial uses such as research; unrelated commercial use was strictly forbidden.[115] This initially restricted connections to military sites and universities. During the 1980s, the connections expanded to more educational institutions and a growing number of companies, such as Digital Equipment Corporation and Hewlett-Packard, which were participating in research projects or providing services to those who were. Data transmission speeds depended upon the type of connection, the slowest being analog telephone lines and the fastest using optical networking technology.
NASA developed the TCP/IP based NASA Science Network (NSN) in the mid-1980s, connecting space scientists to data and information stored anywhere in the world. In 1989, the DECnet-based Space Physics Analysis Network (SPAN) and the TCP/IP-based NASA Science Network (NSN) were brought together at NASA Ames Research Center, creating the first multiprotocol wide area network, called the NASA Science Internet, or NSI. NSI was established to provide a totally integrated communications infrastructure to the NASA scientific community for the advancement of earth, space and life sciences. As a high-speed, multiprotocol, international network, NSI provided connectivity to over 20,000 scientists across all seven continents.
In 1981, NSF supported the development of the Computer Science Network (CSNET). CSNET connected with ARPANET using TCP/IP, and ran TCP/IP over X.25, but it also supported departments without sophisticated network connections, using automated dial-up mail exchange. CSNET played a central role in popularizing the Internet outside the ARPANET.[23]
In 1986, the NSF created NSFNET, a 56 kbit/s backbone to support the NSF-sponsored supercomputing centers. The NSFNET also provided support for the creation of regional research and education networks in the United States, and for the connection of university and college campus networks to the regional networks.[116] The use of NSFNET and the regional networks was not limited to supercomputer users, and the 56 kbit/s network quickly became overloaded. NSFNET was upgraded to 1.5 Mbit/s in 1988 under a cooperative agreement with the Merit Network in partnership with IBM, MCI, and the State of Michigan. The existence of NSFNET and the creation of Federal Internet Exchanges (FIXes) allowed the ARPANET to be decommissioned in 1990.
In 1991, NSFNET was expanded and upgraded to dedicated fiber, optical lasers and optical amplifier systems capable of delivering T3 speeds of 45 Mbit/s. However, the T3 transition by MCI took longer than expected, allowing Sprint to establish a coast-to-coast long-distance commercial Internet service. When NSFNET was decommissioned in 1995, its optical networking backbones were handed off to several commercial Internet service providers, including MCI, PSINet and Sprint.[117] As a result, when the handoff was complete, Sprint and its Washington DC Network Access Points began to carry Internet traffic, and by 1996, Sprint was the world's largest carrier of Internet traffic.[118]
The research and academic community continues to develop and use advanced networks such as Internet2 in the United States and JANET in the United Kingdom.
Transition towards the Internet
The term "internet" was reflected in the first RFC published on the TCP protocol (RFC 675:[119] Internet Transmission Control Program, December 1974) as a short form ofinternetworking, when the two terms were used interchangeably. In general, an internet was a collection of networks linked by a common protocol. In the time period when the ARPANET was connected to the newly formedNSFNET project in the late 1980s, the term was used as the name of the network, Internet, being the large and global TCP/IP network.[120]
Opening the Internet and the fiber optic backbone to corporate and consumer users increased demand for network capacity. The expense and delay of laying new fiber led providers to test a fiber bandwidth expansion alternative that had been pioneered in the late 1970s by Optelecom using "interactions between light and matter, such as lasers and optical devices used for optical amplification and wave mixing".[121] This technology became known as wave division multiplexing (WDM). Bell Labs deployed a 4-channel WDM system in 1995.[122] To develop a mass capacity (dense) WDM system, Optelecom and its former head of Light Systems Research, David R. Huber, formed a new venture, Ciena Corp., which deployed the world's first dense WDM system on the Sprint fiber network in June 1996.[122] This was referred to as the real start of optical networking.[123]
As interest in networking grew, driven by needs for collaboration, exchange of data, and access to remote computing resources, Internet technologies spread throughout the rest of the world. The hardware-agnostic approach of TCP/IP supported the use of existing network infrastructure, such as the International Packet Switched Service (IPSS) X.25 network, to carry Internet traffic.
Many sites unable to link directly to the Internet created simple gateways for the transfer of electronic mail, the most important application of the time. Sites with only intermittent connections used UUCP or FidoNet and relied on the gateways between these networks and the Internet. Some gateway services went beyond simple mail peering, such as allowing access to File Transfer Protocol (FTP) sites via UUCP or mail.[124]
Finally, routing technologies were developed for the Internet to remove the remaining centralized routing aspects. The Exterior Gateway Protocol (EGP) was replaced by a new protocol, the Border Gateway Protocol (BGP). This provided a meshed topology for the Internet and reduced the centralized architecture that the ARPANET had emphasized. In 1994, Classless Inter-Domain Routing (CIDR) was introduced to support better conservation of address space, which allowed the use of route aggregation to decrease the size of routing tables.[125]
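A small example with Python's standard ipaddress module illustrates the kind of aggregation CIDR makes possible (the prefixes are hypothetical): four contiguous /24 routes collapse into a single /22 routing-table entry.

```python
# CIDR route aggregation (illustrative): four contiguous /24 prefixes collapse
# into a single /22 entry, shrinking the routing table.
import ipaddress

routes = [ipaddress.ip_network(f"203.0.{n}.0/24") for n in (112, 113, 114, 115)]
aggregated = list(ipaddress.collapse_addresses(routes))
print(aggregated)  # -> [IPv4Network('203.0.112.0/22')]
```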
On November 13, 1957, Columbia University physics student Gordon Gould first realized how to amplify light by stimulated emission, through a process of optical amplification. He coined the term LASER for this technology: Light Amplification by Stimulated Emission of Radiation.[127] Using Gould's light amplification method (patented as "Optically Pumped Laser Amplifier"),[128] Theodore Maiman made the first working laser on May 16, 1960.[129]
Gould co-founded Optelecom in 1973 to commercialize his inventions in optical fiber telecommunications,[130] just as Corning Glass was producing the first commercial fiber optic cable in small quantities. Optelecom configured its own fiber lasers and optical amplifiers into the first commercial optical communication systems, which it delivered to Chevron and the US Army Missile Defense.[131] Three years later, in 1977, GTE deployed the first optical telephone system in Long Beach, California.[132] By the early 1980s, optical networks powered by lasers, LEDs and optical amplifier equipment supplied by Bell Labs, NTT and Pirelli were used by select universities and long-distance telephone providers.[citation needed]
In 1982, Norway (NORSAR/NDRE) and Peter Kirstein's research group at University College London (UCL) left the ARPANET and reconnected using TCP/IP over SATNET.[102][133] There were 40 British research groups using UCL's link to the ARPANET in 1975;[77] by 1984 there was a user population of about 150 people on both sides of the Atlantic.[134]
Between 1984 and 1988, CERN began installation and operation of TCP/IP to interconnect its major internal computer systems, workstations, PCs, and an accelerator control system. CERN continued to operate a limited self-developed system (CERNET) internally and several incompatible (typically proprietary) network protocols externally. There was considerable resistance in Europe towards more widespread use of TCP/IP, and the CERN TCP/IP intranets remained isolated from the Internet until 1989, when a transatlantic connection to Cornell University was established.[135][136][137]
The Computer Science Network (CSNET) began operation in 1981 to provide networking connections to institutions that could not connect directly to ARPANET. Its first international connection was to Israel in 1984. Soon after, connections were established to computer science departments in Canada, France, and Germany.[23]
In 1988, the first international connections to NSFNET were established by France's INRIA[138][139] and by Piet Beertema at the Centrum Wiskunde & Informatica (CWI) in the Netherlands.[140] Daniel Karrenberg, from CWI, visited Ben Segal, CERN's TCP/IP coordinator, looking for advice about the transition of EUnet, the European side of the UUCP Usenet network (much of which ran over X.25 links), over to TCP/IP. The previous year, Segal had met with Len Bosack from the then still small company Cisco about purchasing some TCP/IP routers for CERN, and Segal was able to give Karrenberg advice and forward him on to Cisco for the appropriate hardware. This expanded the European portion of the Internet across the existing UUCP networks. The NORDUnet connection to NSFNET was in place soon after, providing open access for university students in Denmark, Finland, Iceland, Norway, and Sweden.[141]
In January 1989, CERN opened its first external TCP/IP connections.[142] This coincided with the creation of Réseaux IP Européens (RIPE), initially a group of IP network administrators who met regularly to carry out coordination work together. Later, in 1992, RIPE was formally registered as a cooperative in Amsterdam.
Nonetheless, for a period in the late 1980s and early 1990s, engineers, organizations and nations were polarized over the issue of which standard, the OSI model or the Internet protocol suite, would result in the best and most robust computer networks.[100][147][148]
The link to the Pacific
Japan, which had built the UUCP-based network JUNET in 1984, connected to CSNET,[23] and later to NSFNET in 1989, marking the spread of the Internet to Asia.
South Korea set up a two-node domestic TCP/IP network in 1982, the System Development Network (SDN), adding a third node the following year. SDN was connected to the rest of the world in August 1983 using UUCP (Unix-to-Unix-Copy); connected to CSNET in December 1984;[23] and formally connected to the NSFNET in 1990.[149][150][151]
In Australia, ad hoc networking to ARPA and between Australian universities formed in the late 1980s, based on various technologies such as X.25, UUCPNet, and CSNET.[23] These were limited in their connection to the global networks, due to the cost of making individual international UUCP dial-up or X.25 connections. In 1989, Australian universities joined the push towards using IP protocols to unify their networking infrastructures. AARNet was formed in 1989 by the Australian Vice-Chancellors' Committee and provided a dedicated IP based network for Australia.
New Zealand adopted the UK's Coloured Book protocols as an interim standard and established its first international IP connection to the U.S. in 1989.[152]
While developed countries with technological infrastructures were joining the Internet, developing countries began to experience a digital divide separating them from the Internet. On an essentially continental basis, they built organizations for Internet resource administration and for sharing operational experience, which enabled more transmission facilities to be put into place.
Africa
At the beginning of the 1990s, African countries relied upon X.25 IPSS and 2400 baud modem UUCP links for international and internetwork computer communications.[156]
In August 1995, InfoMail Uganda, Ltd., a privately held firm in Kampala (now known as InfoCom), and NSN Network Services of Avon, Colorado (sold in 1997 and now known as Clear Channel Satellite), established Africa's first native TCP/IP high-speed satellite Internet services. The data connection was originally carried by a C-Band RSCC Russian satellite which connected InfoMail's Kampala offices directly to NSN's MAE-West point of presence, using a private network from NSN's leased ground station in New Jersey. InfoCom's first satellite connection was just 64 kbit/s, serving a Sun host computer and twelve US Robotics dial-up modems.
Africa is building an Internet infrastructure. AFRINIC, headquartered in Mauritius, manages IP address allocation for the continent. As with other Internet regions, there is an operational forum, the Internet Community of Operational Networking Specialists.[157]
There are many programs to provide high-performance transmission plant, and the western and southern coasts have undersea optical cable. High-speed cables join North Africa and the Horn of Africa to intercontinental cable systems. Undersea cable development is slower for East Africa; the original joint effort between the New Partnership for Africa's Development (NEPAD) and the East Africa Submarine System (Eassy) has broken off and may become two efforts.[158]
Asia and Oceania
The Asia Pacific Network Information Centre (APNIC), headquartered in Australia, manages IP address allocation for the continent. APNIC sponsors an operational forum, the Asia-Pacific Regional Internet Conference on Operational Technologies (APRICOT).[159]
In South Korea, VDSL, a last mile technology developed in the 1990s by NextLevel Communications, connected corporate and consumer copper-based telephone lines to the Internet.[160]
The People's Republic of China established its first TCP/IP college network, Tsinghua University's TUNET, in 1991. The PRC went on to make its first global Internet connection in 1994, between the Beijing Electro-Spectrometer Collaboration and Stanford University's Linear Accelerator Center. However, China went on to implement its own digital divide by implementing a country-wide content filter.[161]
Japan hosted the annual meeting of the Internet Society, INET'92, in Kobe. Singapore developed TECHNET in 1990, and Thailand gained a global Internet connection between Chulalongkorn University and UUNET in 1992.[162]
Initially, as with its predecessor networks, the system that would evolve into the Internet was primarily for government and government body use. Although commercial use was forbidden, the exact definition of commercial use was unclear and subjective. UUCPNet and the X.25 IPSS had no such restrictions, which would eventually see the official barring of UUCPNet use of ARPANET and NSFNET connections.
As a result, during the late 1980s, the first Internet service provider (ISP) companies were formed. Companies like PSINet, UUNET, Netcom, and Portal Software were formed to provide service to the regional research networks and provide alternate network access, UUCP-based email and Usenet News to the public. In 1989, MCI Mail became the first commercial email provider to get an experimental gateway to the Internet.[164] The first commercial dialup ISP in the United States was The World, which opened in 1989.[165]
In 1992, the U.S. Congress passed the Scientific and Advanced-Technology Act, 42 U.S.C. § 1862(g), which allowed NSF to support access by the research and education communities to computer networks which were not used exclusively for research and education purposes, thus permitting NSFNET to interconnect with commercial networks.[166][167] This caused controversy within the research and education community, who were concerned commercial use of the network might lead to an Internet that was less responsive to their needs, and within the community of commercial network providers, who felt that government subsidies were giving an unfair advantage to some organizations.[168]
By 1990, ARPANET's goals had been fulfilled, new networking technologies exceeded the original scope, and the project came to a close. New network service providers, including PSINet, Alternet, CERFNet, ANS CO+RE, and many others, were offering network access to commercial customers. NSFNET was no longer the de facto backbone and exchange point of the Internet. The Commercial Internet eXchange (CIX), Metropolitan Area Exchanges (MAEs), and later Network Access Points (NAPs) were becoming the primary interconnections between many networks. The final restrictions on carrying commercial traffic ended on April 30, 1995, when the National Science Foundation ended its sponsorship of the NSFNET Backbone Service.[169][170] NSF provided initial support for the NAPs and interim support to help the regional research and education networks transition to commercial ISPs. NSF also sponsored the very high speed Backbone Network Service (vBNS), which continued to provide support for the supercomputing centers and research and education in the United States.[171]
An event held on 11 January 1994, The Superhighway Summit at UCLA's Royce Hall, was the "first public conference bringing together all of the major industry, government and academic leaders in the field [and] also began the national dialogue about the Information Superhighway and its implications".[172]
Internet use in wider society
The invention of the World Wide Web by Tim Berners-Lee at CERN, as an application on the Internet,[173] brought many social and commercial uses to what was, at the time, a network of networks for academic and research institutions.[174][175] The Web opened to the public in 1991 and began to enter general use in 1993–94, when websites for everyday use started to become available.[176]
During the first decade or so of the public Internet, the immense changes it would eventually enable in the 2000s were still nascent. To put this period in context: mobile cellular devices ("smartphones" and other cellular devices), which today provide near-universal access, were used for business and were not a routine household item owned by parents and children worldwide. Social media in the modern sense had yet to come into existence, laptops were bulky, and most households did not have computers. Data rates were slow and most people lacked means to video or digitize video; media storage was transitioning slowly from analog tape to digital optical discs (DVD, and to an extent still floppy disc to CD). Enabling technologies used from the early 2000s such as PHP, modern JavaScript and Java, technologies such as AJAX, HTML 4 (and its emphasis on CSS), and various software frameworks, which simplified and sped up web development, largely awaited invention and their eventual widespread adoption.
The Internet was widely used for mailing lists, emails, creating and distributing maps with tools like MapQuest, e-commerce and early popular online shopping (Amazon and eBay, for example), online forums and bulletin boards, and personal websites and blogs, and use was growing rapidly, but by more modern standards the systems used were static and lacked widespread social engagement. It took a number of events in the early 2000s for the Internet to change from a communications technology and gradually develop into a key part of global society's infrastructure.
During the period 1997 to 2001, the first speculative investment bubble related to the Internet took place, in which "dot-com" companies (referring to the ".com" top level domain used by businesses) were propelled to exceedingly high valuations as investors rapidly stoked stock values, followed by a market crash: the first dot-com bubble. However, this only temporarily slowed enthusiasm and growth, which quickly recovered and continued.
In the final stage of IPv4 address exhaustion, the last IPv4 address block was assigned in January 2011 at the level of the regional Internet registries.[182] IPv4 uses 32-bit addresses, which limits the address space to 2^32 addresses, i.e. 4,294,967,296 addresses.[110] IPv4 is in the process of replacement by IPv6, its successor, which uses 128-bit addresses, providing 2^128 addresses, i.e. 340,282,366,920,938,463,463,374,607,431,768,211,456,[183] a vastly increased address space. The shift to IPv6 is expected to take a long time to complete.[182]
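Both address-space figures follow directly from the address widths, as a quick check confirms:

```python
# Address space sizes follow directly from the address widths.
print(2 ** 32)   # IPv4, 32-bit addresses:  4294967296
print(2 ** 128)  # IPv6, 128-bit addresses: 340282366920938463463374607431768211456
```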
2004–present: Web 2.0, global ubiquity, social media
The rapid technical advances that would propel the Internet into its place as a social system, and completely transform the way humans interact with each other, took place during a relatively short period from around 2005 to 2010, coinciding with the point, somewhere in the late 2000s, at which the number of IoT devices surpassed the number of humans alive. They included:
The call to "Web 2.0" in 2004 (first suggested in 1999).
Accelerating adoption and commoditization among households of the necessary hardware (such as computers), and growing familiarity with it.
Accelerating storage technology and data access speeds – hard drives emerged, took over from far smaller, slower floppy discs, and grew from megabytes to gigabytes (and by around 2010, terabytes); RAM grew from hundreds of kilobytes to gigabytes as typical amounts on a system; and Ethernet, the enabling technology for TCP/IP, moved from common speeds of kilobits to tens of megabits per second, to gigabits per second.
High speed Internet and wider coverage of data connections, at lower prices, allowing larger traffic rates, more reliable and simpler traffic, and traffic from more locations.
The public's accelerating perception of the potential of computers to create new means and approaches to communication, the emergence of social media and websites such as Twitter and Facebook to their later prominence, and global collaborations such as Wikipedia (which existed before but gained prominence as a result).
The mobile device revolution, particularly with smartphones and tablet computers becoming widespread, which began to provide easy access to the Internet to much of human society of all ages, in their daily lives, and allowed them to share, discuss, and continually update, inquire, and respond.
Non-volatile RAM rapidly grew in size and reliability, and decreased in price, becoming a commodity capable of enabling high levels of computing activity on these small handheld devices as well as solid-state drives (SSD).
An emphasis on power-efficient processor and device design, rather than purely high processing power; one of the beneficiaries of this was Arm, a British company which had focused since the 1980s on powerful but low-cost, simple microprocessors. The ARM architecture family rapidly gained dominance in the market for mobile and embedded devices.
The Web we know now, which loads into a browser window in essentially static screenfuls, is only an embryo of the Web to come. The first glimmerings of Web 2.0 are beginning to appear, and we are just starting to see how that embryo might develop. The Web will be understood not as screenfuls of text and graphics but as a transport mechanism, the ether through which interactivity happens. It will [...] appear on your computer screen, [...] on your TV set [...] your car dashboard [...] your cell phone [...] hand-held game machines [...] maybe even your microwave oven.
The term resurfaced during 2002–2004,[188][189][190][191] and gained prominence in late 2004 following presentations by Tim O'Reilly and Dale Dougherty at the first Web 2.0 Conference. In their opening remarks, John Battelle and Tim O'Reilly outlined their definition of the "Web as Platform", where software applications are built upon the Web as opposed to upon the desktop. The unique aspect of this migration, they argued, is that "customers are building your business for you".[192][non-primary source needed] They argued that the activities of users generating content (in the form of ideas, text, videos, or pictures) could be "harnessed" to create value.
"Web 2.0" does not refer to an update to any technical specification, but rather to cumulative changes in the way Web pages are made and used. "Web 2.0" describes an approach, in which sites focus substantially upon allowing users to interact and collaborate with each other in asocial media dialogue as creators ofuser-generated content in avirtual community, in contrast to Web sites where people are limited to the passive viewing ofcontent. Examples of Web 2.0 includesocial networking services,blogs,wikis,folksonomies,video sharing sites,hosted services,Web applications, andmashups.[193]Terry Flew, in his 3rd edition ofNew Media, described what he believed to characterize the differences between Web 1.0 and Web 2.0:
[The] move from personal websites to blogs and blog site aggregation, from publishing to participation, from web content as the outcome of large up-front investment to an ongoing and interactive process, and from content management systems to links based on tagging (folksonomy).[194]
This era saw several household names gain prominence through their community-oriented operation – YouTube, Twitter, Facebook, Reddit and Wikipedia being some examples.
Telephone networks convert to VoIP
Telephone systems have been slowly adopting voice over IP since 2003. Early experiments proved that voice can be converted to digital packets and sent over the Internet. The packets are collected and converted back to analog voice.[195][196][197]
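The following toy sketch in Python illustrates the principle only; it is not a real VoIP stack (the sample rate and packet duration are common narrowband telephony values, but everything else is assumed). Samples stand in for digitized voice, they are grouped into sequence-numbered packets, and the receiver reorders and concatenates them for playback.

```python
# Toy model of the VoIP idea (not a real telephony stack; parameters assumed).
# Voice is sampled into digital values, grouped into packets, sent independently,
# and reassembled into a continuous sample stream at the far end.

import math

SAMPLE_RATE = 8000          # 8 kHz, typical for narrowband telephony
SAMPLES_PER_PACKET = 160    # 20 ms of audio per packet at 8 kHz

def sample_tone(seconds: float, freq: float = 440.0) -> list[int]:
    """Stand-in for an analog-to-digital converter: a sampled sine wave."""
    n = int(seconds * SAMPLE_RATE)
    return [round(32767 * math.sin(2 * math.pi * freq * i / SAMPLE_RATE)) for i in range(n)]

def packetize(samples: list[int]) -> list[tuple[int, list[int]]]:
    """Group samples into (sequence_number, payload) packets."""
    return [(seq, samples[i:i + SAMPLES_PER_PACKET])
            for seq, i in enumerate(range(0, len(samples), SAMPLES_PER_PACKET))]

def playback_stream(packets: list[tuple[int, list[int]]]) -> list[int]:
    """Receiver side: reorder by sequence number and concatenate for playback."""
    ordered = sorted(packets, key=lambda p: p[0])
    return [s for _, payload in ordered for s in payload]

voice = sample_tone(0.1)    # 100 ms of "speech"
packets = packetize(voice)
assert playback_stream(packets) == voice
```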
The process of change that generally coincided with Web 2.0 was itself greatly accelerated and transformed only a short time later by the increasing growth in mobile devices. This mobile revolution meant that computers in the form of smartphones became something many people used, took with them everywhere, communicated with, used for photographs and videos they instantly shared or to shop or seek information "on the move" – and used socially, as opposed to items on a desk at home or just used for work.[citation needed]
Location-based services, services using location and other sensor information, and crowdsourcing (frequently but not always location based), became common, with posts tagged by location, or websites and services becoming location aware. Mobile-targeted websites (such as "m.example.com") became common, designed especially for the new devices used. Netbooks, ultrabooks, widespread 4G and Wi-Fi, and mobile chips capable of running at nearly the power of desktops from not many years before on far lower power usage, became enablers of this stage of Internet development, and the term "App" (short for "Application program") became popularized, as did the "App store".
This "mobile revolution" has allowed for people to have a nearly unlimited amount of information at all times. With the ability to access the internet from cell phones came a change in the way media was consumed. Media consumption statistics show that over half of media consumption between those aged 18 and 34 were using a smartphone.[198]
The first Internet link into low Earth orbit was established on January 22, 2010, when astronaut T. J. Creamer posted the first unassisted update to his Twitter account from the International Space Station, marking the extension of the Internet into space.[199] (Astronauts at the ISS had used email and Twitter before, but these messages had been relayed to the ground through a NASA data link before being posted by a human proxy.) This personal Web access, which NASA calls the Crew Support LAN, uses the space station's high-speed Ku band microwave link. To surf the Web, astronauts can use a station laptop computer to control a desktop computer on Earth, and they can talk to their families and friends on Earth using Voice over IP equipment.[200]
Communication with spacecraft beyond Earth orbit has traditionally been over point-to-point links through the Deep Space Network. Each such data link must be manually scheduled and configured. In the late 1990s, NASA and Google began working on a new network protocol, delay-tolerant networking (DTN), which automates this process, allows networking of spaceborne transmission nodes, and takes into account that spacecraft can temporarily lose contact because they move behind the Moon or planets, or because space weather disrupts the connection. Under such conditions, DTN retransmits data packets instead of dropping them, as standard TCP/IP does. NASA conducted the first field test of what it calls the "deep space internet" in November 2008.[201] Testing of DTN-based communications between the International Space Station and Earth (now termed disruption-tolerant networking) has been ongoing since March 2009, and was scheduled to continue until March 2014.[202][needs update]
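A toy model can illustrate this store-and-retry behavior (a simplification for illustration, not the actual DTN bundle protocol): messages wait in a queue while the link is down and are forwarded once contact resumes, so nothing is dropped.

```python
# Toy model of the delay-tolerant idea (not the real Bundle Protocol): a node
# stores messages and forwards them when the link returns, instead of dropping them.

from collections import deque

def dtn_forward(bundles, link_up_schedule):
    """Hold bundles until the link is available; nothing is ever discarded."""
    pending = deque(bundles)
    delivered = []
    for tick, link_up in enumerate(link_up_schedule):
        if link_up and pending:
            delivered.append((tick, pending.popleft()))  # transmit one bundle per tick
    return delivered, list(pending)

# Link is down for three ticks (e.g., spacecraft behind the Moon), then returns.
delivered, still_queued = dtn_forward(["telemetry-1", "telemetry-2"],
                                      [False, False, False, True, True])
print(delivered)     # -> [(3, 'telemetry-1'), (4, 'telemetry-2')]
print(still_queued)  # -> []
```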
This network technology is supposed to ultimately enable missions that involve multiple spacecraft, where reliable inter-vessel communication might take precedence over vessel-to-Earth downlinks. According to a February 2011 statement by Google's Vint Cerf, the so-called "bundle protocols" have been uploaded to NASA's EPOXI mission spacecraft (which is in orbit around the Sun), and communication with Earth has been tested at a distance of approximately 80 light seconds.[203]
The IANA function was originally performed by the USC Information Sciences Institute (ISI), and it delegated portions of this responsibility with respect to numeric network and autonomous system identifiers to the Network Information Center (NIC) at Stanford Research Institute (SRI International) in Menlo Park, California. ISI's Jonathan Postel managed the IANA, served as RFC Editor and performed other key roles until his death in 1998.[206]
As the early ARPANET grew, hosts were referred to by names, and a HOSTS.TXT file would be distributed from SRI International to each host on the network. As the network grew, this became cumbersome. A technical solution came in the form of the Domain Name System, created by ISI's Paul Mockapetris in 1983.[207] The Defense Data Network—Network Information Center (DDN-NIC) at SRI handled all registration services, including the top-level domains (TLDs) of .mil, .gov, .edu, .org, .net, .com and .us, root nameserver administration and Internet number assignments under a United States Department of Defense contract.[205] In 1991, the Defense Information Systems Agency (DISA) awarded the administration and maintenance of DDN-NIC (managed by SRI up until this point) to Government Systems, Inc., who subcontracted it to the small private-sector Network Solutions, Inc.[208][209]
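The scheme the DNS replaced is simple to picture. The sketch below resolves names from HOSTS.TXT-style lines (the entries are hypothetical, not historical records); because every host needed its own up-to-date copy of this single flat table, every change had to be redistributed to the whole network, which is exactly what stopped scaling.

```python
# Name resolution before the DNS (illustrative): every host held a copy of one
# flat HOSTS.TXT-style table, so every change had to reach every host.
HOSTS_TXT = """
# address   hostname        (entries are hypothetical)
10.0.0.5    UCLA-TEST
10.0.0.9    SRI-TEST
"""

def resolve(hostname: str) -> str | None:
    for line in HOSTS_TXT.splitlines():
        line = line.split("#")[0].strip()   # drop comments and blank lines
        if line:
            address, name = line.split()
            if name == hostname.upper():
                return address
    return None  # unknown host: no hierarchy, no delegation, no fallback

print(resolve("sri-test"))  # -> 10.0.0.9
```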
The increasing cultural diversity of the Internet also posed administrative challenges for centralized management of the IP addresses. In October 1992, the Internet Engineering Task Force (IETF) published RFC 1366,[210] which described the "growth of the Internet and its increasing globalization" and set out the basis for an evolution of the IP registry process, based on a regionally distributed registry model. This document stressed the need for a single Internet number registry to exist in each geographical region of the world (which would be of "continental dimensions"). Registries would be "unbiased and widely recognized by network providers and subscribers" within their region. The RIPE Network Coordination Centre (RIPE NCC) was established as the first RIR in May 1992. The second RIR, the Asia Pacific Network Information Centre (APNIC), was established in Tokyo in 1993, as a pilot project of the Asia Pacific Networking Group.[211]
Since at this point in history most of the growth on the Internet was coming from non-military sources, it was decided that the Department of Defense would no longer fund registration services outside of the .mil TLD. In 1993 the U.S. National Science Foundation, after a competitive bidding process in 1992, created the InterNIC to manage the allocations of addresses and management of the address databases, and awarded the contract to three organizations. Registration Services would be provided by Network Solutions; Directory and Database Services would be provided by AT&T; and Information Services would be provided by General Atomics.[212]
Over time, after consultation with the IANA, the IETF, RIPE NCC, APNIC, and the Federal Networking Council (FNC), the decision was made to separate the management of domain names from the management of IP numbers.[211] Following the examples of RIPE NCC and APNIC, it was recommended that management of IP address space then administered by the InterNIC should be under the control of those that use it, specifically the ISPs, end-user organizations, corporate entities, universities, and individuals. As a result, the American Registry for Internet Numbers (ARIN) was established in December 1997 as an independent, not-for-profit corporation, by direction of the National Science Foundation, and became the third Regional Internet Registry.[213]
In 1998, both the IANA and the remaining DNS-related InterNIC functions were reorganized under the control of ICANN, a California non-profit corporation contracted by the United States Department of Commerce to manage a number of Internet-related tasks. As these tasks involved technical coordination for two principal Internet name spaces (DNS names and IP addresses) created by the IETF, ICANN also signed a memorandum of understanding with the IAB to define the technical work to be carried out by the Internet Assigned Numbers Authority.[214] The management of Internet address space remained with the regional Internet registries, which collectively were defined as a supporting organization within the ICANN structure.[215] ICANN provides central coordination for the DNS system, including policy coordination for the split registry/registrar system, with competition among registry service providers to serve each top-level domain and multiple competing registrars offering DNS services to end-users.
The IETF is a loosely self-organized group of international volunteers who contribute to the engineering and evolution of Internet technologies. It is the principal body engaged in the development of new Internet standard specifications. Much of the work of the IETF is organized into Working Groups. Standardization efforts of the Working Groups are often adopted by the Internet community, but the IETF does not control or patrol the Internet.[216][217]
The IETF grew out of quarterly meetings with U.S. government-funded researchers, starting in January 1986. Non-government representatives were invited by the fourth IETF meeting, in October 1986. The concept of Working Groups was introduced at the fifth meeting, in February 1987. The seventh meeting, in July 1987, was the first meeting with more than one hundred attendees. In 1992, the Internet Society, a professional membership society, was formed, and the IETF began to operate under it as an independent international standards body. The first IETF meeting outside of the United States was held in Amsterdam, the Netherlands, in July 1993. Today, the IETF meets three times per year, and attendance has been as high as ca. 2,000 participants. Typically, one in three IETF meetings is held in Europe or Asia. The number of non-US attendees is typically ca. 50%, even at meetings held in the United States.[216]
The IETF is not a legal entity, has no governing board, no members, and no dues. The closest status resembling membership is being on an IETF or Working Group mailing list. IETF volunteers come from all over the world and from many different parts of the Internet community. The IETF works closely with and under the supervision of the Internet Engineering Steering Group (IESG)[218] and the Internet Architecture Board (IAB).[219] The Internet Research Task Force (IRTF) and the Internet Research Steering Group (IRSG), peer activities to the IETF and IESG under the general supervision of the IAB, focus on longer-term research issues.[216][220]
RFCs
RFCs are the main documentation for the work of the IAB, IESG, IETF, and IRTF.[221] Originally intended as requests for comments, RFC 1, "Host Software", was written by Steve Crocker at UCLA in April 1969. These technical memos documented aspects of ARPANET development. They were edited by Jon Postel, the first RFC Editor.[216][222]
RFCs cover a wide range of information, from proposed standards, draft standards, full standards, and best practices to experimental protocols, history, and other informational topics.[223] RFCs can be written by individuals or informal groups of individuals, but many are the product of a more formal Working Group. Drafts are submitted to the IESG either by individuals or by the Working Group Chair. An RFC Editor, appointed by the IAB, separate from IANA, and working in conjunction with the IESG, receives drafts from the IESG and edits, formats, and publishes them. Once an RFC is published, it is never revised. If the standard it describes changes or its information becomes obsolete, the revised standard or updated information is re-published as a new RFC that "obsoletes" the original.[216][222]
The Internet Society
The Internet Society (ISOC) is an international, nonprofit organization founded in 1992 "to assure the open development, evolution and use of the Internet for the benefit of all people throughout the world". With offices near Washington, DC, US, and in Geneva, Switzerland, ISOC has a membership base comprising more than 80 organizational and more than 50,000 individual members. Members also form "chapters" based on either common geographical location or special interests. There are currently more than 90 chapters around the world.[224]
Globalization and Internet governance in the 21st century
Since the 1990s, the Internet's governance and organization has been of global importance to governments, commerce, civil society, and individuals. The organizations which held control of certain technical aspects of the Internet were the successors of the old ARPANET oversight and the current decision-makers in the day-to-day technical aspects of the network. While recognized as the administrators of certain aspects of the Internet, their roles and their decision-making authority are limited and subject to increasing international scrutiny and increasing objections. These objections led ICANN first to end its relationship with the University of Southern California in 2000,[226] and then, in September 2009, to gain autonomy from the US government through the ending of its longstanding agreements, although some contractual obligations with the U.S. Department of Commerce continued.[227][228][229] Finally, on October 1, 2016, ICANN ended its contract with the United States Department of Commerce National Telecommunications and Information Administration (NTIA), allowing oversight to pass to the global Internet community.[230]
The IETF, with financial and organizational support from the Internet Society, continues to serve as the Internet's ad-hoc standards body and issues Requests for Comments.
In November 2005, the World Summit on the Information Society, held in Tunis, called for an Internet Governance Forum (IGF) to be convened by the United Nations Secretary-General. The IGF opened an ongoing, non-binding conversation among stakeholders representing governments, the private sector, civil society, and the technical and academic communities about the future of Internet governance. The first IGF meeting was held in October/November 2006, with follow-up meetings held annually thereafter.[231] Since WSIS, the term "Internet governance" has been broadened beyond narrow technical concerns to include a wider range of Internet-related policy issues.[232][233]
Tim Berners-Lee, inventor of the web, had become concerned about threats to the web's future, and in November 2009 at the IGF in Washington, DC, he launched the World Wide Web Foundation (WWWF) to campaign to make the web a safe and empowering tool for the good of humanity, with access for all.[234][235] In November 2019 at the IGF in Berlin, Berners-Lee and the WWWF went on to launch the Contract for the Web, a campaign initiative to persuade governments, companies and citizens to commit to nine principles to stop "misuse", with the warning "If we don't act now - and act together - to prevent the web being misused by those who want to exploit, divide and undermine, we are at risk of squandering" (its potential for good).[236]
Politicization of the Internet
Due to its prominence and immediacy as an effective means of mass communication, the Internet has also become more politicized as it has grown. This has led, in turn, to discourses and activities that would once have taken place in other ways migrating to being mediated by the Internet. Examples include:
Recruitment of followers, and the "coming together" of members of the public, for ideas, products, and causes;
Providing and widely distributing and sharing information that might be deemed sensitive or that relates to whistleblowing (and efforts by specific countries to prevent this by censorship);
On March 12, 2015, the FCC released the specific details of the net neutrality rules.[262][263][264] On April 13, 2015, the FCC published the final rule on its new "Net Neutrality" regulations.[265][266]
On December 14, 2017, the FCC repealed its March 12, 2015 decision by a 3–2 vote regarding net neutrality rules.[267]
The ARPANET computer network made a large contribution to the evolution of electronic mail. Mail was transferred experimentally between systems on the ARPANET shortly after its creation.[269] In 1971, Ray Tomlinson created what was to become the standard Internet electronic mail addressing format, using the @ sign to separate mailbox names from host names.[270]
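The mailbox@host convention survives essentially unchanged in modern email addresses. As a minimal illustrative sketch (not taken from any source cited here, and greatly simplified relative to the full RFC 5322 address grammar), an address can be split at its final @ sign:

```python
# Minimal sketch: split a mailbox@host address at the final '@' sign,
# the convention Tomlinson introduced. Real address syntax (RFC 5322)
# is far more permissive; this only illustrates the basic idea.
def split_address(address: str) -> tuple[str, str]:
    mailbox, sep, host = address.rpartition("@")
    if not sep or not mailbox or not host:
        raise ValueError(f"not a mailbox@host address: {address!r}")
    return mailbox, host

print(split_address("postel@isi.edu"))  # -> ('postel', 'isi.edu')
```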
A number of protocols were developed to deliver messages among groups of time-sharing computers over alternative transmission systems, such as UUCP and IBM's VNET email system. Email could be passed this way between a number of networks, including ARPANET, BITNET and NSFNET, as well as to hosts connected directly to other sites via UUCP. See the history of the SMTP protocol.
In addition, UUCP allowed the publication of text files that could be read by many others. The News software developed by Steve Daniel and Tom Truscott in 1979 was used to distribute news and bulletin board-like messages. This quickly grew into discussion groups, known as newsgroups, on a wide range of topics. On ARPANET and NSFNET, similar discussion groups formed via mailing lists, discussing both technical issues and more culturally focused topics (such as science fiction, discussed on the sflovers mailing list).
During the early years of the Internet, email and similar mechanisms were also fundamental in allowing people to access resources that were otherwise unavailable for lack of online connectivity. UUCP was often used to distribute files using the 'alt.binaries' groups. Also, FTP e-mail gateways allowed people who lived outside the US and Europe to download files using FTP commands written inside email messages. The file was encoded, broken into pieces and sent by email; the receiver had to reassemble and decode it later, and it was the only way for people living overseas to download items such as the early Linux versions using the slow dial-up connections available at the time. After the popularization of the Web and the HTTP protocol, such tools were slowly abandoned.
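The mechanics of these gateways can be illustrated with a short sketch. This is an assumption-laden reconstruction, not any gateway's actual code: Base64 stands in here for the uuencode-style text encodings of the era, and the 1,000-character chunk size is arbitrary.

```python
import base64

# Illustrative sketch of the FTP-by-email pattern described above
# (not actual gateway code): encode a binary file as plain text,
# split it into mail-sized pieces, then reassemble and decode.
CHUNK_SIZE = 1000  # characters per message body; arbitrary for this sketch

def split_for_mail(data: bytes) -> list[str]:
    text = base64.b64encode(data).decode("ascii")  # binary -> mailable text
    return [text[i:i + CHUNK_SIZE] for i in range(0, len(text), CHUNK_SIZE)]

def reassemble(parts: list[str]) -> bytes:
    return base64.b64decode("".join(parts))  # concatenate, then decode

payload = b"\x00\x01pretend this is a kernel image..."
assert reassemble(split_for_mail(payload)) == payload
```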
In 1999, Napster became the first peer-to-peer file sharing system.[272] Napster used a central server for indexing and peer discovery, but the storage and transfer of files was decentralized. A variety of peer-to-peer file sharing programs and services with different levels of decentralization and anonymity followed, including Gnutella, eDonkey2000, and Freenet in 2000, FastTrack, Kazaa, LimeWire, and BitTorrent in 2001, and Poisoned in 2003.[273]
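Napster's hybrid design can be sketched in a few lines. The following toy model is an illustration only, not Napster's actual protocol: a central index records which peers hold which files, while the download itself is negotiated directly between peers.

```python
from collections import defaultdict

# Toy model of a hybrid peer-to-peer index (an illustrative assumption,
# not Napster's real protocol): the server maps file names to peers;
# file bytes never pass through the server itself.
class CentralIndex:
    def __init__(self) -> None:
        self._holders: dict[str, set[str]] = defaultdict(set)

    def register(self, peer: str, filenames: list[str]) -> None:
        for name in filenames:
            self._holders[name].add(peer)  # record who has what

    def search(self, name: str) -> set[str]:
        return set(self._holders[name])  # peers to contact directly

index = CentralIndex()
index.register("peer-a:6699", ["song.mp3"])
index.register("peer-b:6699", ["song.mp3"])
print(index.search("song.mp3"))  # transfer then happens peer-to-peer
```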
All of these tools are general purpose and can be used to share a wide variety of content, but the sharing of music files, software, and later movies and videos became major uses.[274] While some of this sharing is legal, large portions are not. Lawsuits and other legal actions caused Napster in 2001, eDonkey2000 in 2005, Kazaa in 2006, and LimeWire in 2010 to shut down or refocus their efforts.[275][276] The Pirate Bay, founded in Sweden in 2003, continues despite a trial and appeal in 2009 and 2010 that resulted in jail terms and large fines for several of its founders.[277] File sharing remains contentious and controversial, with charges of theft of intellectual property on the one hand and charges of censorship on the other.[278][279]
File hosting allowed people to expand their computers' storage and "host" their files on a server. Most file hosting services offer free storage, as well as larger storage amounts for a fee. These services have greatly expanded the Internet for business and personal use.
Google Drive, launched on April 24, 2012, has become the most popular file hosting service. Google Drive allows users to store, edit, and share files with other users. Not only does this application allow for file editing, hosting, and sharing; it also provides access to Google's own free office programs, such as Google Docs, Google Slides, and Google Sheets. This application has served as a useful tool for university professors and students, as well as others in need of cloud storage.[280][281]
Dropbox, released in June 2007, is a similar file hosting service that allows users to keep all of their files in a folder on their computer, which is synced with Dropbox's servers. This differs from Google Drive in that it is not web-browser based. Dropbox now works to keep workers and files in sync and efficient.[282]
Mega, with over 200 million users, is an encrypted storage and communication system that offers users free and paid storage, with an emphasis on privacy.[283] As three of the largest file hosting services, Google Drive, Dropbox, and Mega represent the core ideas and values of these services.
The earliest form of online piracy began with a P2P (peer-to-peer) music sharing service named Napster, launched in 1999. Services like LimeWire, The Pirate Bay, and BitTorrent allowed anyone to engage in online piracy, sending ripples through the media industry. With online piracy came a change in the media industry as a whole.[284]
Total global mobile data traffic reached 588 exabytes during 2020,[285] a 150-fold increase from 3.86 exabytes/year in 2010.[286] Most recently, smartphones accounted for 95% of this mobile data traffic, with video accounting for 66% by type of data.[285] Mobile traffic travels by radio frequency to the closest cell phone tower and its base station, where the radio signal is converted into an optical signal that is transmitted over high-capacity optical networking systems that convey the information to data centers. The optical backbones enable much of this traffic as well as a host of emerging mobile services, including the Internet of things, 3-D virtual reality, gaming and autonomous vehicles. The most popular mobile phone application is texting, with 2.1 trillion messages logged in 2020.[287] The texting phenomenon began on December 3, 1992, when Neil Papworth sent the first text message of "Merry Christmas" over a commercial cell phone network to the CEO of Vodafone.[288]
The first mobile phone with Internet connectivity was the Nokia 9000 Communicator, launched in Finland in 1996. The viability of Internet services access on mobile phones was limited until prices came down from that model and network providers started to develop systems and services conveniently accessible on phones. NTT DoCoMo in Japan launched the first mobile Internet service, i-mode, in 1999, and this is considered the birth of mobile phone Internet services. In 2001, the mobile phone email system by Research in Motion (now BlackBerry Limited) for their BlackBerry product was launched in America. To make efficient use of the small screen, tiny keypad, and one-handed operation typical of mobile phones, a specific document and networking model was created for mobile devices, the Wireless Application Protocol (WAP). Most mobile device Internet services operate using WAP. The growth of mobile phone services was initially a primarily Asian phenomenon, with Japan, South Korea and Taiwan all soon finding the majority of their Internet users accessing resources by phone rather than by PC.[289] Developing countries followed, with India, South Africa, Kenya, the Philippines, and Pakistan all reporting that the majority of their domestic users accessed the Internet from a mobile phone rather than a PC. The European and North American use of the Internet was influenced by a large installed base of personal computers, and the growth of mobile phone Internet access was more gradual, but had reached national penetration levels of 20–30% in most Western countries.[290] The cross-over occurred in 2008, when more Internet access devices were mobile phones than personal computers. In many parts of the developing world, the ratio is as much as 10 mobile phone users to one PC user.[291]
Growth in demand
Global Internet traffic continues to grow at a rapid rate, rising 23% from 2020 to 2021,[292] when the number of active Internet users reached 4.66 billion people, representing half of the global population. Further demand for data, and the capacity to satisfy this demand, were forecast to reach 717 terabits per second in 2021.[293] This capacity stems from the optical amplification and WDM systems that are the common basis of virtually every metro, regional, national, international and submarine telecommunications network.[294] These optical networking systems have been installed throughout the 5 billion kilometers of fiber optic lines deployed around the world.[295] Continued growth in traffic is expected for the foreseeable future, from a combination of new users, increased mobile phone adoption, machine-to-machine connections, connected homes, 5G devices and the burgeoning requirement for cloud and Internet services such as Amazon, Facebook, Apple Music and YouTube.
There are nearly insurmountable problems in supplying a historiography of the Internet's development. The process of digitization represents a twofold challenge both for historiography in general and, in particular, for historical communication research.[296] A sense of the difficulty in documenting early developments that led to the Internet can be gathered from the following quote:
"The Arpanet period is somewhat well documented because the corporation in charge –BBN – left a physical record. Moving into theNSFNET era, it became an extraordinarily decentralized process. The record exists in people's basements, in closets. ... So much of what happened was done verbally and on the basis of individual trust."
Notable works on the subject were published by Katie Hafner and Matthew Lyon, Where Wizards Stay Up Late: The Origins of the Internet (1996), Roy Rosenzweig, Wizards, Bureaucrats, Warriors, and Hackers: Writing the History of the Internet (1998), and Janet Abbate, Inventing the Internet (2000).[298]
Most scholarship and literature on the Internet lists ARPANET as the prior network that was iterated on and studied to create it,[299] although other early computer networks and experiments existed alongside or before ARPANET.[300]
These histories of the Internet have since been criticized as teleologies or Whig history; that is, they take the present to be the end point toward which history has been unfolding based on a single cause:
In the case of Internet history, the epoch-making event is usually said to be the demonstration of the 4-node ARPANET network in 1969. From that single happening the global Internet developed.
In addition to these characteristics, historians have cited methodological problems arising in their work:
"Internet history" ... tends to be too close to its sources. Many Internet pioneers are alive, active, and eager to shape the histories that describe their accomplishments. Many museums and historians are equally eager to interview the pioneers and to publicize their stories.
^Abbate 1999, p. 3 "The manager of the ARPANET project, Lawrence Roberts, assembled a large team of computer scientists ... and he drew on the ideas of network experimenters in the United States and the United Kingdom. Cerf and Kahn also enlisted the help of computer scientists from England, France and the United States"
^ab by Vinton Cerf, as told to Bernard Aboba (1993). "How the Internet Came to Be". Archived from the original on September 26, 2017. Retrieved September 25, 2017. "We began doing concurrent implementations at Stanford, BBN, and University College London. So effort at developing the Internet protocols was international from the beginning."
^ "The Untold Internet". Internet Hall of Fame. October 19, 2015. Retrieved April 3, 2020. "many of the milestones that led to the development of the modern Internet are already familiar to many of us: the genesis of the ARPANET, the implementation of the standard network protocol TCP/IP, the growth of LANs (Large Area Networks), the invention of DNS (the Domain Name System), and the adoption of American legislation that funded U.S. Internet expansion—which helped fuel global network access—to name just a few."
^ "Study into UK IPv4 and IPv6 allocations" (PDF). Reid Technical Facilities Management LLP. 2014. "As the network continued to grow, the model of central co-ordination by a contractor funded by the US government became unsustainable. Organisations were using IP-based networking even if they were not directly connected to the ARPAnet. They needed to get globally unique IP addresses. The nature of the ARPAnet was also changing as it was no longer limited to organisations working on ARPA-funded contracts. The US National Science Foundation set up a national IP-based backbone network, NSFnet, so that its grant-holders could be interconnected to supercomputer centres, universities and various national/regional academic/research networks, including ARPAnet. That resulting network of networks was the beginning of today's Internet."
^ "Reminiscences on the Theory of Time-Sharing". John McCarthy's Original Website. Retrieved January 23, 2020. "in 1960 'time-sharing' as a phrase was much in the air. It was, however, generally used in my sense rather than in John McCarthy's sense of a CTSS-like object."
^ "About Rand". Paul Baran and the Origins of the Internet. Retrieved July 25, 2012.
^ Pelkey, James L. "6.1 The Communications Subnet: BBN 1969". Entrepreneurial Capitalism and Innovation: A History of Computer Communications 1968–1988. "As Kahn recalls: ... Paul Baran's contributions ... I also think Paul was motivated almost entirely by voice considerations. If you look at what he wrote, he was talking about switches that were low-cost electronics. The idea of putting powerful computers in these locations hadn't quite occurred to him as being cost effective. So the idea of computer switches was missing. The whole notion of protocols didn't exist at that time. And the idea of computer-to-computer communications was really a secondary concern."
^ Barber, Derek (Spring 1993). "The Origins of Packet Switching". The Bulletin of the Computer Conservation Society (5). ISSN 0958-7403. Retrieved September 6, 2017. "There had been a paper written by [Paul Baran] from the Rand Corporation which, in a sense, foreshadowed packet switching in a way for speech networks and voice networks"
^ Waldrop, M. Mitchell (2018). The Dream Machine. Stripe Press. p. 286. ISBN 978-1-953953-36-0. "Baran had put more emphasis on digital voice communications than on computer communications."
^ "On packet switching". Net History. Retrieved January 8, 2024. "[Scantlebury said] Clearly Donald and Paul Baran had independently come to a similar idea albeit for different purposes. Paul for a survivable voice/telex network, ours for a high-speed computer network."
^ Metz, Cade (September 3, 2012). "What Do the H-Bomb and the Internet Have in Common? Paul Baran". WIRED. "He was very conscious of people's mistaken belief that the work he did at RAND somehow led to the creation of the ARPAnet. It didn't, and he was very honest about that."
^ Edmondson-Yurkanan, Chris (2007). "SIGCOMM's archaeological journey into networking's past". Communications of the ACM. 50 (5): 63–68. doi:10.1145/1230819.1230840. ISSN 0001-0782. "In his first draft dated Nov. 10, 1965 [5], Davies forecast today's "killer app" for his new communication service: "The greatest traffic could only come if the public used this means for everyday purposes such as shopping... People sending enquiries and placing orders for goods of all kinds will make up a large section of the traffic... Business use of the telephone may be reduced by the growth of the kind of service we contemplate.""
^ Davies, D. W. (1966). "Proposal for a Digital Communication Network" (PDF). "Computer developments in the distant future might result in one type of network being able to carry speech and digital messages efficiently."
^ Roberts, Dr. Lawrence G. (May 1995). "The ARPANET & Computer Networks". Archived from the original on March 24, 2016. Retrieved April 13, 2016. "Then in June 1966, Davies wrote a second internal paper, "Proposal for a Digital Communication Network", in which he coined the word packet - a small sub part of the message the user wants to send - and also introduced the concept of an "Interface computer" to sit between the user equipment and the packet network."
^ Rayner, David; Barber, Derek; Scantlebury, Roger; Wilkinson, Peter (2001). NPL, Packet Switching and the Internet. Symposium of the Institution of Analysts & Programmers 2001. Archived from the original on August 7, 2003. Retrieved June 13, 2024. "The system first went 'live' early in 1969"
^ Quarterman, John S.; Hoskins, Josiah C. (1986). "Notable computer networks". Communications of the ACM. 29 (10): 932–971. doi:10.1145/6617.6618. S2CID 25341056. "The first packet-switching network was implemented at the National Physical Laboratories in the United Kingdom. It was quickly followed by the ARPANET in 1969."
^ Haughney Dare-Bryan, Christine (June 22, 2023). Computer Freaks (Podcast). Chapter Two: In the Air. Inc. Magazine. 35:55 minutes in. "Leonard Kleinrock: Donald Davies ... did make a single node packet switch before ARPA did"
^ Clarke, Peter (1982). Packet and circuit-switched data networks (PDF) (PhD thesis). Department of Electrical Engineering, Imperial College of Science and Technology, University of London. "As well as the packet switched network actually built at NPL for communication between their local computing facilities, some simulation experiments have been performed on larger networks. A summary of this work is reported in [69]. The work was carried out to investigate networks of a size capable of providing data communications facilities to most of the U.K. ... Experiments were then carried out using a method of flow control devised by Davies [70] called 'isarithmic' flow control. ... The simulation work carried out at NPL has, in many respects, been more realistic than most of the ARPA network theoretical studies."
^ Press, Gil (January 2, 2015). "A Very Short History Of The Internet And The Web". Forbes. Archived from the original on January 9, 2015. Retrieved February 7, 2020. "Roberts' proposal that all host computers would connect to one another directly ... was not endorsed ... Wesley Clark ... suggested to Roberts that the network be managed by identical small computers, each attached to a host computer. Accepting the idea, Roberts named the small computers dedicated to network administration 'Interface Message Processors' (IMPs), which later evolved into today's routers."
^ SRI Project 5890-1; Networking (Reports on Meetings), Stanford University, 1967, archived from the original on February 2, 2020, retrieved February 15, 2020. "W. Clark's message switching proposal (appended to Taylor's letter of April 24, 1967 to Engelbart) were reviewed."
^ Roberts, L. (January 1, 1988). "The arpanet and computer networks". A history of personal workstations. New York, NY, USA: Association for Computing Machinery. pp. 141–172. doi:10.1145/61975.66916. ISBN 978-0-201-11259-7.
^ Roberts, Larry (1986). "The Arpanet and computer networks". Proceedings of the ACM Conference on the history of personal workstations. pp. 51–58. doi:10.1145/12178.12182. ISBN 0897911768.
^ The Merit Network, Inc. is an independent non-profit 501(c)(3) corporation governed by Michigan's public universities. Merit receives administrative services under an agreement with the University of Michigan.
^ab Green, Lelia (2010). The internet: an introduction to new media. Berg new media series. Berg. p. 31. ISBN 978-1-84788-299-8. OCLC 504280762. "The original ARPANET design had made data integrity part of the IMP's store-and-forward role, but Cyclades end-to-end protocol greatly simplified the packet switching operations of the network. ... The idea was to adopt several principles from Cyclades and invert the ARPANET model to minimise international differences."
^ Bennett, Richard (September 2009). "Designed for Change: End-to-End Arguments, Internet Innovation, and the Net Neutrality Debate" (PDF). Information Technology and Innovation Foundation. pp. 7, 9, 11. Retrieved September 11, 2017. "Two significant packet networks preceded the TCP/IP Internet: ARPANET and CYCLADES. The designers of the Internet borrowed heavily from these systems, especially CYCLADES ... The first end-to-end research network was CYCLADES, designed by Louis Pouzin at IRIA in France with the support of BBN's Dave Walden and Alex McKenzie and deployed beginning in 1972."
^ "A Technical History of CYCLADES". Technical Histories of the Internet & other Network Protocols. Computer Science Department, University of Texas Austin. Archived from the original on September 1, 2013.
^ "The internet's fifth man". The Economist. November 30, 2013. Retrieved April 22, 2020. "In the early 1970s Mr Pouzin created an innovative data network that linked locations in France, Italy and Britain. Its simplicity and efficiency pointed the way to a network that could connect not just dozens of machines, but millions of them. It captured the imagination of Dr Cerf and Dr Kahn, who included aspects of its design in the protocols that now power the internet."
^ab Rybczynski, Tony (2009). "Commercialization of packet switching (1975–1985): A Canadian perspective [History of Communications]". IEEE Communications Magazine. 47 (12): 26–31. doi:10.1109/MCOM.2009.5350364. S2CID 23243636.
^ab Schwartz, Mischa (2010). "X.25 Virtual Circuits - TRANSPAC in France - Pre-Internet Data Networking [History of communications]". IEEE Communications Magazine. 48 (11): 40–46. doi:10.1109/MCOM.2010.5621965. S2CID 23639680.
^ Ikram, Nadeem (1985). Internet Protocols and a Partial Implementation of CCITT X.75 (Thesis). p. 2. OCLC 663449435, 1091194379. "Two main approaches to internetworking have come into existence based upon the virtual circuit and the datagram services. The vast majority of the work on interconnecting networks falls into one of these two approaches: The CCITT X.75 Recommendation; The DoD Internet Protocol (IP)."
^ Unsoy, Mehmet S.; Shanahan, Theresa A. (1981). "X.75 internetworking of Datapac and Telenet". ACM SIGCOMM Computer Communication Review. 11 (4): 232–239. doi:10.1145/1013879.802679.
^ National Research Council; Division on Engineering and Physical Sciences; Computer Science and Telecommunications Board; Commission on Physical Sciences, Mathematics, and Applications; NII 2000 Steering Committee (February 5, 1998). The Unpredictable Certainty: White Papers. National Academies Press. ISBN 978-0-309-17414-5.
^ Cerf, V.; Kahn, R. (May 1974). "A Protocol for Packet Network Intercommunication". IEEE Transactions on Communications. 22 (5): 637–648. Bibcode:1974ITCom..22..637C. doi:10.1109/TCOM.1974.1092259. "The authors wish to thank a number of colleagues for helpful comments during early discussions of international network protocols, especially R. Metcalfe, R. Scantlebury, D. Walden, and H. Zimmerman; D. Davies and L. Pouzin who constructively commented on the fragmentation and accounting issues; and S. Crocker who commented on the creation and destruction of associations."
^ "The internet's fifth man". Economist. December 13, 2013. Retrieved September 11, 2017. "In the early 1970s Mr Pouzin created an innovative data network that linked locations in France, Italy and Britain. Its simplicity and efficiency pointed the way to a network that could connect not just dozens of machines, but millions of them. It captured the imagination of Dr Cerf and Dr Kahn, who included aspects of its design in the protocols that now power the internet."
^ Cerf, Vint; Dalal, Yogen; Sunshine, Carl (December 1974). Specification of Internet Transmission Control Protocol. RFC 675.
^ Panzaris, Georgios (2008). Machines and romances: the technical and narrative construction of networked computing as a general-purpose platform, 1960–1995. Stanford University. p. 128. "Despite the misgivings of Xerox Corporation (which intended to make PUP the basis of a proprietary commercial networking product), researchers at Xerox PARC, including ARPANET pioneers Robert Metcalfe and Yogen Dalal, shared the basic contours of their research with colleagues at TCP and Internet working group meetings in 1976 and 1977, suggesting the possible benefits of separating TCP's routing and transmission control functions into two discrete layers."
^ab Pelkey, James L. (2007). "Yogen Dalal". Entrepreneurial Capitalism and Innovation: A History of Computer Communications, 1968–1988. Archived from the original on September 5, 2019. Retrieved September 5, 2019.
^ Internet Traffic Exchange (Report). OECD Digital Economy Papers. Organisation for Economic Co-Operation and Development (OECD). April 1, 1998. doi:10.1787/236767263531.
^ Cvijetic, M.; Djordjevic, I. (2013). Advanced Optical Communication Systems and Networks. Artech House applied photonics series. Artech House. ISBN 978-1-60807-555-3.
^ Garwin, Laura; Lincoln, Tim, eds. (2010). "The first laser: Charles H. Townes". A Century of Nature: Twenty-One Discoveries that Changed Science and the World. University of Chicago Press. p. 105. ISBN 978-0-226-28416-3.
^ Bertolotti, Mario (2015). Masers and Lasers: An Historical Approach (2nd ed.). Chicago: CRC Press. p. 151.
^"Internet History in Asia".16th APAN Meetings/Advanced Network Conference in Busan. Archived fromthe original on February 1, 2006. RetrievedDecember 25, 2005.
^Even after the appropriations act was amended in 1992 to give NSF more flexibility with regard to commercial traffic, NSF never felt that it could entirely do away with itsAcceptable Use Policy and its restrictions on commercial traffic, see the response to Recommendation 5 in NSF's response to the Inspector General's review (an April 19, 1993 memo from Frederick Bernthal, Acting Director, to Linda Sundro, Inspector General, that is included at the end ofReview of NSFNET, Office of the Inspector General, National Science Foundation, March 23, 1993)
^Management of NSFNET, a transcript of the March 12, 1992 hearing before the Subcommittee on Science of the Committee on Science, Space, and Technology, U.S. House of Representatives, One Hundred Second Congress, Second Session, Hon.Rick Boucher, subcommittee chairman, presiding
^NSF Solicitation 93-52Archived March 5, 2016, at theWayback Machine – Network Access Point Manager, Routing Arbiter, Regional Network Providers, and Very High Speed Backbone Network Services Provider for NSFNET and the NREN(SM) Program, May 6, 1993
^Jurgenson, Nathan; Ritzer, George (February 2, 2012), Ritzer, George (ed.), "The Internet, Web 2.0, and Beyond",The Wiley-Blackwell Companion to Sociology, John Wiley & Sons, Ltd, pp. 626–648,doi:10.1002/9781444347388.ch33,ISBN978-1-4443-4738-8
^William THOMAS, et al., Plaintiffs, v. NETWORK SOLUTIONS, INC., and National Science Foundation Defendants. Civ. No. 97-2412 (TFH), Sec. I.A., 2 F.Supp.2d 22 (D.D.C. April 6, 1998), archived fromthe original.
^Anderson, Nate (September 30, 2009)."ICANN cuts cord to US government, gets broader oversight".Ars Technica.ICANN, which oversees the Internet's domain name system, is a private nonprofit that reports to the US Department of Commerce. Under a new agreement, that relationship will change, and ICANN's accountability goes global
^DeNardis, Laura (March 12, 2013). "The Emerging Field of Internet Governance". In Dutton, William H. (ed.).Oxford Handbooks Online. Oxford University Press.doi:10.1093/oxfordhb/9780199589074.013.0026.
^Hillebrand, Friedhelm (2002). Hillebrand, Friedhelm (ed.).GSM and UMTS, The Creation of Global Mobile Communications. John Wiley & Sons.ISBN978-0-470-84322-2.
^Mauldin, Alan (September 7, 2021). "Global Internet Traffic and Capacity Return to Regularly Scheduled Programming".TeleGeography.
^Classen, Christoph; Kinnebrock, Susanne; Löblich, Maria (2012). "Towards Web History: Sources, Methods, and Challenges in the Digital Age. An Introduction".Historical Social Research / Historische Sozialforschung.37 (4 (142)). GESIS - Leibniz-Institute for the Social Sciences, Center for Historical Social Research:97–101.JSTOR41756476.
^"A Flaw in the Design".The Washington Post. May 30, 2015.Archived from the original on November 8, 2020. RetrievedFebruary 20, 2020.The Internet was born of a big idea: Messages could be chopped into chunks, sent through a network in a series of transmissions, then reassembled by destination computers quickly and efficiently... The most important institutional force ... was the Pentagon's Advanced Research Projects Agency (ARPA) ... as ARPA began work on a groundbreaking computer network, the agency recruited scientists affiliated with the nation's top universities.
^Campbell-Kelly, Martin; Garcia-Swartz, Daniel D (2013). "The History of the Internet: The Missing Narratives".Journal of Information Technology.28 (1):18–33.doi:10.1057/jit.2013.4.S2CID41013.SSRN867087.
Rosenzweig, Roy (December 1998). "Wizards, Bureaucrats, Warriors, and Hackers: Writing the History of the Internet". The American Historical Review. 103 (5): 1530–1552. doi:10.2307/2649970. JSTOR 2649970.
Russell, Andrew L. (2014). Open Standards and the Digital Age: History, Ideology, and Networks. Cambridge University Press. ISBN 978-1-139-91661-5.
Ryan, Johnny (2010). A history of the Internet and the digital future. London, England: Reaktion Books. ISBN 978-1-86189-777-0.