A content delivery network or content distribution network (CDN) is a geographically distributed network of proxy servers and their data centers. The goal is to provide high availability and performance ("speed") by distributing the service spatially relative to end users. CDNs came into existence in the late 1990s as a means of alleviating the performance bottlenecks of the Internet[1][2] as the Internet was starting to become a mission-critical medium for people and enterprises. Since then, CDNs have grown to serve a large portion of Internet content, including web objects (text, graphics and scripts), downloadable objects (media files, software, documents), applications (e-commerce, portals), live streaming media, on-demand streaming media, and social media sites.[3]
CDNs are a layer in the internet ecosystem. Content owners such as media companies and e-commerce vendors pay CDN operators to deliver their content to their end users. In turn, a CDN pays Internet service providers (ISPs), carriers, and network operators for hosting its servers in their data centers.
CDN is an umbrella term spanning different types of content delivery services: video streaming, software downloads, web and mobile content acceleration, licensed/managed CDN, transparent caching, and services to measure CDN performance, load balancing, multi-CDN switching and analytics, and cloud intelligence. CDN vendors may cross over into other industries like security, DDoS protection and web application firewalls (WAF), and WAN optimization.
Content delivery service providers include Akamai Technologies, Cloudflare, Amazon CloudFront, Qwilt (Cisco), Fastly, and Google Cloud CDN.
CDN nodes are usually deployed in multiple locations, often over multiple Internet backbones. Benefits include reducing bandwidth costs, improving page load times, and increasing the global availability of content. The number of nodes and servers making up a CDN varies depending on the architecture, with some reaching thousands of nodes with tens of thousands of servers on many remote points of presence (PoPs), while others build a global network with a small number of geographical PoPs.[4]
Requests for content are typically algorithmically directed to nodes that are optimal in some way. When optimizing for performance, locations that are best for serving content to the user may be chosen. This may be measured by choosing locations that are the fewest hops or the shortest time to the requesting client, or the highest server performance, to optimize delivery across local networks. When optimizing for cost, locations that are the least expensive may be chosen instead. In an optimal scenario, these two goals tend to align, as edge servers that are close to the end user at the edge of the network may have an advantage in performance or cost.
Most CDN providers provide their services over a varying, defined set of PoPs, depending on the coverage desired, such as United States, International or Global, Asia-Pacific, etc. These sets of PoPs can be called "edges", "edge nodes", "edge servers", or "edge networks" as they are the closest edge of CDN assets to the end user.[5]
CDN providers profit either from direct fees paid by content providers using their network, or from the user analytics and tracking data collected as their scripts are loaded onto customers' websites inside their browser origin. As such, these services have been pointed out as potential privacy intrusions for the purpose of behavioral targeting,[6] and solutions are being created to restore single-origin serving and caching of resources.[7]
In particular, a website using a CDN may violate the EU's General Data Protection Regulation (GDPR). For example, in 2021 a German court forbade the use of a CDN on a university website, because this caused the transmission of the user's IP address to the CDN, which violated the GDPR.[8]
CDNs serving JavaScript have also been targeted as a way to inject malicious content into pages using them. The Subresource Integrity mechanism was created in response to ensure that the page loads a script whose content is known and constrained to a hash referenced by the website author.[9]
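For illustration, the integrity value is a base64-encoded cryptographic hash of the expected resource. A minimal Python sketch of how such a value could be computed (the script contents shown are hypothetical):

```python
import base64
import hashlib

def sri_hash(resource: bytes) -> str:
    """Compute a Subresource Integrity value (sha384 is commonly used)."""
    digest = hashlib.sha384(resource).digest()
    return "sha384-" + base64.b64encode(digest).decode("ascii")

# The resulting value is placed in the integrity attribute of the tag that
# references the CDN-hosted script, e.g.
# <script src="https://cdn.example/lib.js" integrity="sha384-..."
#         crossorigin="anonymous"></script>
print(sri_hash(b"console.log('hello');"))
```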
The Internet was designed according to the end-to-end principle.[10] This principle keeps the core network relatively simple and moves the intelligence as much as possible to the network end-points: the hosts and clients. As a result, the core network is specialized, simplified, and optimized to only forward data packets.
Content Delivery Networks augment the end-to-end transport network by distributing on it a variety of intelligent applications employing techniques designed to optimize content delivery. The resulting tightly integrated overlay uses web caching, server-load balancing, request routing, and content services.[11]
Web caches store popular content on servers that have the greatest demand for the content requested. These shared network appliances reduce bandwidth requirements, reduce server load, and improve the client response times for content stored in the cache. Web caches are populated based on requests from users (pull caching) or based on preloaded content disseminated from content servers (push caching).[12]
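As an illustration, a minimal Python sketch of pull caching, in which content is fetched from the origin only on a cache miss or after expiry (the origin URL, in-memory store, and TTL are hypothetical simplifications):

```python
import time
import urllib.request

ORIGIN = "https://origin.example.com"   # hypothetical origin server
cache = {}                               # path -> (expiry_time, body)
TTL = 300                                # keep entries for 5 minutes

def serve(path: str) -> bytes:
    """Pull caching: serve from the cache when possible, otherwise pull from the origin."""
    entry = cache.get(path)
    if entry and entry[0] > time.time():
        return entry[1]                  # cache hit: return the stored copy
    with urllib.request.urlopen(ORIGIN + path) as resp:
        body = resp.read()               # cache miss: fetch from the origin
    cache[path] = (time.time() + TTL, body)
    return body
```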
Server-load balancing uses one or more techniques including service-based (global load balancing) or hardware-based (i.e. layer 4–7 switches, also known as a web switch, content switch, or multilayer switch) to share traffic among a number of servers or web caches. Here the switch is assigned a single virtual IP address. Traffic arriving at the switch is then directed to one of the real web servers attached to the switch. This has the advantage of balancing load, increasing total capacity, improving scalability, and providing increased reliability by redistributing the load of a failed web server and providing server health checks.
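To illustrate the idea, a minimal Python sketch: requests arriving at a single virtual address are spread round-robin across real servers that pass a basic health check (the backend addresses are hypothetical, and a hardware layer 4–7 switch does this in dedicated hardware rather than in application code):

```python
import itertools
import socket

# Hypothetical pool of real web servers behind one virtual IP.
BACKENDS = [("10.0.0.11", 80), ("10.0.0.12", 80), ("10.0.0.13", 80)]

def healthy(host: str, port: int, timeout: float = 0.5) -> bool:
    """Basic health check: can a TCP connection to the backend be opened?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

_rr = itertools.cycle(BACKENDS)

def pick_backend():
    """Round-robin across backends, skipping those that fail the health check."""
    for _ in range(len(BACKENDS)):
        backend = next(_rr)
        if healthy(*backend):
            return backend
    raise RuntimeError("no healthy backend available")
```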
A content cluster or service node can be formed using a layer 4–7 switch to balance load across a number of servers or a number of web caches within the network.
Request routing directs client requests to the content source best able to serve the request. This may involve directing a client request to the service node that is closest to the client, or to the one with the most capacity. A variety of algorithms are used to route the request. These include Global Server Load Balancing, DNS-based request routing, Dynamic metafile generation, HTML rewriting,[13] and anycasting.[14] Proximity—choosing the closest service node—is estimated using a variety of techniques including reactive probing, proactive probing, and connection monitoring.[11]
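For illustration, a minimal Python sketch of proximity estimation by reactive probing, choosing the service node with the lowest measured round-trip time (the PoP names and addresses are hypothetical):

```python
import socket
import time

# Hypothetical PoPs; in DNS-based request routing, the authoritative DNS
# server would answer with the address of the node chosen for this client.
POPS = {"us-east": "192.0.2.10", "eu-west": "192.0.2.20", "ap-south": "192.0.2.30"}

def rtt(ip: str, port: int = 80) -> float:
    """Reactive probe: time a TCP connect to estimate proximity."""
    start = time.monotonic()
    try:
        with socket.create_connection((ip, port), timeout=1.0):
            return time.monotonic() - start
    except OSError:
        return float("inf")

def route_request() -> str:
    """Return the name of the PoP with the lowest measured round-trip time."""
    return min(POPS, key=lambda name: rtt(POPS[name]))
```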
CDNs use a variety of methods of content delivery including, but not limited to, manual asset copying, active web caches, and global hardware load balancers.
Several protocol suites are designed to provide access to a wide variety of content services distributed throughout a content network. The Internet Content Adaptation Protocol (ICAP) was developed in the late 1990s[15][16] to provide an open standard for connecting application servers. A more recently defined and robust solution is provided by the Open Pluggable Edge Services (OPES) protocol.[17] This architecture defines OPES service applications that can reside on the OPES processor itself or be executed remotely on a Callout Server. Edge Side Includes (ESI) is a small markup language for edge-level dynamic web content assembly. It is fairly common for websites to have generated content, whether because of changing content such as catalogs or forums, or because of personalization; this creates a problem for caching systems. To overcome this problem, a group of companies created ESI.
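A highly simplified Python sketch of the ESI idea: the cacheable page shell contains include tags that the edge replaces with separately fetched fragments before the page is sent to the client (this is not a full ESI processor, and the URLs are hypothetical):

```python
import re
import urllib.request

# Matches a bare <esi:include src="..."/> tag; real ESI supports more syntax.
ESI_INCLUDE = re.compile(r'<esi:include\s+src="([^"]+)"\s*/>')

def assemble(template: str) -> str:
    """Replace each include tag with the fragment it points to, so the static
    page shell can be cached while dynamic fragments are fetched per request."""
    def fetch(match: re.Match) -> str:
        with urllib.request.urlopen(match.group(1)) as resp:
            return resp.read().decode("utf-8")
    return ESI_INCLUDE.sub(fetch, template)

# Hypothetical page shell served from the edge cache.
page = ('<html><body><h1>Catalog</h1>'
        '<esi:include src="https://origin.example/cart"/></body></html>')
```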
In peer-to-peer (P2P) content-delivery networks, clients provide resources as well as use them. This means that, unlike client–server systems, the content-centric networks can actually perform better as more users begin to access the content (especially with protocols such as BitTorrent that require users to share). This property is one of the major advantages of using P2P networks because it makes the setup and running costs very small for the original content distributor.[18][19]
To incentivize peers to participate in the P2P network, web3 and blockchain technologies can be used: participating nodes receive crypto tokens in exchange for their involvement.
If content owners are not satisfied with the options or costs of a commercial CDN service, they can create their own CDN. This is called a private CDN. A private CDN consists of PoPs (points of presence) that serve content only for their owner. These PoPs can be caching servers,[20] reverse proxies or application delivery controllers.[21] It can be as simple as two caching servers,[20] or large enough to serve petabytes of content.[22] When a private CDN is deployed within a company network, it is also referred to as an enterprise CDN or eCDN.
Large content distribution networks may even build and set up their own private network to distribute copies of content across cache locations.[23][24] Such private networks are usually used in conjunction with public networks as a backup option in case the capacity of the private network is not enough or there is a failure which leads to capacity reduction. Since the same content has to be distributed across many locations, a variety of multicasting techniques may be used to reduce bandwidth consumption. Over private networks, it has also been proposed to select multicast trees according to network load conditions to more efficiently utilize available network capacity.[25][26]
The rapid growth of streaming video traffic[27] requires large capital expenditures by broadband providers[28] in order to meet this demand and retain subscribers by delivering a sufficiently good quality of experience.
To address this, telecommunications service providers (TSPs) have begun to launch their own content delivery networks as a means to lessen the demands on the network backbone and reduce infrastructure investments.
Because they own the networks over which video content is transmitted, telco CDNs have advantages over traditional CDNs. They own the last mile and can deliver content closer to the end user because it can be cached deep in their networks. This deep caching minimizes the distance that video data travels over the general Internet and delivers it more quickly and reliably.
Telco CDNs also have a built-in cost advantage since traditional CDNs must lease bandwidth from them and build the operator's margin into their own cost model. In addition, by operating their own content delivery infrastructure, telco operators have better control over the utilization of their resources. Content management operations performed by CDNs are usually applied with little or no information about the network (e.g., topology, utilization) of the telco operators with which they interact or have business relationships. This poses a number of challenges for the telco operators, who have a limited sphere of action regarding the impact of these operations on the utilization of their resources.
In contrast, the deployment of telco CDNs allows operators to implement their own content management operations,[29][30] which gives them better control over the utilization of their resources and, as such, lets them provide better quality of service and experience to their end users.
In June 2011, StreamingMedia.com reported that a group of TSPs had founded an Operator Carrier Exchange (OCX)[31] to interconnect their networks and compete more directly against large traditional CDNs like Akamai and Limelight Networks, which have extensive PoPs worldwide. In this way, telcos are building a federated CDN offering, which is more attractive to a content provider willing to deliver its content to the aggregated audience of this federation.
It is likely that in the near future other telco CDN federations will be created. They will grow through the enrollment of new telcos joining the federation and bringing network presence and their Internet subscriber bases to the existing ones.[citation needed]
The Open Caching specification by the Streaming Video Technology Alliance defines a set of APIs that allows a content provider to deliver its content using several CDNs in a consistent way, seeing each CDN provider the same way through these APIs.
Combining several CDN services allows content providers not to rely on a single CDN service, which is especially important when dealing with high peak audiences during live events. There are several ways to allocate traffic to a particular CDN among the list: client-side CDN selection, server-side selection (at the content provider's origin), or cloud-side selection (in the middle, between the content origin and the audience). CDN selection criteria can be performance, availability and cost.
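For illustration, a minimal Python sketch of server-side multi-CDN selection, ranking available CDNs by a weighted combination of measured latency and cost (the CDN names, measurements, and weights are hypothetical; real switchers gather such data from user measurements or synthetic probes):

```python
# Hypothetical per-CDN measurements.
CDNS = {
    "cdn_a": {"available": True,  "latency_ms": 38, "cost_per_gb": 0.08},
    "cdn_b": {"available": True,  "latency_ms": 55, "cost_per_gb": 0.03},
    "cdn_c": {"available": False, "latency_ms": 20, "cost_per_gb": 0.05},
}

def choose_cdn(latency_weight: float = 1.0, cost_weight: float = 200.0) -> str:
    """Rank available CDNs by a weighted score of performance and cost;
    the weights encode the content provider's selection policy."""
    candidates = {name: m for name, m in CDNS.items() if m["available"]}
    return min(candidates,
               key=lambda n: latency_weight * candidates[n]["latency_ms"]
                             + cost_weight * candidates[n]["cost_per_gb"])
```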
Traditionally, CDNs have used the IP of the client's recursive DNS resolver to geo-locate the client. While this is a sound approach in many situations, this leads to poor client performance if the client uses a non-local recursive DNS resolver that is far away. For instance, a CDN may route requests from a client in India to its edge server in Singapore, if that client uses a public DNS resolver in Singapore, causing poor performance for that client. Indeed, a recent study[32] showed that in many countries where public DNS resolvers are in popular use, the median distance between the clients and their recursive DNS resolvers can be as high as a thousand miles. In August 2011, a global consortium of leading Internet service providers led by Google announced their official implementation of the edns-client-subnet IETF Internet Draft,[33] which is intended to accurately localize DNS resolution responses. The initiative involves a limited number of leading DNS service providers, such as Google Public DNS,[34] and CDN service providers as well. With the edns-client-subnet EDNS0 option, CDNs can now utilize the IP address of the requesting client's subnet when resolving DNS requests. This approach, called end-user mapping,[32] has been adopted by CDNs and it has been shown to drastically reduce the round-trip latencies and improve performance for clients who use public DNS or other non-local resolvers. However, the use of EDNS0 also has drawbacks as it decreases the effectiveness of caching resolutions at the recursive resolvers,[32] increases the total DNS resolution traffic,[32] and raises a privacy concern of exposing the client's subnet.
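As an illustration, a minimal sketch of an ECS-enabled DNS query, assuming the third-party dnspython library; the client subnet, domain name, and resolver address are hypothetical:

```python
# Attach an EDNS Client Subnet (ECS) option so the authoritative CDN
# nameserver can map its answer to the client's subnet rather than to the
# recursive resolver's address.
import dns.edns
import dns.message
import dns.query

client_subnet = dns.edns.ECSOption("203.0.113.0", 24)   # hypothetical client /24
query = dns.message.make_query("cdn.example.com", "A",
                               use_edns=0, options=[client_subnet])
response = dns.query.udp(query, "8.8.8.8", timeout=2.0)
print(response.answer)
```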
Virtualization technologies are being used to deploy virtual CDNs (vCDNs), also known as software-defined CDNs (sd-CDNs), with the goal of reducing content provider costs while increasing elasticity and decreasing service delay. With vCDNs, it is possible to avoid traditional CDN limitations in performance, reliability and availability, since virtual caches are deployed dynamically (as virtual machines or containers) on physical servers distributed across the provider's geographical coverage. As virtual cache placement is based on both the content type and server or end-user geographic location, vCDNs have a significant impact on service delivery and network congestion.[35][36][37][38]
To boost performance, delivery to clients from servers can use alternate non-HTTP protocols such as WebRTC and WebSockets.
In 2017, Addy Osmani of Google started referring to software solutions that could integrate naturally with the Responsive Web Design paradigm (with particular reference to the <picture> element) as Image CDNs.[39] The expression referred to the ability of a web architecture to serve multiple versions of the same image through HTTP, depending on the properties of the browser requesting it, as determined by either the browser or the server-side logic. The purpose of Image CDNs was, in Google's vision, to serve high-quality images (or, better, images perceived as high-quality by the human eye) while preserving download speed, thus contributing to a great user experience (UX).[citation needed]
Arguably, the Image CDN term was originally a misnomer, as neither Cloudinary nor Imgix (the examples quoted by Google in the 2017 guide by Addy Osmani) were, at the time, a CDN in the classical sense of the term.[39] Shortly afterwards, though, several companies offered solutions that allowed developers to serve different versions of their graphical assets according to several strategies. Many of these solutions were built on top of traditional CDNs, such as Akamai, CloudFront, Fastly, Edgecast and Cloudflare. At the same time, other solutions that already provided an image multi-serving service joined the Image CDN definition by either offering CDN functionality natively (ImageEngine)[40] or integrating with one of the existing CDNs (Cloudinary/Akamai, Imgix/Fastly).
While providing a universally agreed-on definition of what an Image CDN is may not be possible, generally speaking, an Image CDN supports three components: CDN functionality, image optimization, and device detection.[41]
The following table summarizes the current situation with the main software CDNs in this space:[42]
Name | CDN | Image Optimization | Device Detection
---|---|---|---
Akamai ImageManager | Y | Batch mode | Based on HTTP Accept header
Cloudflare Polish | Y | Fully automatic | Based on HTTP Accept header
Cloudinary | Through Akamai | Batch, URL directives | Accept header, Client-Hints
Fastly IO | Y | URL directives | Based on HTTP Accept header
ImageEngine | Y | Fully automatic | WURFL, Client-Hints, Accept header
Imgix | Through Fastly | Fully automatic | Accept header, Client-Hints
PageCDN | Y | URL directives | Based on HTTP Accept header
Tinify CDN | Multiple | Fully automatic | Based on HTTP Accept header
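To illustrate the device-detection approach listed in the table, a minimal Python sketch that chooses an image variant from the browser's HTTP Accept header (the file names are hypothetical):

```python
def pick_image_variant(accept_header: str) -> str:
    """Select the best image format the browser advertises, falling back to JPEG."""
    accept = accept_header.lower()
    if "image/avif" in accept:
        return "photo.avif"
    if "image/webp" in accept:
        return "photo.webp"
    return "photo.jpg"

# A modern browser typically sends something like "image/avif,image/webp,*/*".
print(pick_image_variant("image/avif,image/webp,*/*"))
```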