Web server

From Wikipedia, the free encyclopedia
Computer software that distributes web pages

PC clients communicating via the network with a web server serving static content only
The inside and front of a Dell PowerEdge server, a computer designed to be mounted in a rack mount environment. Servers similar to this one are often used as web servers.
Multiple web servers may be used for a high-traffic website.
Server farm with thousands of web servers used for super-high-traffic websites
ADSL modem running an embedded web server serving dynamic web pages used for modem configuration

A web server is computer software and underlying hardware that accepts requests via HTTP (the network protocol created to distribute web content) or its secure variant HTTPS. A user agent, commonly a web browser or web crawler, initiates communication by making a request for a web page or other resource using HTTP, and the server responds with the content of that resource or an error message. A web server can also accept and store resources sent from the user agent if configured to do so.[1][2]

The hardware used to run a web server can vary according to the volume of requests that it needs to handle. At the low end of the range are embedded systems, such as a router that runs a small web server as its configuration interface. A high-traffic Internet website might handle requests with hundreds of servers that run on racks of high-speed computers.

A resource sent from a web server can be a pre-existing file (static content) available to the web server, or it can be generated at the time of the request (dynamic content) by another program that communicates with the server software. The former usually can be served faster and can be more easily cached for repeated requests, while the latter supports a broader range of applications.

Technologies such as REST and SOAP, which use HTTP as a basis for general computer-to-computer communication, as well as support for WebDAV extensions, have extended the application of web servers well beyond their original purpose of serving human-readable pages.

History

First web proposal (1989), evaluated as "vague but exciting..."
The world's first web server, a NeXT Computer workstation with Ethernet, 1990. The case label reads: "This machine is a server. DO NOT POWER IT DOWN!!"
See also: History of the web browser, History of the World Wide Web, and History of the Internet

This is a very brief history of web server programs, so some information necessarily overlaps with the histories of web browsers, the World Wide Web, and the Internet; therefore, for the sake of clarity, some key historical information reported below may be similar to that found in one or more of the above-mentioned history articles.[citation needed]

Initial WWW project (1989–1991)


In March 1989, Sir Tim Berners-Lee proposed a new project to his employer CERN, with the goal of easing the exchange of information between scientists by using a hypertext system. The proposal, titled "HyperText and CERN", asked for comments, and it was read by several people. In October 1990 the proposal was reformulated and enriched (with Robert Cailliau as co-author), and finally it was approved.[3][4][5]

Between late 1990 and early 1991 the project resulted in Berners-Lee and his developers writing and testing several software libraries along with three programs, which initially ran on the NeXTSTEP OS installed on NeXT workstations:[6][7][5]

  • a graphical web browser, called WorldWideWeb;
  • a portable line-mode browser;
  • a web server, later known as CERN httpd.

Those early browsers retrieved web pages written in a simple early form of HTML from web server(s) using a new basic communication protocol that was named HTTP 0.9.

In August 1991 Tim Berners-Lee announced the birth of WWW technology and encouraged scientists to adopt and develop it.[8] Soon after, those programs, along with their source code, were made available to people interested in their usage.[6] Although the source code was not formally licensed or placed in the public domain, CERN informally allowed users and developers to experiment and build further on top of them. Berners-Lee started promoting the adoption and usage of those programs along with their porting to other operating systems.[5]

Fast and wild development (1991–1995)

Number of active web sites (1991–1996)[9][10]

In December 1991, the first web server outside Europe was installed at SLAC (U.S.A.).[7] This was a very important event because it started trans-continental web communication between web browsers and web servers.

Between 1991 and 1993, the CERN web server program continued to be actively developed by the WWW group; meanwhile, thanks to the availability of its source code and the public specifications of the HTTP protocol, many other web server implementations started to be developed.

In April 1993, CERN issued a public official statement stating that the three components of web software (the basic line-mode client, the web server, and the library of common code), along with their source code, were put in the public domain.[11] This statement freed web server developers from any possible legal issue about the development of derivative work based on that source code (a threat that in practice never existed).

At the beginning of 1994, the most notable among new web servers was NCSA httpd, which ran on a variety of Unix-based OSs and could serve dynamically generated content by implementing the POST HTTP method and CGI to communicate with external programs. These capabilities, along with the multimedia features of NCSA's Mosaic browser (also able to manage HTML FORMs in order to send data to a web server), highlighted the potential of web technology for publishing and distributed computing applications.

In the second half of 1994, the development of NCSA httpd stalled to the point that a group of external software developers, webmasters, and other professionals interested in that server started to write and collect patches, thanks to the NCSA httpd source code being available in the public domain. At the beginning of 1995 those patches were all applied to the last release of the NCSA source code and, after several tests, the Apache HTTP server project was started.[12][13]

At the end of 1994, a new commercial web server, named Netsite, was released with specific features. It was the first of many similar products that were developed first by Netscape, then by Sun Microsystems, and finally by Oracle Corporation.

In mid-1995, the first version of IIS was released, for the Windows NT OS, by Microsoft. This marked the entry into the field of World Wide Web technologies of a very important commercial developer and vendor that has played, and still plays, a key role on both sides (client and server) of the web.

In the second half of 1995, CERN and NCSA web servers started to decline (in global percentage usage) because of the widespread adoption of new web servers which had a much faster development cycle along with more features, more fixes applied, and better performance than the previous ones.

Explosive growth and competition (1996–2014)

Number of active web sites (1996–2002)[10][14]
Sun's Cobalt Qube 3 – a computer server appliance (2002, discontinued)

At the end of 1996, there were already over fifty known (different) web server software programs available to everybody who wanted to own an Internet domain name and/or to host websites.[15] Many of them were short-lived and were replaced by other web servers.

The publication of RFCs about protocol versions HTTP/1.0 (1996) and HTTP/1.1 (1997, 1999) forced most web servers to comply (not always completely) with those standards. The use of TCP/IP persistent connections (HTTP/1.1) required web servers both to increase the maximum number of concurrent connections allowed and to improve their level of scalability.

Between 1996 and 1999, Netscape Enterprise Server and Microsoft's IIS emerged among the leading commercial options, whereas among the freely available and open-source programs the Apache HTTP Server held the lead as the preferred server (because of its reliability and its many features).

In those years there was also another commercial, highly innovative and thus notable web server called Zeus (now discontinued), which was known as one of the fastest and most scalable web servers available on the market, at least until the first decade of the 2000s, despite its low percentage of usage.

Apache was the most used web server from mid-1996 to the end of 2015, when, after a few years of decline, it was surpassed initially by IIS and then by Nginx. Afterward, IIS dropped to much lower percentages of usage than Apache (see also market share).

From 2005–2006, Apache started to improve its speed and its scalability level by introducing new performance features (e.g. the event MPM and new content caches).[16][17] As those new performance improvements were initially marked as experimental, they were not enabled by its users for a long time, and so Apache suffered even more from the competition of commercial servers and, above all, of other open-source servers which had meanwhile achieved far superior performance (mostly when serving static content) since the beginning of their development, and which, by the time of Apache's decline, could also offer a long enough list of well-tested advanced features.

In fact, a few years into the 2000s, not only did other commercial and highly competitive web servers emerge (e.g. LiteSpeed), but also many other open-source programs, often of excellent quality and very high performance, notably Hiawatha, Cherokee HTTP server, Lighttpd, and Nginx, along with other derived/related products also available with commercial support.

Around 2007–2008, most popular web browsers increased their previous default limit of 2 persistent connections per host-domain (a limit recommended by RFC 2616)[18] to 4, 6, or 8 persistent connections per host-domain, in order to speed up the retrieval of heavy web pages with lots of images and to mitigate the shortage of persistent connections dedicated to dynamic objects used for bi-directional notification of events in web pages.[19] Within a year, these changes, on average, nearly tripled the maximum number of persistent connections that web servers had to manage. This trend (of increasing the number of persistent connections) definitely gave a strong impetus to the adoption of reverse proxies in front of slower web servers, and it also gave one more chance to the emerging new web servers that could show all their speed and their capability to handle very high numbers of concurrent connections without requiring too many hardware resources (expensive computers with lots of CPUs, RAM, and fast disks).[20]

New challenges (2015 and later years)


In 2015, a new protocol version, HTTP/2, was published as an RFC, and, as the implementation of the new specification was not trivial at all, a dilemma arose among developers of less popular web servers (e.g. those with a percentage of usage lower than 1–2%) about whether or not to add support for that new protocol version.[21][22]

In fact, supporting HTTP/2 often required radical changes to their internal implementation due to many factors (practically always-required encrypted connections; the capability to distinguish between HTTP/1.x and HTTP/2 connections on the same TCP port; binary representation of HTTP messages; message priority; compression of HTTP headers; use of streams, also known as TCP/IP sub-connections, and related flow control; etc.), and so a few developers of those web servers opted for not supporting the new HTTP/2 version (at least in the near future), also because of these main reasons:[21][22]

  • the HTTP/1.x protocols would be supported anyway by browsers for a very long time (maybe forever), so that there would be no incompatibility between clients and servers in the near future;
  • implementing HTTP/2 was considered a task of overwhelming complexity that could open the door to a whole new class of bugs that did not exist before 2015, and so it would have required notable investments in developing and testing the implementation of the new protocol;
  • adding HTTP/2 support could always be done later, should the effort become justified.

Instead, developers of the most popular web servers rushed to offer the availability of the new protocol, not only because they had the workforce and the time to do so, but also because usually their previous implementation of the SPDY protocol could be reused as a starting point, and because the most used web browsers implemented it very quickly for the same reason. Another reason that prompted those developers to act quickly was that webmasters felt the pressure of ever-increasing web traffic, and they really wanted to install and try, as soon as possible, something that could drastically lower the number of TCP/IP connections and speed up access to hosted websites.[23]

In 2020–2021, the HTTP/2 implementation dynamics (among top web servers and popular web browsers) were partly replicated after the publication of advanced drafts of the future RFC for the HTTP/3 protocol.

Technical overview

PC clients connected to a web server via the Internet

The following technical overview should be considered only as an attempt to give a few very limited examples of some features that may be implemented in a web server and some of the tasks that it may perform, in order to give a sufficiently broad picture of the topic.

A web server program plays the role of a server in a client–server model by implementing one or more versions of the HTTP protocol, often including the HTTPS secure variant and other features and extensions that are considered useful for its planned usage.

The complexity and the efficiency of a web server program may vary a lot depending on (e.g.):[1]

  • common features implemented;
  • common tasks performed;
  • performance and scalability level targeted as a goal;
  • software model and techniques adopted to achieve the desired performance and scalability level;
  • target hardware and category of usage, e.g. embedded system, low-medium traffic web server, high-traffic Internet web server.

Common features


Although web server programs differ in how they are implemented, most of them offer the following common basic features.

  • Static content serving: the ability to serve static content (web files) to clients via the HTTP protocol.
  • HTTP: support for one or more versions of the HTTP protocol, in order to send versions of HTTP responses compatible with the versions of client HTTP requests, e.g. HTTP/1.0, HTTP/1.1 (eventually also with encrypted connections, HTTPS), plus, if available, HTTP/2 and HTTP/3.
  • Logging: usually web servers also have the capability of logging some information about client requests and server responses to log files, for security and statistical purposes (see the sketch below).
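
As a toy illustration of these three basic features, the following sketch (in Python, chosen here only for brevity; it is not implied by this article) serves static files over HTTP and logs one line per request; the document root path and port are assumptions:

import functools
import http.server

# SimpleHTTPRequestHandler serves static files, speaks HTTP/1.1, and
# logs one line per request (common-log style) to stderr.
Handler = functools.partial(
    http.server.SimpleHTTPRequestHandler,
    directory="/home/www/www.example.com",  # assumed document root
)

if __name__ == "__main__":
    with http.server.ThreadingHTTPServer(("", 8080), Handler) as srv:
        srv.serve_forever()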

A few other more advanced and popular features include, for example, dynamic content serving, virtual hosting, URL rewriting, content compression, and bandwidth throttling.

Common tasks


A web server program, when it is running, usually performs several general tasks, e.g.:[1]

  • starts, optionally reads and applies settings found in its configuration file(s) or elsewhere, optionally opens a log file, and starts listening for client connections / requests;
  • optionally tries to adapt its general behavior according to its settings and its current operating conditions;
  • manages client connection(s) (accepting new ones or closing existing ones as required);
  • receives client requests (by reading HTTP messages);
  • executes or refuses each requested HTTP method;
  • replies to client requests by sending proper HTTP responses (e.g. requested resources or error messages), eventually verifying or adding HTTP headers to those sent by dynamic programs / modules;
  • optionally logs (partially or totally) client requests and/or its responses to an external user log file or to a system log file by syslog, usually using the common log format;
  • optionally logs process messages about detected anomalies or other notable events (e.g. in client requests or in its internal functioning) using syslog or some other system facility; these log messages usually have a debug, warning, error, or alert level, which can be filtered (not logged) depending on some settings; see also severity level;
  • optionally generates statistics about the web traffic managed and/or its performance;
  • performs other custom tasks.

Read request message


Web server programs are able:[24][25][26]

  • to read an HTTP request message;
  • to interpret it;
  • to verify its syntax;
  • to identify known HTTP headers and to extract their values.

Once an HTTP request message has been decoded and verified, its values can be used to determine whether that request can be satisfied or not. This requires many other steps, including security checks.
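
A minimal sketch of this step, assuming a complete HTTP/1.1 request is already in memory (real servers read incrementally from a socket and enforce size limits):

def parse_request(raw):
    # Split the message head from the optional body.
    head, _, body = raw.partition(b"\r\n\r\n")
    lines = head.decode("iso-8859-1").split("\r\n")
    method, target, version = lines[0].split(" ", 2)  # request line
    headers = {}
    for line in lines[1:]:                            # header fields
        name, _, value = line.partition(":")
        headers[name.strip().lower()] = value.strip()
    return method, target, version, headers, body

req = b"GET /path/file.html HTTP/1.1\r\nHost: www.example.com\r\n\r\n"
print(parse_request(req))
# ('GET', '/path/file.html', 'HTTP/1.1', {'host': 'www.example.com'}, b'')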

URL normalization

Main article: URL normalization

Web server programs usually perform some type of URL normalization (of the URL found in most HTTP request messages) in order to:

  • make the resource path a clean, uniform path from the root directory of the website;
  • lower security risks (e.g. by more easily intercepting attempts to access static resources outside the root directory of the website, or attempts to access portions of the path below the website root directory that are forbidden or that require authorization);
  • make the path of web resources more recognizable by human beings and web log analysis programs (also known as log analyzers / statistical applications).

The term URL normalization refers to the process of modifying and standardizing a URL in a consistent manner. There are several types of normalization that may be performed, including the conversion of the scheme and host to lowercase. Among the most important normalizations are the removal of "." and ".." path segments and the addition of a trailing slash to a non-empty path component.
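
A minimal sketch of the normalizations just mentioned (lowercasing scheme and host, removing "." and ".." path segments), not a complete normalizer:

from urllib.parse import urlsplit, urlunsplit

def normalize(url):
    parts = urlsplit(url)
    segments = []
    for seg in parts.path.split("/"):
        if seg in ("", "."):
            continue                    # drop empty and "." segments
        if seg == "..":
            if segments:
                segments.pop()          # ".." removes the previous segment
        else:
            segments.append(seg)
    path = "/" + "/".join(segments)
    if parts.path.endswith("/") and path != "/":
        path += "/"                     # preserve a trailing slash
    return urlunsplit((parts.scheme.lower(), parts.netloc.lower(),
                       path, parts.query, parts.fragment))

print(normalize("HTTP://WWW.Example.COM/a/./b/../c"))  # http://www.example.com/a/c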

URL mapping

Main article: URL mapping

"URL mapping is the process by which a URL is analyzed to figure out what resource it is referring to, so that that resource can be returned to the requesting client. This process is performed with every request that is made to a web server, with some of the requests being served with a file, such as an HTML document, or a gif image, others with the results of running a CGI program, and others by some other process, such as a built-in module handler, a PHP document, or a Java servlet."[27][needs update]

In practice, web server programs that implement advanced features beyond simple static content serving (e.g. a URL rewrite engine, dynamic content serving) usually have to figure out how that URL has to be handled, e.g. as:

  • a URL redirection (a redirection to another URL);
  • a static request for file content;
  • a dynamic request for:
    • a directory listing of the files or other sub-directories contained in that directory;
    • other types of dynamic content, requiring identification of the program / module processor able to handle that kind of URL path, which is then passed the other URL parts, i.e. usually the path-info and query string variables.

One or more configuration files of the web server may specify the mapping of parts of the URL path (e.g. initial parts of the file path, the filename extension, and other path components) to a specific URL handler (file, directory, external program, or internal module).[28]

When a web server implements one or more of the above-mentioned advanced features, the path part of a valid URL may not always match an existing file system path under the website directory tree (a file or a directory in the file system), because it can refer to a virtual name of an internal or external module processor for dynamic requests.
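
A minimal sketch of such a mapping, with a hypothetical configuration table standing in for the server's configuration files:

# Hypothetical mapping rules: path prefixes and filename extensions
# are mapped to handler names; everything else is treated as static.
ROUTES = {
    "prefix": {"/cgi-bin/": "cgi"},
    "ext": {".php": "php", ".cgi": "cgi"},
}

def map_url(path):
    for prefix, handler in ROUTES["prefix"].items():
        if path.startswith(prefix):
            return handler
    for ext, handler in ROUTES["ext"].items():
        if path.endswith(ext):
            return handler
    if path.endswith("/"):
        return "directory"   # directory listing or index file
    return "static"          # default: internal static file handler

print(map_url("/cgi-bin/forum.php"))  # cgi
print(map_url("/path/file.html"))     # static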

URL path translation to file system


Web server programs are able to translate a URL path (all or part of it) that refers to a physical file system path into an absolute path under the target website's root directory.[28]

The website's root directory may be specified by a configuration file or by some internal rule of the web server, using the name of the website, which is the host part of the URL found in the HTTP client request.[28]

Path translation to file system is done for the following types of web resources:

  • a local, usually non-executable, file (static request for file content);
  • a local directory (dynamic request: directory listing generated on the fly);
  • a program name (dynamic request: the program is executed using the CGI or SCGI interface and its output is read by the web server and resent to the client who made the HTTP request).

The web server takes the path found in the requested URL (HTTP request message) and appends it to the path of the (host) website's root directory. On an Apache server, this is commonly /home/www/website (on Unix machines, usually it is /var/www/website). See the following examples of how this may result.
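
A minimal sketch of this translation, using the document root of the examples below; the traversal guard, which rejects paths that escape the root, stands in for the many extra checks a real server performs:

import os.path

DOC_ROOT = "/home/www/www.example.com"  # root directory of this (Host) website

def translate(url_path):
    full = os.path.normpath(os.path.join(DOC_ROOT, url_path.lstrip("/")))
    # Reject requests such as /../../etc/passwd that escape the root.
    if full != DOC_ROOT and not full.startswith(DOC_ROOT + os.sep):
        raise PermissionError("path escapes document root")
    return full

print(translate("/path/file.html"))  # /home/www/www.example.com/path/file.html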

URL path translation for a static file request

Example of astatic request of an existing file specified by the following URL:

http://www.example.com/path/file.html

The client's user agent connects to www.example.com and then sends the following HTTP/1.1 request:

GET /path/file.html HTTP/1.1
Host: www.example.com
Connection: keep-alive

The result is the local file system resource:

/home/www/www.example.com/path/file.html

The web server then reads the file, if it exists, and sends a response to the client's web browser. The response will describe the content of the file and contain the file itself, or an error message will be returned saying that the file does not exist or that its access is forbidden.

URL path translation for a directory request (without a static index file)

Example of an implicit dynamic request of an existing directory specified by the following URL:

http://www.example.com/directory1/directory2/

The client's user agent connects to www.example.com and then sends the following HTTP/1.1 request:

GET /directory1/directory2 HTTP/1.1
Host: www.example.com
Connection: keep-alive

The result is the local directory path:

/home/www/www.example.com/directory1/directory2/

The web server then verifies the existence of the directory; if it exists and can be accessed, the server tries to find an index file (which in this case does not exist), so it passes the request to an internal module or a program dedicated to directory listings, reads the resulting data output, and sends a response to the client's web browser. The response will describe the content of the directory (a list of contained subdirectories and files), or an error message will be returned saying that the directory does not exist or that its access is forbidden.

URL path translation for a dynamic program request

For a dynamic request, the URL path specified by the client should refer to an existing external program (usually an executable file with a CGI) used by the web server to generate dynamic content.[29]

Example of a dynamic request using a program file to generate output:

http://www.example.com/cgi-bin/forum.php?action=view&orderby=thread&date=2021-10-15

The client's user agent connects to www.example.com and then sends the following HTTP/1.1 request:

GET /cgi-bin/forum.php?action=view&orderby=thread&date=2021-10-15 HTTP/1.1
Host: www.example.com
Connection: keep-alive

The result is the local file path of the program (in this example, a PHP program):

/home/www/www.example.com/cgi-bin/forum.php

The web server executes that program, passing in the path-info and the query string action=view&orderby=thread&date=2021-10-15 so that the program has the information it needs to run. (In this case, it returns an HTML document containing a view of forum entries ordered by thread from October 15, 2021.) In addition, the web server reads the data sent by the external program and resends that data to the client that made the request.

Manage request message


Once a request has been read, interpreted, and verified, it has to be managed depending on its method, its URL, and its parameters, which may include values of HTTP headers.

In practice, the web server has to handle the request by using one of these response paths (a small dispatch sketch follows the list):[28]

  • if something in the request was not acceptable (in the status line or message headers), the web server has already sent an error response;
  • if the request has a method (e.g. OPTIONS) that can be satisfied by the general code of the web server, then a successful response is sent;
  • if the URL requires authorization, then an authorization error message is sent;
  • if the URL maps to a redirection, then a redirect message is sent;
  • if the URL maps to a dynamic resource (a virtual path or a directory listing), then its handler (an internal module or an external program) is called and the request parameters (query string and path info) are passed to it in order to allow it to reply to that request;
  • if the URL maps to a static resource (usually a file on the file system), then the internal static handler is called to send that file;
  • if the request method is not known or if there is some other unacceptable condition (e.g. resource not found, internal server error, etc.), then an error response is sent.
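
A minimal dispatch sketch following these response paths; the helper functions are hypothetical stubs standing in for real server subsystems:

def url_requires_auth(url):   return url.startswith("/private/")
def redirect_target(url):     return url + "/" if url == "/directory1/directory2" else None
def is_dynamic(url):          return url.startswith("/cgi-bin/")
def static_file_exists(url):  return url == "/path/file.html"

def handle(method, url):
    if method not in ("GET", "HEAD", "OPTIONS", "POST"):
        return 501, "method not implemented"     # unacceptable request
    if method == "OPTIONS":
        return 204, ""                           # satisfied by general code
    if url_requires_auth(url):
        return 401, "authorization required"
    target = redirect_target(url)
    if target is not None:
        return 301, target                       # redirect message
    if is_dynamic(url):
        return 200, "output of dynamic handler"  # module / program called
    if static_file_exists(url):
        return 200, "file content"               # internal static handler
    return 404, "resource not found"             # error response

print(handle("GET", "/path/file.html"))         # (200, 'file content')
print(handle("GET", "/directory1/directory2"))  # (301, '/directory1/directory2/')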

Serve static content

PC clients communicating via the network with a web server serving static content only

If a web server program is capable of serving static content and it has been configured to do so, then it is able to send file content whenever a request message has a valid URL path that matches (after URL mapping, URL translation, and URL redirection) that of an existing file under the root directory of a website, and that file has attributes matching those required by the internal rules of the web server program.[28]

That kind of content is called static because usually it is not changed by the web server when it is sent to clients, and because it remains the same until it is modified (file modification) by some program.

NOTE: when serving static content only, a web server program usually does not change the file contents of the served websites (as they are only read and never written), and so it suffices to support only these HTTP methods:

  • OPTIONS
  • HEAD
  • GET

Responses with static file content can be sped up by a file cache.

Directory index files
Main article: Web server directory index

If a web server program receives a client request message with a URL whose path matches that of an existing directory, and that directory is accessible, and serving directory index file(s) is enabled, then the web server program may try to serve the first of the known (or configured) static index file names (a regular file) found in that directory; if no index file is found or other conditions are not met, then an error message is returned.

The most used names for static index files are index.html, index.htm, and Default.htm.
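
A minimal sketch of index-file resolution, assuming the three candidate names above are the configured ones:

import os

INDEX_NAMES = ["index.html", "index.htm", "Default.htm"]

def find_index(directory):
    for name in INDEX_NAMES:
        candidate = os.path.join(directory, name)
        if os.path.isfile(candidate):
            return candidate  # serve the first matching regular file
    return None               # fall back to a listing or an error message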

Regular files

If a web server program receives a client request message with a URL whose path matches the file name of an existing file, and that file is accessible by the web server program, and its attributes match the internal rules of the web server program, then the web server program can send that file to the client.

Usually, for security reasons, most web server programs are pre-configured to serve only regular files and to avoid special file types like device files, along with symbolic links or hard links to them. The aim is to avoid undesirable side effects when serving static web resources.[30]

Serve dynamic content

PC clients communicating via the network with a web server serving static and dynamic content

If a web server program is capable of serving dynamic content and it has been configured to do so, then it is able to communicate with the proper internal module or external program (associated with the requested URL path) in order to pass to it the parameters of the client request. After that, the web server program reads the data response from it (data that it has generated, often on the fly) and then resends that data to the client program that made the request.[citation needed]

NOTE: when serving static and dynamic content, a web server program usually also has to support the following HTTP method, in order to be able to safely receive data from client(s) and thus to be able to host websites with interactive form(s) that may send large data sets (e.g. lots of data entry or file uploads) to the web server / external programs / modules:

  • POST

In order to be able to communicate with its internal modules and/or external programs, a web server program must have implemented one or more of the many available gateway interfaces (see also Web Server Gateway Interfaces used for dynamic content).

The three standard and historical gateway interfaces are the following.

CGI
An external CGI program is run by the web server program for each dynamic request; the web server program then reads the generated data response from it and resends it to the client.
SCGI
An external SCGI program (usually a process) is started once by the web server program or by some other program / process, and then it waits for network connections; every time there is a new request for it, the web server program makes a new network connection to it in order to send the request parameters and to read its data response, and then the network connection is closed.
FastCGI
An external FastCGI program (usually a process) is started once by the web server program or by some other program / process, and then it waits for a network connection which is established permanently by the web server; through that connection the request parameters are sent and the data responses are read.
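
As an illustration of the CGI model described above, a complete CGI program can be as small as the following sketch (its name and its deployment under /cgi-bin/ are assumptions); the server passes request parameters in environment variables such as QUERY_STRING and reads the response from the program's standard output:

#!/usr/bin/env python3
# Minimal CGI sketch: run by the web server once per request.
import os
from urllib.parse import parse_qs

params = parse_qs(os.environ.get("QUERY_STRING", ""))
action = params.get("action", ["none"])[0]

print("Content-Type: text/html")  # CGI response header
print()                           # blank line ends the headers
print(f"<html><body>action = {action}</body></html>")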
Directory listings
Directory listing dynamically generated by a web server
Main article: Web server directory index

A web server program may be capable of managing the dynamic generation (on the fly) of a directory index listing the files and sub-directories of a directory.[31]

If a web server program is configured to do so, and a requested URL path matches an existing directory, and its access is allowed, and no static index file is found under that directory, then a web page (usually in HTML format) containing the list of files and/or subdirectories of the above-mentioned directory is dynamically generated (on the fly). If it cannot be generated, an error is returned.

Some web server programs allow the customization of directory listings by allowing the usage of a web page template (an HTML document containing placeholders, e.g. $(FILE_NAME), $(FILE_SIZE), etc., that are replaced by the web server with the field values of each file entry found in the directory), e.g. index.tpl, or the usage of HTML documents with embedded source code that is interpreted and executed on the fly, e.g. index.asp, and / or by supporting the usage of dynamic index programs such as CGIs, SCGIs, and FCGIs, e.g. index.cgi, index.php, index.fcgi.

The usage of dynamically generated directory listings is usually avoided, or limited to a few selected directories of a website, because that generation takes many more OS resources than sending a static index page.

The main usage of directory listings is to allow the download of files (usually when their names, sizes, modification date-times, or file attributes may change randomly / frequently) as they are, without requiring further information from the requesting user.[32]
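
A minimal sketch of such an on-the-fly listing generator, roughly what a server-internal handler might produce (without templates or sorting options):

import html
import os

def listing(directory, url_path):
    rows = []
    for name in sorted(os.listdir(directory)):
        size = os.path.getsize(os.path.join(directory, name))
        rows.append(f"<li><a href='{html.escape(name)}'>{html.escape(name)}</a>"
                    f" ({size} bytes)</li>")
    return (f"<html><body><h1>Index of {html.escape(url_path)}</h1>"
            f"<ul>{''.join(rows)}</ul></body></html>")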

Program or module processing

An external program or an internal module (processing unit) can execute some sort of application function that may be used to get data from, or to store data to, one or more data repositories, e.g.:[citation needed]

  • files (file system);
  • databases (DBs);
  • other sources located on the local computer or on other computers.

A processing unit can return any kind of web content, also by using data retrieved from a data repository.[citation needed]

In practice, whenever there is content that may vary depending on one or more parameters contained in the client request or in the configuration settings, it is usually generated dynamically.

Send response message


Web server programs are able to send response messages as replies to client request messages.[24]

An error response message may be sent when a request message could not be successfully read, decoded, analyzed, or executed.[25]

NOTE: the following sections are reported only as examples to help understand what a web server, more or less, does; these sections are by no means exhaustive or complete.

Error message


A web server program may reply to a client request message with many kinds of error messages; anyway, these errors are divided mainly into two categories:

  • errors due to the client request (HTTP status codes 4xx);[33]
  • errors due to the server itself (HTTP status codes 5xx).[34]

When an error response / message is received by a client browser, if it is related to the main user request (e.g. the URL of a web resource such as a web page), then that error message is usually shown in some browser window / message.

URL authorization


A web server program may be able to verify whether the requested URL path:[35]

  • can be freely accessed by everybody;
  • requires user authentication (a request for user credentials, such as user name and password);
  • is forbidden to some or all kinds of users.

If the authorization / access rights feature has been implemented and enabled, and access to the web resource is not granted, then, depending on the required access rights, the web server program:

  • can deny access by sending a specific error message (e.g. access forbidden);
  • may deny access by sending a specific error message (e.g. access unauthorized) that usually forces the client browser to ask the human user to provide the required credentials; if authentication credentials are provided, then the web server program verifies and accepts or rejects them.
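
A minimal sketch of this decision, using HTTP Basic authentication; the access-control table and its credentials are hypothetical placeholders:

import base64

ACL = {"/private/": ("alice", "secret")}  # path prefix -> required credentials

def authorize(url, auth_header=None):
    for prefix, (user, pwd) in ACL.items():
        if url.startswith(prefix):
            if auth_header is None:
                return 401, "Unauthorized"   # browser will ask for credentials
            expected = base64.b64encode(f"{user}:{pwd}".encode()).decode()
            if auth_header == "Basic " + expected:
                return 200, "OK"             # credentials verified and accepted
            return 403, "Forbidden"          # credentials rejected
    return 200, "OK"                         # freely accessible by everybody

print(authorize("/private/report.html"))  # (401, 'Unauthorized')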

URL redirection

Main article: URL redirection

A web server program may have the capability of doing URL redirections to new URLs (new locations), which consists in replying to a client request message with a response message containing a new URL suited to access a valid or existing web resource (the client should then redo the request with the new URL).[36]

URL redirection of location is used:[36]

  • to fix a directory name by adding a final slash '/';[31]
  • to give a new URL for a no-longer-existing URL path, pointing to a new path where that kind of web resource can be found;
  • to give a new URL on another domain when the current domain has too much load.

Example 1: a URL path points to a directory name but it does not have a final slash '/', so the web server sends a redirect to the client in order to instruct it to redo the request with the fixed path name.[31]

From:
  /directory1/directory2
To:
  /directory1/directory2/

Example 2: a whole set of documents has been moved within the website in order to reorganize their file system paths.

From:
  /directory1/directory2/2021-10-08/
To:
  /directory1/directory2/2021/10/08/

Example 3: a whole set of documents has been moved to a new website, and now it is mandatory to use secure HTTPS connections to access them.

From:
  http://www.example.com/directory1/directory2/2021-10-08/
To:
  https://docs.example.com/directory1/2021-10-08/

The above examples are only a few of the possible kinds of redirection.
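
A minimal sketch of the trailing-slash case from Example 1; the document root is the one assumed earlier:

import os

DOC_ROOT = "/home/www/www.example.com"

def redirect_for(url_path):
    # Directory requested without a final slash: tell the client to retry.
    if not url_path.endswith("/") and os.path.isdir(DOC_ROOT + url_path):
        return 301, url_path + "/"
    return None  # no redirection needed

# A result of (301, '/directory1/directory2/') would be sent to the client as:
#   HTTP/1.1 301 Moved Permanently
#   Location: /directory1/directory2/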

Successful message


A web server program is able to reply to a valid client request message with a successful message, optionally containing the requested web resource data.[37]

If web resource data is sent back to the client, then it can be static content or dynamic content depending on how it has been retrieved (from a file or from the output of some program / module).

Content cache


In order to speed up web server responses by lowering average HTTP response times and the hardware resources used, many popular web servers implement one or more content caches, each one specialized in a content category.[38][39]

Content is usually cached by its origin, e.g. static content from files (file cache) and dynamic content from the output of programs / modules (dynamic cache); see the following sections.

File cache


Historically, static content found in files that had to be accessed frequently, randomly, and quickly has been stored mostly on electro-mechanical disks since the mid-late 1960s / 1970s; regrettably, reads from and writes to those kinds of devices have always been considered very slow operations compared to RAM speed, and so, since early OSs, first disk caches and then also OS file cache sub-systems were developed to speed up I/O operations on frequently accessed data / files.

Even with the aid of an OS file cache, the relative / occasional slowness of I/O operations involving directories and files stored on disks soon became a bottleneck to the performance increases expected from top-level web servers, especially since the mid-late 1990s, when Internet web traffic started to grow exponentially along with the constant increase in the speed of Internet / network lines.

The problem of how to further speed up the serving of static files, thus increasing the maximum number of requests/responses per second (RPS), started to be studied / researched in the mid-1990s, with the aim of proposing useful cache models that could be implemented in web server programs.[40]

In practice, nowadays, many popular / high-performance web server programs include their own userland file cache, tailored for web server usage and using their own specific implementation and parameters.[41][42][43]

The widespread adoption of RAID and/or fast solid-state drives (storage hardware with very high I/O speed) has slightly reduced, but of course not eliminated, the advantage of having a file cache incorporated in a web server.
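
A minimal sketch of a userland file cache with a simple LRU eviction policy; real caches also track file modification times, entry sizes, and memory limits:

from collections import OrderedDict

class FileCache:
    def __init__(self, max_entries=128):
        self.entries = OrderedDict()   # path -> file content (bytes)
        self.max_entries = max_entries

    def read(self, path):
        if path in self.entries:
            self.entries.move_to_end(path)     # mark as recently used
            return self.entries[path]
        with open(path, "rb") as f:            # cache miss: read from disk
            data = f.read()
        self.entries[path] = data
        if len(self.entries) > self.max_entries:
            self.entries.popitem(last=False)   # evict least recently used
        return data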

Dynamic cache


Dynamic content, output by an internal module or an external program, may not always change very frequently (given a unique URL with keys / parameters), and so, maybe for a while (e.g. from 1 second to several hours or more), the resulting output can be cached in RAM or even on a fast disk.[44]

The typical usage of a dynamic cache is when a website has dynamic web pages about news, weather, images, maps, etc. that do not change frequently (e.g. every n minutes) and that are accessed by a huge number of clients per minute / hour; in those cases it is useful to return cached content too (without calling the internal module or the external program), because clients often do not have an updated copy of the requested content in their browser caches.[45]

Anyway, in most cases those kinds of caches are implemented by external servers (e.g. a reverse proxy) or by storing dynamic data output on separate computers managed by specific applications (e.g. memcached), in order not to compete for hardware resources (CPU, RAM, disks) with the web server(s).[46][47]
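
A minimal sketch of a TTL-based dynamic cache keyed by the full URL (path plus query string); the time-to-live value and the generator function are assumptions:

import time

class DynamicCache:
    def __init__(self, ttl_seconds=60.0):
        self.ttl = ttl_seconds
        self.store = {}   # url -> (timestamp, generated output)

    def get(self, url, generate):
        now = time.monotonic()
        hit = self.store.get(url)
        if hit is not None and now - hit[0] < self.ttl:
            return hit[1]               # fresh enough: skip the program call
        output = generate(url)          # stale or missing: regenerate
        self.store[url] = (now, output)
        return output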

Kernel-mode and user-mode web servers


Web server software can either be incorporated into the OS and executed in kernel space, or it can be executed in user space (like other regular applications).

Web servers that run in kernel mode (usually called kernel-space web servers) can have direct access to kernel resources, and so they can be, in theory, faster than those running in user mode; anyway, there are disadvantages in running a web server in kernel mode, e.g. difficulties in developing (debugging) the software, and the fact that run-time critical errors may lead to serious problems in the OS kernel.

Web servers that run in user mode have to ask the system for permission to use more memory or more CPU resources. Not only do these requests to the kernel take time, but they might not always be satisfied, because the system reserves resources for its own usage and has the responsibility to share hardware resources with all the other running applications. Executing in user mode can also mean using more buffer/data copies (between user space and kernel space), which can lead to a decrease in the performance of a user-mode web server.

Nowadays almost all web server software is executed in user mode (because many of the aforementioned small disadvantages have been overcome by faster hardware, new OS versions, much faster OS system calls, and new optimized web server software). See also the comparison of web server software to discover which of them run in kernel mode or in user mode (also referred to as kernel space or user space).

Performance

To improve the user experience (on the client / browser side), a web server should reply quickly (as soon as possible) to client requests; unless content response is throttled (by configuration) for some types of files (e.g. big or huge files), the returned data content should also be sent as fast as possible (high transfer speed).

In other words, a web server should always be very responsive, even under a high load of web traffic, in order to keep the total user wait (the sum of browser time + network time + web server response time) for a response as low as possible.

Performance metrics


For web server software, the main key performance metrics (measured under varying operating conditions) usually are at least the following:[48]

  • number of requests per second (RPS, similar to QPS, depending on HTTP version and configuration, type of HTTP requests, and other operating conditions);
  • number of connections per second (CPS): the number of connections per second accepted by the web server (useful when using HTTP/1.0 or HTTP/1.1 with a very low limit of requests / responses per connection, i.e. 1..20);
  • network latency + response time for each new client request; usually the benchmark tool shows how many requests have been satisfied within a range of time lapses (e.g. within 1 ms, 3 ms, 5 ms, 10 ms, 20 ms, 30 ms, 40 ms) and / or the shortest, average, and longest response times;
  • throughput of responses, in bytes per second.

Among the operating conditions, the number (1..n) of concurrent client connections used during a test is an important parameter, because it allows one to correlate the concurrency level supported by the web server with the results of the tested performance metrics.

Software efficiency


The specific web server software design and model adopted, e.g.:

  • single process or multi-process;
  • single thread (no threads) or multi-thread for each process;
  • usage of coroutines or not;

... along with the other programming techniques used to implement a web server program, can greatly bias the performance, and in particular the scalability level, that can be achieved under heavy load or when using high-end hardware (many CPUs and disks and lots of RAM).

In practice, some web server software models may require more OS resources (especially more CPUs and more RAM) than others to be able to work well and thus achieve target performance.

Operating conditions


There are many operating conditions that can affect the performance of a web server; performance values may vary depending on, e.g.:

  • the settings of the web server (including whether the log file is enabled or not, etc.);
  • the HTTP version used by client requests;
  • the average HTTP request type (method, length of HTTP headers, and optional body);
  • whether the requested content is static or dynamic;
  • whether the content is cached or not cached (by the server and/or by the client);
  • whether the content is compressed on the fly (when transferred), pre-compressed (i.e. when a file resource is stored on disk already compressed so that the web server can send that file directly to the network with the only indication that its content is compressed), or not compressed at all;
  • whether the connections are encrypted or not;
  • the average network speed between the web server and its clients;
  • the number of active TCP connections;
  • the number of active processes managed by the web server (including external CGI, SCGI, and FCGI programs);
  • the hardware and software limitations or settings of the OS of the computer(s) on which the web server runs;
  • other minor conditions.

Benchmarking

Main article: Web server benchmarking

The performance of a web server is typically benchmarked by using one or more of the available automated load testing tools.

Load limits


A web server (program installation) usually has predefined load limits for each combination of operating conditions, also because it is limited by OS resources and because it can handle only a limited number of concurrent client connections (usually between 2 and several tens of thousands for each active web server process; see also the C10k problem and the C10M problem).

When a web server is near to or over its load limits, it gets overloaded and so it may become unresponsive.

Causes of overload


At any time, web servers can be overloaded due to one or more of the following causes.

  • Excess legitimate web traffic: thousands or even millions of clients connecting to the website in a short amount of time, e.g. the Slashdot effect.
  • Distributed denial-of-service attacks: a denial-of-service attack (DoS attack) or distributed denial-of-service attack (DDoS attack) is an attempt to make a computer or network resource unavailable to its intended users.
  • Computer worms, which sometimes cause abnormal traffic because of millions of infected computers (not coordinated among themselves).
  • XSS worms, which can cause high traffic because of millions of infected browsers or web servers.
  • Internet bots: traffic not filtered / limited on large websites with very few network resources (e.g. bandwidth) and/or hardware resources (CPUs, RAM, disks).
  • Internet (network) slowdowns (e.g. due to packet losses), so that client requests are served more slowly and the number of connections increases so much that server limits are reached.
  • Web servers serving dynamic content and waiting for slow responses coming from back-end computer(s) (e.g. databases), maybe because of too many queries mixed with too many inserts or updates of DB data; in these cases web servers have to wait for back-end data responses before replying to HTTP clients, but during these waits too many new client connections / requests arrive, and so they become overloaded.
  • Web servers' (computers') partial unavailability: this can happen because of required or urgent maintenance or upgrades, or hardware or software failures such as back-end (e.g. database) failures; in these cases the remaining web servers may get too much traffic and become overloaded.

Symptoms of overload


The symptoms of an overloaded web server are usually the following.

  • Requests are served with (possibly long) delays (from 1 second to a few hundred seconds).
  • The web server returns an HTTP error code, such as 500, 502,[49][50] 503,[51] 504,[52] 408, or even an intermittent 404.
  • The web server refuses or resets (interrupts) TCP connections before it returns any content.
  • In very rare cases, the web server returns only a part of the requested content. This behavior can be considered a bug, even if it usually arises as a symptom of overload.

Anti-overload techniques


To partially overcome above-average load limits and to prevent overload, most popular websites use common techniques like the following.

  • Tuning OS parameters for hardware capabilities and usage.
  • Tuning web server parameters to improve their security and performance.
  • Deploying web cache techniques (not only for static content but, whenever possible, for dynamic content too).
  • Managing network traffic by using:
    • firewalls to block unwanted traffic coming from bad IP sources or having bad patterns;
    • HTTP traffic managers to drop, redirect, or rewrite requests having bad HTTP patterns;
    • bandwidth management and traffic shaping, in order to smooth down peaks in network usage.
  • Using different domain names, IP addresses, and computers to serve different kinds (static and dynamic) of content; the aim is to separate big or huge files (download.*) (that domain might also be replaced by a CDN) from small and medium-sized files (static.*) and from the main dynamic site (maybe one where some content is stored in a backend database) (www.*); the idea is to be able to efficiently serve big or huge (over 10 – 1000 MB) files (maybe throttling downloads) and to fully cache small and medium-sized files, without affecting the performance of the dynamic site under heavy load, by using different settings for each (group of) web server computer(s), e.g.:
    • https://download.example.com
    • https://static.example.com
    • https://www.example.com
  • Using many web servers (computers) grouped together behind a load balancer so that they act as, or are seen as, one big web server.
  • Adding more hardware resources (i.e. RAM, fast disks) to each computer.
  • Using more efficient computer programs for web servers (see also: software efficiency).
  • Using the most efficient Web Server Gateway Interface to process dynamic requests (spawning one or more external programs every time a dynamic page is retrieved kills performance).
  • Using other programming techniques and workarounds, especially if dynamic content is involved, to speed up the HTTP responses (e.g. by avoiding dynamic calls to retrieve objects, such as style sheets, images, and scripts, that never change or change very rarely, by copying that content to static files once and then keeping them synchronized with the dynamic content).
  • Using the latest efficient versions of HTTP (e.g. beyond the common HTTP/1.1, also enabling HTTP/2 and maybe HTTP/3 too, whenever the available web server software has reliable support for the latter two protocols), in order to greatly reduce the number of TCP/IP connections started by each client and the size of the data exchanged (because of the more compact HTTP headers representation and maybe data compression).

Caveats about using HTTP/2 and HTTP/3 protocols

Even if the newer HTTP (2 and 3) protocols usually generate less network traffic for each request / response, they may require more OS resources (i.e. RAM and CPU) in the web server software (because of encrypted data, lots of stream buffers, and other implementation details); besides this, HTTP/2, and maybe HTTP/3 too, depending also on the settings of the web server and the client program, may not be the best options for uploading big or huge files at very high speed, because their data streams are optimized for concurrency of requests, and so, in many cases, using HTTP/1.1 TCP/IP connections may lead to better results / higher upload speeds (your mileage may vary).[53][54]

Market share

Further information on HTTP server programs: Category:Web server software
Chart: market share of all sites for the most popular web servers, 2005–2021
Chart: market share of all sites for the most popular web servers, 1995–2005

Below are the latest statistics on the market share of all sites for the top web servers on the Internet, by Netcraft.

Web server: market share of all sites

Date | nginx (Nginx, Inc.) | Apache (ASF) | OpenResty (OpenResty Software Foundation) | Cloudflare Server (Cloudflare, Inc.) | IIS (Microsoft) | GWS (Google) | Others
October 2021[55] | 34.95% | 24.63% | 6.45% | 4.87% | 4.00% (*) | 4.00% (*) | Less than 22%
February 2021[56] | 34.54% | 26.32% | 6.36% | 5.0% | 6.5% | 3.90% | Less than 18%
February 2020[57] | 36.48% | 24.5% | 4.00% | 3.0% | 14.21% | 3.18% | Less than 15%
February 2019[58] | 25.34% | 26.16% | N/A | N/A | 28.42% | 1.66% | Less than 19%
February 2018[59] | 24.32% | 27.45% | N/A | N/A | 34.50% | 1.20% | Less than 13%
February 2017[60] | 19.42% | 20.89% | N/A | N/A | 43.16% | 1.03% | Less than 15%
February 2016[61] | 16.61% | 32.80% | N/A | N/A | 29.83% | 2.21% | Less than 19%

NOTE: (*) percentage rounded to an integer, because its decimal value is not publicly reported by the source page (only the rounded value is reported in the graph).

See also


Standard Web Server Gateway Interfaces used for dynamic content:

  • CGI – Common Gateway Interface
  • SCGI – Simple Common Gateway Interface
  • FastCGI – Fast Common Gateway Interface

A few other Web Server Interfaces (server- or programming-language-specific) used for dynamic content:

  • SSI – Server Side Includes: rarely used; static HTML documents containing SSI directives are interpreted by the server software to include small dynamic data on the fly when pages are served, e.g. date and time, other static file contents, etc.
  • SAPI – Server Application Programming Interface:
    • ISAPI – Internet Server Application Programming Interface
    • NSAPI – Netscape Server Application Programming Interface
  • PSGI – Perl Web Server Gateway Interface
  • WSGI – Python Web Server Gateway Interface
  • Rack – Rack Web Server Gateway Interface (Ruby)
  • JSGI – JavaScript Web Server Gateway Interface
  • Java Servlet, JavaServer Pages
  • Active Server Pages, ASP.NET

References

  1. ^ a b c Nancy J. Yeager; Robert E. McGrath (1996). Web Server Technology. Morgan Kaufmann. ISBN 1-55860-376-X. Archived from the original on 20 January 2023. Retrieved 22 January 2021.
  2. ^ William Nelson; Arvind Srinivasan; Murthy Chintalapati (2009). Sun Web Server: The Essential Guide. Pearson Education. ISBN 978-0-13-712892-1. Archived from the original on 20 January 2023. Retrieved 14 October 2021.
  3. ^ Zolfagharifard, Ellie (24 November 2018). "'Father of the web' Sir Tim Berners-Lee on his plan to fight fake news". The Telegraph. London. ISSN 0307-1235. Archived from the original on 11 January 2022. Retrieved 1 February 2019.
  4. ^ "History of Computers and Computing, Internet, Birth, The World Wide Web of Tim Berners-Lee". history-computer.com. Archived from the original on 4 January 2019. Retrieved 1 February 2019.
  5. ^ a b c Tim Berner-Lee (1992). "WWW Project History (original)". CERN (World Wide Web project). Archived from the original on 8 December 2021. Retrieved 20 December 2021.
  6. ^ a b Tim Berner-Lee (20 August 1991). "WorldWideWeb wide-area hypertext app available (announcement)". CERN (World Wide Web project). Archived from the original on 2 December 2021. Retrieved 16 October 2021.
  7. ^ a b Web Administrator. "Web History". CERN (World Wide Web project). Archived from the original on 2 December 2021. Retrieved 16 October 2021.
  8. ^ Tim Berner-Lee (2 August 1991). "Qualifiers on hypertext links ..." CERN (World Wide Web project). Archived from the original on 7 December 2021. Retrieved 16 October 2021.
  9. ^ Ali Mesbah (2009). Analysis and Testing of Ajax-based Single-page Web Applications. ISBN 978-90-79982-02-8. Retrieved 18 December 2021.
  10. ^ a b Robert H'obbes' Zakon. "Hobbes' Internet Timeline v5.1 (WWW Growth) NOTE: till 1996 number of web servers = number of web sites". ISOC. Archived from the original on 15 August 2000. Retrieved 18 December 2021.
  11. ^ Tim Smith; François Flückiger. "Licensing the Web". CERN (World Wide Web project). Archived from the original on 6 December 2021. Retrieved 16 October 2021.
  12. ^ "NCSA httpd". NCSA (web archive). Archived from the original on 1 August 2010. Retrieved 16 December 2021.
  13. ^ "About the Apache HTTPd server: How Apache Came to be". Apache: HTTPd server project. 1997. Archived from the original on 7 June 2008. Retrieved 17 December 2021.
  14. ^ "Web Server Survey, NOTE: number of active web sites in year 2000 has been interpolated". Netcraft. 22 December 2021. Archived from the original on 27 December 2021. Retrieved 27 December 2021.
  15. ^ "Netcraft: web server software (1996)". Netcraft (web archive). Archived from the original on 30 December 1996. Retrieved 16 December 2021.
  16. ^ "Overview of new features in Apache 2.2". Apache: HTTPd server project. 2005. Archived from the original on 27 November 2021. Retrieved 16 December 2021.
  17. ^ "Overview of new features in Apache 2.4". Apache: HTTPd server project. 2012. Archived from the original on 26 November 2021. Retrieved 16 December 2021.
  18. ^ "Connections, persistent connections: practical considerations". RFC 2616, Hypertext Transfer Protocol -- HTTP/1.1. pp. 46–47. sec. 8.1.4. doi:10.17487/RFC2616. RFC 2616.
  19. ^ "Maximum concurrent connections to the same domain for browsers". 2017. Archived from the original on 21 December 2021. Retrieved 21 December 2021.
  20. ^ "Linux Web Server Performance Benchmark - 2016 results". RootUsers. 8 March 2016. Archived from the original on 23 December 2021. Retrieved 22 December 2021.
  21. ^ a b "Will HTTP/2 replace HTTP/1.x?". IETF HTTP Working Group. Archived from the original on 27 September 2014. Retrieved 22 December 2021.
  22. ^ a b "Implementations of HTTP/2 in client and server software". IETF HTTP Working Group. Archived from the original on 23 December 2021. Retrieved 22 December 2021.
  23. ^ "Why just one TCP connection?". IETF HTTP Working Group. Archived from the original on 27 September 2014. Retrieved 22 December 2021.
  24. ^ a b "Client/Server Messaging". RFC 7230, HTTP/1.1: Message Syntax and Routing. pp. 7–8. sec. 2.1. doi:10.17487/RFC7230. RFC 7230.
  25. ^ a b "Handling Incomplete Messages". RFC 7230, HTTP/1.1: Message Syntax and Routing. p. 34. sec. 3.4. doi:10.17487/RFC7230. RFC 7230.
  26. ^ "Message Parsing Robustness". RFC 7230, HTTP/1.1: Message Syntax and Routing. pp. 34–35. sec. 3.5. doi:10.17487/RFC7230. RFC 7230.
  27. ^ R. Bowen (29 September 2002). "URL Mapping" (PDF). Apache software foundation. Archived (PDF) from the original on 15 November 2021. Retrieved 15 November 2021.
  28. ^ a b c d e "Mapping URLs to Filesystem Locations". Apache: HTTPd server project. 2021. Archived from the original on 20 October 2021. Retrieved 19 October 2021.
  29. ^ "Dynamic Content with CGI". Apache: HTTPd server project. 2021. Archived from the original on 15 November 2021. Retrieved 19 October 2021.
  30. ^ Chris Shiflett (2003). HTTP Developer's Handbook. Sams Publishing. ISBN 0-672-32454-7. Archived from the original on 20 January 2023. Retrieved 9 December 2021.
  31. ^ a b c ASF Infrabot (22 May 2019). "Directory listings". Apache foundation: HTTPd server project. Archived from the original on 7 June 2019. Retrieved 16 November 2021.
  32. ^ "Apache: directory listing to download files". Apache: HTTPd server. Archived from the original on 2 December 2021. Retrieved 16 December 2021.
  33. ^ "Client Error 4xx". RFC 7231, HTTP/1.1: Semantics and Content. p. 58. sec. 6.5. doi:10.17487/RFC7231. RFC 7231.
  34. ^ "Server Error 5xx". RFC 7231, HTTP/1.1: Semantics and Content. pp. 62–63. sec. 6.6. doi:10.17487/RFC7231. RFC 7231.
  35. ^ "Introduction". RFC 7235, HTTP/1.1: Authentication. p. 3. sec. 1. doi:10.17487/RFC7235. RFC 7235.
  36. ^ a b "Response Status Codes: Redirection 3xx". RFC 7231, HTTP/1.1: Semantics and Content. pp. 53–54. sec. 6.4. doi:10.17487/RFC7231. RFC 7231.
  37. ^ "Successful 2xx". RFC 7231, HTTP/1.1: Semantics and Content. pp. 51–54. sec. 6.3. doi:10.17487/RFC7231. RFC 7231.
  38. ^ "Caching Guide". Apache: HTTPd server project. 2021. Archived from the original on 9 December 2021. Retrieved 9 December 2021.
  39. ^ "NGINX Content Caching". F5 NGINX. 2021. Archived from the original on 9 December 2021. Retrieved 9 December 2021.
  40. ^ Evangelos P. Markatos (1996). "Main Memory Caching of Web Documents". Computer Networks and ISDN Systems. Archived from the original on 20 January 2023. Retrieved 9 December 2021.
  41. ^ "iPlanet Web Server 7.0.9: file-cache". Oracle. 2010. Archived from the original on 9 December 2021. Retrieved 9 December 2021.
  42. ^ "Apache Module mod_file_cache". Apache: HTTPd server project. 2021. Archived from the original on 9 December 2021. Retrieved 9 December 2021.
  43. ^ "HTTP server: configuration: file cache". GNU. 2021. Archived from the original on 9 December 2021. Retrieved 9 December 2021.
  44. ^ "Apache Module mod_cache_disk". Apache: HTTPd server project. 2021. Archived from the original on 9 December 2021. Retrieved 9 December 2021.
  45. ^ "What is dynamic cache?". Educative. 2021. Archived from the original on 9 December 2021. Retrieved 9 December 2021.
  46. ^ "Dynamic Cache Option Tutorial". Siteground. 2021. Archived from the original on 20 January 2023. Retrieved 9 December 2021.
  47. ^ Arun Iyengar; Jim Challenger (2000). "Improving Web Server Performance by Caching Dynamic Data". Usenix. Retrieved 9 December 2021.
  48. ^ Jussara M. Almeida; Virgilio Almeida; David J. Yates (7 July 1997). "WebMonitor: a tool for measuring World Wide Web server performance". First Monday. doi:10.5210/fm.v2i7.539. Archived from the original on 4 November 2021. Retrieved 4 November 2021.
  49. ^ Fisher, Tim; Lifewire. "Getting a 502 Bad Gateway Error? Here's What to Do". Lifewire. Archived from the original on 23 February 2017. Retrieved 1 February 2019.
  50. ^ "What is a 502 bad gateway and how do you fix it?". IT PRO. Archived from the original on 20 January 2023. Retrieved 1 February 2019.
  51. ^ Fisher, Tim; Lifewire. "Getting a 503 Service Unavailable Error? Here's What to Do". Lifewire. Archived from the original on 20 January 2023. Retrieved 1 February 2019.
  52. ^ Fisher, Tim; Lifewire. "Getting a 504 Gateway Timeout Error? Here's What to Do". Lifewire. Archived from the original on 23 April 2021. Retrieved 1 February 2019.
  53. ^ many (24 January 2021). "Slow uploads with HTTP/2". GitHub. Archived from the original on 16 November 2021. Retrieved 15 November 2021.
  54. ^ Junho Choi (24 August 2020). "Delivering HTTP/2 upload speed improvements". Cloudflare. Archived from the original on 16 November 2021. Retrieved 15 November 2021.
  55. ^ "October 2021 Web Server Survey". Netcraft. 15 October 2021. Archived from the original on 15 November 2021. Retrieved 15 November 2021.
  56. ^ "February 2021 Web Server Survey". Netcraft. 26 February 2021. Archived from the original on 15 April 2021. Retrieved 8 April 2021.
  57. ^ "February 2020 Web Server Survey". Netcraft. 20 February 2020. Archived from the original on 17 April 2021. Retrieved 8 April 2021.
  58. ^ "February 2019 Web Server Survey". Netcraft. 28 February 2019. Archived from the original on 15 April 2021. Retrieved 8 April 2021.
  59. ^ "February 2018 Web Server Survey". Netcraft. 13 February 2018. Archived from the original on 17 April 2021. Retrieved 8 April 2021.
  60. ^ "February 2017 Web Server Survey". Netcraft. 27 February 2017. Archived from the original on 14 March 2017. Retrieved 13 March 2017.
  61. ^ "February 2016 Web Server Survey". Netcraft. 22 February 2016. Archived from the original on 27 January 2022. Retrieved 27 January 2022.
