A computer network diagram of clients communicating with a server via the Internet
The client–server model is a distributed application structure that partitions tasks or workloads between the providers of a resource or service, called servers, and service requesters, called clients.[1] Often clients and servers communicate over a computer network on separate hardware, but both client and server may reside on the same device. A server host runs one or more server programs, which share their resources with clients. A client usually does not share its computing resources, but requests content or service from a server and may share its own content as part of the request. Clients therefore initiate communication sessions with servers, which await incoming requests. Examples of computer applications that use the client–server model are email, network printing, and the World Wide Web.
The server component provides a function or service to one or many clients, which initiate requests for such services. Servers are classified by the services they provide. For example, a web server serves web pages and a file server serves computer files. A shared resource may be any of the server computer's software and electronic components, from programs and data to processors and storage devices. The sharing of the resources of a server constitutes a service.
Whether a computer is a client, a server, or both is determined by the nature of the application that requires the service functions. For example, a single computer can run web server and file server software at the same time to serve different data to clients making different kinds of requests. Client software can also communicate with server software within the same computer.[2] Communication between servers, such as to synchronize data, is sometimes called inter-server or server-to-server communication.
Generally, a service is an abstraction of computer resources, and a client does not have to be concerned with how the server performs while fulfilling the request and delivering the response. The client only has to understand the response based on the relevant application protocol, i.e. the content and the formatting of the data for the requested service.
Clients and servers exchange messages in a request–response messaging pattern: the client sends a request, and the server returns a response. This exchange of messages is an example of inter-process communication. To communicate, the computers must share a common language and follow rules so that both the client and the server know what to expect. The language and rules of communication are defined in a communications protocol. These protocols operate in the application layer. The application-layer protocol defines the basic patterns of the dialogue. To formalize the data exchange even further, the server may implement an application programming interface (API).[3] The API is an abstraction layer for accessing a service. By restricting communication to a specific content format, it facilitates parsing. By abstracting access, it facilitates cross-platform data exchange.[4]
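The request–response pattern can be sketched with a minimal TCP example. The one-line protocol here (a "PING" request answered by a "PONG" response) is invented for illustration; real application-layer protocols such as HTTP define far richer message formats, but the shape of the exchange is the same: the client initiates, the server awaits and replies.

```python
import socket
import threading

def serve_once(srv):
    # Server side: await one incoming request, then answer it.
    conn, _addr = srv.accept()
    with conn:
        request = conn.recv(1024).decode()
        # The "protocol" is a single request line and a single reply line.
        reply = "PONG" if request == "PING" else "ERROR"
        conn.sendall(reply.encode())

def client_request(port, message):
    # Client side: initiate the session, send a request, read the response.
    with socket.create_connection(("127.0.0.1", port)) as cli:
        cli.sendall(message.encode())
        return cli.recv(1024).decode()

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]

t = threading.Thread(target=serve_once, args=(srv,))
t.start()
response = client_request(port, "PING")
t.join()
srv.close()
print(response)  # PONG
```

Note that the roles are asymmetric by design: only the client calls `connect`, and only the server calls `listen` and `accept`.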
A server may receive requests from many distinct clients in a short period. A computer can only perform a limited number of tasks at any moment, and relies on a scheduling system to prioritize and accommodate incoming requests from clients. To prevent abuse and maximize availability, server software may limit the availability to clients. Denial-of-service attacks are designed to exploit a server's obligation to process requests by overloading it with excessive request rates. Encryption should be applied if sensitive information is to be communicated between the client and the server.
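One common way a server limits availability per client is a token bucket: each client gets a budget of requests that refills over time, and requests beyond the budget are rejected. The sketch below is a hypothetical, in-memory rate limiter (class name and parameters are illustrative, not from any specific library); the refill rate is set to zero in the demonstration so the outcome is deterministic.

```python
import time
from collections import defaultdict

class TokenBucket:
    """Hypothetical per-client rate limiter: each client may make up to
    `capacity` requests, with tokens refilled at `rate` per second."""
    def __init__(self, capacity=5, rate=1.0):
        self.capacity = capacity
        self.rate = rate
        # Each client starts with a full bucket, timestamped at first use.
        self.buckets = defaultdict(lambda: (capacity, time.monotonic()))

    def allow(self, client_id):
        tokens, last = self.buckets[client_id]
        now = time.monotonic()
        # Refill tokens for the time elapsed since the last request.
        tokens = min(self.capacity, tokens + (now - last) * self.rate)
        if tokens >= 1:
            self.buckets[client_id] = (tokens - 1, now)
            return True
        self.buckets[client_id] = (tokens, now)
        return False

limiter = TokenBucket(capacity=3, rate=0.0)  # no refill, for a deterministic demo
results = [limiter.allow("client-A") for _ in range(5)]
print(results)  # [True, True, True, False, False]
```

A server would call `allow` before processing each request and return an error (e.g. HTTP 429) when it yields `False`, which blunts excessive request rates from any single client.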
When a bank customer accesses online banking services with a web browser (the client), the client initiates a request to the bank's web server. The customer's login credentials are compared against a database, and the web server accesses that database server as a client. An application server interprets the returned data by applying the bank's business logic and provides the output to the web server. Finally, the web server returns the result to the client web browser for display.
In each step of this sequence of client–server message exchanges, a computer processes a request and returns data. This is the request–response messaging pattern. When all the requests are met, the sequence is complete.
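The banking example above chains three server roles, with the middle tiers acting as clients of the tiers below them. A minimal sketch of that chain, with each tier as an in-process function and an in-memory dictionary standing in for the database (all names, credentials, and balances are invented):

```python
# Hypothetical in-memory stand-in for the bank's database.
ACCOUNTS_DB = {"alice": {"password": "s3cret", "balance": 120.50}}

def database_server(username):
    # Bottom tier: returns the raw record for the tiers above.
    return ACCOUNTS_DB.get(username)

def application_server(username, password):
    # Middle tier: acts as a client of the database server, then applies
    # the business logic (here, just a credential check).
    record = database_server(username)
    if record is None or record["password"] != password:
        return {"ok": False}
    return {"ok": True, "balance": record["balance"]}

def web_server(request):
    # Top tier: the server the browser talks to; it is itself a client
    # of the application tier.
    result = application_server(request["user"], request["pass"])
    return f"Balance: {result['balance']}" if result["ok"] else "Login failed"

print(web_server({"user": "alice", "pass": "s3cret"}))  # Balance: 120.5
print(web_server({"user": "alice", "pass": "wrong"}))   # Login failed
```

Each function call models one request–response exchange; in a real deployment each tier would run on separate hardware and communicate over the network.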
Server-side refers to programs and operations that run on the server, in contrast to client-side programs and operations, which run on the client.
"Server-side software" refers to a computer application, such as a web server, that runs on remote server hardware, reachable from a user's local computer, smartphone, or other device. Operations may be performed server-side because they require access to information or functionality that is not available on the client, or because performing such operations on the client side would be slow, unreliable, or insecure.
Client and server programs may be commonly available ones, such as free or commercial web servers and web browsers, communicating with each other using standardized protocols. Alternatively, programmers may write their own server, client, and communications protocol that can only be used with one another.
Server-side operations include both those that are carried out in response to client requests, and non-client-oriented operations such as maintenance tasks.[5][6]
In a computer security context, server-side vulnerabilities or attacks refer to those that occur on a server computer system, rather than on the client side or in between the two. For example, an attacker might exploit an SQL injection vulnerability in a web application in order to maliciously change or gain unauthorized access to data in the server's database. Alternatively, an attacker might break into a server system using vulnerabilities in the underlying operating system and then be able to access the database and other files in the same manner as authorized administrators of the server.[7][8][9]
In the case of distributed computing projects such as SETI@home and the Great Internet Mersenne Prime Search, the bulk of the operations occurs on the client side, while the servers are responsible for coordinating the clients, sending them data to analyze, receiving and storing results, and providing reporting functionality to project administrators. In the case of an Internet-dependent user application like Google Earth, querying and display of map data take place on the client side, while the server is responsible for permanent storage of map data and for resolving user queries into map data to be returned to the client.
Web applications and services can be implemented in almost any language, as long as they can return data to standards-based web browsers (possibly via intermediary programs) in formats which the browsers can use.
Typically, a client is a computer application, such as a web browser, that runs on a user's local computer, smartphone, or other device, and connects to a server as necessary. Operations may be performed client-side because they require access to information or functionality that is available on the client but not on the server, because the user needs to observe the operations or provide input, or because the server lacks the processing power to perform the operations in a timely manner for all of the clients it serves. Additionally, if operations can be performed by the client without sending data over the network, they may take less time, use less bandwidth, and incur a lesser security risk.
When the server serves data in a commonly used manner, for example according to standard protocols such as HTTP or FTP, users may have their choice of a number of client programs (e.g. most modern web browsers can request and receive data using HTTP). In the case of more specialized applications, programmers may write their own server, client, and communications protocol that can only be used with one another.
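Serving data over a standard protocol is what lets any compliant client interoperate with the server. As a sketch, the snippet below runs a minimal HTTP server using Python's standard library and fetches from it with `urllib`, one of many possible HTTP clients; any browser pointed at the same address would receive the same response. The response body is an invented placeholder.

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    # A minimal HTTP server: because it speaks the standard protocol,
    # any standards-based client (browser, curl, urllib, ...) can use it.
    def do_GET(self):
        body = b"hello from a standard HTTP server"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging for the demo

server = HTTPServer(("127.0.0.1", 0), Handler)  # port 0: pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/"
with urllib.request.urlopen(url) as resp:  # urllib is just one client choice
    text = resp.read().decode()
server.shutdown()
print(text)
```

Swapping in a different client requires no change on the server side; that interchangeability is the practical payoff of standardized protocols.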
Programs that run on a user's local computer without ever sending or receiving data over a network are not considered clients, and so the operations of such programs would not be termed client-side operations.
In a computer security context, client-side vulnerabilities or attacks refer to those that occur on the client's (the user's) computer system, rather than on the server side or in between the two. For example, if a server contained an encrypted file or message which could only be decrypted using a key housed on the user's computer system, a client-side attack would normally be an attacker's only opportunity to gain access to the decrypted contents. For instance, the attacker might cause malware to be installed on the client system, allowing the attacker to view the user's screen, record the user's keystrokes, and steal copies of the user's encryption keys. Alternatively, an attacker might employ cross-site scripting vulnerabilities to execute malicious code on the client's system without needing to install any permanently resident malware.[7][8][9]
Distributed computing projects such as SETI@home and the Great Internet Mersenne Prime Search, as well as Internet-dependent applications like Google Earth, rely primarily on client-side operations. They initiate a connection with the server (either in response to a user query, as with Google Earth, or in an automated fashion, as with SETI@home) and request some data. The server selects a data set (a server-side operation) and sends it back to the client. The client then analyzes the data (a client-side operation) and, when the analysis is complete, displays it to the user (as with Google Earth) and/or transmits the results of calculations back to the server (as with SETI@home).
An early form of client–server architecture is remote job entry, dating at least to OS/360 (announced in 1964), where the request was to run a job and the response was the output.
While formulating the client–server model in the 1960s and 1970s, computer scientists building ARPANET (at the Stanford Research Institute) used the terms server-host (or serving host) and user-host (or using host), and these appear in the early documents RFC 5[10] and RFC 4.[11] This usage continued at Xerox PARC in the mid-1970s.
One context in which researchers used these terms was in the design of a computer network programming language called the Decode-Encode Language (DEL).[10] The purpose of this language was to accept commands from one computer (the user-host), which would return status reports to the user as it encoded the commands in network packets. Another DEL-capable computer, the server-host, received the packets, decoded them, and returned formatted data to the user-host. A DEL program on the user-host received the results to present to the user. This is a client–server transaction. Development of DEL was just beginning in 1969, the year that the United States Department of Defense established ARPANET (predecessor of the Internet).
Client-host and server-host have subtly different meanings than client and server. A host is any computer connected to a network. Whereas the words server and client may refer either to a computer or to a computer program, server-host and client-host always refer to computers. The host is a versatile, multifunction computer; clients and servers are just programs that run on a host. In the client–server model, a server is more likely to be devoted to the task of serving.
An early use of the word client occurs in "Separating Data from Function in a Distributed File System", a 1978 paper by Xerox PARC computer scientists Howard Sturgis, James Mitchell, and Jay Israel. The authors are careful to define the term for readers, and explain that they use it to distinguish between the user and the user's network node (the client).[12] By 1992, the word server had entered general parlance.[13][14]
The client–server model does not dictate that server-hosts must have more resources than client-hosts. Rather, it enables any general-purpose computer to extend its capabilities by using the shared resources of other hosts. Centralized computing, however, specifically allocates a large number of resources to a small number of computers. The more computation is offloaded from client-hosts to the central computers, the simpler the client-hosts can be.[15] Such a setup relies heavily on network resources (servers and infrastructure) for computation and storage. A diskless node loads even its operating system from the network, and a computer terminal has no operating system at all; it is only an input/output interface to the server. In contrast, a rich client, such as a personal computer, has many resources and does not rely on a server for essential functions.
In the client–server model, the server is often designed to operate as a centralized system that serves many clients. The computing power, memory, and storage requirements of a server must be scaled appropriately to the expected workload. Load-balancing and failover systems are often employed to scale the server beyond a single physical machine.[20][21]
Load balancing is the methodical and efficient distribution of network or application traffic across multiple servers in a server farm. Each load balancer sits between client devices and backend servers, receiving and then distributing incoming requests to any available server capable of fulfilling them.
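The simplest distribution policy is round robin, where the balancer hands each incoming request to the next backend in rotation. The sketch below illustrates that policy only; production balancers typically add health checks, weighting, and session affinity. The class and backend names are hypothetical.

```python
import itertools

class RoundRobinBalancer:
    """Minimal load-balancer sketch: distributes incoming requests
    across backend servers in rotation (backend names are invented)."""
    def __init__(self, backends):
        self._cycle = itertools.cycle(backends)

    def route(self, request):
        # Pick the next backend in rotation and pair it with the request.
        backend = next(self._cycle)
        return backend, request

lb = RoundRobinBalancer(["server-1", "server-2", "server-3"])
assignments = [lb.route(f"req-{i}")[0] for i in range(5)]
print(assignments)  # ['server-1', 'server-2', 'server-3', 'server-1', 'server-2']
```

Because the balancer sits between clients and backends, individual servers can be added or removed from the rotation without clients noticing, which is also the basis for failover.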
In a peer-to-peer network, two or more computers (peers) pool their resources and communicate in a decentralized system. Peers are coequal, or equipotent, nodes in a non-hierarchical network. Unlike clients in a client-server or client-queue-client network, peers communicate with each other directly.[citation needed] In peer-to-peer networking, an algorithm in the peer-to-peer communications protocol balances load, and even peers with modest resources can help to share the load.[citation needed] If a node becomes unavailable, its shared resources remain available as long as other peers offer them. Ideally, a peer does not need to achieve high availability because other, redundant peers make up for any resource downtime; as the availability and load capacity of peers change, the protocol reroutes requests.
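The redundancy property can be sketched in a few lines: as long as any online peer holds a copy of a resource, requests for it can be rerouted around unavailable peers. The peer names, items, and lookup routine below are invented for illustration; real peer-to-peer protocols use distributed lookup structures rather than a central table.

```python
# Hypothetical peer table: each peer holds copies of some shared items.
peers = {
    "peer-A": {"online": True,  "items": {"song.mp3"}},
    "peer-B": {"online": False, "items": {"song.mp3", "doc.pdf"}},
    "peer-C": {"online": True,  "items": {"doc.pdf"}},
}

def fetch(item):
    # Route the request to any online peer offering the item; the
    # resource stays available as long as at least one such peer exists.
    for name, peer in peers.items():
        if peer["online"] and item in peer["items"]:
            return name
    return None

print(fetch("doc.pdf"))   # peer-C (peer-B is offline, so the request reroutes)
print(fetch("song.mp3"))  # peer-A
```

Note the contrast with the client–server model: every peer both offers and requests resources, so no single node is the designated server.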
Both client-server and master-slave are regarded as sub-categories of distributed peer-to-peer systems.[22]
^ Benatallah, B.; Casati, F.; Toumani, F. (2004). "Web service conversation modeling: A cornerstone for e-business automation". IEEE Internet Computing. 8: 46–54. doi:10.1109/MIC.2004.1260703. S2CID 8121624.
^ Cardellini, V.; Colajanni, M.; Yu, P.S. (1999). "Dynamic load balancing on Web-server systems". IEEE Internet Computing. 3 (3). Institute of Electrical and Electronics Engineers (IEEE): 28–39. doi:10.1109/4236.769420. ISSN 1089-7801.
^ Varma, Vasudeva (2009). "1: Software Architecture Primer". Software Architecture: A Case Based Approach. Delhi: Pearson Education India. p. 29. ISBN 9788131707494. Retrieved 2017-07-04. Distributed Peer-to-Peer Systems [...] This is a generic style of which popular styles are the client-server and master-slave styles.