Nginx load balancing method, apparatus, device, and readable storage medium

Technical Field
The present invention relates to the field of communications technologies, and in particular, to a method, an apparatus, a device, and a readable storage medium for Nginx load balancing.
Background
Nginx is a free, open-source, high-performance HTTP server and reverse proxy server, and also serves as an IMAP, POP3, and SMTP proxy server.
With the continuous development of network technology, the number of network users is increasing sharply and the demand for network access grows day by day, placing great pressure on website servers. The steep rise in the number of concurrent requests is a severe test of server performance. In the prior art, multi-server clustering is generally adopted to address this problem, and load balancing is the core issue of clustering technology. Nginx can act as a reverse proxy server to realize load balancing. In a reverse proxy, the proxy server receives connection requests from the Internet, distributes them to a server cluster on an internal network according to a certain rule, and returns the results obtained from the cluster to the client that initiated the request. Therefore, research on a method for Nginx load balancing is crucial and has positive significance for servers handling high-concurrency requests.
Disclosure of Invention
The present invention is directed to solving, at least to some extent, one of the technical problems in the related art. To this end, it is an object of the present invention to provide an Nginx load balancing method, thereby relieving the pressure on individual servers.
The technical solution adopted by the invention is as follows: an Nginx load balancing method, comprising the steps of:
setting a weight value for each back-end server;
receiving an access request initiated by a client;
and distributing the access request to each back-end server according to the weight value.
Further, the step of setting a weight value for each back-end server specifically includes:
setting the weight value according to the hardware configuration of each back-end server, wherein a higher hardware configuration corresponds to a larger weight value and a lower hardware configuration corresponds to a smaller weight value.
Further, the Nginx load balancing method includes the step of:
distributing the access request to each back-end server according to the hash value of the IP address of the client initiating the access request.
Further, the Nginx load balancing method includes the step of:
distributing the access request to each back-end server according to the response time of each back-end server to the access request.
Further, the Nginx load balancing method includes the step of:
distributing the access request to each back-end server according to the hash value of the URL of the access request.
An Nginx load balancing apparatus, comprising:
a setting unit, configured to set a weight value for each back-end server;
a receiving unit, configured to receive an access request initiated by a client; and
a distribution unit, configured to distribute the access request to each back-end server according to the weight value.
An Nginx load balancing apparatus, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the Nginx load balancing method described above.
A computer-readable storage medium having stored thereon computer-executable instructions for causing a computer to perform a method of Nginx load balancing as described above.
The invention has the beneficial effects that:
according to the invention, a weight value is set for each back-end server, and when a large number of concurrent access requests initiated by clients are received, the access requests are distributed to the back-end servers according to the weight values; this effectively avoids excessive data traffic on any single server, reduces the pressure on individual servers, and allows the resources of all back-end servers to be used in a balanced manner.
Drawings
Fig. 1 is a flowchart of an exemplary method for Nginx load balancing according to the present invention.
Detailed Description
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict.
The invention provides an Nginx load balancing method, as shown in FIG. 1, comprising the following steps:
setting a weight value for each back-end server;
receiving an access request initiated by a client;
and distributing the access request to each back-end server according to the weight value.
As an improvement of the technical solution, the step of setting a weight value for each back-end server specifically includes:
The weight value of each back-end server is set according to its hardware configuration: the higher the hardware configuration, the larger the weight value; the lower the hardware configuration, the smaller the weight value. The configuration is as follows:
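A minimal sketch of such a weighted configuration is shown below; the upstream group name backend, the listening port, and the location block are assumed for illustration, while the two server addresses and the 2:1 weight ratio follow the description given next.

    # Weighted round-robin: 192.168.1.11 receives twice as many requests as 192.168.1.10
    upstream backend {
        server 192.168.1.10 weight=1;
        server 192.168.1.11 weight=2;
    }

    server {
        listen 80;
        location / {
            # Forward incoming client requests to the weighted upstream group
            proxy_pass http://backend;
        }
    }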
Here, weight is the weight value; with this configuration, Nginx distributes twice as many access requests to the 192.168.1.11 server as to the 192.168.1.10 server.
Of course, the weight value may also be set according to other indicators of the back-end server.
If a back-end server goes down, Nginx automatically removes it from the upstream pool and no longer distributes access requests to it.
As an improvement of the technical solution, the Nginx load balancing method further comprises the following step:
distributing the access request to each back-end server according to the hash value of the IP address of the client initiating the access request. The configuration is as follows:
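A minimal sketch using the ip_hash directive, reusing the illustrative upstream group and server addresses from the previous example:

    upstream backend {
        # Hash the client IP address so a given client is always routed to the same back-end server
        ip_hash;
        server 192.168.1.10;
        server 192.168.1.11;
    }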
In this way, a client with a fixed IP address always accesses the same back-end server, which also alleviates the session-sharing (session control) problem in a cluster deployment environment to a certain extent.
As an improvement of the technical solution, the Nginx load balancing method further comprises the following step:
and distributing the access request to each back-end server according to the response time of each back-end server to the access request.
The performance of each back-end server differs, so response times to access requests vary. In general, an access request is preferentially distributed to the server with the shorter response time and higher processing efficiency. Since Nginx does not support this algorithm by default, the third-party upstream_fair module must be installed to use this scheduling algorithm. The configuration is as follows:
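A minimal sketch, assuming the upstream_fair module has been compiled into Nginx and reusing the illustrative server addresses:

    upstream backend {
        # Provided by the upstream_fair module: send requests to the back end with the fastest response
        fair;
        server 192.168.1.10;
        server 192.168.1.11;
    }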
As an improvement of the technical solution, the Nginx load balancing method further comprises the following step:
and distributing the access request to each back-end server according to the hash value of the URL of the access request.
The URL of each access request is directed to a fixed back-end server, which is effective when the back-end servers are cache servers and can improve cache efficiency when Nginx is used as a static server. Since Nginx does not support this algorithm by default, a hash module for Nginx must be installed to use this scheduling algorithm. The configuration is as follows:
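A minimal sketch of a URL-hash configuration, again reusing the illustrative server addresses; the hash $request_uri form below is the one used by common upstream hash modules, and the exact directive can vary with the module and Nginx version.

    upstream backend {
        # Hash the request URI so the same URL is always served by the same back-end (cache) server
        hash $request_uri;
        server 192.168.1.10;
        server 192.168.1.11;
    }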
The present invention also provides an Nginx load balancing apparatus, comprising:
a setting unit, configured to set a weight value for each back-end server;
a receiving unit, configured to receive an access request initiated by a client; and
a distribution unit, configured to distribute the access request to each back-end server according to the weight value.
The present invention also provides an Nginx load balancing apparatus, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform a method of Nginx load balancing as described above.
The present invention also provides a computer-readable storage medium having stored thereon computer-executable instructions for causing a computer to perform a method for Nginx load balancing as described above.
While the preferred embodiments of the present invention have been illustrated and described, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.