Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the specific technical solutions of the present application will be described in further detail below with reference to the accompanying drawings of the embodiments of the present application. The following examples are intended to illustrate the present application but are not intended to limit its scope.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
In the following description, references to the terms "first \ second \ third" are only intended to distinguish similar objects and do not denote a particular order; it is to be understood that "first \ second \ third" may be interchanged in a specific order or sequence, where appropriate, so that the embodiments of the application described herein can be practiced in an order other than that shown or described herein.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the application.
Before further detailed description of the embodiments of the present application, the terms and expressions referred to in the embodiments of the present application are described; these terms and expressions apply to the following explanations.
Virtual Internet Protocol (IP) address: generally used in scenarios requiring high service availability. When the primary server fails and cannot provide service externally, the virtual IP is dynamically switched to a standby server, so that users do not perceive the failure.
haproxy: software that provides high availability, load balancing, and application proxying based on the Transmission Control Protocol (TCP) and the Hypertext Transfer Protocol (HTTP).
pacemaker: a cluster resource manager. It uses the messaging and membership management capabilities provided by the cluster infrastructure (heartbeat or corosync) to detect and recover from failures at the node or resource level, so as to achieve maximum availability of cluster services.
corosync: a part of the cluster management suite; it collects information such as heartbeats among nodes and provides node availability information to the upper layer.
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
It should be understood that some of the embodiments described herein are only for explaining the technical solutions of the present application, and are not intended to limit the technical scope of the present application.
Fig. 1A is a schematic system architecture diagram of cluster management according to an embodiment of the present application; as shown in fig. 1A, the architecture includes a balanced scheduler 11 and a cluster node pool 12, where the cluster node pool 12 includes a stateless node pool 121 and a stateful node pool 122, where,
the balanced scheduler 11 is configured to provide an access entry, schedule access requests in a load-balanced manner to nodes in the stateless node pool, and eliminate abnormal nodes from the stateless node pool according to the obtained internal load conditions and internal service states of the stateless node pool.
The stateless node pool 121, which provides access services for the business without storing data at any time while the access services are provided, may include processing node 1, processing node 2, processing node 3, and processing node 4. A processing node can be destroyed or created at will, and no user data is lost when a processing node is destroyed; while access services are being processed, different processing nodes can be switched arbitrarily without affecting the user's access service. In an implementation, the nodes of the stateless node pool may be determined according to the operation speed of the nodes; for example, nodes whose operation speed meets the requirement may be selected as nodes of the stateless node pool according to actual needs.
The stateful node pool 122, which is used to serve stored data, may include control node 1, control node 2, and control node 3. The control nodes may be used to store the common data of the cluster and cannot be destroyed at will. In an implementation, the nodes of the stateful node pool may be determined according to the storage performance and operation speed of the nodes; for example, nodes whose storage performance and operation speed meet the requirements may be selected as nodes of the stateful node pool according to actual needs.
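As an illustration of how nodes might be partitioned by these criteria, the following is a minimal Python sketch (not part of the claimed embodiments; the node names, scoring fields, and thresholds are hypothetical) that selects a small stateful pool from the nodes whose storage performance and operation speed meet the requirements and places every remaining node in the stateless pool:

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    storage_score: float   # measured storage performance (hypothetical unit)
    speed_score: float     # measured operation speed (hypothetical unit)

def classify_nodes(nodes, stateful_count, storage_threshold, speed_threshold):
    """Split the cluster nodes into a small stateful pool and a stateless pool."""
    # Only nodes meeting both the storage and speed requirements are
    # candidates for the stateful pool (they must run the storage service).
    candidates = sorted(
        (n for n in nodes
         if n.storage_score >= storage_threshold and n.speed_score >= speed_threshold),
        key=lambda n: (n.storage_score, n.speed_score), reverse=True)
    stateful = candidates[:stateful_count]
    # Every remaining node joins the stateless pool and only processes requests.
    stateless = [n for n in nodes if n not in stateful]
    return stateful, stateless

nodes = [Node("node-1", 0.9, 0.8), Node("node-2", 0.4, 0.9),
         Node("node-3", 0.8, 0.7), Node("node-4", 0.3, 0.6)]
stateful_pool, stateless_pool = classify_nodes(nodes, stateful_count=2,
                                               storage_threshold=0.7, speed_threshold=0.6)
print([n.name for n in stateful_pool])    # ['node-1', 'node-3']
print([n.name for n in stateless_pool])   # ['node-2', 'node-4']
```

Keeping the stateful pool deliberately small is what later shortens re-election convergence when a control node fails.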
As shown in fig. 1B, a cluster management method provided in an embodiment of the present application includes:
step S110, the balanced scheduler distributes the acquired multiple service processing requests in a balanced manner to target nodes in the stateless node pool of the cluster;
load balancing by the balanced scheduler means that when one node cannot support the existing access volume, multiple nodes can be deployed to form a cluster, and service processing requests are then distributed by load balancing to each node in the cluster, so that the multiple nodes share the request pressure together.
In some embodiments, as shown in fig. 1A, the balanced scheduler 11 is configured to provide access entries for users to access the cluster, and to schedule users' access requests in a load-balanced manner to target nodes (processing nodes) in the stateless node pool 121.
Fig. 1C is a schematic diagram of a cluster node according to an embodiment of the present application; as shown in fig. 1C, the schematic diagram includes the nodes in the stateful node pool (control node 1, control node 2, and control node 3) and the nodes in the stateless node pool (processing node 1, processing node 2, processing node 3, …, processing node n), where,
the nodes in the stateful node pool (control node 1, control node 2, and control node 3) are a specified number of nodes selected from all the cluster nodes to serve as the stateful node pool, responsible for running the storage service of the common data. Therefore, compared with the prior art, the number of nodes in the stateful node pool is small, and the re-election convergence time when a node fails is short.
The nodes in the stateless node pool (processing node 1, processing node 2, processing node 3, …, processing node n) are all the cluster nodes other than the nodes in the stateful node pool.
Step S120, each target node processes the received service processing request;
in some embodiments, the target node does not store data while providing access services at any time. The target node may complete the processing of the service processing request without needing to access the common data of the cluster.
Step S130, when processing the received service processing request, each target node accesses a control node in a stateful node pool of the cluster under the condition that it is determined that the common data of the cluster needs to be accessed, so as to complete processing of the service processing request.
In some embodiments, as shown in fig. 1A, the control nodes in the stateful node pool are used to store the common data of the cluster, and when a target node in the stateless node pool determines that the common data needs to be accessed, it may access a control node to complete the processing of the service processing request.
In the embodiment of the application, the balanced scheduling is realized through the balanced scheduler, and the problem of overlarge pressure of a single node in a cluster is effectively avoided. By means of classifying the cluster nodes into a stateless node pool and a stateful node pool, the target nodes in the stateless node pool are used for processing the service processing requests, and the control nodes in the stateful node pool are used for storing the public data of the cluster, so that the processing efficiency of the service processing requests is effectively improved.
The step S110 "the balanced scheduler distributes the acquired multiple service processing requests to the target nodes in the stateless node pool of the cluster in a balanced manner" may be implemented by the following steps:
step S1101, the management component of the cluster configures a virtual internet protocol address in the balanced scheduler to provide an access entry of the service processing request;
in some embodiments, a cluster may provide a fixed access entry to the outside, so that neither changes to nodes inside the cluster nor modifications of internet protocol addresses cause any change to the external request entry. The virtual internet protocol address may be configured at the balanced scheduler.
Step S1102, the balanced scheduler acquires load information and service state information of each node in the stateless node pool;
in some embodiments, the load information of a node may include the consumption status of the node's resources; measures of load include the processing capacity of the central processing unit (CPU), the CPU utilization rate, the CPU ready-queue length, the available disk and memory space, the process response time, and the like.
The service state information may include information on whether the node is available: when the node is unavailable, the service state information may indicate a node failure, and when the node is available, it may indicate that the node is available.
The balanced scheduler may obtain load information and service state information for each of the stateless nodes.
Step S1103, the balanced scheduler distributes the plurality of service processing requests to the target nodes in the stateless node pool in a balanced manner according to the load information and the service state information of each node in the stateless node pool.
In some embodiments, as shown in fig. 1D, a schematic flowchart of a process for scheduling, by an equilibrium scheduler, a processing node in a stateless node pool is provided in an embodiment of the present application, where the scheduling process includes the following steps:
step 1, the balanced scheduler configures the virtual IP;
step 2, the balanced scheduler configures the processing nodes in the stateless node pool as its cluster node list;
step 3, the processing nodes in the cluster node list periodically report their own load information and service state information to the balanced scheduler;
and step 4, according to the internal load conditions and internal service states periodically reported by the processing nodes, the balanced scheduler distributes the user's access requests to target nodes in the stateless node pool in a balanced manner.
In some embodiments, as shown in fig. 1E, when a processing node in the stateless node pool fails, the processing flow of the balanced scheduler provided in an embodiment of the present application includes:
step 1, when processing node 1 in the stateless node pool fails or is under high load, processing node 1 stops reporting data to the balanced scheduler;
step 2, the balanced scheduler stops distributing access requests to processing node 1;
and step 3, the balanced scheduler distributes the access requests to the other available processing nodes (target nodes) in the stateless node pool, where the available processing nodes are the nodes that report a normal state to the balanced scheduler.
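The reporting and elimination behavior of the two flows above can be pictured with the following minimal Python sketch (the report timeout, node names, and method names are hypothetical): a node that reports a failure, or that simply stops reporting within the timeout, is excluded from scheduling:

```python
import time

REPORT_TIMEOUT = 10.0   # seconds; hypothetical staleness threshold

class BalancedScheduler:
    def __init__(self):
        self.last_report = {}   # node name -> (timestamp, load, service state)

    def receive_report(self, node, load, state):
        # Step 3 of the scheduling flow: processing nodes report their load
        # and service state to the scheduler on a timer.
        self.last_report[node] = (time.monotonic(), load, state)

    def available_nodes(self):
        # A node that reports an abnormal state, or that has stopped
        # reporting within the timeout (step 1 of the failure flow),
        # is excluded from scheduling.
        now = time.monotonic()
        return [node for node, (ts, _load, state) in self.last_report.items()
                if state == "normal" and now - ts <= REPORT_TIMEOUT]

sched = BalancedScheduler()
sched.receive_report("processing-node-1", load=0.3, state="normal")
print(sched.available_nodes())   # ['processing-node-1'] while its report is fresh
```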
In the embodiment of the application, since the virtual internet protocol address is configured in the balanced scheduler, it does not drift when a node in the cluster fails, and the service recovery time is short; the balanced scheduler can distribute the plurality of service processing requests to target nodes in the stateless node pool in a balanced manner according to the load information and service state information of each node in the stateless node pool, which effectively avoids the problem of any single node being under excessive pressure from processing service requests.
The step S1103, where the balanced scheduler distributes the plurality of service processing requests to the target nodes in the stateless node pool in a balanced manner according to the load information and the service state information of each node in the stateless node pool, may be implemented by:
step S1121, the balanced scheduler determines the nodes in the stateless node pool whose service state information is non-failed as nodes to be allocated;
in some embodiments, the processing nodes whose service state information is non-failed are the available processing nodes, so the balanced scheduler determines these available processing nodes as the nodes to be allocated that can complete the service processing requests.
Step S1122, the balanced scheduler determines the nodes to be allocated whose load information meets the load requirement as the target nodes;
in some embodiments, the load requirement may be set according to the actual situation; a node whose load is not excessive, that is, a node that satisfies the load requirement, may be determined as a target node.
Step S1123, the balanced scheduler distributes the plurality of service processing requests to the target nodes in a balanced manner.
In the embodiment of the application, the balanced scheduler determines the node with the service state information being non-failure in the stateless node pool as the node to be allocated, and determines the node to be allocated with the load information meeting the load requirement as the target node, so that the obtained target node can effectively complete the service processing request.
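A minimal Python sketch of steps S1121 to S1123, assuming hypothetical report fields and using round robin as one possible balanced-distribution policy:

```python
from itertools import cycle

def select_targets(reports, max_load):
    # S1121: nodes whose service state is non-failed become nodes to be allocated.
    to_allocate = [n for n, r in reports.items() if r["state"] != "failed"]
    # S1122: among those, nodes whose load meets the requirement become targets.
    return [n for n in to_allocate if reports[n]["load"] <= max_load]

def distribute(requests, targets):
    # S1123: distribute requests over the targets; round robin is one
    # simple way to keep the distribution balanced.
    assignment, ring = {}, cycle(targets)
    for request in requests:
        assignment[request] = next(ring)
    return assignment

reports = {"p1": {"state": "normal", "load": 0.3},
           "p2": {"state": "failed", "load": 0.1},
           "p3": {"state": "normal", "load": 0.95}}
targets = select_targets(reports, max_load=0.8)   # ['p1']: p2 failed, p3 overloaded
print(distribute(["req-1", "req-2"], targets))
```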
The embodiment of the application provides a method for determining a master control node, a standby control node and a slave control node in control nodes, which comprises the following steps:
step 201, a management component of the cluster acquires a preset total number of control nodes;
step 202, the management component determines the number of the slave control nodes according to the total number of the control nodes;
in some embodiments, one master control node, one standby control node, and a plurality of slave control nodes may be provided; given the preset total number of control nodes, subtracting the one master control node and the one standby control node yields the number of slave control nodes.
Step 203, the management component obtains a performance index of each node of the cluster, wherein the performance index includes storage performance of the node and operation speed of the node;
and step 204, among the nodes whose storage performance satisfies the storage condition, the management component determines the node whose operation speed satisfies a first operation condition as the master control node, determines the node whose operation speed satisfies a second operation condition as the standby control node, and determines the nodes whose operation speed satisfies a third operation condition and whose number satisfies a number threshold as the slave control nodes, where the number threshold is determined according to the number of slave control nodes.
In the embodiment of the application, a master control node, a standby control node and a slave control node can be determined according to the storage performance and the operation speed of the nodes, the master control node is used for providing the common data of the cluster, the standby control node is used for backing up the common data of the cluster, and the slave control node is used for replacing the standby control node under the condition that the standby control node fails. In this way, it can be ensured that effective common data of the cluster is provided in case of processing service access requiring access to the common data of the cluster.
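The role assignment of steps 201 to 204 might look like the following Python sketch, in which the storage threshold, the node records, and the use of operation speed as the ranking key are illustrative assumptions:

```python
def assign_control_roles(nodes, total_control, storage_threshold):
    # Steps 203/204: rank the nodes whose storage performance satisfies
    # the storage condition by operation speed, fastest first.
    qualified = sorted((n for n in nodes if n["storage"] >= storage_threshold),
                       key=lambda n: n["speed"], reverse=True)
    # Step 202: the slave count is the total minus one master and one standby.
    slave_count = total_control - 2
    master, standby = qualified[0], qualified[1]
    slaves = qualified[2:2 + slave_count]
    return master, standby, slaves

nodes = [{"name": "n1", "storage": 0.9, "speed": 0.9},
         {"name": "n2", "storage": 0.8, "speed": 0.7},
         {"name": "n3", "storage": 0.8, "speed": 0.6},
         {"name": "n4", "storage": 0.2, "speed": 0.9}]   # fails the storage condition
master, standby, slaves = assign_control_roles(nodes, total_control=3,
                                               storage_threshold=0.5)
print(master["name"], standby["name"], [s["name"] for s in slaves])   # n1 n2 ['n3']
```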
The embodiment of the application provides another method for determining a master control node, a standby control node and a slave control node in a control node, which comprises the following steps:
step 210, the management component presents a configuration interface, and the configuration interface is used for configuring the master control node, the standby control node and the slave control node;
in some embodiments, such as the configuration interface shown in fig. 2A, a user may configure the master control node, the standby control node, and the slave control node by clicking the add node control 21.
Step 211, the management component receives configuration operations for the master control node, the standby control node and the slave control node based on the configuration interface;
step 212, the management component determines one of the master control node, one of the standby control nodes, and at least one of the slave control nodes based on the configuration operation.
In the embodiment of the application, a user can complete the configuration of the master control node, the standby control node and the slave control node in the configuration interface, and the management component determines one master control node, one standby control node and at least one slave control node in the cluster node pool according to the configuration of the user.
The embodiment of the application provides a method for replacing a fault control node, which comprises the following steps:
step 220, when the control node in the stateful node pool fails, the management client acquires the address of the new control node from the management component of the cluster;
in some embodiments, each node in the stateless node pool includes a management client and a proxy client. For example, the schematic diagram of cluster service distribution shown in fig. 2B includes a service IP 1211, a stateless business service 1212, a proxy client 1213 and a management client 1214 deployed on each processing node in the stateless node pool 121, and a stateful business service 1221, a data storage service 1222 and a management component 1223 deployed on each control node in the stateful node pool 122, where,
the service IP 1211 is configured to provide the real service port IP of the stateless node, which is configured into the available node pool of the balanced scheduler;
the stateless business service 1212 is configured to process access requests but does not store data;
the management client 1214 is configured to reset the destination IP of the proxy client when the master control node is switched, so that the proxy client can connect to the new master node;
the proxy client 1213 is configured to, when the common data needs to be accessed, forward the request for accessing the common data to the master control node in the stateful node pool, so as to complete the request for accessing the common data;
the stateful business service 1221 is configured to provide services that operate on the common data, such as resource cleanup and generation of operation and maintenance report information;
the data storage service 1222 is configured to store the common data of the cluster, for example in databases such as mysql, redis, and mongo;
and the management component 1223 is configured to maintain the nodes in the stateful node pool and, when the master control node fails, to notify the other control nodes in the stateful node pool and reassemble the stateful node pool.
Fig. 2C is a schematic diagram of a node of the stateless node pool and a node of the stateful node pool provided in an embodiment of the present application; as shown in fig. 2C, the node of the stateless node pool includes a proxy client 1213, which is used to access the data storage service of the master control node of the stateful node pool.
Step 221, the management client sends the address of the new control node to the proxy client;
step 222, the proxy client modifies the address used to access the common data of the cluster to the address of the new control node.
In some embodiments, the management client sends the address of the new control node to the proxy client; fig. 2D shows a schematic flowchart of data storage service switching when a control node in the stateful node pool fails, which includes:
step 1, the management client receives a cluster event notification, where the cluster event notification is used to notify of the failed control node and the determined new control node;
step 2, the management client notifies the proxy clients of the nodes in the stateless node pool, and the proxy configuration is changed according to the failed control node and the determined new control node;
step 3, the proxy client disconnects from the failed control node;
and step 4, the proxy client establishes a connection with the new control node.
In the embodiment of the application, when a control node in the stateful node pool fails, the management client acquires the address of the new control node from the management component of the cluster and sends it to the proxy client, and the proxy client modifies the address used to access the common data of the cluster to the address of the new control node. In this way, the proxy client shields the stateless services from perceiving the location of the common data, giving the application services local-host-style access. When a control node fails, the data storage service is migrated and the original data store becomes unavailable; at this time, only the destination (the address of the new control node) of the proxy client needs to be modified and reconnected, realizing rapid transfer and recovery of the failed service.
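A minimal Python sketch of this handoff (the class and method names, and the addresses, are hypothetical): the management client receives the cluster event and repoints the proxy client at the new control node, so the stateless services above it never see the change:

```python
class ProxyClient:
    """Forwards common-data requests to the current master control node."""
    def __init__(self, target_addr):
        self.target_addr = target_addr

    def disconnect(self):
        print(f"disconnected from {self.target_addr}")   # step 3 of fig. 2D

    def reconnect(self, new_addr):
        self.target_addr = new_addr                      # step 4 of fig. 2D
        print(f"connected to {self.target_addr}")

class ManagementClient:
    def __init__(self, proxy):
        self.proxy = proxy

    def on_cluster_event(self, failed_addr, new_addr):
        # Steps 1-2 of fig. 2D: the cluster event names the failed control
        # node and the new one; the proxy configuration is changed accordingly.
        self.proxy.disconnect()
        self.proxy.reconnect(new_addr)

proxy = ProxyClient("10.0.0.11:3306")
ManagementClient(proxy).on_cluster_event("10.0.0.11:3306", "10.0.0.12:3306")
```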
The step 220 "in the case that the control node in the stateful node pool has a failure, the management client obtains the address of the new control node from the management component of the cluster" may be implemented by the following steps:
step 2201, when the nodes in the stateful node pool determine by voting that the master control node has failed, the management component of the cluster switches the standby control node to be the new master control node;
in a distributed system, a voting mechanism can be relied on to determine whether the whole cluster can work normally. By default, each node of the cluster holds a certain number of votes; when a node is isolated or fails, the nodes detect each other's heartbeat information and send it to the other nodes of the cluster, and the votes determine which node has failed and which nodes may act on behalf of the cluster. The nodes that can continue to work on behalf of the cluster are called the majority party, that is, the party holding more than half of the total votes; the party holding half or fewer of the total votes is called the minority party.
Suppose a cluster consists of three nodes A/B/C. When node A fails or is isolated from B/C by the network, who should work on behalf of the cluster? If A works on behalf of the cluster, the entire cluster becomes unavailable; if B/C work on behalf of the cluster, the cluster service remains available. When A fails or is isolated by the network, B sends its heartbeat detection of A to C, and C likewise sends its heartbeat check of A to B; A is thus voted as failed with 2 of the cluster's 3 total votes, which is more than half, so A cannot continue to work on behalf of the cluster. The remaining B and C can continue to work on behalf of the cluster, and services can be transferred to these two nodes.
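The majority rule in this example reduces to a one-line check; the following Python sketch is purely illustrative:

```python
def has_quorum(votes_held, total_votes):
    # A partition may act for the cluster only with strictly more
    # than half of the total votes.
    return votes_held * 2 > total_votes

# Three-node cluster A/B/C: when A is isolated, B and C together hold
# 2 of the 3 votes, so the {B, C} partition keeps working; A alone cannot.
print(has_quorum(2, 3))   # True  -> B and C represent the cluster
print(has_quorum(1, 3))   # False -> A cannot
```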
In some embodiments, when the master control node is determined to have failed by election voting among the nodes in the stateful node pool, the standby control node may be switched to be the new master control node.
Step 2202, said managing component determining one of said slave control nodes as a new standby control node among a plurality of said slave control nodes;
step 2203, the management component selects a new slave control node from the nodes in the stateless node pool and adds the new slave control node to the stateful node pool;
step 2204, the management component sends the address of the new master control node to the management client.
Fig. 2E is a schematic processing flow diagram in the case that a control node in a stateful node pool fails according to an embodiment of the present application, and as shown in fig. 2E, the flow includes:
step 1, the cluster management component removes the failed control node from the stateful node pool;
step 2, the cluster management component informs the management client of the failure of the control node;
step 3, the cluster management component moves a determined stateless node in the stateless node pool out of the stateless node pool;
and 4, adding the selected stateless nodes into the stateful node pool by the cluster management component to form a new stateful node pool.
In the embodiment of the application, under the condition that the management component of the cluster determines that the main control node has a fault according to the voting of the available nodes of the cluster, the standby control node is switched to a new main control node, and the address of the new main control node is sent to the management client. Therefore, the number of the nodes in the stateful node pool is constant, and the stateful node pool is mainly responsible for running storage service of public data.
The embodiment of the application provides a method for adding or deleting nodes in a stateless node pool, which comprises the following steps:
step 230, the management component of the cluster presents a configuration interface, and the configuration interface is used for adding or deleting nodes in the stateless node pool;
in some embodiments, such as the configuration interface shown in fig. 2A, a user may add or delete nodes in the stateless node pool by clicking the add node control 21.
step 231, the management component receives an operation of adding or deleting a node based on the configuration interface;
step 232, the management component adds or reduces nodes in the stateless node pool based on the operation of adding or deleting nodes to realize the expansion or reduction of the management scale.
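A minimal Python sketch of such scale-out and scale-in (node names hypothetical): because stateless nodes hold no data and cast no votes, adding or removing them is a simple list operation from the cluster's point of view:

```python
class StatelessPool:
    def __init__(self, nodes):
        self.nodes = list(nodes)

    def add(self, node):
        # Scale-out: processing nodes cast no votes in elections, so
        # adding them does not lengthen election convergence.
        self.nodes.append(node)

    def remove(self, node):
        # Scale-in: stateless nodes store no data, so removal loses nothing.
        self.nodes.remove(node)

pool = StatelessPool(["processing-node-1", "processing-node-2", "processing-node-3"])
pool.add("processing-node-4")   # the expansion of fig. 2F,
pool.add("processing-node-5")   # from three nodes to five
print(pool.nodes)
```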
Fig. 2F is a schematic diagram of expanding nodes in a stateless node pool according to an embodiment of the present application, as shown in fig. 2F, the schematic diagram includes a stateless node pool 22 before expansion and a stateless node pool 23 after expansion, where,
the stateless node pool 22 before expansion includes processing node 1, processing node 2 and processing node 3, each managing 1000 resources; these nodes are used to process all the external service requests.
The expanded stateless node pool 23 includes processing node 1, processing node 2, processing node 3, processing node 4, and processing node 5, each managing 1000 resources, where processing node 4 and processing node 5 are the newly added processing nodes.
As can be seen from fig. 2F, the processing nodes in the stateless node pool can be increased or decreased arbitrarily for horizontal expansion and do not participate in the voting for electing available cluster nodes; therefore, even when the number of processing nodes is large, the convergence time of the election mechanism is not affected. Increasing the scale of the processing nodes enables the management of large-scale resources.
In the embodiment of the application, the processing nodes in the stateless node pool can be increased or decreased arbitrarily to expand or decrease the management scale.
In the prior art, the problems in cluster management are as follows:
(1) Each node is provided with a pacemaker cluster component and a corosync cluster component; when a node goes offline, voting is needed to form a new available cluster, and when the number of nodes is large, the convergence time of the election mechanism is long, making service transfer slow and affecting services. Therefore, this scheme cannot support arbitrary expansion of nodes, and the manageable scale is limited.
(2) When the master control node goes offline, it triggers a drift of the virtual IP: the virtual IP needs to be reset on another node, and the adjacent gateway must be notified to update its ARP information because the physical address has changed. Recovery takes a period of time, during which services are affected.
(3) When client requests are proxied through haproxy, all client requests are first sent to the master node, which then distributes them by load balancing to the other nodes in the cluster for processing. The pressure on the master node is therefore high, and when the number of concurrent requests is large, a single node cannot bear the load, so the scheme does not scale.
Fig. 3A is a schematic flowchart of a scenario where a user accesses a cloud platform according to an embodiment of the present application, and as shown in fig. 3A, the schematic flowchart includes the following steps:
step S301, when a user accesses the cluster from the public network, the access request is scheduled in a balanced manner to the stateless nodes in the cluster through the balanced scheduler;
step S302, the business services on the stateless nodes access, through the proxy client, the stateful nodes that host the common data storage service.
In the embodiment of the application, the balanced scheduler provides the access entry and realizes balanced scheduling, which solves the problem that, when clients are proxied through haproxy, all client requests are first sent to the master node for load-balanced distribution to the other nodes in the cluster, so that a single node in the cluster limits the scale.
By classifying the cluster nodes into stateless nodes and stateful nodes, the stateless nodes can be increased or decreased arbitrarily to realize horizontal expansion of the management scale, and the stateful node pool is formed by selecting a small number of nodes, which shortens the election convergence time for the new set of available cluster nodes when a node fails, thereby realizing rapid transfer and recovery of failed services. This solves the problem that, when each node is provided with a pacemaker cluster component and a corosync cluster component, voting is needed to form a new available cluster whenever a node goes offline, and with a large number of nodes the election convergence time is long, making service transfer slow; such a scheme cannot support arbitrary expansion of nodes and limits the manageable scale.
Fig. 3B is a flowchart illustrating a method for processing an access request according to an embodiment of the present application, where as shown in fig. 3B, the method includes:
step S311, the stateless service of a target node in the stateless node pool receives an external access request;
step S312, the target node in the stateless node pool determines whether the access request needs to access the public data;
in a case where it is determined that access to the common data is not required, the flow jumps to step S315; in a case where it is determined that access to the common data is required, the flow jumps to step S313.
Step S313, under the condition that the public data is determined to need to be accessed, the proxy client of the target node in the stateless node pool forwards the access request;
step S314, the proxy client of the target node in the stateless node pool accesses the data storage service of the stateful node pool;
and step S315, processing the access request.
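Steps S311 to S315 can be summarized in the following Python sketch, where the request format, the proxy interface, and the local handler are hypothetical:

```python
def handle_request(request, proxy, local_handler):
    """One target node's handling of an external access request (fig. 3B)."""
    if request.get("needs_common_data"):        # S312: does it touch common data?
        data = proxy.fetch(request["key"])      # S313/S314: forward via the proxy
        return local_handler(request, data)     # S315: process with the fetched data
    return local_handler(request, None)         # S315: process directly

class FakeProxy:
    """Stands in for the proxy client reaching the stateful pool's data store."""
    def fetch(self, key):
        return {"key": key, "value": "common-data"}

result = handle_request({"needs_common_data": True, "key": "cfg"},
                        FakeProxy(), lambda req, data: ("done", data))
print(result)
```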
In the embodiment of the application, the proxy client shields the stateless services from perceiving the location of the common data, giving the application services local-host-style access; once the scheduler has distributed an external request, the processing logic for that request resides entirely in the target node, so the overall service logic architecture becomes simple.
Fig. 3C shows a method, provided in an embodiment of the present application, for the proxy client to forward service requests when a control node changes. That is, in step S313 of the above embodiment ("when it is determined that the common data needs to be accessed, the proxy client of the target node in the stateless node pool forwards the access request"), when a control node change message sent by the management component is received, the method includes the following steps:
step S3131, the management client receives the control node change message, where the control node change message indicates the failed control node and the determined new control node;
step S3132, the management client notifies the proxy clients of the nodes in the stateless node pool, and external service requests are blocked and not forwarded;
step S3133, the proxy client changes the destination address for forwarding service requests to the address of the new control node;
step S3134, the proxy client resumes and retries forwarding the blocked requests.
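A single-threaded Python sketch of steps S3131 to S3134 (class and method names hypothetical): requests arriving during the switch are queued, the destination is repointed, and the queued requests are then retried:

```python
from collections import deque

class ForwardingProxy:
    def __init__(self, target):
        self.target = target
        self.blocked = False
        self.pending = deque()

    def forward(self, request):
        if self.blocked:                 # S3132: hold requests during the switch
            self.pending.append(request)
            return None
        return f"sent {request} to {self.target}"

    def begin_switch(self):
        self.blocked = True              # S3132: stop forwarding

    def finish_switch(self, new_target):
        self.target = new_target         # S3133: repoint the destination address
        self.blocked = False
        while self.pending:              # S3134: resume and retry blocked requests
            print(self.forward(self.pending.popleft()))

proxy = ForwardingProxy("10.0.0.11:3306")
proxy.begin_switch()
proxy.forward("req-1")                    # queued while blocked
proxy.finish_switch("10.0.0.12:3306")     # prints: sent req-1 to 10.0.0.12:3306
```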
In the embodiment of the application, when the master control node in the stateful node pool fails, the data storage service on the failed master control node is migrated and the original data store becomes unavailable; at this time, only the destination of the proxy clients of the processing nodes in the stateless node pool needs to be modified and reconnected. The internal services of the processing nodes in the stateless node pool have no perception of the failure of the master control node in the stateful node pool, and customer services are not affected.
Fig. 3D shows a method, provided in an embodiment of the present application, for recovering the stateful node pool when it is detected that the master control node is offline; as shown in fig. 3D, the method includes:
step S321, the management component detects that the master control node is off-line;
step S322, the management component moves the failed main control node out of the stateful node pool;
in some embodiments, the management component determines the offline master control node as the failed master control node and moves the failed master control node out of the stateful node pool.
Step S323, the management component switches the standby control node into a main control node;
step S324, the management component informs the cluster client that the master control node changes;
step S325, the management component selects a node meeting the conditions from the stateless node pool and moves it out of the stateless node pool;
step S326, the management component adds the selected stateless node to a stateful node pool to recombine the stateful node pool;
step S327, the stateful node pool is recovered.
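The recovery sequence of steps S321 to S327 might be sketched as follows in Python (all names hypothetical; promoting a slave to standby follows step 2202 above). Note that one stateless node is recruited for each control node removed, so the stateful pool size stays constant:

```python
class ManagementComponent:
    def __init__(self, stateful, stateless, standby, slaves):
        self.stateful, self.stateless = stateful, stateless
        self.standby, self.slaves = standby, slaves

    def on_master_offline(self, master, notify):
        self.stateful.remove(master)        # S322: move the failed master out
        new_master = self.standby           # S323: promote the standby
        self.standby = self.slaves.pop(0)   # step 2202: a slave becomes standby
        notify(new_master)                  # S324: tell the cluster clients
        recruit = self.stateless.pop(0)     # S325: pick a qualifying stateless node
        self.stateful.append(recruit)       # S326: reassemble the stateful pool
        return new_master                   # S327: the stateful pool is recovered

mc = ManagementComponent(stateful=["c1", "c2", "c3"], stateless=["p1", "p2"],
                         standby="c2", slaves=["c3"])
mc.on_master_offline("c1", notify=lambda m: print("new master:", m))
print(mc.stateful)   # ['c2', 'c3', 'p1'] -- still three control nodes
```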
In the embodiment of the application, the management component can effectively recombine the node pool with the state under the condition that the main control node is determined to be in fault, and service processing of the cluster is not influenced.
Based on the foregoing embodiments, an embodiment of the present application provides a cluster management apparatus, where the apparatus includes modules, each module includes sub-modules, and each sub-module includes a unit, and may be implemented by a processor in an electronic device; of course, the implementation can also be realized through a specific logic circuit; in implementation, the processor may be a Central Processing Unit (CPU), a Microprocessor (MPU), a Digital Signal Processor (DSP), a Field Programmable Gate Array (FPGA), or the like.
Fig. 4 is a schematic structural diagram of a cluster management apparatus provided in an embodiment of the present application, and as shown in fig. 4, the apparatus 400 includes:
the balanced scheduler 401 is configured to distribute the acquired multiple service processing requests to target nodes in a stateless node pool of the cluster in a balanced manner;
each target node 402 is configured to process a received service processing request;
each target node 402 is further configured to, when it is determined that access to the common data of the cluster is required during processing of the received service processing request, access a control node 403 in a stateful node pool of the cluster to complete processing of the service processing request.
In some embodiments, the balanced scheduler is further configured to obtain load information and service state information of each node in the stateless node pool; the balanced scheduler is further configured to distribute the plurality of service processing requests to target nodes in the stateless node pool in a balanced manner according to the load information and the service state information of each node in the stateless node pool.
In some embodiments, the balanced scheduler is further configured to determine a node in the stateless node pool whose service state information is non-failed as a node to be allocated; the balanced scheduler is further configured to determine a node to be allocated whose load information meets the load requirement as the target node; and the balanced scheduler is further configured to distribute the plurality of service processing requests to the target nodes in a balanced manner.
In some embodiments, the control nodes in the stateful node pool include a master control node, a standby control node, and a slave control node, and the apparatus further includes a management component of the cluster, where the management component of the cluster is configured to obtain a preset total number of control nodes; the management component is further used for determining the number of the slave control nodes according to the total number of the control nodes; the management component is further configured to obtain a performance index of each node of the cluster, where the performance index includes storage performance of the node and an operation speed of the node; the management component is further configured to determine, among nodes whose storage performance satisfies a storage condition, a node whose operation speed satisfies a first operation condition as the master control node, a node whose operation speed satisfies a second operation condition as the standby control node, and a node whose operation speed satisfies a third operation condition and whose number satisfies a number threshold value as the slave control node, where the number threshold value is determined according to the number of the slave control nodes.
In some embodiments, the control nodes in the stateful node pool include a master control node, a standby control node, and a slave control node, and the apparatus further includes a management component of the cluster, where the management component of the cluster is configured to present a configuration interface, and the configuration interface is configured to configure the master control node, the standby control node, and the slave control node; the management component is further configured to receive, based on the configuration interface, configuration operations on the master control node, the standby control node, and the slave control node, respectively; the management component is further configured to determine one master control node, one standby control node, and at least one slave control node based on the configuration operations.
In some embodiments, each node in the stateless node pool comprises a management client and a proxy client, wherein, in case of a failure of a control node in the stateful node pool, the management client is configured to obtain an address of the new control node from a management component of the cluster; the management client is further used for sending the address of the new control node to the agent client; and the proxy client is used for modifying the address of the public data accessing the cluster into the address of the new control node.
In some embodiments, the control nodes in the stateful node pool include a master control node, a standby control node, and a slave control node, and the management component of the cluster is further configured to switch the standby control node to a new master control node in a case where a failure of the master control node is determined by voting among the nodes in the stateful node pool; said management component further configured to determine one of said slave control nodes as a new standby control node among a plurality of said slave control nodes; the management component is further used for selecting a new slave control node from the nodes in the stateless node pool and adding the new slave control node to the stateful node pool; the management component is further configured to send the address of the new master control node to the management client.
In some embodiments, the management component is further configured to present a configuration interface, the configuration interface configured to add or delete nodes in the stateless node pool; the management component is further used for receiving the operation of adding or deleting the nodes based on the configuration interface; the management component is further configured to add or reduce nodes in the stateless node pool based on the operation of adding or deleting nodes, so as to implement expansion or reduction of a management scale.
In some embodiments, the management component is further configured to configure a virtual internet protocol address at the balanced scheduler to provide an access entry for the service processing request.
The above description of the apparatus embodiments, similar to the above description of the method embodiments, has similar beneficial effects as the method embodiments. For technical details not disclosed in the embodiments of the apparatus of the present application, reference is made to the description of the embodiments of the method of the present application for understanding.
It should be noted that, in the embodiment of the present application, if the above cluster management method is implemented in the form of a software functional module and sold or used as a standalone product, it may also be stored in a computer-readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing an electronic device (which may be a mobile phone, a tablet computer, a notebook computer, a desktop computer, etc.) to execute all or part of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read Only Memory (ROM), a magnetic disk, or an optical disk. Thus, embodiments of the present application are not limited to any specific combination of hardware and software.
Correspondingly, an embodiment of the present application provides a computer-readable storage medium on which a computer program is stored, and the computer program, when executed by a processor, implements the steps of the cluster management method provided in the above embodiments.
Correspondingly, an embodiment of the present application provides an electronic device. Fig. 5 is a schematic diagram of a hardware entity of the electronic device provided in the embodiment of the present application; as shown in fig. 5, the hardware entity of the device 500 includes a memory 501 and a processor 502, where the memory 501 stores a computer program that can run on the processor 502, and the processor 502 implements the steps of the cluster management method provided in the above embodiments when executing the program.
The Memory 501 is configured to store instructions and applications executable by the processor 502, and may also buffer data (e.g., image data, audio data, voice communication data, and video communication data) to be processed or already processed by the processor 502 and modules in the electronic device 500, and may be implemented by a FLASH Memory (FLASH) or a Random Access Memory (RAM).
Here, it should be noted that: the above description of the storage medium and device embodiments is similar to the description of the method embodiments above, with similar advantageous effects as the method embodiments. For technical details not disclosed in the embodiments of the storage medium and apparatus of the present application, reference is made to the description of the embodiments of the method of the present application for understanding.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present application. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. It should be understood that, in the various embodiments of the present application, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application. The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described device embodiments are merely illustrative, for example, the division of the unit is only a logical functional division, and there may be other division ways in actual implementation, such as: multiple units or components may be combined, or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between the devices or units may be electrical, mechanical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; can be located in one place or distributed on a plurality of network units; some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, all functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may be separately regarded as one unit, or two or more units may be integrated into one unit; the integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
Those of ordinary skill in the art will understand that: all or part of the steps for realizing the method embodiments can be completed by hardware related to program instructions, the program can be stored in a computer readable storage medium, and the program executes the steps comprising the method embodiments when executed; and the aforementioned storage medium includes: various media that can store program codes, such as a removable Memory device, a Read Only Memory (ROM), a magnetic disk, or an optical disk.
Alternatively, the integrated units described above in the present application may be stored in a computer-readable storage medium if they are implemented in the form of software functional modules and sold or used as independent products. Based on such understanding, the technical solutions of the embodiments of the present application may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing an electronic device (which may be a mobile phone, a tablet computer, a notebook computer, a desktop computer, etc.) to execute all or part of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: a removable storage device, a ROM, a magnetic or optical disk, or other various media that can store program code.
The methods disclosed in the several method embodiments provided in the present application may be combined arbitrarily without conflict to obtain new method embodiments.
The features disclosed in the several method or apparatus embodiments provided in the present application may be combined arbitrarily, without conflict, to arrive at new method embodiments or apparatus embodiments.
The above description is only for the embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.