Disclosure of Invention
In view of the above, the present invention provides a data processing method, apparatus, server and storage medium to address the above problems.
Embodiments of the invention may be implemented as follows:
In a first aspect, an embodiment provides a data processing method applied to a server, where a data processing policy and a plurality of operation nodes are configured in the server. The method includes:
receiving a command sent by a client, and parsing the command;
and selecting an operation node from the plurality of operation nodes according to the parsed command and the data processing policy, so as to execute the parsed command and perform data processing.
In an optional embodiment, the plurality of operation nodes include a plurality of read operation nodes and a plurality of write operation nodes, the plurality of read operation nodes are sequentially stored in a read node list, and the plurality of write operation nodes are sequentially stored in a write node list;
the step of receiving the command sent by the client and parsing the command includes:
receiving the command sent by the client, and determining whether the command is a write command or a read command;
the step of selecting an operation node from the plurality of operation nodes according to the parsed command and the data processing policy to execute the parsed command to perform data processing includes:
when the command is determined to be a write command, selecting all the write operation nodes in the write node list according to the data processing policy, so that all the write operation nodes execute the write command in the order in which they appear in the write node list;
and when the command is determined to be a read command, selecting a write operation node from the write node list or a read operation node from the read node list according to the data processing policy, so as to execute the read command.
In an alternative embodiment, the data processing policy includes a read-only node policy and a read node priority policy;
the step of selecting the write operation node from the write node list or the read operation node from the read node list according to the data processing policy includes:
when the data processing policy is the read-only node policy, randomly selecting one read operation node from the read node list according to the read-only node policy;
and when the data processing policy is the read node priority policy, determining whether the read node list is empty; if the read node list is not empty, randomly selecting one read operation node from the read node list, and if the read node list is empty, randomly selecting one write operation node from the write node list.
In an alternative embodiment, the data processing policy includes a read-only write node policy and a write node priority policy;
the step of selecting the write operation node from the write node list or the read operation node from the read node list according to the data processing policy includes:
when the data processing policy is the read-only write node policy, randomly selecting one write operation node from the write node list according to the read-only write node policy;
and when the data processing policy is the write node priority policy, determining whether the write node list is empty; if the write node list is not empty, randomly selecting one write operation node from the write node list, and if the write node list is empty, randomly selecting one read operation node from the read node list.
In an alternative embodiment, each operation node corresponds one-to-one to a node in the cache cluster, and the method further includes:
for each operation node, sending, through the operation node, a heartbeat packet to the node in the cache cluster corresponding to that operation node;
when a feedback message returned by the node in the corresponding cache cluster is received, recording a delay time between the time of sending the heartbeat packet and the time of receiving the feedback message;
and when no feedback message returned by the node in the corresponding cache cluster is received, deleting the operation node from the plurality of operation nodes.
In an alternative embodiment, the data processing policy includes a minimum delay policy; the step of selecting an operation node from the plurality of operation nodes according to the parsed command and the data processing policy includes:
if the parsed command is a read command, selecting the operation node with the minimum delay time from the plurality of operation nodes according to the minimum delay policy.
In an alternative embodiment, each of the operation nodes is configured with a connection pool, and the step of executing the parsed command includes:
sending, based on the connection pool, the parsed command through the selected operation node to the node in the cache cluster corresponding to that operation node, so that the node in the corresponding cache cluster executes the parsed command.
In a second aspect, an embodiment provides a data processing apparatus applied to a server, where the server is configured with a data processing policy and a plurality of operation nodes, and the apparatus includes a parsing module and a selection module;
the parsing module is configured to receive a command sent by a client and parse the command;
the selection module is configured to select an operation node from the plurality of operation nodes according to the parsed command and the data processing policy, so as to execute the parsed command and perform data processing.
In a third aspect, an embodiment provides a server configured with a data processing policy and a plurality of operation nodes, where the server is configured to:
receiving a command sent by a client, and parsing the command;
and selecting an operation node from the plurality of operation nodes according to the parsed command and the data processing policy, so as to execute the parsed command and perform data processing.
In a fourth aspect, an embodiment provides a storage medium having stored thereon a computer program which, when executed by a processor, implements a data processing method according to any of the foregoing embodiments.
According to the data processing method, apparatus, server and storage medium described above, after a command sent by the client is received, the command is parsed, and an operation node is selected from the plurality of operation nodes according to the parsed command and the configured data processing policy to execute the parsed command. Service personnel are not required to write code to select a node, so the solution is flexible, simple to implement and low in cost.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings of the embodiments. It is apparent that the described embodiments are some, but not all, of the embodiments of the present invention. The components of the embodiments of the present invention, as generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the invention, as presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures.
It should be noted that the features of the embodiments of the present invention may be combined with each other without conflict.
With the popularization of the Internet, content has become increasingly complex, and the number of users and the volume of access keep growing, so applications need to support higher concurrency while the computation performed by application servers and database servers keeps increasing. However, the resources of an application server are often limited and technical changes come slowly, and the number of requests a database server can accept per second (as well as its file reads and writes) is also limited. To use these limited resources effectively and provide as much throughput as possible, a cache system is usually adopted for reading data: a request in each link can obtain the target data directly from the cache system and return it, which reduces the amount of computation, effectively improves the response speed, and allows more users to be served with the limited resources.
Currently, most cache systems adopt a master-slave structure in which a master access node and a slave access node are generally provided, such as the master-slave elastic cache provided by Amazon Web Services (AWS). Under this architecture, service personnel have to hard-code whether to read from the master node or from a slave node, which offers poor flexibility: if the master node is read, the whole service is unavailable while the master node is faulty; if a slave node is read, there is no failover capability and the service is unavailable while that slave node is faulty. In addition, because of software and hardware upgrades and capacity planning, cache system migration is a common operation; it generally requires service developers to change every write to the cache system into a double write, which is labor-intensive and error-prone.
Based on the above, the present embodiment provides a data processing method to address the above problems.
Referring to fig. 1, fig. 1 is a schematic diagram of an optional application scenario of the data processing method provided in this embodiment, where the data processing method may be applied to a load balancing cache system 100. The load balancing cache system 100 includes a client 10, a server 20, and a cache cluster 30.
The client 10 refers to a user terminal used by a service party. The client 10 may be, but is not limited to, a personal computer (PC), a tablet computer, a personal digital assistant (PDA), a mobile Internet device (MID), and the like.
In this embodiment, the cache cluster 30 may be a cache cluster with a master-slave architecture, for example, a Redis cluster or a Memcached cluster. A master-slave cache cluster comprises at least a master node and at least one slave node, and within one cache cluster the data of the master node and the data of the slave nodes are synchronized. The master node is used for data reading and writing, and the slave nodes are used for data reading.
In this embodiment, the server 20 may be a middleware of the load balancing cache system 100, and is connected to the cache cluster 30 and the client 10.
The server 20 is configured with a data processing policy and a plurality of operation nodes, each operation node corresponds one-to-one to a node in the cache cluster 30, and each operation node is communicatively connected to the corresponding node in the cache cluster 30.
Optionally, the data processing policy and the operation nodes may be configured according to the service requirements of the service party. In a specific embodiment, the service party provides the required operation nodes and data processing policy according to its service requirements, and the server 20 is constructed according to the provided operation nodes and data processing policy.
As an optional implementation manner, in this embodiment, the server 20 may be integrated with the client 10, that is, the server 20 may be deployed on the client 10, and the client 10 calls the server 20 so that the server 20 executes the data processing method provided in this embodiment. When the server 20 is integrated in the client 10, the client 10 may import a cache system load balancing library through Java annotation technology and the Spring Boot framework; in implementation, the cache system load balancing library can be used simply by introducing the corresponding annotation, and the data processing method provided in this embodiment is executed through the cache system load balancing library.
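For illustration, the following minimal sketch shows what such annotation-driven integration could look like on a Spring Boot client. The annotation name EnableCacheLoadBalancing and the configuration class are hypothetical placeholders introduced for this example, not the actual API of the load balancing library described in this embodiment.

```java
// Hypothetical illustration only: the annotation and configuration class below are
// placeholders, not the actual API of the cache system load balancing library.
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Import;

// Marker annotation: placing it on the application class imports the library's
// configuration into the Spring context, so the client only adds one annotation.
@Target(ElementType.TYPE)
@Retention(RetentionPolicy.RUNTIME)
@Import(CacheLoadBalancingConfiguration.class)
@interface EnableCacheLoadBalancing {}

// Placeholder standing in for the library's Spring configuration class.
class CacheLoadBalancingConfiguration {}

@SpringBootApplication
@EnableCacheLoadBalancing
public class ClientApplication {
    public static void main(String[] args) {
        SpringApplication.run(ClientApplication.class, args);
    }
}
```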
As another alternative implementation manner, in this embodiment, the server 20 may be deployed independently of the client 10, that is, the server 20 may be deployed separately. When the server 20 is deployed independently of the client 10, the server 20 is communicatively connected to the client 10 to process requests, commands, etc. issued by the client 10.
The data processing method provided in this embodiment is applied to the server 20 in the load balancing cache system 100. Optionally, in order to facilitate data processing and improve the response rate of data processing, in this embodiment the server 20 is integrated with the client 10.
Based on the architecture of the load balancing cache system 100, please refer to fig. 2, which is a flow chart of the data processing method according to this embodiment. The steps in the flow chart shown in fig. 2 are described in detail below.
Step S10: receiving a command sent by the client, and parsing the command.
Step S20: selecting an operation node from the plurality of operation nodes according to the parsed command and the data processing policy, so as to execute the parsed command and perform data processing.
Because the server provided in this embodiment is configured with a data processing policy and a plurality of operation nodes, after a command sent by the client is received, an operation node can be selected from the plurality of operation nodes according to the parsed command and the configured data processing policy to execute the parsed command for data processing. No manual coding is needed to select a node, the intrusion into the service code written by service personnel is small, the flexibility of node selection is improved, and the solution is simple to implement and low in cost.
In order to facilitate selection of an operation node, in this embodiment the operation nodes are divided into write operation nodes and read operation nodes according to their functions; that is, the plurality of operation nodes in this embodiment include a plurality of read operation nodes and a plurality of write operation nodes.
In order to facilitate maintenance of the operation nodes, in the present embodiment, the plurality of read operation nodes are sequentially stored in the read node list, and the plurality of write operation nodes are sequentially stored in the write node list.
The order of the read operation nodes and the write operation nodes may be set arbitrarily, which is not limited in this embodiment.
In an alternative embodiment, the plurality of read operation nodes and the plurality of write operation nodes may be sequentially stored in the same node list. In order to distinguish the read operation nodes from the write operation nodes, however, the role of each operation node needs to be marked in the node list; for example, if an operation node is a read operation node, it is marked as a read operation node in the node list. In this way, read operation nodes and write operation nodes can be distinguished within the same node list, which facilitates selection of an operation node. The manner of marking read operation nodes and write operation nodes can be chosen flexibly; for example, they may be marked with different identification information or different fields.
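As a concrete illustration of the lists and role markers described above, the sketch below shows one possible way to represent operation nodes in Java; the type and field names are assumptions made for this example rather than part of the embodiment.

```java
import java.util.ArrayList;
import java.util.List;

// Role marker distinguishing read operation nodes from write operation nodes.
enum NodeRole { READ, WRITE }

// One operation node, corresponding one-to-one to a node in a cache cluster.
class OperationNode {
    final String host;                           // address of the corresponding cache cluster node
    final int port;
    final NodeRole role;                         // marks the node as a read node or a write node
    volatile long delayMillis = Long.MAX_VALUE;  // delay time recorded by the heartbeat check

    OperationNode(String host, int port, NodeRole role) {
        this.host = host;
        this.port = port;
        this.role = role;
    }
}

class NodeRegistry {
    // Either a single list in which each node carries its role marker ...
    final List<OperationNode> markedNodes = new ArrayList<>();
    // ... or separate ordered read and write node lists, as in this embodiment.
    final List<OperationNode> readNodes = new ArrayList<>();
    final List<OperationNode> writeNodes = new ArrayList<>();
}
```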
Based on this arrangement of the operation nodes, after the command sent by the client is received and parsed, an operation node can be selected from the node list according to the parsed command and the configured data processing policy.
Optionally, in this embodiment, the command sent by the client may be a write command or a read command, so the step of the server receiving the command sent by the client and parsing the command may include the following procedure:
receiving the command sent by the client, and determining whether the command is a write command or a read command.
Correspondingly, the process of selecting an operation node from the plurality of operation nodes according to the parsed command and the data processing policy to execute the parsed command and perform data processing includes:
When the command is determined to be a write command, all the write operation nodes in the write node list are selected according to the data processing policy, so that all the write operation nodes execute the write command in the order in which they appear in the write node list.
When the command is determined to be a read command, a write operation node is selected from the write node list or a read operation node is selected from the read node list according to the data processing policy, so as to execute the read command.
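A minimal sketch of this dispatch step, under assumed interface and class names, is given below: a write command is executed by every write node in write-list order, while a read command is executed by the single node chosen through the configured read policy. It is illustrative only and is not the embodiment's actual implementation.

```java
import java.util.List;

// Illustrative sketch only; the interface and class names are assumptions.
interface CacheCommand {
    boolean isWrite();                       // true for write commands, false for read commands
}

interface NodeClient {
    String execute(CacheCommand command);    // forwards the command to the corresponding cache cluster node
}

interface ReadNodeSelector {
    NodeClient select();                     // applies the configured read policy
}

class CommandDispatcher {
    private final List<NodeClient> writeNodes;   // ordered write node list
    private final ReadNodeSelector readSelector;

    CommandDispatcher(List<NodeClient> writeNodes, ReadNodeSelector readSelector) {
        this.writeNodes = writeNodes;
        this.readSelector = readSelector;
    }

    String dispatch(CacheCommand command) {
        if (command.isWrite()) {
            String result = null;
            // A write command is executed by every write node in write-list order,
            // which is what later enables multi-write migration.
            for (NodeClient node : writeNodes) {
                result = node.execute(command);
            }
            return result;
        }
        // A read command is executed by the single node chosen by the read policy.
        return readSelector.select().execute(command);
    }
}
```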
In this embodiment, after an operation node is selected from the node list according to the parsed command and the configured data processing policy, the parsed command is executed based on the selected operation node to perform data processing.
In an alternative embodiment, the step of executing the parsed command may include:
sending, based on the connection pool, the parsed command through the selected operation node to the node in the cache cluster corresponding to that operation node, so that the node in the corresponding cache cluster executes the parsed command.
The connection pool stores connections that have already been established; when a request arrives, an established connection is used directly to access the target, so the processes of creating and destroying a connection are avoided and data processing performance is improved.
In an exemplary embodiment, after an operation node is selected from the node list, the selected operation node may send the parsed command to the node in the corresponding cache cluster through the connection pool, and the node in the corresponding cache cluster executes the parsed command. That is, an already created connection is selected from the connection pool, the parsed command is sent to the node in the corresponding cache cluster through that connection, and the node in the cache cluster executes the parsed command after receiving it.
For example, when a read command is obtained by parsing, after a read operation node is selected from the read node list, the read operation node sends the read command through the connection pool to the node in the corresponding cache cluster, and that node performs the data read operation according to the read command.
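As an illustration of forwarding a read command through a connection pool, the sketch below assumes the cache cluster node is a Redis instance accessed with the Jedis client library; the embodiment itself does not prescribe a particular cache or client library.

```java
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;

// Forwards a read command to the corresponding cache node over a pooled connection.
class ReadNodeClient {
    private final JedisPool pool;   // pre-created connections to the corresponding cache node

    ReadNodeClient(String host, int port) {
        this.pool = new JedisPool(host, port);
    }

    String get(String key) {
        // Borrow an already established connection instead of creating a new one;
        // try-with-resources returns it to the pool when the block exits.
        try (Jedis jedis = pool.getResource()) {
            return jedis.get(key);
        }
    }
}
```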
In this embodiment, a read operation node corresponds to a slave node in a cache cluster, and a write operation node corresponds to a master node in a cache cluster. Therefore, this embodiment can complete data migration between cache clusters by configuring a plurality of write operation nodes, where each write operation node corresponds to the master node of a different cache cluster.
In an exemplary embodiment, when the parsing result indicates that the command is a write command, after all the write operation nodes are selected from the write node list, each write operation node sends the write command to the master node of its corresponding cache cluster in the order in the write node list, so that the master node of the corresponding cache cluster performs the data write operation, that is, caches the data. After the cached data of the master nodes of all the cache clusters is synchronized, the master node that needs to be taken offline can be deleted, thereby completing the multi-write migration operation.
For example, suppose a write operation node A and a write operation node B are sequentially stored in the write node list, where write operation node A corresponds to the master node of cache cluster A and write operation node B corresponds to the master node of cache cluster B. When the parsing result is a write command, write operation node A and write operation node B send the write command to the master nodes of their corresponding cache clusters in sequence, that is, write operation node A sends the write command to the master node of cache cluster A and write operation node B sends the write command to the master node of cache cluster B, so that both master nodes execute the data caching operation. After the cached data in the master node of cache cluster A and the master node of cache cluster B is synchronized, if the master node of cache cluster A needs to be taken offline, it is deleted; its data has thereby been migrated to the master node of cache cluster B.
It can be understood that, in this embodiment, after the node in the corresponding cache cluster executes the command, the execution result may also be returned to the corresponding operation node through the connection pool and then returned to the client by that operation node.
According to the data processing method provided in this embodiment, when the command sent by the client is a write command, all the write operation nodes in the write node list execute the write command in sequence according to the data processing policy, so that the multi-write migration of data is completed; the operation is simple, the workload is low, and errors are unlikely. When the command sent by the client is a read command, a write operation node is selected from the write node list or a read operation node is selected from the read node list according to the data processing policy to execute the read command, so the flexibility is high and the response time is short.
In an alternative implementation manner, the data processing policy provided in this embodiment includes a read-only node policy, a read node priority policy, a read-only write node policy, and a write node priority policy.
Accordingly, the process of selecting a write operation node from the write node list or a read operation node from the read node list according to the data processing policy includes:
when the data processing policy is the read-only node policy, randomly selecting one read operation node from the read node list according to the read-only node policy;
when the data processing policy is the read node priority policy, determining whether the read node list is empty; if the read node list is not empty, randomly selecting one read operation node from the read node list, and if the read node list is empty, randomly selecting one write operation node from the write node list;
when the data processing policy is the read-only write node policy, randomly selecting one write operation node from the write node list according to the read-only write node policy;
and when the data processing policy is the write node priority policy, determining whether the write node list is empty; if the write node list is not empty, randomly selecting one write operation node from the write node list, and if the write node list is empty, randomly selecting one read operation node from the read node list.
An empty node list indicates that the server has not configured the corresponding operation nodes; for example, an empty read node list indicates that no read operation node is configured on the server, and an empty write node list indicates that no write operation node is configured on the server.
In this embodiment, when the configured data processing policy is the read-only node policy or the read-only write node policy and a read command is obtained by parsing, a read operation node is randomly selected only from the read node list, or a write operation node is randomly selected only from the write node list, respectively. When the configured data processing policy is the read node priority policy, a read operation node is randomly selected from the read node list if the read node list is not empty; if the read node list is empty, a write operation node is randomly selected from the write node list. When the configured data processing policy is the write node priority policy, a write operation node is randomly selected from the write node list if the write node list is not empty; if the write node list is empty, a read operation node is randomly selected from the read node list. This improves the flexibility of node selection and the availability of the service.
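For clarity, the sketch below implements the four read policies exactly as described above; the enum values and class name are assumptions made for this example.

```java
import java.util.List;
import java.util.concurrent.ThreadLocalRandom;

// Illustrative names for the four read policies described in this embodiment.
enum ReadPolicy { READ_ONLY_READ_NODE, READ_NODE_FIRST, READ_ONLY_WRITE_NODE, WRITE_NODE_FIRST }

class PolicyBasedSelector<T> {
    private final List<T> readNodes;
    private final List<T> writeNodes;
    private final ReadPolicy policy;

    PolicyBasedSelector(List<T> readNodes, List<T> writeNodes, ReadPolicy policy) {
        this.readNodes = readNodes;
        this.writeNodes = writeNodes;
        this.policy = policy;
    }

    // Chooses the node that will execute a read command.
    T selectForRead() {
        switch (policy) {
            case READ_ONLY_READ_NODE:                          // read-only node policy
                return randomFrom(readNodes);
            case READ_NODE_FIRST:                              // read node priority policy
                return readNodes.isEmpty() ? randomFrom(writeNodes) : randomFrom(readNodes);
            case READ_ONLY_WRITE_NODE:                         // read-only write node policy
                return randomFrom(writeNodes);
            case WRITE_NODE_FIRST:                             // write node priority policy
                return writeNodes.isEmpty() ? randomFrom(readNodes) : randomFrom(writeNodes);
            default:
                throw new IllegalStateException("unknown policy: " + policy);
        }
    }

    private T randomFrom(List<T> nodes) {
        return nodes.get(ThreadLocalRandom.current().nextInt(nodes.size()));
    }
}
```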
It should be noted that, in this embodiment, the read-only node policy, the read node priority policy, the read-only write node policy and the write node priority policy act in parallel with the handling of write commands. That is, when the command is determined to be a write command, all the write operation nodes in the write node list are selected according to the data processing policy; when the command is determined to be a read command, a read operation node is selected from the read node list or a write operation node is selected from the write node list according to whichever of the read-only node policy, the read node priority policy, the read-only write node policy or the write node priority policy is currently configured.
According to the data processing method provided in this embodiment, an operation node is selected according to the configured data processing policy to execute the service, which improves service availability: when the master node fails, the data reading service can still be provided based on a slave node, and when a slave node fails, the data reading service can still be provided based on another slave node or the master node.
In order to maintain the configured operation node and ensure availability of the configured operation node, referring to fig. 3 in combination, the method further includes steps S30 to S60.
Step S30: for each operation node, sending a heartbeat packet through the operation node to the node in the cache cluster corresponding to that operation node.
Step S40: determining whether a feedback message returned by the node in the corresponding cache cluster is received.
When a feedback message returned by the node in the corresponding cache cluster is received, step S50 is executed; when no feedback message is received, step S60 is executed.
Step S50: recording a delay time between the time of sending the heartbeat packet and the time of receiving the feedback message.
Step S60: deleting the operation node from the plurality of operation nodes.
In this embodiment, a health check is performed on the operation nodes through heartbeat packets: at regular intervals, each operation node sends a heartbeat packet to the node in its corresponding cache cluster and determines whether a feedback message returned by that node is received.
If the node in the corresponding cache cluster is available, it returns a feedback message to the operation node that sent the heartbeat packet, and after receiving the feedback message the operation node records the delay time between the time of sending the heartbeat packet and the time of receiving the feedback message. If the operation node does not receive a feedback message from the node in the corresponding cache cluster, the node in the corresponding cache cluster is unavailable, and the operation node is deleted, thereby ensuring the availability of the configured operation nodes.
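One possible shape of this periodic health check is sketched below; the five-second interval, the interface names and the millisecond delay unit are assumptions chosen for illustration rather than values fixed by the embodiment.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Each registered operation node periodically pings its cache cluster node,
// records the round-trip delay on success and is removed from the list on failure.
class HealthChecker {
    interface PingableNode {
        boolean ping();                    // sends a heartbeat; true if a feedback message was received
        void setDelayMillis(long delay);   // records the measured delay time
    }

    private final List<PingableNode> nodes = new CopyOnWriteArrayList<>();
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

    void register(PingableNode node) {
        nodes.add(node);
    }

    void start() {
        scheduler.scheduleAtFixedRate(this::checkAll, 0, 5, TimeUnit.SECONDS);
    }

    private void checkAll() {
        for (PingableNode node : nodes) {
            long start = System.nanoTime();
            if (node.ping()) {
                // Feedback received: record the delay between sending the heartbeat
                // packet and receiving the feedback message.
                node.setDelayMillis((System.nanoTime() - start) / 1_000_000);
            } else {
                // No feedback: the corresponding cache cluster node is unavailable,
                // so the operation node is deleted from the configured nodes.
                nodes.remove(node);
            }
        }
    }
}
```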
Based on the above health check of the operation nodes, a delay time is recorded for each operation node. In order to further improve data processing efficiency, the data processing policy provided in this embodiment may further include a minimum delay policy. Accordingly, the step of selecting an operation node from the plurality of operation nodes according to the parsed command and the data processing policy may include:
and if the analyzed command is a read command, selecting the operation node with the minimum delay time from the plurality of operation nodes according to the minimum delay strategy.
For example, suppose the configured operation nodes include operation node A, operation node B, and operation node C, and after the health check the delay time of operation node A is 0.1 s, the delay time of operation node B is 0.2 s, and the delay time of operation node C is 0.3 s. When a read command is obtained by parsing, operation node A is selected according to the minimum delay policy to execute the read command.
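Reusing the illustrative OperationNode type from the earlier sketch (so this fragment is not standalone), selection under the minimum delay policy could look like the following.

```java
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

// Minimum delay policy: pick the operation node with the smallest recorded
// heartbeat delay, e.g. node A (0.1 s) over B (0.2 s) and C (0.3 s) above.
class MinDelaySelector {
    static Optional<OperationNode> select(List<OperationNode> nodes) {
        return nodes.stream().min(Comparator.comparingLong(n -> n.delayMillis));
    }
}
```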
According to the data processing method provided in this embodiment, an operation node is selected according to the configured data processing policy. On the one hand, service personnel do not need to write code to select a node, which improves the flexibility of node selection and reduces cost; on the other hand, the delay of data reading is reduced and the read throughput of the service is improved.
On the basis of the foregoing, referring to fig. 4, this embodiment further provides a data processing apparatus 21 applied to the server 20, where the server 20 is configured with a data processing policy and a plurality of operation nodes, and the apparatus includes a parsing module 211 and a selection module 212.
The parsing module 211 is configured to receive a command sent by a client, and parse the command.
The selection module 212 is configured to select an operation node from the plurality of operation nodes according to the parsed command and the data processing policy, so as to execute the parsed command to perform data processing.
It will be clear to those skilled in the art that, for convenience and brevity of description, the specific operation of the data processing apparatus 21 described above may refer to the corresponding procedure in the foregoing method, and will not be described in detail herein.
On the basis of the above, the present embodiment further provides a server, where the server is configured with a data processing policy and a plurality of operation nodes, and the server is configured to receive a command sent by a client, parse the command, and select an operation node from the plurality of operation nodes according to the parsed command and the data processing policy, so as to execute the parsed command, and perform data processing.
On the basis of the above, the present embodiment also provides a storage medium having stored thereon a computer program which, when executed by a processor, implements the data processing method of any of the foregoing embodiments.
It will be clear to those skilled in the art that, for convenience and brevity of description, the specific working process of the server side and the storage medium described above may refer to the corresponding process in the foregoing method, and will not be described in detail herein.
In summary, according to the data processing method, apparatus, server and storage medium provided in this embodiment, after a command sent by the client is received, the command is parsed, and an operation node is selected from the plurality of operation nodes according to the parsed command and the configured data processing policy to execute the parsed command. Service personnel are not required to write code to select a node, so the solution is flexible, simple and low in cost.
The foregoing is merely illustrative of the present invention and is not intended to limit it; any changes or substitutions that can be readily conceived by those skilled in the art within the scope disclosed herein shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.