Detailed Description
The technical solutions in the embodiments of the present specification will be clearly and completely described below with reference to the drawings in the embodiments of the present specification.
The terms "first," "second," "third," and the like in the description, in the claims, and in the above drawings are used for distinguishing between different objects and not necessarily for describing a particular sequential or chronological order. Furthermore, the terms "comprise" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed steps or elements but may include other steps or elements not listed or inherent to such process, method, system, article, or apparatus.
It should be noted that the information (including but not limited to user equipment information, user personal information, etc.), data (including but not limited to data for analysis, stored data, presented data, etc.), and signals involved in the embodiments of the present disclosure are all authorized by the user or fully authorized by all parties, and the collection, use, and processing of relevant data must comply with the relevant laws, regulations, and standards of the relevant countries and regions. For example, the target data, the data to be checked, and the like referred to in this specification are all acquired with sufficient authorization.
In internet finance scenarios, related data caching schemes have been greatly optimized in terms of time consumption, but they still face considerable challenges in cache data consistency, such as dirty data, data delay, and data loss, all of which can cause cached data to become inconsistent.
In view of this, the embodiments of the present specification provide a data cache synchronization method that, through a dual-link cache synchronization scheme combining cross-city transaction messages with asynchronous RPC fallback compensation, mitigates the single-link failure risk of relying only on message middleware or an RPC gateway on the cache synchronization link, and ensures the timeliness, accuracy, and stability of data cache synchronization.
Referring next to FIG. 1, FIG. 1 is a schematic architecture diagram of a data caching system according to an exemplary embodiment of the present disclosure. As shown in FIG. 1, the data caching system may include a first server 110 and a second server 120. Wherein:
The first server 110 may be, but is not limited to, a server corresponding to a first city data center, and is a core component of the first city data center for running application programs and processing user requests. An agreement subscription application may be installed in the first server 110 to provide an agreement subscription service for the user. After the target event occurs on the first server, the first server 110 may send target data related to the target event to the second server 120 in the form of a transaction message, and may generate and execute an asynchronous cache synchronization task, where the asynchronous cache synchronization task is to send the target data related to the target event to the second server 120 by way of an asynchronous RPC. The first server 110 may be, but is not limited to, a hardware server, a virtual server, a cloud server, etc.
The second server 120 may be a server corresponding to a second city data center and capable of providing various data caches. After subscribing to the transaction message of the first server 110, it may synchronously cache the target data sent by the first server 110 through the transaction message; it may also receive the target data sent by the first server 110 through the asynchronous RPC and perform asynchronous compensation synchronization caching based on that data. The second server 120 may be, but is not limited to, a hardware server, a virtual server, a cloud server, etc.
The network may be a medium providing a communication link between the second server 120 and the first server 110, or may be the internet including network devices and transmission media, but is not limited thereto. The transmission medium may be a wired link, such as, but not limited to, coaxial cable, optical fiber, and digital subscriber line (DSL), or a wireless link, such as, but not limited to, wireless fidelity (Wi-Fi), Bluetooth, a mobile device network, etc.
It will be appreciated that the number of first servers 110 and second servers 120 in the data caching system shown in FIG. 1 is by way of example only, and that any number of first servers 110 and second servers 120 may be included in the data caching system in a particular implementation. The embodiment of the present specification is not particularly limited thereto. For example, and without limitation, the first server 110 may be a first server cluster of a plurality of first servers and the second server 120 may be a second server cluster of a plurality of second servers.
Next, with reference to FIG. 1, a data cache synchronization method provided in an embodiment of the present disclosure will be described. Referring to FIG. 2, a flow chart of a data cache synchronization method according to an exemplary embodiment of the present disclosure is shown. As shown in FIG. 2, the data cache synchronization method includes the following steps:
S201, after the target event occurs on the first server, the first server sends target data related to the target event to the second server across cities in the form of a transaction message.
Specifically, the target event may include, but is not limited to, an agreement signing event or other events requiring data cache synchronization. An agreement signing event refers to the process of signing a legally binding service agreement between the user and the service provider corresponding to the first server by means of electronic signature, API call, or interface operation, for example, but not limited to, the user completing online loan contract signing. When the first server detects that the target event has occurred, it may encapsulate the target data related to the target event (such as, but not limited to, the signed agreement identifier, user information, timestamp, etc.) into a transaction message and send the transaction message to the second server through a message queue supporting the XA protocol, so that the target data related to the target event cannot be lost in the process of cross-city cache synchronization, and atomicity between message transmission and the local transaction of the first server is guaranteed. A transaction message refers to a message sent by the publishing application system within the transaction operation sequence of its local database. The delivery of such a message is consistent with the database transaction state: when the transaction commits, the message is delivered to the subscribers; when the transaction rolls back, the message is not delivered.
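The commit-gated delivery semantics of a transaction message described above can be sketched as follows (an illustrative Python toy, not part of the specification; `LocalTransaction`, `stage_message`, and the in-memory queue are hypothetical names):

```python
class LocalTransaction:
    """Toy model: messages staged inside a local transaction become
    visible to subscribers only if the transaction commits."""

    def __init__(self, queue):
        self.queue = queue      # message queue shared with subscribers
        self._staged = []       # messages staged within this transaction

    def stage_message(self, message):
        self._staged.append(message)

    def commit(self):
        # Commit: staged messages are delivered to subscribers.
        self.queue.extend(self._staged)
        self._staged.clear()

    def rollback(self):
        # Rollback: staged messages are discarded, never delivered.
        self._staged.clear()


queue = []
tx = LocalTransaction(queue)
tx.stage_message({"agreement_id": "A1", "user": "u1"})
tx.commit()
assert queue == [{"agreement_id": "A1", "user": "u1"}]

tx2 = LocalTransaction(queue)
tx2.stage_message({"agreement_id": "A2", "user": "u2"})
tx2.rollback()
assert len(queue) == 1   # the rolled-back message was never delivered
```

A production implementation would of course rely on the message queue's XA/transactional-message support rather than an in-process list; the sketch only shows the consistency contract.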
Optionally, when the target data related to the target event (such as, but not limited to, the signed agreement identifier, user information, timestamp, etc.) is encapsulated into the transaction message, first version information generated based on, but not limited to, a Lamport clock algorithm may also be encapsulated. The target data sent by the first server through the transaction message carries the first version information, which is used to identify the freshness of that target data, so that the second server receiving the target data can decide whether it needs to synchronously cache the target data sent by the first server through the transaction message. The first version information may include, but is not limited to, a first version number of the target data sent by the first server through the transaction message.
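A Lamport-style logical clock for generating such version information might look like the following minimal sketch (illustrative Python; the `LamportClock` class and its method names are assumptions, not prescribed by the specification):

```python
class LamportClock:
    """Monotonically increasing logical clock for versioning cache updates."""

    def __init__(self):
        self.time = 0

    def tick(self):
        # Local event (e.g. a new target event): advance the clock and
        # use the result as the version number attached to the message.
        self.time += 1
        return self.time

    def observe(self, received_time):
        # On receiving a versioned message, keep the local clock strictly
        # ahead of the highest version seen so far.
        self.time = max(self.time, received_time) + 1
        return self.time


clock = LamportClock()
v1 = clock.tick()
v2 = clock.tick()
assert (v1, v2) == (1, 2)        # strictly increasing version numbers
assert clock.observe(10) == 11   # stays ahead of any observed version
```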
S202, after subscribing to the transaction message, the second server synchronously caches target data sent by the first server through the transaction message.
Specifically, the second server may subscribe to the transaction message over a long connection, check message integrity after receipt, and write the target data sent by the first server through the transaction message to the cache cluster of its local city data center, such as, but not limited to, its cache database.
Optionally, the target data sent by the first server through the transaction message carries first version information. After subscribing to the transaction message, the second server may also determine whether the first version information is greater than the current version information in the cache database corresponding to the second server (i.e., the latest version information of the second server's local cache). If the first version information is greater than the current version information, this indicates that the second server has not yet synchronously cached the target data corresponding to the first version information and that the target data sent by the first server through the transaction message is newer, so the step of synchronously caching the target data sent by the first server through the transaction message may be executed. In this way, by means of version comparison, the second server is prevented from repeatedly synchronizing the cache or caching outdated, invalid data, which improves the utilization and effectiveness of cache synchronization.
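The version-comparison decision described above can be illustrated as follows (a hedged Python sketch; the `sync_cache` function and the `(version, data)` cache layout are hypothetical, not from the specification):

```python
def sync_cache(cache, key, data, incoming_version):
    """Write `data` only if incoming_version is newer than the cached version.

    Returns True if the cache was updated, False if the message was skipped
    as a duplicate or stale update.
    """
    current_version = cache.get(key, (0, None))[0]
    if incoming_version > current_version:
        cache[key] = (incoming_version, data)
        return True
    return False


cache = {}
assert sync_cache(cache, "user:1", {"status": "signed"}, 3) is True
assert sync_cache(cache, "user:1", {"status": "old"}, 2) is False   # stale: skipped
assert cache["user:1"] == (3, {"status": "signed"})
```

The same gate applies on both synchronization links (transaction message and asynchronous RPC), which is what makes the dual-link delivery idempotent.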
S203, the first server generates an asynchronous cache synchronization task after the target event occurs.
Specifically, after the target event occurs on the first server, the first server sends the target data related to the target event to the second server across cities in the form of a transaction message, and asynchronously generates a compensation task (i.e., an asynchronous cache synchronization task) that encapsulates the same target data for the same target event. According to the embodiments of the present specification, fallback scenarios of message loss or synchronization failure in the data cache synchronization process can be handled asynchronously through the asynchronous cache synchronization task, and the delay impact on the first server's core transactions can be reduced through asynchronous processing.
Optionally, the asynchronous cache synchronization task may be, but is not limited to being, scheduled by a distributed task queue, with a corresponding initial delay and retry strategy.
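One common way to realize an initial delay plus retry strategy is an exponential backoff schedule, sketched below (illustrative only; the specification does not mandate any particular backoff policy, and `retry_schedule` is a hypothetical helper):

```python
def retry_schedule(initial_delay, max_retries, backoff=2.0):
    """Yield the delay (in seconds) before each attempt: the initial delay
    first, then exponentially growing delays for each retry."""
    delay = initial_delay
    for _ in range(max_retries + 1):
        yield delay
        delay *= backoff


# First attempt after 1 s, then retries after 2 s, 4 s, 8 s.
assert list(retry_schedule(1.0, 3)) == [1.0, 2.0, 4.0, 8.0]
```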
S204, the first server executes the asynchronous cache synchronization task, which is to send the target data related to the target event to the second server by way of an asynchronous RPC.
Specifically, the first server may, but is not limited to, execute the asynchronous cache synchronization task according to a scheduling policy, i.e., send the target data related to the target event to the second server by way of an asynchronous RPC. An asynchronous RPC is a non-blocking remote service invocation mechanism that allows the first server to continue performing other operations after sending the target data associated with the target event.
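The non-blocking character of an asynchronous RPC can be illustrated with a thread-pool future (a Python sketch; `rpc_send` is a stand-in for the real remote call to the second server, which the specification does not detail):

```python
from concurrent.futures import ThreadPoolExecutor

def rpc_send(target_data):
    # Stand-in for the real asynchronous RPC to the second server.
    return {"ok": True, "echo": target_data}

executor = ThreadPoolExecutor(max_workers=4)

# Non-blocking: submit() returns a Future immediately, so the first
# server can continue performing other operations while the call runs.
future = executor.submit(rpc_send, {"agreement_id": "A1", "version": 3})

# ... other work can proceed here ...

result = future.result()   # collect the outcome later
assert result["ok"] is True
executor.shutdown(wait=True)
```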
Optionally, the target data sent by the first server through the asynchronous RPC carries second version information, where the second version information is used to decide whether to perform asynchronous compensation synchronization caching of the target data sent by the first server through the asynchronous RPC. The second version information identifies the freshness of the target data sent by the first server through the asynchronous RPC, so that the second server receiving the target data can decide whether it needs to synchronously cache that data. The second version information may include, but is not limited to, a second version number of the target data sent by the first server through the asynchronous RPC.
Optionally, after the first server generates the asynchronous cache synchronization task and before executing it, the asynchronous cache synchronization task may be recorded, but not limited to, in a first database corresponding to the first server. Asynchronous cache synchronization tasks in the to-be-executed state are then periodically extracted from the first database and executed.
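Recording tasks in the first database and periodically extracting those in the to-be-executed state might be sketched as follows (illustrative Python using an in-memory SQLite table; the `sync_task` schema and the state names are assumptions, not from the specification):

```python
import sqlite3

# In-memory stand-in for the first database's task table.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE sync_task (id INTEGER PRIMARY KEY, payload TEXT, state TEXT)")
db.execute("INSERT INTO sync_task (payload, state) VALUES ('A1', 'PENDING')")
db.execute("INSERT INTO sync_task (payload, state) VALUES ('A2', 'DONE')")
db.commit()

def poll_pending(db, limit=10):
    """Extract tasks in the to-be-executed state and mark them as running
    so that the next polling cycle does not pick them up again."""
    rows = db.execute(
        "SELECT id, payload FROM sync_task WHERE state = 'PENDING' LIMIT ?",
        (limit,),
    ).fetchall()
    for task_id, _ in rows:
        db.execute("UPDATE sync_task SET state = 'RUNNING' WHERE id = ?", (task_id,))
    db.commit()
    return rows


pending = poll_pending(db)
assert pending == [(1, "A1")]   # only the PENDING task is extracted
```

In a real deployment the poll would run on a schedule and each extracted task would trigger the asynchronous RPC described in S204.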
S205, the second server performs asynchronous compensation synchronous caching based on the target data sent by the first server through the asynchronous RPC.
Specifically, after the second server receives the target data sent by the first server through the asynchronous RPC, it determines whether the second version information is greater than the current version information in the cache database corresponding to the second server. If the second version information is greater than the current version information, this indicates that the target data corresponding to the second version information has not yet been synchronously cached in the second server and that the target data sent by the first server through the asynchronous RPC is newer, so asynchronous compensation synchronization caching can be performed on that target data. In this way, fallback scenarios of message loss or synchronization failure in the data cache synchronization process are handled through asynchronous compensation synchronization caching, and, by means of version comparison, the second server is prevented from repeatedly synchronizing the cache or caching outdated, invalid data, improving the utilization and effectiveness of cache synchronization.
As shown in FIG. 3, a protocol center program is installed on the first server. After a user completes an agreement signing through the protocol center program, the first server not only cache-synchronizes the target data related to the user's corresponding target event in the protocol center program to the second server in the form of a cross-city transaction message, but also generates a corresponding asynchronous cache synchronization task and records it in the first database; it then periodically polls, extracts, and executes the asynchronous cache synchronization tasks in the to-be-executed state in the first database to perform asynchronous RPC compensation cache synchronization. After the data view application installed on the second server subscribes to the cross-city transaction message, the corresponding target data may be synchronously cached in its cache database with reference to an implementation process similar to S202 described above. After receiving the target data sent by the first server through the asynchronous RPC, the second server may, with reference to an implementation process similar to S205, perform compensation synchronization caching of the corresponding target data in the cache database.
In the embodiments of the present specification, after the target event occurs on the first server, the target data related to the target event is sent to the second server across cities in the form of a transaction message, so that the second server, after subscribing to the transaction message, can synchronously cache the target data sent by the first server through the transaction message. In addition, after the target event occurs on the first server, the first server generates and executes an asynchronous cache synchronization task, which sends the target data related to the target event to the second server by way of an asynchronous RPC, so that the second server performs asynchronous compensation synchronization caching based on that data. This avoids message delay or loss caused by message middleware jitter under extreme conditions and the resulting dirty data, ensures the timeliness, accuracy, and stability of data cache synchronization, and resolves the single-link failure risk of the cache synchronization link relying only on message middleware or an RPC gateway.
In related data cache synchronization schemes, after data cache synchronization is performed through the transaction message, cache synchronization consistency problems are discovered through an active real-time check. However, such an active real-time cache check scheme is one-dimensional: it cannot cover all check scenarios, and missed check scenarios may lead to dirty cached data.
Based on this, the embodiments of the present specification further propose a more comprehensive data cache checking method applied after data cache synchronization. Next, please refer to FIG. 4, which is a schematic flow chart of a data cache checking method according to an exemplary embodiment of the present disclosure. As shown in FIG. 4, the data cache checking method may include, but is not limited to, the following steps:
S401, the first server sends a first data check request to the second server.
Specifically, the first data check request may, but is not limited to, carry at least one data check type among online data check, offline data check, incremental check, full check, etc. The triggering conditions for the online data check and the offline data check differ.
The check modes corresponding to the online data check may include, but are not limited to, active check and passive check. An active check is a consistency check of the cached data in the cache database corresponding to the second server against the data in the first database corresponding to the first server, initiated by the data-updating party (i.e., the first server); the corresponding implementation process is similar to S401-S403. A passive check is a consistency check of the cached data in the cache database, initiated by the second server when the cache data in its cache database is refreshed relative to the data in the first database corresponding to the first server; the corresponding implementation process is similar to S404-S406.
Optionally, the first data check request may, but is not limited to, include an online data check request; the first to-be-checked cache data may, but is not limited to, include the full amount of cache data in the cache database corresponding to the second server; and the first to-be-checked data may, but is not limited to, include the full amount of data in the first database corresponding to the first server. After the target event occurs on the first server, the first server may store the target data related to the target event in the first database corresponding to the first server. When the data stored in the first database is updated (changed), the step of sending the first data check request to the second server is triggered, i.e., the online active check of the data cache is triggered. The full amount of data may be, but is not limited to, the data related to the target event corresponding to the target user currently completing agreement subscription in the first database, so that an efficient, real-time data cache check can be performed for that target user and the consistency of data cache synchronization is ensured in a timely manner.
Optionally, the first data check request may also, but not limited to, include an offline data check request, where the offline data check request carries the target data type to be checked and the target data check range. The target data type may include, but is not limited to, incremental data and/or full data, and the target data check range may include, but is not limited to, at least one target data check period, such as, but not limited to, the last week, the last month, and so on. Incremental data refers to data newly generated within the target data check period. When the current time reaches a preset offline check time (such as, but not limited to, 24:00 on the last day of each week, the 1st of each month, etc.) or an offline data check instruction is received, the step of sending the first data check request to the second server is triggered. The full amount of data may be, but is not limited to, the data related to the target events corresponding to all users who have completed agreement subscription in the first database.
S402, the second server responds to the first data checking request and sends first to-be-checked cache data in a cache database corresponding to the second server to the first server.
Optionally, when the first data check request is an online data check request, the first data check request may, but is not limited to, carry the identifier of the target user to be checked. After receiving the first data check request, the second server may query the cache database based on the target user identifier to obtain the corresponding first to-be-checked cache data, i.e., all data related to the target event corresponding to the target user identifier, and then return the first to-be-checked cache data to the first server.
Optionally, when the first data check request is an offline data check request, the offline data check request may, but is not limited to, carry the target data type and target data check range to be checked. After receiving the first data check request, the second server may first retrieve the corresponding first to-be-checked cache data from the cache database based on the target data type and target data check range, and then return the first to-be-checked cache data to the first server.
S403, the first server performs data checking based on the first to-be-checked cache data and the first to-be-checked data in the first database corresponding to the first server, and a first data checking result is obtained.
Specifically, the first to-be-checked data is consistent in data type and/or data check range with the first to-be-checked cache data. After the first server receives the first to-be-checked cache data returned by the second server, it performs a data check between the first to-be-checked cache data and the corresponding first to-be-checked data in the first database, for example, but not limited to, checking whether the field information of the first to-be-checked cache data is consistent with the corresponding field information of the first to-be-checked data, to obtain the first data check result.
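The field-by-field comparison described above can be sketched as follows (illustrative Python; the `check_records` helper and the field names are hypothetical, not from the specification):

```python
def check_records(cache_record, source_record, fields):
    """Compare the given fields of a cached record against the source record;
    return the list of mismatched fields (an empty list means consistent)."""
    return [f for f in fields
            if cache_record.get(f) != source_record.get(f)]


source = {"agreement_id": "A1", "status": "signed", "ts": 100}
cached = {"agreement_id": "A1", "status": "signed", "ts": 90}
mismatches = check_records(cached, source, ["agreement_id", "status", "ts"])
assert mismatches == ["ts"]   # an inconsistent field would trigger an alarm
```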
Optionally, when the first data check result indicates that the first to-be-checked cache data is inconsistent with the first to-be-checked data, data cache alarm information is sent out to promptly flag the abnormal condition of inconsistent data caches and prompt relevant personnel to check in time whether there is a problem in the data cache link.
And/or
Referring next to FIG. 4, as shown in FIG. 4, the data cache checking method may also include, but is not limited to, the following steps:
S404, the second server sends a second data check request to the first server.
Specifically, the second data check request may include, but is not limited to, an online data check request. After the cache data in the cache database corresponding to the second server is updated (changed), the second server generates a cache check task and stores it in a second database corresponding to the second server's data bypass. Cache check tasks in the to-be-executed state are then periodically extracted from the second database and executed, triggering the sending of the second data check request to the first server.
S405, in response to the second data check request, the first server sends the second to-be-checked data in the first database corresponding to the first server to the second server.
Optionally, the second data check request may, but is not limited to, carry the user identifier corresponding to the updated (changed) cache data to be checked. After receiving the second data check request, the first server may first query the first database based on the user identifier to obtain the corresponding second to-be-checked data, i.e., all data related to the target event corresponding to the user identifier, and then return the second to-be-checked data to the second server.
S406, the second server performs a data check based on the second to-be-checked data and the second to-be-checked cache data in the cache database corresponding to the second server, to obtain a second data check result.
Specifically, the second to-be-checked data is consistent in data type and/or data check range with the second to-be-checked cache data. The data check process in S406 is similar to that in S403 and will not be described again here.
Optionally, the second to-be-checked data may include, but is not limited to, the full amount of data (i.e., all data) in the first database corresponding to the first server, and the second to-be-checked cache data includes the full amount of cache data (i.e., all cache data) in the cache database corresponding to the second server.
Optionally, the second to-be-checked data may include, but is not limited to, the full amount of data corresponding to the user identifier for which data has been updated (changed) in the first database corresponding to the first server, and the second to-be-checked cache data includes the full amount of cache data corresponding to the user identifier for which cache data has been updated (changed) in the cache database corresponding to the second server.
Optionally, when the second data check result indicates that the second to-be-checked cache data is inconsistent with the second to-be-checked data, data cache alarm information is sent out to promptly flag the abnormal condition of inconsistent data caches and prompt relevant personnel to check in time whether there is a problem in the data cache link.
As shown in FIG. 5, a protocol center program is installed on the first server. When the data in the first database corresponding to the protocol center changes, the first server triggers an active check (online check), i.e., it requests the corresponding first to-be-checked cache data from the second server's data bypass and checks it against its own first to-be-checked data to obtain the first data check result. When the current time reaches a preset offline check time (such as, but not limited to, 24:00 on the last day of each week, the 1st of each month, etc.) or an offline data check instruction is received, the first server also triggers an active check (offline check), i.e., it checks the corresponding first to-be-checked cache data against its own first to-be-checked data to obtain the first data check result. Here, if an offline incremental cache check is triggered, the first to-be-checked data may be, but is not limited to, an incremental data set obtained in advance by offline processing of the agreement data modified within the target time period; if an offline full cache check is triggered, the first to-be-checked data may be, but is not limited to, a full data set obtained in advance by offline processing of the full agreement data in the first database. When the cache data in the cache database corresponding to the second server changes, the second server invokes the data bypass to trigger a passive check (online check), i.e., it requests the corresponding second to-be-checked data from the first server and checks it against its own second to-be-checked cache data to obtain the second data check result.
FIG. 6 shows the inclusion relationship of the data check ranges among the online cache check, the incremental check, and the full check. In the embodiments of the present specification, the three-level cache check system shown in FIG. 6 covers three check mechanisms of different effective dimensions, namely online second-level check, offline daily incremental check, and monthly full check, and covers both the active and passive check directions in terms of scenarios. This avoids the risk of missed checks: through this multi-period, multi-direction check scheme, cache synchronization inconsistencies can be fully discovered and handled in time, effectively ensuring cache consistency.
Referring next to FIG. 7, FIG. 7 is a schematic structural diagram of a data cache synchronization device according to an exemplary embodiment of the present disclosure. The data cache synchronization device is applied to the first server. As shown in FIG. 7, the data cache synchronization device 700 includes:
The first sending module 710 is configured to send, after the target event occurs on the first server, target data related to the target event to a second server across cities in the form of a transaction message, so that the second server synchronously caches the target data sent by the first server through the transaction message after subscribing to the transaction message;
the cache synchronization task generation module 720 is configured to generate an asynchronous cache synchronization task after the target event occurs on the first server, where the asynchronous cache synchronization task is configured to send target data related to the target event to the second server by way of an asynchronous RPC;
and the cache synchronization task execution module 730 is configured to execute the asynchronous cache synchronization task, so that the second server performs asynchronous compensation synchronization caching based on the target data sent by the first server through the asynchronous RPC.
In one possible implementation manner, the target data sent by the first server through the transaction message carries first version information, which is used to decide whether to synchronously cache the target data sent by the first server through the transaction message; and the target data sent by the first server through the asynchronous RPC carries second version information, which is used to decide whether to perform asynchronous compensation synchronization caching of the target data sent by the first server through the asynchronous RPC.
In one possible implementation manner, the data cache synchronization device 700 further includes:
The recording module is configured to record the asynchronous cache synchronization task in the first database corresponding to the first server;
The above-mentioned cache synchronization task execution module 730 is specifically configured to:
periodically extract and execute asynchronous cache synchronization tasks in the to-be-executed state from the first database.
In one possible implementation manner, the data cache synchronization device 700 further includes:
the second sending module is configured to send a first data check request to the second server, so that the second server, in response to the first data check request, sends the first to-be-checked cache data in the cache database corresponding to the second server to the first server;
The first receiving module is configured to receive the first to-be-checked cache data sent by the second server, and to perform a data check based on the first to-be-checked cache data and the first to-be-checked data in the first database corresponding to the first server, to obtain a first data check result;
And/or
The second receiving module is used for receiving a second data check request sent by the second server;
and the third sending module is used for responding to the second data checking request and sending second data to be checked in the first database corresponding to the first server to the second server, so that the second server performs data checking based on the second data to be checked and the second cache data to be checked in the cache database corresponding to the second server to obtain a second data checking result, where the second data to be checked is consistent in data type and/or data checking range with the second cache data to be checked.
In one possible implementation manner, the first data checking request includes a data online checking request, the first to-be-checked cache data includes the full amount of cache data in the cache database corresponding to the second server, the first to-be-checked data includes the full amount of data in the first database corresponding to the first server, and the data cache synchronization device 700 further includes:
the storage module is used for storing the target data related to the target event into the first database corresponding to the first server after the target event occurs on the first server;
and the first execution module is used for executing the step of sending the first data checking request to the second server after the data stored in the first database is updated.
In one possible implementation manner, the first data checking request includes a data offline checking request, the data offline checking request carries a target data type and a target data checking range that need to be checked, the target data type includes incremental data and/or full data, the target data checking range includes at least one target data checking period, and the data cache synchronization device 700 further includes:
the second execution module, which is used for executing the step of sending the first data checking request to the second server after the current time reaches a preset offline checking time or a data offline checking instruction is received.
In one possible implementation manner, the data cache synchronization device 700 further includes:
and the alarm module is used for sending out data cache alarm information when the first data check result indicates that the first to-be-checked cache data is inconsistent with the first to-be-checked data.
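The check-and-alarm flow described in this group of modules can be sketched as a simple reconciliation pass. The functions `check_data` and `reconcile` are illustrative names, and the dictionaries stand in for the rows extracted from the first database and the cache database respectively.

```python
def check_data(source: dict, cached: dict) -> list:
    """Return the sorted keys on which the source of truth and the cache disagree."""
    mismatched = [k for k, v in source.items() if cached.get(k) != v]
    mismatched += [k for k in cached if k not in source]  # cache-only keys also count
    return sorted(mismatched)


def reconcile(source: dict, cached: dict, alert) -> list:
    """Run one data check; raise a data cache alarm if anything is inconsistent."""
    mismatched = check_data(source, cached)
    if mismatched:
        alert(f"data cache alarm: inconsistent keys {mismatched}")
    return mismatched
```

Note that the comparison must be run over the same data type and checking range on both sides, which is why the specification requires the to-be-checked data and the to-be-checked cache data to be consistent in those two respects.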
Referring next to fig. 8, fig. 8 is a schematic structural diagram of another data cache synchronization device according to an exemplary embodiment of the present disclosure. The above data cache synchronization device is applied to the second server. As shown in fig. 8, the data cache synchronization device 800 includes:
The first synchronous caching module 810 is configured to, after subscribing to the transaction message of the first server, synchronously cache target data sent by the first server through the transaction message, where the target data is data corresponding to a target event that occurs in the first server;
a first receiving module 820, configured to receive target data sent by the first server through an asynchronous RPC; and
the second synchronous caching module 830 is configured to perform asynchronous-compensation synchronous caching based on the target data sent by the first server through the asynchronous RPC.
In one possible implementation manner, the target data sent by the first server through the transaction message carries the first version information, and the data cache synchronization device 800 further includes:
the first judging module is used for judging whether the first version information is larger than the current version information in the corresponding cache database of the second server;
And the first execution module is used for executing the step of synchronously caching the target data sent by the first server through the transaction message under the condition that the first version information is larger than the current version information.
In one possible implementation manner, the target data sent by the first server through the asynchronous RPC carries the second version information, and the data cache synchronization device 800 further includes:
The second judging module is used for judging whether the second version information is larger than the current version information in the corresponding cache database of the second server;
and the second execution module is used for executing the step of asynchronous compensation synchronous caching based on the target data sent by the first server through the asynchronous RPC under the condition that the second version information is larger than the current version information.
In one possible implementation manner, the data cache synchronization apparatus 800 further includes:
the second receiving module is used for receiving the first data checking request sent by the first server;
The first sending module is used for responding to the first data checking request and sending first to-be-checked cache data in the second server corresponding cache database to the first server so that the first server performs data checking based on the first to-be-checked cache data and the first to-be-checked data in the first server corresponding first database to obtain a first data checking result;
And/or
A second sending module, configured to send a second data checking request to the first server, so that the first server responds to the second data checking request and sends second data to be checked in a first database corresponding to the first server to the second server;
And the third receiving module is used for receiving the second data to be checked sent by the first server, and performing data checking based on the second data to be checked and the second cache data to be checked in the cache database corresponding to the second server to obtain a second data check result, where the second data to be checked is consistent in data type and/or data checking range with the second cache data to be checked.
In a possible implementation manner, the second data checking request includes a data online checking request, the second data to be checked includes the full amount of data in the first database corresponding to the first server, the second cache data to be checked includes the full amount of cache data in the cache database corresponding to the second server, and the data cache synchronization device 800 further includes:
The task generating module is used for generating a cache checking task after the cache data in the cache database is updated;
The task storage module is used for storing the cache checking task into a second database corresponding to the data bypass of the second server;
And the task execution module is used for regularly extracting and executing the cache checking task in a to-be-executed state from the second database so as to trigger the step of sending the second data checking request to the first server.
In one possible implementation manner, the data cache synchronization apparatus 800 further includes:
and the alarm module is used for sending out data cache alarm information when the second data check result is that the second data to be checked is inconsistent with the second cache data to be checked.
The division of the modules in the data cache synchronization device is only used for illustration; in other embodiments, the data cache synchronization device may be divided into different modules as needed to complete all or part of the functions of the data cache synchronization device. Each module in the data cache synchronization apparatus provided in the embodiments of the present specification may be implemented in the form of a computer program. The computer program may run on a server, and program modules of the computer program may be stored in the memory of the server. When the computer program is executed by a processor, all or part of the steps of the data cache synchronization method described in the embodiments of the present specification are performed.
Referring next to fig. 9, fig. 9 is a schematic structural diagram of an electronic device according to an exemplary embodiment of the present disclosure. As shown in fig. 9, the electronic device 900 may include at least one processor 910, at least one communication bus 920, a user interface 930, at least one network interface 940, and a memory 950. Wherein the communication bus 920 may be used to implement the connectivity communications of the various components described above.
The user interface 930 may include a Display screen (Display) and a Camera (Camera), and optionally, the user interface 930 may further include a standard wired interface and a wireless interface.
The network interface 940 may optionally include a Bluetooth module, a Near Field Communication (NFC) module, a wireless fidelity (Wireless Fidelity, Wi-Fi) module, and the like.
Wherein the processor 910 may include one or more processing cores. The processor 910 uses various interfaces and lines to connect various portions of the overall electronic device 900, and performs various functions of the electronic device 900 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 950 and invoking data stored in the memory 950. Alternatively, the processor 910 may be implemented in hardware in at least one of digital signal processing (Digital Signal Processing, DSP), field-programmable gate array (Field-Programmable Gate Array, FPGA), and programmable logic array (Programmable Logic Array, PLA). The processor 910 may integrate one or a combination of a central processing unit (Central Processing Unit, CPU), a graphics processing unit (Graphics Processing Unit, GPU), a modem, and the like. The CPU mainly handles the operating system, the user interface, application programs, and the like; the GPU is used for rendering and drawing the content to be displayed by the display screen; and the modem is used for processing wireless communication. It will be appreciated that the modem may also not be integrated into the processor 910 and may instead be implemented by a separate chip.
The memory 950 may include a random access memory (Random Access Memory, RAM) or a read-only memory (Read-Only Memory, ROM). Optionally, the memory 950 includes a non-transitory computer-readable medium. The memory 950 may be used to store instructions, programs, code, code sets, or instruction sets. The memory 950 may include a stored program area and a stored data area: the stored program area may store instructions for implementing an operating system, instructions for at least one function (such as a data cache synchronization function, a data cache checking function, etc.), instructions for implementing the various method embodiments described above, and the like; the stored data area may store the data referred to in the various method embodiments described above. The memory 950 may optionally also include at least one storage device located remotely from the processor 910. As shown in fig. 9, the memory 950, which is one type of computer storage medium, may include an operating system, a network communication module, user interface modules, and application programs.
Specifically, the electronic device 900 may be the data cache synchronization apparatus 700 shown in fig. 7 or the first server, and the processor 910 may be configured to invoke an application program stored in the memory 950, and specifically perform the following operations:
After the first server generates a target event, target data related to the target event is transmitted cross-city to a second server in the form of a transaction message, so that the second server, after subscribing to the transaction message, synchronously caches the target data sent by the first server through the transaction message.
Generating an asynchronous cache synchronization task after the first server generates the target event, wherein the asynchronous cache synchronization task is used for sending the target data related to the target event to the second server in an asynchronous RPC mode.
Executing the asynchronous cache synchronization task, so that the second server performs asynchronous-compensation synchronous caching based on the target data sent by the first server through the asynchronous RPC.
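The double-link publication step on the first server can be sketched as follows. `on_target_event`, `publish_transaction_message`, and `record_rpc_task` are illustrative names; the sketch only shows that both links are always driven from the same event.

```python
def on_target_event(target_data: dict, publish_transaction_message, record_rpc_task) -> None:
    """Double-link publication of a target event's data:
    link 1 is the cross-city transaction message (primary, low latency);
    link 2 is an asynchronous RPC compensation task recorded for later execution."""
    publish_transaction_message(target_data)  # link 1: cross-city transaction message
    record_rpc_task(target_data)              # link 2: asynchronous RPC compensation
```

Driving both links from every event, rather than only falling back to the RPC link on a detected failure, is what removes the single-link barrier risk: a silent loss on the message-middleware path is still covered by the periodically executed RPC task.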
In some possible embodiments, the target data sent by the first server through the transaction message carries first version information, where the first version information is used to determine whether to synchronously cache the target data sent by the first server through the transaction message; the target data sent by the first server through the asynchronous RPC carries second version information, where the second version information is used to determine whether to perform asynchronous-compensation synchronous caching of the target data sent by the first server through the asynchronous RPC.
In some possible embodiments, after performing the generating of the asynchronous cache synchronization task, the processor 910 is further configured to perform:
recording the asynchronous cache synchronization task into the first database corresponding to the first server.
When executing the asynchronous cache synchronization task, the processor 910 is specifically configured to:
periodically extract and execute asynchronous cache synchronization tasks in a to-be-executed state from the first database.
In some possible embodiments, the processor 910 is further configured to perform:
And sending a first data checking request to the second server so that the second server responds to the first data checking request and sends first data to be checked in a corresponding cache database of the second server to the first server.
And receiving first to-be-checked cache data sent by the second server, and carrying out data check on the basis of the first to-be-checked cache data and first to-be-checked data in a first database corresponding to the first server to obtain a first data check result, wherein the first to-be-checked cache data is consistent with the data type and/or the data check range corresponding to the first to-be-checked data.
And/or
And receiving a second data check request sent by the second server.
And responding to the second data checking request, and sending second data to be checked in the first database corresponding to the first server to the second server so that the second server performs data checking based on the second data to be checked and the second cache data to be checked in the second server corresponding cache database to obtain a second data checking result, wherein the second data to be checked is consistent with the data type and/or the data checking range corresponding to the second cache data to be checked.
In some possible embodiments, the first data checking request includes a data online checking request, the first to-be-checked cache data includes the full amount of cache data in the cache database corresponding to the second server, the first to-be-checked data includes the full amount of data in the first database corresponding to the first server, and the processor 910 is further configured to perform:
After the first server generates the target event, storing the target data related to the target event into a first database corresponding to the first server.
After the data stored in the first database is updated, the step of sending a first data check request to the second server is executed.
In some possible embodiments, the first data checking request includes a data offline checking request, the data offline checking request carries a target data type and a target data checking range that need to be checked, the target data type includes incremental data and/or full data, the target data checking range includes at least one target data checking period, and the processor 910 is further configured to perform:
and after the current time reaches the preset offline checking time or a data offline checking instruction is received, executing the step of sending a first data checking request to the second server.
In some possible embodiments, the processor 910 performs the data checking based on the first to-be-checked cache data and the first to-be-checked data in the first database corresponding to the first server, and after obtaining a first data checking result, is further configured to perform:
and sending out data cache alarm information under the condition that the first data check result is that the first cache data to be checked is inconsistent with the first data to be checked.
In some possible embodiments, the electronic device 900 may be the data cache synchronization apparatus 800 shown in fig. 8 or a second server, and the processor 910 may be configured to call an application program stored in the memory 950, and specifically perform the following operations:
After subscribing to the transaction message of the first server, synchronously caching target data sent by the first server through the transaction message, wherein the target data is data related to a target event generated by the first server.
And receiving target data sent by the first server through the asynchronous RPC.
And performing asynchronous compensation synchronous caching based on the target data sent by the first server through the asynchronous RPC.
In some possible embodiments, the target data sent by the first server via the transaction message carries first version information, and the processor 910 is further configured to perform:
and judging whether the first version information is larger than the current version information in the corresponding cache database of the second server.
And executing the step of synchronously caching the target data sent by the first server through the transaction message under the condition that the first version information is larger than the current version information.
In some possible embodiments, the target data sent by the first server via the asynchronous RPC carries second version information, and the processor 910 is further configured to perform:
And judging whether the second version information is larger than the current version information in the corresponding cache database of the second server.
And executing the step of asynchronous compensation synchronous caching based on the target data sent by the first server through the asynchronous RPC under the condition that the second version information is larger than the current version information.
In some possible embodiments, the processor 910 is further configured to perform:
And receiving a first data check request sent by the first server.
And responding to the first data checking request, and sending first to-be-checked cache data in the second server corresponding cache database to the first server so that the first server performs data checking based on the first to-be-checked cache data and the first to-be-checked data in the first server corresponding first database to obtain a first data checking result, wherein the first to-be-checked cache data is consistent with the data type and/or the data checking range corresponding to the first to-be-checked data.
And/or
And sending a second data checking request to the first server so that the first server responds to the second data checking request and sends second data to be checked in the first database corresponding to the first server to the second server.
Receiving the second data to be checked sent by the first server, and performing data checking based on the second data to be checked and the second cache data to be checked in the cache database corresponding to the second server to obtain a second data check result, wherein the second data to be checked is consistent in data type and/or data checking range with the second cache data to be checked.
In some possible embodiments, the second data checking request includes a data online checking request, the second data to be checked includes a full amount of data in the first database corresponding to the first server, the second cache data to be checked includes a full amount of cache data in the cache database corresponding to the second server, and the processor 910 is further configured to perform:
After the cache data in the cache database is updated, a cache checking task is generated, and the cache checking task is stored in a second database corresponding to the data bypass of the second server.
And periodically extracting and executing the cache checking task in a to-be-executed state from the second database to trigger the step of sending a second data checking request to the first server.
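The second server's bypass-driven check flow described above can be sketched in the same record-then-poll style. The list stands in for the second database on the data bypass, and `on_cache_updated` / `run_check_round` are illustrative names, not the specification's implementation.

```python
check_tasks: list[dict] = []  # stand-in for the second database on the data bypass


def on_cache_updated(key: str) -> None:
    """Generate a cache checking task after the cache data is updated."""
    check_tasks.append({"key": key, "state": "pending"})


def run_check_round(send_check_request) -> int:
    """One scheduled round: extract to-be-executed tasks and trigger the
    second data checking request toward the first server."""
    sent = 0
    for task in check_tasks:
        if task["state"] == "pending":
            send_check_request(task["key"])  # second data checking request
            task["state"] = "done"
            sent += 1
    return sent
```

Writing the checking task on a bypass keeps the check workload off the synchronous caching path, so the online check does not add latency to the cache update itself.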
In some possible embodiments, the processor 910 performs the data checking based on the second data to be checked and the second data to be checked in the second server corresponding cache database, and after obtaining a second data checking result, is further configured to perform:
and sending out data cache alarm information under the condition that the second data check result is that the second data to be checked is inconsistent with the second cache data to be checked.
The present specification also provides a computer-readable storage medium having instructions stored therein which, when executed on a computer or processor, cause the computer or processor to perform one or more steps of the above embodiments. If the constituent modules of the data cache synchronization device are implemented in the form of software functional units and sold or used as independent products, they may be stored in the computer-readable storage medium.
In the above embodiments, the implementation may be realized in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in accordance with the embodiments of the present specification are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in or transmitted through a computer-readable storage medium. The computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired (e.g., coaxial cable, optical fiber, digital subscriber line (Digital Subscriber Line, DSL)) or wireless (e.g., infrared, radio, microwave) manner. The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a digital versatile disc (Digital Versatile Disc, DVD)), or a semiconductor medium (e.g., a solid state disk (Solid State Disk, SSD)), etc.
Those skilled in the art will appreciate that all or part of the above-described method embodiments may be implemented by a computer program instructing the relevant hardware; the program may be stored in a computer-readable storage medium and, when executed, may include the processes of the method embodiments described above. The storage medium includes various media capable of storing program code, such as a ROM, a RAM, a magnetic disk, or an optical disk. The technical features in the above examples and embodiments may be combined arbitrarily without conflict.
The above-described embodiments are merely preferred embodiments of the present disclosure, and do not limit the scope of the disclosure, and various modifications and improvements made by those skilled in the art to the technical solution of the disclosure should fall within the scope of protection defined by the claims.
The foregoing describes specific embodiments of the present disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims and description may be performed in an order different from that in the embodiments recited in the description and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.