CN120336047A - Data cache synchronization method, device, electronic device, medium and program product - Google Patents

Data cache synchronization method, device, electronic device, medium and program product

Info

Publication number
CN120336047A
Authority
CN
China
Prior art keywords
data
server
cache
target
asynchronous
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202510819460.5A
Other languages
Chinese (zh)
Inventor
钱超
倪波
彭新睿
刘义
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Ant Consumer Finance Co ltd
Original Assignee
Chongqing Ant Consumer Finance Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Ant Consumer Finance Co ltd
Priority to CN202510819460.5A
Publication of CN120336047A
Legal status: Pending (Current)

Abstract

The embodiments of this specification disclose a data cache synchronization method, apparatus, electronic device, medium and program product. In the method, after a target event occurs on a first server, the first server sends target data related to the target event to a second server across cities in the form of a transaction message, so that the second server, after subscribing to the transaction message, synchronously caches the target data carried by it. The first server also generates and executes an asynchronous cache synchronization task after the target event occurs; this task sends the target data related to the target event to the second server by means of an asynchronous RPC, and the second server performs asynchronous compensation synchronous caching based on the target data received over the asynchronous RPC. This dual link compensates for the single-link failure risk of relying only on the message middleware or the RPC gateway on the cache synchronization link, and guarantees the timeliness, accuracy and stability of data cache synchronization.

Description

Data cache synchronization method, device, electronic equipment, medium and program product
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a data cache synchronization method, apparatus, electronic device, medium, and program product.
Background
In Internet finance scenarios, a data caching scheme is adopted in the technical architecture design to pursue a better customer experience (for example, in payment scenarios): greatly reducing link latency yields a smooth customer experience, while unnecessary link dependencies and system interactions are reduced to a certain extent and system stability is improved.
Disclosure of Invention
The embodiments of this specification provide a data cache synchronization method, apparatus, electronic device, medium and program product. Through a dual-link cache synchronization scheme combining cross-city transaction messages with asynchronous RPC fallback compensation, they compensate for the single-link failure risk of relying only on the message middleware or the RPC gateway on the cache synchronization link, and guarantee the timeliness, accuracy and stability of data cache synchronization. The technical scheme is as follows:
In a first aspect, an embodiment of the present disclosure provides a data cache synchronization method, where the method is applied to a first server and includes:
after a target event occurs on the first server, sending target data related to the target event to a second server across cities in the form of a transaction message, so that the second server, after subscribing to the transaction message, synchronously caches the target data sent by the first server through the transaction message;
generating an asynchronous cache synchronization task after the target event occurs on the first server, where the asynchronous cache synchronization task is used to send the target data related to the target event to the second server by means of an asynchronous RPC;
and executing the asynchronous cache synchronization task, so that the second server performs asynchronous compensation synchronous caching based on the target data sent by the first server through the asynchronous RPC.
In a second aspect, embodiments of the present disclosure provide another data cache synchronization method, where the method is applied to a second server and includes:
after subscribing to a transaction message of the first server, synchronously caching target data sent by the first server through the transaction message, where the target data is data related to a target event that occurs on the first server;
receiving target data sent by the first server through an asynchronous RPC;
and performing asynchronous compensation synchronous caching based on the target data sent by the first server through the asynchronous RPC.
In a third aspect, an embodiment of the present disclosure provides a data cache synchronization apparatus, where the apparatus is applied to a first server and includes:
a first sending module, configured to send, after a target event occurs on the first server, target data related to the target event to a second server across cities in the form of a transaction message, so that the second server, after subscribing to the transaction message, synchronously caches the target data sent by the first server through the transaction message;
a cache synchronization task generation module, configured to generate an asynchronous cache synchronization task after the target event occurs on the first server, where the asynchronous cache synchronization task is used to send the target data related to the target event to the second server by means of an asynchronous RPC;
and a cache synchronization task execution module, configured to execute the asynchronous cache synchronization task, so that the second server performs asynchronous compensation synchronous caching based on the target data sent by the first server through the asynchronous RPC.
In a fourth aspect, embodiments of the present disclosure provide another data cache synchronization apparatus, where the apparatus is applied to a second server and includes:
a first synchronous caching module, configured to synchronously cache, after subscribing to a transaction message of the first server, target data sent by the first server through the transaction message, where the target data is data corresponding to a target event that occurs on the first server;
a first receiving module, configured to receive target data sent by the first server through an asynchronous RPC;
and a second synchronous caching module, configured to perform asynchronous compensation synchronous caching based on the target data sent by the first server through the asynchronous RPC.
In a fifth aspect, embodiments of the present disclosure provide an electronic device comprising a processor and a memory;
the processor is connected with the memory;
the memory is used for storing executable program codes;
The processor reads the executable program code stored in the memory and runs the corresponding program, so as to execute the method provided in the first aspect or the second aspect of the embodiments of the present specification.
In a sixth aspect, embodiments of the present disclosure provide a computer storage medium storing a plurality of instructions adapted to be loaded by a processor and to perform the method provided in the first or second aspect of embodiments of the present disclosure.
In a seventh aspect, embodiments of the present specification provide a computer program product comprising instructions which, when run on a computer or a processor, cause the computer or the processor to perform the data cache synchronization method provided in the first aspect or the second aspect of embodiments of the present specification.
In the embodiments of this specification, after a target event occurs on a first server, the first server sends target data related to the target event to a second server across cities in the form of a transaction message, so that the second server, after subscribing to the transaction message, synchronously caches the target data sent through it. After the target event occurs, the first server also generates and executes an asynchronous cache synchronization task, which sends the target data related to the target event to the second server by means of an asynchronous RPC, so that the second server performs asynchronous compensation synchronous caching based on the target data received over the asynchronous RPC. This avoids message delay or loss caused by message-middleware jitter under extreme conditions, as well as the resulting dirty data, guarantees the timeliness, accuracy and stability of data cache synchronization, and compensates for the single-link failure risk of relying only on the message middleware or the RPC gateway on the cache synchronization link.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present description, the drawings required in the embodiments are briefly described below. The drawings in the following description are only some embodiments of the present description; a person of ordinary skill in the art may obtain other drawings based on these drawings without inventive effort.
FIG. 1 is a schematic diagram of a data caching system according to an exemplary embodiment of the present disclosure;
FIG. 2 is a flowchart illustrating a method for synchronizing data caches according to an exemplary embodiment of the present disclosure;
FIG. 3 is a schematic diagram of an implementation architecture of data cache synchronization according to an exemplary embodiment of the present disclosure;
FIG. 4 is a schematic flow chart illustrating an implementation of a data cache checking method according to an exemplary embodiment of the present disclosure;
FIG. 5 is a schematic diagram of an implementation architecture of data cache checking according to an exemplary embodiment of the present disclosure;
FIG. 6 is a schematic diagram of a data collation containment relationship according to an exemplary embodiment of the present disclosure;
FIG. 7 is a schematic structural diagram of a data cache synchronization device according to an exemplary embodiment of the present disclosure;
FIG. 8 is a schematic structural diagram of another data cache synchronization device according to an exemplary embodiment of the present disclosure;
FIG. 9 is a schematic structural diagram of an electronic device according to an exemplary embodiment of the present disclosure.
Detailed Description
The technical solutions in the embodiments of the present specification will be clearly and completely described below with reference to the drawings in the embodiments of the present specification.
The terms "first", "second", "third" and the like in the description, in the claims and in the above drawings are used for distinguishing between different objects and not necessarily for describing a particular sequential or chronological order. Furthermore, the terms "comprise" and "have", as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article or apparatus that comprises a list of steps or elements is not limited to the listed steps or elements, but may include other steps or elements not listed or inherent to such a process, method, article or apparatus.
It should be noted that the information (including but not limited to user equipment information, user personal information, etc.), data (including but not limited to data for analysis, stored data, presented data, etc.) and signals involved in the embodiments of the present disclosure are all authorized by the user or fully authorized by all parties, and the collection, use and processing of the relevant data comply with the relevant laws, regulations and standards of the relevant countries and regions. For example, the target data, the data to be checked and the like referred to in this specification are all acquired with sufficient authorization.
In Internet finance scenarios, related data caching schemes greatly reduce time consumption, but face considerable challenges in cache data consistency: dirty data, data delay, data loss and the like can all cause the cached data to become inconsistent.
Based on this, the embodiments of this specification provide a data cache synchronization method that, through a dual-link cache synchronization scheme combining cross-city transaction messages with asynchronous RPC fallback compensation, compensates for the single-link failure risk of relying only on the message middleware or the RPC gateway on the cache synchronization link, and guarantees the timeliness, accuracy and stability of data cache synchronization.
Referring next to fig. 1, fig. 1 is a schematic diagram of an architecture of a data cache system according to an exemplary embodiment of the present disclosure. As shown in fig. 1, the data caching system may include a first server 110 and a second server 120. Wherein:
The first server 110 may be, but is not limited to, a server corresponding to a first city data center; it is a core component of the first city data center and is used for running application programs and processing user requests. A protocol signing application may be installed on the first server 110 to provide protocol signing services for users. After a target event occurs on the first server 110, the first server 110 may send target data related to the target event to the second server 120 in the form of a transaction message, and generate and execute an asynchronous cache synchronization task, which sends the target data related to the target event to the second server 120 by means of an asynchronous RPC. The first server 110 may be, but is not limited to, a hardware server, a virtual server, a cloud server, etc.
The second server 120 may be a server corresponding to a second city data center and capable of providing various data caches. After subscribing to the transaction message of the first server 110, it may synchronously cache the target data sent by the first server 110 through the transaction message; it may also receive target data sent by the first server 110 through an asynchronous RPC and perform asynchronous compensation synchronous caching based on that data. The second server 120 may be, but is not limited to, a hardware server, a virtual server, a cloud server, etc.
The network may be a medium providing a communication link between the second server 120 and the first server 110, or may be the internet including network devices and transmission media, but is not limited thereto. The transmission medium may be a wired link, such as, but not limited to, coaxial cable, optical fiber and digital subscriber line (DSL), or a wireless link, such as, but not limited to, wireless fidelity (Wi-Fi), Bluetooth, a mobile device network, etc.
It will be appreciated that the number of first servers 110 and second servers 120 in the data caching system shown in FIG. 1 is by way of example only, and that any number of first servers 110 and second servers 120 may be included in the data caching system in a particular implementation. The embodiment of the present specification is not particularly limited thereto. For example, and without limitation, the first server 110 may be a first server cluster of a plurality of first servers and the second server 120 may be a second server cluster of a plurality of second servers.
Next, referring to fig. 1, a data cache synchronization method provided in an embodiment of the present disclosure will be described. Referring to fig. 2, a flow chart of a data cache synchronization method according to an exemplary embodiment of the present disclosure is shown. As shown in fig. 2, the data cache synchronization method includes the following steps:
S201, after a target event occurs on the first server, the first server transmits target data related to the target event to the second server across cities in the form of a transaction message.
Specifically, the target event may include, but is not limited to, a protocol signing event or another event requiring data cache synchronization. A protocol signing event refers to the process in which a user signs a legally effective service agreement with the service provider corresponding to the first server by means of an electronic signature, an API call or an interface operation, for example, but not limited to, a user completing the signing of an online loan contract. When the first server detects that the target event has occurred, it may encapsulate the target data related to the target event (such as, but not limited to, the signed protocol identifier, user information, timestamp, etc.) into a transaction message and send the transaction message to the second server through a message queue supporting the XA protocol, so that the target data related to the target event is not lost during cross-city cache synchronization and atomicity between message delivery and the first server's local transaction is guaranteed. A transaction message is a message sent by the publishing application within the transactional operation sequence of its local database: the delivery of the message is consistent with the database transaction state, that is, the message is delivered to subscribers when the transaction commits and is not delivered when the transaction rolls back.
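As a rough illustration of the coupling described above, the following Java sketch ties message delivery to the outcome of the first server's local database transaction. It is a minimal sketch under stated assumptions: `TransactionalMessageProducer`, `LocalTransaction` and `ProtocolStore` are hypothetical stand-ins for an XA-capable message-middleware client and the protocol store, not APIs defined by this disclosure.

```java
import java.util.Map;

// Hypothetical client for an XA/transaction-message capable middleware.
interface TransactionalMessageProducer {
    // Sends a half message, runs the local transaction, then commits or rolls back delivery.
    void sendInTransaction(String topic, Map<String, String> payload, LocalTransaction tx);
}

interface LocalTransaction {
    boolean execute(); // true: commit and deliver the message; false: roll back, nothing delivered
}

interface ProtocolStore {
    boolean saveSignedProtocol(String protocolId, String userId, long timestamp);
}

public class TargetEventPublisher {
    private final TransactionalMessageProducer producer;
    private final ProtocolStore protocolStore; // hypothetical local-database access

    public TargetEventPublisher(TransactionalMessageProducer producer, ProtocolStore store) {
        this.producer = producer;
        this.protocolStore = store;
    }

    /** Called when a signing (target) event occurs; payload carries protocol id, user info, timestamp. */
    public void publish(String protocolId, String userId, long timestamp) {
        Map<String, String> payload = Map.of(
                "protocolId", protocolId,
                "userId", userId,
                "timestamp", Long.toString(timestamp));
        // Delivery is tied to the local transaction: the subscriber only ever sees the
        // message if the local write commits, matching the transaction-message semantics.
        producer.sendInTransaction("protocol-cache-sync", payload,
                () -> protocolStore.saveSignedProtocol(protocolId, userId, timestamp));
    }
}
```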
Optionally, when the target data related to the target event (such as, but not limited to, the signed protocol identifier, user information, timestamp, etc.) is encapsulated into the transaction message, first version information generated based on a Lamport clock algorithm may also, but need not, be encapsulated with it. The target data sent by the first server through the transaction message thus carries the first version information, which identifies the freshness of that target data, so that the second server receiving the target data can decide whether it needs to synchronously cache the target data sent by the first server through the transaction message. The first version information may include, but is not limited to, a first version number of the target data sent by the first server through the transaction message.
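A minimal sketch of how the first version information might be derived from a Lamport clock, assuming one logical clock per data key; the class and method names are illustrative only and not prescribed by this disclosure.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

/**
 * Minimal Lamport-clock version generator: every local event bumps the counter for
 * its data key, and any version observed from a remote message advances the counter
 * to at least that value, so subsequent local updates carry strictly larger versions.
 */
public class LamportVersionClock {
    private final ConcurrentHashMap<String, AtomicLong> clocks = new ConcurrentHashMap<>();

    /** Version attached to the target data when a target event occurs locally. */
    public long nextVersion(String dataKey) {
        return clock(dataKey).incrementAndGet();
    }

    /** Merge a version seen in a received message so local versions never fall behind it. */
    public void observe(String dataKey, long remoteVersion) {
        clock(dataKey).accumulateAndGet(remoteVersion, Math::max);
    }

    private AtomicLong clock(String dataKey) {
        return clocks.computeIfAbsent(dataKey, k -> new AtomicLong(0));
    }
}
```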
S202, after subscribing to the transaction message, the second server synchronously caches target data sent by the first server through the transaction message.
Specifically, the second server may subscribe to the transaction message over a long-lived connection, verify the integrity of the message after receiving it, and write the target data sent by the first server through the transaction message to the cache cluster of its local city data center, such as, but not limited to, its cache database.
Optionally, the target data sent by the first server through the transaction message carries the first version information. After subscribing to the transaction message, the second server may also determine whether the first version information is greater than the current version information in its corresponding cache database (that is, the latest version information of the second server's local cache). If the first version information is greater than the current version information, this indicates that the second server has not yet synchronously cached the target data corresponding to the first version information and that the target data sent by the first server through the transaction message is an update; the step of synchronously caching the target data sent by the first server through the transaction message is then executed. By means of this version comparison, the second server avoids repeatedly synchronizing the cache or caching outdated, invalid data, which improves the utilization and validity of cache synchronization.
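The version-gated write described above could look roughly as follows; this is a sketch that uses an in-memory map as a stand-in for the second server's cache database, and all names are hypothetical.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Version-gated cache write on the second server: target data is applied only when the
 * incoming version is newer than the version already held locally, so replayed or
 * out-of-order messages cannot overwrite fresher data.
 */
public class VersionedCacheWriter {
    record CachedEntry(long version, Map<String, String> data) {}

    private final ConcurrentHashMap<String, CachedEntry> cache = new ConcurrentHashMap<>();

    /** Returns true if the entry was applied, false if it was stale and skipped. */
    public boolean applyIfNewer(String dataKey, long incomingVersion, Map<String, String> data) {
        final boolean[] applied = {false};
        cache.compute(dataKey, (k, current) -> {
            if (current == null || incomingVersion > current.version()) {
                applied[0] = true;
                return new CachedEntry(incomingVersion, data);
            }
            return current; // keep the fresher, already-cached value
        });
        return applied[0];
    }
}
```

In this sketch the same `applyIfNewer` check can serve both the transaction-message link and the asynchronous-RPC compensation link, which is what keeps the dual-link scheme idempotent.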
S203, the first server generates an asynchronous cache synchronous task after the target event occurs.
Specifically, after the target event occurs on the first server, the first server sends the target data related to the target event to the second server across cities in the form of a transaction message and asynchronously generates a compensation task (that is, the asynchronous cache synchronization task) that encapsulates the same target data for the same target event. In the embodiments of this specification, fallback scenarios such as message loss or synchronization failure during data cache synchronization can be handled asynchronously through the asynchronous cache synchronization task, and asynchronous processing reduces the latency impact on the first server's core transactions.
Alternatively, the asynchronous cache synchronization task may be, but is not limited to being, scheduled by a distributed task queue, with a corresponding initial delay and retry strategy.
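A sketch of such scheduling with an initial delay and a bounded, exponential-backoff retry policy; `ScheduledExecutorService` is used here only as a simple stand-in for the distributed task queue, and the parameter names are assumptions.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.BooleanSupplier;

/** Schedules the compensation task with an initial delay and exponential-backoff retries. */
public class CompensationScheduler {
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
    private final long initialDelayMs;
    private final int maxRetries;

    public CompensationScheduler(long initialDelayMs, int maxRetries) {
        this.initialDelayMs = initialDelayMs;
        this.maxRetries = maxRetries;
    }

    /** task returns true on success (the second server accepted the sync), false to retry. */
    public void schedule(BooleanSupplier task) {
        scheduleAttempt(task, 0, initialDelayMs);
    }

    private void scheduleAttempt(BooleanSupplier task, int attempt, long delayMs) {
        scheduler.schedule(() -> {
            boolean ok;
            try {
                ok = task.getAsBoolean();
            } catch (RuntimeException e) {
                ok = false;
            }
            if (!ok && attempt + 1 < maxRetries) {
                // Back off between attempts so a transient RPC-gateway failure is retried later.
                scheduleAttempt(task, attempt + 1, delayMs * 2);
            }
        }, delayMs, TimeUnit.MILLISECONDS);
    }
}
```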
S204, the first server executes the asynchronous cache synchronization task, which sends the target data related to the target event to the second server by means of an asynchronous RPC.
Specifically, the first server may, but is not limited to, execute the asynchronous cache synchronization task according to a scheduling policy, that is, send the target data related to the target event to the second server by way of an asynchronous RPC. The asynchronous RPC is a non-blocking remote service invocation mechanism that allows the first server to continue performing other operations after sending the target data associated with the target event.
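The non-blocking call could be issued roughly as follows; `CacheSyncRpcClient` is a hypothetical asynchronous stub for the second server's cache-sync endpoint, not an API named by this disclosure.

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;

// Hypothetical non-blocking stub for the second server's cache-sync endpoint.
interface CacheSyncRpcClient {
    CompletableFuture<Boolean> syncTargetData(String dataKey, long version, Map<String, String> data);
}

public class AsyncRpcSender {
    private final CacheSyncRpcClient client;

    public AsyncRpcSender(CacheSyncRpcClient client) {
        this.client = client;
    }

    /**
     * Fires the compensation call without blocking the caller: the first server continues
     * its own processing and reacts to the result only in the completion callback.
     */
    public void send(String dataKey, long version, Map<String, String> data, Runnable onFailure) {
        client.syncTargetData(dataKey, version, data)
              .whenComplete((accepted, error) -> {
                  if (error != null || !Boolean.TRUE.equals(accepted)) {
                      onFailure.run(); // e.g. leave the task in the to-be-executed state for retry
                  }
              });
    }
}
```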
Optionally, the target data sent by the first server through the asynchronous RPC carries second version information, which is used to decide whether to perform asynchronous compensation synchronous caching of that target data. The second version information identifies the freshness of the target data sent by the first server through the asynchronous RPC, so that the second server receiving the target data can decide whether it needs to synchronously cache it. The second version information may include, but is not limited to, a second version number of the target data sent by the first server through the asynchronous RPC.
Optionally, after the first server generates the asynchronous cache synchronization task and before it executes the task, the task may, but need not, be recorded in a first database corresponding to the first server. Asynchronous cache synchronization tasks in the to-be-executed state are then periodically extracted from the first database and executed.
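A sketch of the periodic extraction step, assuming a hypothetical `TaskRepository` view over the task table in the first database; the polling interval, batch size and state transitions are illustrative assumptions rather than requirements of this disclosure.

```java
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.Predicate;

// Hypothetical view of the asynchronous-cache-sync task table in the first database.
interface TaskRepository {
    List<CacheSyncTask> fetchPending(int limit); // tasks in the to-be-executed state
    void markDone(long taskId);
    void markForRetry(long taskId);
}

record CacheSyncTask(long taskId, String dataKey, long version) {}

/** Periodically drains pending asynchronous cache-sync tasks and executes them. */
public class TaskPoller {
    private final ScheduledExecutorService poller = Executors.newSingleThreadScheduledExecutor();
    private final TaskRepository repository;
    private final Predicate<CacheSyncTask> executor; // true when the asynchronous RPC succeeds

    public TaskPoller(TaskRepository repository, Predicate<CacheSyncTask> executor) {
        this.repository = repository;
        this.executor = executor;
    }

    public void start(long periodSeconds) {
        poller.scheduleAtFixedRate(() -> {
            for (CacheSyncTask task : repository.fetchPending(100)) {
                if (executor.test(task)) {
                    repository.markDone(task.taskId());
                } else {
                    repository.markForRetry(task.taskId()); // picked up again on a later poll
                }
            }
        }, periodSeconds, periodSeconds, TimeUnit.SECONDS);
    }
}
```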
S205, the second server performs asynchronous compensation synchronous caching based on the target data sent by the first server through the asynchronous RPC.
Specifically, after the second server receives the target data sent by the first server through the asynchronous RPC, and the target data carries the second version information, the second server determines whether the second version information is greater than the current version information in its corresponding cache database. If the second version information is greater than the current version information, this indicates that the target data corresponding to the second version information has not yet been synchronously cached on the second server and that the target data sent through the asynchronous RPC is an update, so the second server performs asynchronous compensation synchronous caching based on that target data. In this way, fallback scenarios such as message loss or synchronization failure during data cache synchronization are handled through asynchronous compensation synchronous caching, and the version comparison prevents the second server from repeatedly synchronizing the cache or caching outdated, invalid data, improving the utilization and validity of cache synchronization.
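On the receiving side, the compensation link can converge on the same version-gated write as the transaction-message subscriber; the sketch below reuses the hypothetical `VersionedCacheWriter` from the earlier sketch, so whichever link delivers a given version first wins and the later delivery becomes a no-op.

```java
import java.util.Map;

/**
 * RPC-facing entry point on the second server for the compensation link. It delegates
 * to the same version-gated write used for transaction messages, so both links stay
 * idempotent with respect to each other.
 */
public class CompensationSyncEndpoint {
    private final VersionedCacheWriter cacheWriter; // from the earlier sketch

    public CompensationSyncEndpoint(VersionedCacheWriter cacheWriter) {
        this.cacheWriter = cacheWriter;
    }

    /** Returns true when the data was applied, false when the cached copy was already newer. */
    public boolean onAsyncRpc(String dataKey, long secondVersion, Map<String, String> targetData) {
        return cacheWriter.applyIfNewer(dataKey, secondVersion, targetData);
    }
}
```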
As shown in fig. 3, a protocol center program is installed on the first server. After a user completes a contract signing through the protocol center program, the first server not only synchronizes the target data related to the corresponding target event to the second server in the form of a cross-city transaction message, but also generates a corresponding asynchronous cache synchronization task and records it in the first database; it then periodically polls the first database, extracts the asynchronous cache synchronization tasks in the to-be-executed state and executes them to perform asynchronous RPC compensation cache synchronization. After the data view application installed on the second server subscribes to the cross-city transaction message, it may synchronously cache the corresponding target data in its cache database with reference to an implementation process similar to S202 above. After receiving the target data sent by the first server through the asynchronous RPC, the second server may perform compensation synchronous caching of the corresponding target data in the cache database with reference to an implementation process similar to S205.
In the embodiments of this specification, after a target event occurs on the first server, the first server sends target data related to the target event to the second server across cities in the form of a transaction message, so that the second server, after subscribing to the transaction message, synchronously caches the target data sent through it. After the target event occurs, the first server also generates and executes an asynchronous cache synchronization task, which sends the target data related to the target event to the second server by means of an asynchronous RPC, so that the second server performs asynchronous compensation synchronous caching based on the target data received over the asynchronous RPC. This avoids message delay or loss caused by message-middleware jitter under extreme conditions, as well as the resulting dirty data, guarantees the timeliness, accuracy and stability of data cache synchronization, and compensates for the single-link failure risk of relying only on the message middleware or the RPC gateway on the cache synchronization link.
In related data cache synchronization schemes, after data is synchronized through transaction messages, cache synchronization consistency problems are discovered through active real-time checking. However, such a single active real-time cache checking scheme cannot cover all checking scenarios, and missed scenarios may lead to dirty cached data.
Based on this, the embodiments of this specification further propose a more comprehensive data cache checking method to be applied after data cache synchronization. Referring to fig. 4, fig. 4 is a schematic flow chart of a data cache checking method according to an exemplary embodiment of the present disclosure. As shown in fig. 4, the data cache checking method may include, but is not limited to, the following steps:
S401, the first server sends a first data check request to the second server.
Specifically, the first data check request may, but is not limited to, carry at least one data check type such as online data checking, offline data checking, incremental checking or full checking. The trigger conditions for online data checking and offline data checking are different.
The checking modes corresponding to online data checking may include, but are not limited to, active checking and passive checking. Active checking refers to a consistency check, initiated by the data updater (that is, the first server), between the cached data in the cache database corresponding to the second server and the data in the first database corresponding to the first server; its implementation process corresponds to S401-S403. Passive checking refers to a consistency check of the cached data initiated after the cache database corresponding to the second server has been refreshed with data from the first database corresponding to the first server; its implementation process corresponds to S404-S406.
Alternatively, the first data check request may, but is not limited to, include an online data check request, the first to-be-checked cache data may include the full amount of cache data in the cache database corresponding to the second server, and the first to-be-checked data may include the full amount of data in the first database corresponding to the first server. After the target event occurs on the first server, the first server may store the target data related to the target event into the first database corresponding to the first server. Whenever the data stored in the first database is updated (changed), the step of sending the first data check request to the second server is triggered, that is, an online active check of the data cache is triggered. The full amount of data here may be, but is not limited to, the data related to the target event of the target user who is currently completing protocol signing in the first database, so that an efficient, real-time data cache check can be performed for that user and the consistency of data cache synchronization is ensured in time.
Optionally, the first data check request may also, but is not limited to, include an offline data check request, which carries the target data type to be checked and the target data check range. The target data type may include, but is not limited to, incremental data and/or full data, and the target data check range may include, but is not limited to, at least one target data check period, such as, but not limited to, the last week, the last month, and so on. Incremental data refers to data newly generated within the target data check period. When the current time reaches a preset offline check time (such as, but not limited to, 24:00 on the last day of each week, the 1st of each month, etc.) or an offline data check instruction is received, the step of sending the first data check request to the second server is triggered. The full amount of data here may be, but is not limited to, the data related to the target events of all users who have completed protocol signing in the first database.
S402, the second server responds to the first data checking request and sends first to-be-checked cache data in a cache database corresponding to the second server to the first server.
Alternatively, when the first data checking request is a data online checking request, the first data checking request may, but is not limited to, carry the target user identifier to be checked. After receiving the first data checking request, the second server may query the cache database based on the target user identifier to obtain corresponding first cache data to be checked, that is, all data related to the target event corresponding to the target user identifier, and then return the first cache data to be checked to the first server.
Alternatively, when the first data verification request is a data offline verification request, the data offline verification request may, but is not limited to, carry a target data type and a target data verification range that need to be verified. After receiving the first data checking request, the second server may first retrieve corresponding first to-be-checked cache data in the cache database based on the target data type to be checked and the target data checking range, and then return the first to-be-checked cache data to the first server.
S403, the first server performs data checking based on the first to-be-checked cache data and the first to-be-checked data in the first database corresponding to the first server, and a first data checking result is obtained.
Specifically, the first to-be-checked data is consistent in data type and/or data check range with the first to-be-checked cache data. After the first server receives the first to-be-checked cache data returned by the second server, it checks the first to-be-checked cache data against the corresponding first to-be-checked data in the first database, for example, but not limited to, checking whether the field information of the first to-be-checked cache data is consistent with the corresponding field information of the first to-be-checked data, to obtain the first data check result.
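A minimal sketch of the field-by-field comparison, assuming both records are available as flat key-value maps; the record and class names are illustrative only.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Objects;
import java.util.Set;

/** Result of one check run: the fields whose values differ between database and cache. */
record CheckResult(List<String> mismatchedFields) {
    boolean consistent() { return mismatchedFields.isEmpty(); }
}

/** Field-by-field comparison of the to-be-checked database record against the cached copy. */
public class DataChecker {
    public CheckResult check(Map<String, String> databaseRecord, Map<String, String> cachedRecord) {
        Set<String> fields = new HashSet<>(databaseRecord.keySet());
        fields.addAll(cachedRecord.keySet()); // a field missing on either side also counts as a mismatch
        List<String> mismatches = new ArrayList<>();
        for (String field : fields) {
            if (!Objects.equals(databaseRecord.get(field), cachedRecord.get(field))) {
                mismatches.add(field);
            }
        }
        return new CheckResult(mismatches);
    }
}
```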
Optionally, when the first data check result indicates that the first to-be-checked cache data is inconsistent with the first to-be-checked data, data cache alarm information is sent out, so that the abnormal condition of inconsistent cached data is flagged in time and related personnel are prompted to promptly check whether there is a problem on the data cache link.
And/or
Referring next to fig. 4, as shown in fig. 4, the data cache checking method may also include, but is not limited to, the following steps:
s404, the second server transmits a second data collation request to the first server.
Specifically, the second data check request may include, but is not limited to, an online data check request. After the cached data in its corresponding cache database is updated (changed), the second server generates a cache check task and stores it into a second database corresponding to the second server's data bypass. Cache check tasks in the to-be-executed state are then periodically extracted from the second database and executed, which triggers the sending of the second data check request to the first server.
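A sketch of the passive-check trigger, assuming a hypothetical `CheckTaskStore` over the second database used by the data bypass; it wraps the cache write from the earlier `VersionedCacheWriter` sketch so that every applied update also enqueues a check task for a later worker to act on.

```java
import java.util.Map;

// Hypothetical store for cache check tasks in the second database used by the data bypass.
interface CheckTaskStore {
    void enqueue(String userId); // records a cache check task in the to-be-executed state
}

/**
 * Passive-check trigger: every cache update that is actually applied also records a
 * check task, which a periodic worker later turns into a second data check request
 * sent to the first server.
 */
public class CheckTriggeringCacheWriter {
    private final VersionedCacheWriter delegate; // from the earlier sketch
    private final CheckTaskStore checkTasks;

    public CheckTriggeringCacheWriter(VersionedCacheWriter delegate, CheckTaskStore checkTasks) {
        this.delegate = delegate;
        this.checkTasks = checkTasks;
    }

    public boolean applyIfNewer(String userId, long version, Map<String, String> data) {
        boolean applied = delegate.applyIfNewer(userId, version, data);
        if (applied) {
            checkTasks.enqueue(userId); // only changed entries need to be re-checked
        }
        return applied;
    }
}
```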
S405, the first server responds to the second data checking request and sends second data to be checked in the first database corresponding to the first server to the second server.
Alternatively, the second data check request may, but is not limited to, carry the user identifier of the updated (changed) cache data to be checked. After receiving the second data check request, the first server may first query the first database based on the user identifier to obtain the corresponding second to-be-checked data, that is, all data related to the target event corresponding to the user identifier, and then return the second to-be-checked data to the second server.
S406, the second server performs data checking based on the second data to be checked and the second data to be checked in the corresponding cache database of the second server, and a second data checking result is obtained.
Specifically, the second data to be checked is consistent with the data type and/or the data checking range corresponding to the second cache data to be checked. The data collation process in S406 is similar to the data collation process in S403, and will not be described here again.
Alternatively, the second data to be checked may include, but is not limited to, the total amount of data (i.e., all data) in the first database corresponding to the first server, and the second data to be checked includes the total amount of cache data (i.e., all cache data) in the cache database corresponding to the second server.
Optionally, the second data to be checked may include, but is not limited to, full data corresponding to a user identifier for which the first server corresponds to the first database in which the cache data is updated (changed), and the second data to be checked includes full cache data corresponding to a user identifier for which the second server corresponds to the cache data in which the cache data is updated (changed).
Optionally, when the second data check result indicates that the second to-be-checked cache data is inconsistent with the second to-be-checked data, data cache alarm information is sent out, so that the abnormal condition of inconsistent cached data is flagged in time and related personnel are prompted to promptly check whether there is a problem on the data cache link.
As shown in fig. 5, a protocol center program is installed on the first server. When the data in the first database corresponding to the protocol center changes, the first server triggers an active check (online check): it requests the corresponding first to-be-checked cache data from the second server's data bypass and checks it against its own first to-be-checked data to obtain a first data check result. When the current time reaches a preset offline check time (such as, but not limited to, 24:00 on the last day of each week, the 1st of each month, etc.) or an offline data check instruction is received, the first server also triggers an active check (offline check), that is, it checks the corresponding first to-be-checked cache data against its own first to-be-checked data to obtain a first data check result. In this case, if an offline incremental cache check is triggered, the first to-be-checked data may be, but is not limited to, an incremental data set obtained in advance by offline processing of the protocol data modified within the target time period; if an offline full cache check is triggered, the first to-be-checked data may be, but is not limited to, a full data set obtained in advance by offline processing of all protocol data in the first database. When the cached data in the cache database corresponding to the second server changes, the second server calls the data bypass to trigger a passive check (online check), that is, it requests the corresponding second to-be-checked data from the first server and checks it against its own second to-be-checked cache data to obtain a second data check result.
The inclusion relationship among the data check ranges of online cache checking, incremental checking and full checking described above is shown in fig. 6. In the embodiments of this specification, the three-level cache checking system shown in fig. 6 covers three checking mechanisms with different effective dimensions (online second-level checking, offline daily incremental checking and monthly full checking) and, in terms of scenarios, both the active and the passive checking directions, thereby avoiding the risk of missed checks. Through this multi-period, multi-direction checking scheme, cache synchronization inconsistencies can all be discovered and handled in time, effectively ensuring cache consistency.
Referring next to fig. 7, fig. 7 is a schematic structural diagram of a data cache synchronization device according to an exemplary embodiment of the present disclosure. The data cache synchronization device is applied to the first server; as shown in fig. 7, the data cache synchronization device 700 includes:
The first sending module 710 is configured to send, after the first server generates the target event, target data related to the target event to a second server across cities in a form of a transaction message, so that the second server synchronously caches the target data sent by the first server through the transaction message after subscribing to the transaction message;
the buffer synchronous task generating module 720 is configured to generate an asynchronous buffer synchronous task after the first server generates a target event, where the asynchronous buffer synchronous task is configured to send target data related to the target event to the second server by using an asynchronous RPC method;
and the cache synchronous task execution module 730 is configured to execute the asynchronous cache synchronous task, so that the second server performs asynchronous compensation synchronous cache based on the target data sent by the first server through the asynchronous RPC.
In one possible implementation manner, the target data sent by the first server through the transaction message carries first version information, the first version information is used for deciding whether to synchronously buffer the target data sent by the first server through the transaction message, the target data sent by the first server through the asynchronous RPC carries second version information, and the second version information is used for deciding whether to asynchronously compensate for synchronously buffer the target data sent by the first server through the asynchronous RPC.
In one possible implementation manner, the data cache synchronization device 700 further includes:
The recording module is used for recording the asynchronous cache synchronous task into a first database corresponding to the first server;
The above-mentioned cache synchronization task execution module 730 is specifically configured to:
And periodically extracting and executing asynchronous cache synchronous tasks in a to-be-executed state from the first database.
In one possible implementation manner, the data cache synchronization device 700 further includes:
the second sending module is used for sending a first data checking request to the second server so that the second server responds to the first data checking request and sends first to-be-checked cache data in a corresponding cache database of the second server to the first server;
The first receiving module is used for receiving first to-be-checked cache data sent by the second server, and carrying out data check based on the first to-be-checked cache data and first to-be-checked data in a first database corresponding to the first server to obtain a first data check result;
And/or
The second receiving module is used for receiving a second data check request sent by the second server;
and the third sending module is used for responding to the second data checking request and sending second data to be checked in the first database corresponding to the first server to the second server so that the second server performs data checking based on the second data to be checked and the second cache data to be checked in the second cache database corresponding to the second server to obtain a second data checking result, and the second data to be checked is consistent with the data type and/or the data checking range corresponding to the second cache data to be checked.
In one possible implementation manner, the first data checking request includes a data online checking request, the first to-be-checked cache data includes a total amount of cache data in the second server corresponding cache database, the first to-be-checked data includes a total amount of data in the first server corresponding first database, and the data cache synchronization device 700 further includes:
The storage module is used for storing the target data related to the target event into a first database corresponding to the first server after the target event occurs to the first server;
And the first execution module is used for executing the step of sending the first data checking request to the second server after the data stored in the first database is updated.
In one possible implementation manner, the first data checking request includes a data offline checking request, the data offline checking request carries a target data type and a target data checking range that need to be checked, the target data type includes incremental data and/or full data, the target data checking range includes at least one target data checking period, and the data cache synchronization device 700 further includes:
And the second execution module is used for executing the step of sending the first data checking request to the second server after the current time reaches the preset offline checking time or the data offline checking instruction is received.
In one possible implementation manner, the data cache synchronization device 700 further includes:
and the alarm module is used for sending out data cache alarm information under the condition that the first data check result is that the first to-be-checked cache data is inconsistent with the first to-be-checked data.
Referring next to fig. 8, fig. 8 is a schematic structural diagram of another data cache synchronization device according to an exemplary embodiment of the present disclosure. This data cache synchronization device is applied to the second server; as shown in fig. 8, the data cache synchronization device 800 includes:
The first synchronous buffer module 810 is configured to, after subscribing to the transaction message of the first server, synchronously buffer target data sent by the first server through the transaction message, where the target data is data corresponding to a target event that occurs in the first server;
a first receiving module 820, configured to receive target data sent by the first server through an asynchronous RPC;
The second synchronous buffer module 830 is configured to perform asynchronous compensation synchronous buffer based on the target data sent by the first server through the asynchronous RPC.
In one possible implementation manner, the target data sent by the first server through the transaction message carries the first version information, and the data cache synchronization device 800 further includes:
the first judging module is used for judging whether the first version information is larger than the current version information in the corresponding cache database of the second server;
And the first execution module is used for executing the step of synchronously caching the target data sent by the first server through the transaction message under the condition that the first version information is larger than the current version information.
In one possible implementation manner, the target data sent by the first server through the asynchronous RPC carries the second version information, and the data buffering synchronization device 800 further includes:
The second judging module is used for judging whether the second version information is larger than the current version information in the corresponding cache database of the second server;
and the second execution module is used for executing the step of asynchronous compensation synchronous caching based on the target data sent by the first server through the asynchronous RPC under the condition that the second version information is larger than the current version information.
In one possible implementation manner, the data cache synchronization apparatus 800 further includes:
the second receiving module is used for receiving the first data checking request sent by the first server;
The first sending module is used for responding to the first data checking request and sending first to-be-checked cache data in the second server corresponding cache database to the first server so that the first server performs data checking based on the first to-be-checked cache data and the first to-be-checked data in the first server corresponding first database to obtain a first data checking result;
And/or
A second sending module, configured to send a second data checking request to the first server, so that the first server responds to the second data checking request and sends second data to be checked in a first database corresponding to the first server to the second server;
And the third receiving module is used for receiving the second to-be-checked data sent by the first server, and performing data checking on the second to-be-checked data and the second to-be-checked cache data in the cache database corresponding to the second server to obtain a second data check result, wherein the second to-be-checked data is consistent with the data type and/or the data check range corresponding to the second to-be-checked cache data.
In a possible implementation manner, the second data checking request includes a data online checking request, the second data to be checked includes the total data in the first database corresponding to the first server, the second cache data to be checked includes the total cache data in the cache database corresponding to the second server, and the data cache synchronization device 800 further includes:
The task generating module is used for generating a cache checking task after the cache data in the cache database is updated;
The task storage module is used for storing the cache checking task into a second database corresponding to the data bypass of the second server;
And the task execution module is used for regularly extracting and executing the cache checking task in a to-be-executed state from the second database so as to trigger the step of sending the second data checking request to the first server.
In one possible implementation manner, the data cache synchronization apparatus 800 further includes:
and the alarm module is used for sending out data cache alarm information when the second data check result is that the second data to be checked is inconsistent with the second cache data to be checked.
The division of the modules in the data buffer synchronization device is only used for illustration, and in other embodiments, the data buffer synchronization device may be divided into different modules according to the need to complete all or part of the functions of the data buffer synchronization device. The implementation of each module in the data cache synchronization apparatus provided in the embodiments of the present specification may be in the form of a computer program. The computer program may run on a server. Program modules of the computer program may be stored in the memory of the server. Which when executed by a processor, performs all or part of the steps of the data cache synchronization method described in the embodiments of the present specification.
Referring next to fig. 9, fig. 9 is a schematic structural diagram of an electronic device according to an exemplary embodiment of the present disclosure. As shown in fig. 9, the electronic device 900 may include at least one processor 910, at least one communication bus 920, a user interface 930, at least one network interface 940, and a memory 950. Wherein the communication bus 920 may be used to implement the connectivity communications of the various components described above.
The user interface 930 may include a Display screen (Display) and a Camera (Camera), and optionally, the user interface 930 may further include a standard wired interface and a wireless interface.
The network interface 940 may optionally include a Bluetooth module, a near field communication (NFC) module, a wireless fidelity (Wi-Fi) module, and the like.
The processor 910 may include one or more processing cores. The processor 910 uses various interfaces and lines to connect the various parts of the electronic device 900, and performs the various functions of the electronic device 900 and processes data by running or executing instructions, programs, code sets or instruction sets stored in the memory 950 and invoking data stored in the memory 950. Alternatively, the processor 910 may be implemented in hardware in at least one of digital signal processing (DSP), field-programmable gate array (FPGA) and programmable logic array (PLA). The processor 910 may integrate one or a combination of a central processing unit (CPU), a graphics processing unit (GPU), a modem, and the like. The CPU mainly handles the operating system, user interface, application programs and the like; the GPU is responsible for rendering and drawing the content to be displayed on the display screen; and the modem handles wireless communication. It will be appreciated that the modem may also not be integrated into the processor 910 and may instead be implemented by a separate chip.
The Memory 950 may include a random access Memory (Random Access Memory, RAM) or a Read-Only Memory (ROM). Optionally, the memory 950 includes a non-transitory computer readable medium. Memory 950 may be used to store instructions, programs, code, sets of codes, or sets of instructions. The memory 950 may include a stored program area that may store instructions for implementing an operating system, instructions for at least one function (such as a data cache synchronization function, a data cache collation function, etc.), instructions for implementing the various method embodiments described above, etc., and a stored data area that may store data referred to in the various method embodiments described above, etc. Memory 950 may also optionally be at least one storage device located remotely from the processor 910. As shown in fig. 9, an operating system, network communication modules, user interface modules, and application programs may be included in the memory 950, which is one type of computer storage medium.
Specifically, the electronic device 900 may be the data cache synchronization apparatus 700 shown in fig. 7 or the first server, and the processor 910 may be configured to invoke an application program stored in the memory 950, and specifically perform the following operations:
after the first server generates a target event, target data related to the target event is transmitted to a second server in a cross-city mode through a transaction message mode, so that the second server synchronously caches the target data transmitted by the first server through the transaction message after subscribing to the transaction message.
And generating an asynchronous cache synchronous task after the first server generates a target event, wherein the asynchronous cache synchronous task is used for sending target data related to the target event to the second server in an asynchronous RPC mode.
And executing the asynchronous cache synchronous task so that the second server performs asynchronous compensation synchronous cache based on the target data sent by the first server through the asynchronous RPC.
In some possible embodiments, the target data sent by the first server through the transaction message carries first version information, where the first version information is used to determine whether to synchronously buffer the target data sent by the first server through the transaction message, and the target data sent by the first server through the asynchronous RPC carries second version information, where the second version information is used to determine whether to asynchronously compensate for synchronously buffering the target data sent by the first server through the asynchronous RPC.
In some possible embodiments, after performing the generating of the asynchronous cache synchronization task, the processor 910 is further configured to perform:
recording the asynchronous cache synchronization task in a first database corresponding to the first server.
When performing the executing of the asynchronous cache synchronization task, the processor 910 is specifically configured to perform:
periodically extracting and executing, from the first database, asynchronous cache synchronization tasks in a to-be-executed state.
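A minimal sketch of such a periodic executor, assuming a hypothetical task table and RPC client, might look as follows:

import time


def run_async_sync_scheduler(task_db, rpc_client, interval_seconds: float = 5.0) -> None:
    # Periodically pull asynchronous cache synchronization tasks that are still in the
    # to-be-executed state and replay them to the second server over asynchronous RPC.
    while True:
        for task in task_db.fetch_tasks(state="PENDING", limit=100):
            try:
                rpc_client.sync_cache(task["payload"])
                task_db.mark_done(task["task_id"])
            except Exception:
                # Leave the task pending so the next cycle retries the compensation.
                task_db.mark_retry(task["task_id"])
        time.sleep(interval_seconds)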
In some possible embodiments, the processor 910 is further configured to perform:
sending a first data check request to the second server, so that the second server, in response to the first data check request, sends first to-be-checked cache data in a cache database corresponding to the second server to the first server; and
receiving the first to-be-checked cache data sent by the second server, and performing data checking based on the first to-be-checked cache data and first to-be-checked data in a first database corresponding to the first server to obtain a first data check result, where the first to-be-checked cache data is consistent with the data type and/or data check range corresponding to the first to-be-checked data;
and/or
receiving a second data check request sent by the second server; and
in response to the second data check request, sending second to-be-checked data in the first database corresponding to the first server to the second server, so that the second server performs data checking based on the second to-be-checked data and second to-be-checked cache data in the cache database corresponding to the second server to obtain a second data check result, where the second to-be-checked data is consistent with the data type and/or data check range corresponding to the second to-be-checked cache data.
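For the first check direction (the first server pulls a cache snapshot from the second server and compares it with its own database), a simplified sketch with assumed client interfaces could be:

def check_cache_against_db(second_server_client, first_db, check_range: dict) -> list[str]:
    # First to-be-checked cache data: snapshot pulled from the second server's cache.
    cached = second_server_client.fetch_cache_snapshot(check_range)
    # First to-be-checked data: the matching records in the first database.
    expected = first_db.fetch_records(check_range)
    mismatched_keys = []
    for key, record in expected.items():
        cached_record = cached.get(key)
        if cached_record is None or cached_record.get("version") != record.get("version"):
            mismatched_keys.append(key)
    return mismatched_keys   # a non-empty result would trigger the data cache alarm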
In some possible embodiments, the first data check request includes a data online check request, the first to-be-checked cache data includes the full cache data in the cache database corresponding to the second server, the first to-be-checked data includes the full data in the first database corresponding to the first server, and the processor 910 is further configured to perform:
after the target event occurs on the first server, saving the target data related to the target event in the first database corresponding to the first server; and
after the data stored in the first database is updated, executing the step of sending the first data check request to the second server.
In some possible embodiments, the first data check request includes a data offline check request, the data offline check request carries a target data type and a target data check range to be checked, the target data type includes incremental data and/or full data, and the target data check range includes at least one target data check period; the processor 910 is further configured to perform:
after the current time reaches a preset offline check time or a data offline check instruction is received, executing the step of sending the first data check request to the second server.
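Purely as an example of the request shape, an offline check request carrying the target data type and the check periods might be assembled like this (all field names are assumptions):

import datetime


def build_offline_check_request(target_data_type: str, check_periods: list[tuple[str, str]]) -> dict:
    # target_data_type: "incremental" or "full"; both could also be modeled as a list.
    assert target_data_type in {"incremental", "full"}
    return {
        "kind": "offline",
        "target_data_type": target_data_type,
        "target_check_periods": check_periods,   # e.g. [("2025-06-17T00:00", "2025-06-18T00:00")]
        "requested_at": datetime.datetime.now().isoformat(),
    }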
In some possible embodiments, after performing the data checking based on the first to-be-checked cache data and the first to-be-checked data in the first database corresponding to the first server to obtain the first data check result, the processor 910 is further configured to perform:
issuing data cache alarm information in a case where the first data check result is that the first to-be-checked cache data is inconsistent with the first to-be-checked data.
In some possible embodiments, the electronic device 900 may be the data cache synchronization apparatus 800 shown in fig. 8 or the second server, and the processor 910 may be configured to invoke an application program stored in the memory 950 to perform the following operations:
after subscribing to a transaction message of the first server, synchronously caching target data sent by the first server through the transaction message, where the target data is data related to a target event occurring on the first server;
receiving target data sent by the first server through asynchronous RPC; and
performing asynchronous compensation synchronization caching based on the target data sent by the first server through the asynchronous RPC.
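On the second-server side, both delivery channels can converge on the same idempotent cache write, as in this sketch (the cache client and method names are assumptions):

class SecondServerSync:
    """Hypothetical second-server helper (sketch only): apply target data from either channel."""

    def __init__(self, cache_db):
        self.cache_db = cache_db   # assumed cache database local to the second server

    def on_transaction_message(self, payload: dict) -> None:
        # Synchronous caching after subscribing to the first server's transaction message.
        self._apply(payload)

    def on_async_rpc(self, payload: dict) -> None:
        # Asynchronous compensation caching for the payload replayed over asynchronous RPC.
        self._apply(payload)

    def _apply(self, payload: dict) -> None:
        # The version gate (see the sketch further below) makes the write idempotent,
        # so whichever channel arrives later simply compensates or is discarded.
        self.cache_db.put_if_newer(payload["key"], payload["data"], payload["version"])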
In some possible embodiments, the target data sent by the first server through the transaction message carries first version information, and the processor 910 is further configured to perform:
judging whether the first version information is greater than current version information in a cache database corresponding to the second server; and
in a case where the first version information is greater than the current version information, executing the step of synchronously caching the target data sent by the first server through the transaction message.
In some possible embodiments, the target data sent by the first server through the asynchronous RPC carries second version information, and the processor 910 is further configured to perform:
judging whether the second version information is greater than the current version information in the cache database corresponding to the second server; and
in a case where the second version information is greater than the current version information, executing the step of performing asynchronous compensation synchronization caching based on the target data sent by the first server through the asynchronous RPC.
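A toy in-memory version of that gate is sketched below; it is a single-process stand-in, not the cache database of this specification.

class VersionedCache:
    """Stand-in cache: a write is accepted only if its version information is greater
    than the current version information stored for the same key."""

    def __init__(self):
        self._entries: dict[str, tuple[int, dict]] = {}

    def put_if_newer(self, key: str, data: dict, version: int) -> bool:
        current_version, _ = self._entries.get(key, (-1, None))
        if version <= current_version:
            return False          # stale or duplicate update from either channel: ignore
        self._entries[key] = (version, data)
        return True

    def get(self, key: str):
        entry = self._entries.get(key)
        return None if entry is None else {"version": entry[0], "data": entry[1]}

A real shared cache would need this compare-and-set to be atomic on the cache side (for example, a conditional write), rather than the plain dictionary update shown here.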
In some possible embodiments, the processor 910 is further configured to perform:
receiving a first data check request sent by the first server; and
in response to the first data check request, sending first to-be-checked cache data in the cache database corresponding to the second server to the first server, so that the first server performs data checking based on the first to-be-checked cache data and first to-be-checked data in the first database corresponding to the first server to obtain a first data check result, where the first to-be-checked cache data is consistent with the data type and/or data check range corresponding to the first to-be-checked data;
and/or
sending a second data check request to the first server, so that the first server, in response to the second data check request, sends second to-be-checked data in the first database corresponding to the first server to the second server; and
receiving the second to-be-checked data sent by the first server, and performing data checking based on the second to-be-checked data and second to-be-checked cache data in the cache database corresponding to the second server to obtain a second data check result, where the second to-be-checked data is consistent with the data type and/or data check range corresponding to the second to-be-checked cache data.
In some possible embodiments, the second data check request includes a data online check request, the second to-be-checked data includes the full data in the first database corresponding to the first server, the second to-be-checked cache data includes the full cache data in the cache database corresponding to the second server, and the processor 910 is further configured to perform:
after the cache data in the cache database is updated, generating a cache check task, and storing the cache check task in a second database corresponding to a data bypass of the second server; and
periodically extracting and executing, from the second database, cache check tasks in a to-be-executed state, so as to trigger the step of sending the second data check request to the first server.
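One plausible shape for that bypass check loop, with hypothetical database and client helpers, is sketched below:

import time


def on_cache_updated(bypass_db, cache_key: str) -> None:
    # Every cache update also enqueues a cache check task off the critical path.
    bypass_db.insert_check_task({"key": cache_key, "state": "PENDING"})


def run_cache_check_loop(bypass_db, first_server_client, cache_db, interval_seconds: float = 60.0) -> None:
    while True:
        for task in bypass_db.fetch_check_tasks(state="PENDING", limit=100):
            key = task["key"]
            expected = first_server_client.fetch_record(key)   # second to-be-checked data
            cached = cache_db.get(key)                          # second to-be-checked cache data
            if expected is None or cached is None or expected.get("version") != cached.get("version"):
                print(f"data cache alarm: inconsistent cache entry for key {key}")  # alert stand-in
            bypass_db.mark_done(task)
        time.sleep(interval_seconds)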
In some possible embodiments, after performing the data checking based on the second to-be-checked data and the second to-be-checked cache data in the cache database corresponding to the second server to obtain the second data check result, the processor 910 is further configured to perform:
issuing data cache alarm information in a case where the second data check result is that the second to-be-checked data is inconsistent with the second to-be-checked cache data.
The present specification also provides a computer-readable storage medium having instructions stored therein, which, when run on a computer or a processor, cause the computer or the processor to perform one or more steps of the above embodiments. If the constituent modules of the data cache synchronization apparatus are implemented in the form of software functional units and sold or used as independent products, they may be stored in the computer-readable storage medium.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented by software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present specification are generated in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted through a computer-readable storage medium. The computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired manner (for example, coaxial cable, optical fiber, or digital subscriber line (DSL)) or a wireless manner (for example, infrared, radio, or microwave). The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device such as a server or a data center integrating one or more available media. The available medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a digital versatile disc (DVD)), a semiconductor medium (for example, a solid state disk (SSD)), or the like.
Those skilled in the art will appreciate that all or part of the procedures of the above method embodiments may be implemented by a computer program instructing relevant hardware; the program may be stored in a computer-readable storage medium and, when executed, may include the procedures of the above method embodiments. The storage medium includes various media capable of storing program code, such as a ROM, a RAM, a magnetic disk, or an optical disc. The technical features in the above examples and embodiments may be combined arbitrarily as long as they do not conflict.
The above-described embodiments are merely preferred embodiments of the present disclosure, and do not limit the scope of the disclosure, and various modifications and improvements made by those skilled in the art to the technical solution of the disclosure should fall within the scope of protection defined by the claims.
The foregoing describes specific embodiments of the present disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims and description may be performed in an order different from that in the embodiments recited in the description and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.

Claims (18)

1. A data cache synchronization method, characterized in that the method is applied to a first server and comprises:
after a target event occurs on the first server, sending target data related to the target event across cities to a second server in the form of a transaction message, so that the second server, after subscribing to the transaction message, synchronously caches the target data sent by the first server through the transaction message;
after the target event occurs on the first server, generating an asynchronous cache synchronization task, the asynchronous cache synchronization task being to send the target data related to the target event to the second server by means of asynchronous RPC; and
executing the asynchronous cache synchronization task, so that the second server performs asynchronous compensation synchronization caching based on the target data sent by the first server through asynchronous RPC.

2. The method according to claim 1, characterized in that the target data sent by the first server through the transaction message carries first version information, the first version information being used to decide whether to synchronously cache the target data sent by the first server through the transaction message; and the target data sent by the first server through asynchronous RPC carries second version information, the second version information being used to decide whether to perform asynchronous compensation synchronization caching of the target data sent by the first server through asynchronous RPC.

3. The method according to claim 1, characterized in that, after the generating of the asynchronous cache synchronization task and before the executing of the asynchronous cache synchronization task, the method further comprises:
recording the asynchronous cache synchronization task in a first database corresponding to the first server;
and the executing of the asynchronous cache synchronization task comprises:
periodically extracting and executing, from the first database, asynchronous cache synchronization tasks in a to-be-executed state.

4. The method according to claim 1, characterized in that the method further comprises:
sending a first data check request to the second server, so that the second server, in response to the first data check request, sends first to-be-checked cache data in a cache database corresponding to the second server to the first server; and
receiving the first to-be-checked cache data sent by the second server, and performing data checking based on the first to-be-checked cache data and first to-be-checked data in a first database corresponding to the first server to obtain a first data check result, the first to-be-checked cache data being consistent with the data type and/or data check range corresponding to the first to-be-checked data;
and/or
receiving a second data check request sent by the second server; and
in response to the second data check request, sending second to-be-checked data in the first database corresponding to the first server to the second server, so that the second server performs data checking based on the second to-be-checked data and second to-be-checked cache data in the cache database corresponding to the second server to obtain a second data check result, the second to-be-checked data being consistent with the data type and/or data check range corresponding to the second to-be-checked cache data.

5. The method according to claim 4, characterized in that the first data check request comprises a data online check request; the first to-be-checked cache data comprises the full cache data in the cache database corresponding to the second server; the first to-be-checked data comprises the full data in the first database corresponding to the first server; and the method further comprises:
after the target event occurs on the first server, saving the target data related to the target event in the first database; and
after the data stored in the first database is updated, executing the step of sending the first data check request to the second server.

6. The method according to claim 4, characterized in that the first data check request comprises a data offline check request; the data offline check request carries a target data type and a target data check range to be checked; the target data type comprises incremental data and/or full data; the target data check range comprises at least one target data check period; and the method further comprises:
after the current time reaches a preset offline check time or a data offline check instruction is received, executing the step of sending the first data check request to the second server.

7. The method according to claim 4, characterized in that, after the performing of data checking based on the first to-be-checked cache data and the first to-be-checked data in the first database corresponding to the first server to obtain the first data check result, the method further comprises:
issuing data cache alarm information in a case where the first data check result is that the first to-be-checked cache data is inconsistent with the first to-be-checked data.

8. A data cache synchronization method, characterized in that the method is applied to a second server and comprises:
after subscribing to a transaction message of a first server, synchronously caching target data sent by the first server through the transaction message, the target data being data related to a target event occurring on the first server;
receiving target data sent by the first server through asynchronous RPC; and
performing asynchronous compensation synchronization caching based on the target data sent by the first server through asynchronous RPC.

9. The method according to claim 8, characterized in that the target data sent by the first server through the transaction message carries first version information, and the method further comprises:
judging whether the first version information is greater than current version information in a cache database corresponding to the second server; and
in a case where the first version information is greater than the current version information, executing the step of synchronously caching the target data sent by the first server through the transaction message.

10. The method according to claim 8, characterized in that the target data sent by the first server through asynchronous RPC carries second version information, and the method further comprises:
judging whether the second version information is greater than current version information in a cache database corresponding to the second server; and
in a case where the second version information is greater than the current version information, executing the step of performing asynchronous compensation synchronization caching based on the target data sent by the first server through asynchronous RPC.

11. The method according to claim 8, characterized in that the method further comprises:
receiving a first data check request sent by the first server; and
in response to the first data check request, sending first to-be-checked cache data in a cache database corresponding to the second server to the first server, so that the first server performs data checking based on the first to-be-checked cache data and first to-be-checked data in a first database corresponding to the first server to obtain a first data check result, the first to-be-checked cache data being consistent with the data type and/or data check range corresponding to the first to-be-checked data;
and/or
sending a second data check request to the first server, so that the first server, in response to the second data check request, sends second to-be-checked data in the first database to the second server; and
receiving the second to-be-checked data sent by the first server, and performing data checking based on the second to-be-checked data and second to-be-checked cache data in the cache database corresponding to the second server to obtain a second data check result, the second to-be-checked data being consistent with the data type and/or data check range corresponding to the second to-be-checked cache data.

12. The method according to claim 11, characterized in that the second data check request comprises a data online check request; the second to-be-checked data comprises the full data in the first database corresponding to the first server; the second to-be-checked cache data comprises the full cache data in the cache database corresponding to the second server; and the method further comprises:
after the cache data in the cache database is updated, generating a cache check task, and storing the cache check task in a second database corresponding to a data bypass of the second server; and
periodically extracting and executing, from the second database, cache check tasks in a to-be-executed state, so as to trigger the step of sending the second data check request to the first server.

13. The method according to claim 11, characterized in that, after the performing of data checking based on the second to-be-checked data and the second to-be-checked cache data in the cache database corresponding to the second server to obtain the second data check result, the method further comprises:
issuing data cache alarm information in a case where the second data check result is that the second to-be-checked data is inconsistent with the second to-be-checked cache data.

14. A data cache synchronization apparatus, characterized in that the apparatus is applied to a first server and comprises:
a first sending module, configured to, after a target event occurs on the first server, send target data related to the target event across cities to a second server in the form of a transaction message, so that the second server, after subscribing to the transaction message, synchronously caches the target data sent by the first server through the transaction message;
a cache synchronization task generation module, configured to generate an asynchronous cache synchronization task after the target event occurs on the first server, the asynchronous cache synchronization task being to send the target data related to the target event to the second server by means of asynchronous RPC; and
a cache synchronization task execution module, configured to execute the asynchronous cache synchronization task, so that the second server performs asynchronous compensation synchronization caching based on the target data sent by the first server through asynchronous RPC.

15. A data cache synchronization apparatus, characterized in that the apparatus is applied to a second server and comprises:
a first synchronization cache module, configured to, after subscribing to a transaction message of a first server, synchronously cache target data sent by the first server through the transaction message, the target data being data corresponding to a target event occurring on the first server;
a first receiving module, configured to receive target data sent by the first server through asynchronous RPC; and
a second synchronization cache module, configured to perform asynchronous compensation synchronization caching based on the target data sent by the first server through asynchronous RPC.

16. An electronic device, characterized by comprising a processor and a memory, wherein:
the processor is connected to the memory;
the memory is configured to store executable program code; and
the processor runs a program corresponding to the executable program code by reading the executable program code stored in the memory, so as to execute the method according to any one of claims 1-13.

17. A computer storage medium, characterized in that the computer storage medium stores a plurality of instructions, and the instructions are adapted to be loaded by a processor to execute the method steps according to any one of claims 1-13.

18. A computer program product comprising instructions, characterized in that, when the computer program product runs on a computer or a processor, the computer or the processor is caused to execute the data cache synchronization method according to any one of claims 1-13.
CN202510819460.5A2025-06-182025-06-18 Data cache synchronization method, device, electronic device, medium and program productPendingCN120336047A (en)

Priority Applications (1)

Application Number: CN202510819460.5A | Priority Date: 2025-06-18 | Filing Date: 2025-06-18 | Title: Data cache synchronization method, device, electronic device, medium and program product

Applications Claiming Priority (1)

Application Number: CN202510819460.5A | Priority Date: 2025-06-18 | Filing Date: 2025-06-18 | Title: Data cache synchronization method, device, electronic device, medium and program product

Publications (1)

Publication Number: CN120336047A | Publication Date: 2025-07-18

Family

ID=96366686

Family Applications (1)

Application Number: CN202510819460.5A | Status: Pending | Publication: CN120336047A (en) | Priority Date: 2025-06-18 | Filing Date: 2025-06-18 | Title: Data cache synchronization method, device, electronic device, medium and program product

Country Status (1)

Country: CN (1) | Publication: CN120336047A (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
CN111638979A (en)* | priority date 2020-05-24 | publication date 2020-09-08 | 中信银行股份有限公司 | Call request processing method and device, electronic equipment and readable storage medium
CN112988460A (en)* | priority date 2021-02-05 | publication date 2021-06-18 | 新华三大数据技术有限公司 | Data backup method and device for virtual machine
CN113821355A (en)* | priority date 2021-09-06 | publication date 2021-12-21 | 长沙博为软件技术股份有限公司 | Js-based RPC synchronous communication method and equipment
CN114153660A (en)* | priority date 2021-11-29 | publication date 2022-03-08 | 平安壹账通云科技(深圳)有限公司 | Database backup method, device, server and medium
CN117938928A (en)* | priority date 2024-01-09 | publication date 2024-04-26 | 济南浪潮数据技术有限公司 | Distributed service management method, device, server and storage medium
CN117971513A (en)* | priority date 2024-04-01 | publication date 2024-05-03 | 北京麟卓信息科技有限公司 | GPU virtual synchronization optimization method based on kernel structure dynamic reconstruction

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Hammurabi Mendes, "Seriema: RDMA-based Remote Invocation with a Case-Study on Monte-Carlo Tree Search", 2022 IEEE 34th International Symposium on Computer Architecture and High Performance Computing (SBAC-PAD), 20 December 2022, pages 11-20. *
田金博, "Design and Implementation of a Collaborative Information Management System for Multiple Parking Lots" (多停车场信息协同管理系统设计与实现), China Master's Theses Full-text Database, Engineering Science and Technology II, no. 2022, 15 March 2022, pages 034-1661. *

Similar Documents

JP6929497B2 (en): Cross-blockchain interaction methods, devices, systems, and electronic devices
US7912949B2 (en): Systems and methods for recording changes to a data store and propagating changes to a client application
CN113010818A (en): Access current limiting method and device, electronic equipment and storage medium
EP2423868A1 (en): Systems and methods for managing subscription-based licensing of software products
CN114253519B (en): A smart park security management system and electronic equipment
CN110677462B (en): Access processing method, system, device and storage medium for multi-block chain network
WO2014152078A1 (en): Application architecture supporting multiple services and caching
CN103685304A (en): Method and system for sharing session information
CN113010549A (en): Data processing method based on remote multi-active system, related equipment and storage medium
CN112260853A (en): Disaster recovery switching method and device, storage medium and electronic equipment
CN106202082B (en): Method and device for assembling basic data cache
CN112083945B (en): Update prompt method, device, electronic device and storage medium for NPM installation package
CN109117609A (en): A kind of request hold-up interception method and device
CN116628033A (en): Cache preheating and data processing method, service device, electronic equipment and medium
CN114169997A (en): A debit method and device
CN116389601A (en): A communication method, device, device and storage medium
CN109614271A (en): Control method, device, device and storage medium for data consistency of multiple clusters
CN118368296B (en): Cross-data-center application data real-time synchronization method, device and system
CN114398376B (en): Data processing method, device and readable storage medium
CN112463887B (en): A data processing method, device, equipment and storage medium
CN120336047A (en): Data cache synchronization method, device, electronic device, medium and program product
CN110933145A (en): Remote scheduling method, device, equipment and medium
US20150120607A1 (en): System and method for customer event email consolidation and delivery
CN115292371A (en): Data caching method, device, equipment and system
CN115858188A (en): Message processing method, system, device, equipment, medium and computer product

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
