CN105912479A - Concurrent data caching method and structure - Google Patents

Concurrent data caching method and structure

Info

Publication number
CN105912479A
Authority
CN
China
Prior art keywords: cache, data, partition, write, state
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610210432.4A
Other languages
Chinese (zh)
Other versions
CN105912479B (en)
Inventor
徐驰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
WUHAN DIGITAL PEAK TECHNOLOGY Co Ltd
Original Assignee
WUHAN DIGITAL PEAK TECHNOLOGY Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by WUHAN DIGITAL PEAK TECHNOLOGY Co Ltd
Priority to CN201610210432.4A
Publication of CN105912479A
Priority to PCT/CN2017/077486 (WO2017173919A1)
Application granted
Publication of CN105912479B
Legal status: Active
Anticipated expiration

Abstract

Translated from Chinese

A caching method for concurrent data. (A) For each group of data written: (A1) monitor whether a write-locked cache partition exists; if so, go to step (A2-2); if not, (A2) monitor whether an idle cache partition exists. (A21) If one exists, (A21-1) select an idle cache partition and set it to the write-locked state; (A21-2) write the data to that cache partition; (A21-3) determine whether the write succeeded: if it succeeded, writing of this group of data is complete; if it failed, set the cache partition to the full state and return to step (A1). (A22) If no idle cache partition exists, end the data write. Step (B), data reading: (B1) monitor in real time whether a cache partition in the full state exists; if so, (B11) set that cache partition to the read-locked state and (B12) read the cached data in it, and when the partition has been read completely set it to the idle state and return to step (B1); if not, return to step (B1) and continue data reading. By imposing strict read-write locks on access to the cache partitions, the method effectively optimizes the data processing rate.

Description

Translated from Chinese

A caching method and structure for concurrent data

Technical Field

The invention relates to data acquisition technology in the field of radiation detection, and in particular to a caching method and device for high-speed data acquisition.

Background

In prior-art high-speed data acquisition systems, each data channel is generally provided with an independent FIFO (First In, First Out) queue for buffering channel data. In practice, however, there is often a conflict between a high, fast data acquisition rate and long, slow data processing. When the cache capacity is large, the rate of this raw data is high, which places high demands on network reception and data processing.

Taking all-digital PET as an example, Figure 1 is a schematic diagram of the data acquisition and processing flow in an all-digital PET device. Each detector channel samples and encodes the received signal and sends it to the network as data packets in a specific format. The valid events required for PET imaging are distributed across different detector channels, and the matching relationship between them can be calibrated by the event sampling time. Valid events and noise data from the detector channels are mixed together and serially distributed over the network transmission link. When the acquisition server receives these data, it screens out valid events based on the coincidence algorithm, performs time and energy correction, and then converts the screening results into PET images according to the reconstruction algorithm. Ideally, data acquisition should run concurrently with coincidence processing, completing data screening, time correction and energy correction in real time, thereby reducing the resources required for data storage before reconstruction. However, because the number of channels is large and the data volume in each channel is huge, 1.5 GB to 3 GB of data must be processed per second. If the data cannot be processed in time, valid events are lost in packet drops, the effectiveness of data screening, time correction and energy correction is reduced, and image reconstruction either fails or succeeds with severely degraded accuracy.

Summary of the Invention

The purpose of the present invention is to provide a caching method and structure for concurrent data that can effectively resolve the conflict between a high, fast data acquisition rate and long, slow data processing, and that is particularly suitable for detector data acquisition in all-digital PET.

To achieve the above object, the solution of the present invention is as follows:

The invention discloses a caching method for concurrent data, comprising the following steps: the working state of each cache partition is set to any one of an idle state, a write-locked state, a full state and a read-locked state.
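
As a purely illustrative aid (not part of the patent text), the four working states named above could be modelled in Java as a simple enum; the type and constant names below are assumptions of this sketch.

```java
// Minimal sketch of the four partition working states described above.
public enum PartitionState {
    IDLE,          // idle: the partition may be write-locked
    WRITE_LOCKED,  // write-locked: data may be written into the partition
    FULL,          // full: the partition may be read-locked
    READ_LOCKED    // read-locked: data may be read out of the partition
}
```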

(A) When each group of data is written, (A1) monitor in real time whether a write-locked cache partition exists: if there is a write-locked cache partition, go directly to step (A2-2) and write the group of data; if there is no write-locked cache partition, (A2) monitor in real time whether an idle cache partition exists:

(A21) If there is an idle cache partition, (A21-1) select an idle cache partition and set it to the write-locked state; (A21-2) write the group of data to the write-locked cache partition; (A21-3) determine whether the data were written successfully: if the write succeeded, writing of this group of data is complete and this data write ends; if the write failed, set the current cache partition to the full state and return to step (A1) to continue writing this group of data;

(A22) If there is no idle cache partition, end the writing of this group of data;

Step (B), when data are read: (B1) monitor in real time whether a cache partition in the full state exists: if there is a full cache partition, (B11) select that full cache partition and set it to the read-locked state; (B12) read the cached data in the read-locked cache partition, and when the read-locked cache partition has been read completely, set it to the idle state and return to step (B1) to continue reading the next cache partition; if there is no full cache partition, return to step (B1) and continue data reading.

Among the cache partitions, only one write-locked cache partition and one read-locked cache partition exist at any given time.

Preferably, when the entire data writing process ends, if there is a cache partition that is not yet full, that cache partition is set to the full state so that its data can be read.

The cache partitions are accessed sequentially to monitor whether a write-locked or idle cache partition exists.

Preferably, whether a full cache partition exists is monitored in real time by any one of: traversal, following the same order as the data write accesses, or active reporting by a cache partition once its write is complete.

Data write operations inside a cache partition are managed hierarchically. The cache partition contains multiple cache sectors of the same size; the cache sectors are numbered one by one and each contains multiple cache pages of the same size, and the size of each group of written data is set equal to the cache page size. In step (A21-3), determining whether the data were written successfully then comprises the following steps: after each group of data is written to a cache sector of the cache partition, the sector count is updated internally to determine the current cache sector number, and this number is compared with the maximum sector number of the cache partition: if the current sector number is less than the maximum sector number, it is determined that the next group of data can be written successfully in this cache partition; if the current sector number equals the maximum sector number, the cache partition is determined to be full, and the method proceeds to step (A1) to write the next group of data into the next cache partition.

Preferably, the data are written into the cache sectors in numbering order, the cache pages are also numbered one by one, and when data are written into a cache sector they are written sequentially in the numbering order of the cache pages.

In step (A21-3), determining that the data were written successfully comprises the following steps: write the externally sent group of data to the write-locked cache partition; when the write-locked cache partition is determined not to be full, the write succeeds and data writing to this cache partition continues; when the write-locked cache partition is determined to be full, or the external data have all been written, set this cache partition to the full state and return to step (A1) to continue writing the next group of external data.

Further, determining that the write-locked cache partition is full comprises the following steps: when writing externally sent data to the write-locked cache partition, compare the amount of external data with the remaining space of the write-locked cache partition; if the amount of external data is greater than the remaining space of the write-locked cache partition, the write-locked cache partition is determined to be full. Alternatively, if the system reports an error when externally sent data are written to the write-locked cache partition, the write-locked cache partition is determined to be full.

Before step (B1), the method further includes monitoring in real time whether a read-locked cache partition exists; if so, go directly to step (B12) and read the cached data in that read-locked cache partition; if not, go to step (B1).

Preferably, the data reading speed is greater than the data writing speed.

Preferably, the method is applied to a producer-consumer model.

The invention further discloses a cache structure for concurrent data that writes and reads data simultaneously, comprising a group of data writing threads, a group of data reading threads, a group of data cache modules, and a partition control module.

The data cache module includes multiple cache partitions.

The partition control module is communicatively connected with the data cache module to control the working states of the cache partitions and the order in which they are accessed. The partition control module sets the working state of each cache partition to any one of the idle state, write-locked state, full state, and read-locked state. Write locking is supported only when a cache partition is in the idle state, and data writing is supported when the cache partition is in the write-locked state; read locking is supported only when a cache partition is in the full state, and data reading is supported when the cache partition is in the read-locked state.

The data writing thread and the data reading thread are each communicatively connected with the cache partitions via the partition control module. The partition control module controls the order in which the data writing thread and the data reading thread access the cache partitions, so that the data writing thread and the data reading thread write and read data according to the working state of the cache partition being accessed.

Each cache partition includes a control unit and multiple cache sectors. The cache sectors are numbered one by one, and the control unit is communicatively connected with each cache sector to control the order in which data are written to the cache sectors within each cache partition.

Further preferably, each cache sector includes a control component and multiple cache pages. The cache pages are numbered one by one and the size of each group of written data is set equal to the cache page size, and the control component is connected to each of the cache pages to control the order in which data are written to the cache pages within each cache sector.

The partition control module includes a write lock judgment unit and a read lock judgment unit.

The write lock judgment unit is communicatively connected with the data cache module to control the access order, write locking and write unlocking of the cache partitions in the data cache module. The data writing thread is communicatively connected with the write lock judgment unit so that it accesses the cache partitions in the access order determined by the write lock judgment unit and performs data write operations according to the working state of the currently accessed cache partition.

The read lock judgment unit is communicatively connected with the corresponding data cache module to control the access order, read locking and read unlocking of the cache partitions in the data cache module. The data reading thread is communicatively connected with the read lock judgment unit to determine, according to the current state of the accessed cache partition, whether to read the cached data.

Preferably, only one write-locked cache partition and one read-locked cache partition exist at any given time.

Preferably, the data cache module follows a producer-consumer model.

In addition, the invention also discloses a cache model for concurrent data, which comprises at least two cache structures for concurrent data as recited in claim 8.

By adopting the above scheme, the beneficial effect of the present invention is as follows: the simple concurrent data caching method and structure of the present invention, by imposing strict read-write locks on access to the cache partitions, can support higher network data rates and optimize the data processing speed.

Brief Description of the Drawings

Figure 1 is a schematic diagram of the data acquisition and processing flow of all-digital PET;

Figure 2 is a schematic diagram of the steps of the concurrent data caching method in an embodiment of the present invention;

Figure 3 is a flow chart of data writing in the concurrent data caching method of the embodiment shown in Figure 2;

Figure 4 is a flow chart of data reading in the concurrent data caching method of the embodiment shown in Figure 2;

Figure 5 is a schematic diagram of the concurrent data cache structure in an embodiment of the present invention;

Figure 6 is a schematic structural diagram of a cache sector in the embodiment shown in Figure 5.

Detailed Description

The present invention is further described below with reference to the embodiments shown in the accompanying drawings.

The present invention provides a caching method for concurrent data, comprising the following steps. The working state of each cache partition is set to any one of the idle state, write-locked state, full state, and read-locked state. When a cache partition is completely full, or when external data writing finishes within a given time period and no new data are being written, the cache partition is set to the full state.

Because the sending time and volume of the external data cannot be determined in advance, in order to ensure that each group of externally sent data can be written into the cache promptly and effectively and to avoid packet loss as far as possible, each group of data is written as follows (see Figure 3). (A) One group of data is written: (A1) monitor in real time whether a write-locked cache partition exists; when a write-locked cache partition exists, go directly to step (A2-2) and write the group of data into the write-locked cache partition. If no cache partition is currently in the write-locked state, (A2) monitor in real time whether an idle cache partition exists: (A21) if there is an idle cache partition, (A21-1) select one idle cache partition and set it to the write-locked state; (A21-2) write the group of data to the write-locked cache partition; (A21-3) determine whether the data were written successfully: if the write succeeded, writing of this group of data is complete, the write ends, and the method can return to step (A) to write the next group of data; if the write failed, the cache partition currently in the write-locked state is already full and cannot accept new data, so its working state is set to the full state, and the method returns to step (A1) to continue writing this group of external data into another cache partition.
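
A minimal sketch of the write path (A1)-(A22) described above, in Java. CachePartition and its methods are assumed helper types introduced only for this illustration; they are not defined by the patent, and PartitionState is the enum sketched earlier.

```java
import java.util.List;

// Assumed minimal partition interface for this sketch (not from the patent).
interface CachePartition {
    PartitionState getState();
    void setState(PartitionState s);
    boolean tryWrite(byte[] group);   // false when the partition has no room left
    byte[] drain();                   // read out and clear all buffered data
}

// Sketch of steps (A1)-(A22) for writing one group of data.
final class WritePath {
    static boolean writeGroup(List<CachePartition> partitions, byte[] group) {
        while (true) {
            CachePartition target = null;
            // (A1) prefer a partition that is already write-locked
            for (CachePartition p : partitions) {
                if (p.getState() == PartitionState.WRITE_LOCKED) { target = p; break; }
            }
            // (A2)/(A21-1) otherwise write-lock an idle partition
            if (target == null) {
                for (CachePartition p : partitions) {
                    if (p.getState() == PartitionState.IDLE) {
                        p.setState(PartitionState.WRITE_LOCKED);
                        target = p;
                        break;
                    }
                }
            }
            if (target == null) return false;        // (A22) the whole cache is full
            if (target.tryWrite(group)) return true; // (A21-2)/(A21-3) write succeeded
            target.setState(PartitionState.FULL);    // write failed: mark full, retry from (A1)
        }
    }
}
```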

In the above step (A21-3), whether the data were written successfully can be determined by judging whether the selected write-locked cache partition is full. When the externally sent group of data is written to the write-locked cache partition and the partition is determined not to be full, there is enough space for the current data and the write is judged successful once the group of data has been written. When the selected write-locked cache partition is determined to be full, there is not enough space for new data, the group of data cannot be written to that partition and the write is judged to have failed; in this case the cache partition must be set to the full state and the method proceeds to step (A1) to write this group of data into another suitable cache partition.

Determining that the write-locked cache partition is full specifically comprises the following steps. When externally sent data are written to the write-locked cache partition, the size of the group of data is first compared with the remaining space of the write-locked cache partition: if the external data are larger than the remaining space, the write-locked cache partition is judged full; if the external data are not larger than the remaining space, the write-locked cache partition is judged not full and this group of data can continue to be written. Alternatively, active reporting can be used to judge whether the write-locked cache partition is full: if the system actively reports an error when externally sent data are written to the write-locked cache partition, the partition is judged full.
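
A small sketch of the size-comparison variant of the fullness check described above; capacityBytes and usedBytes are assumed bookkeeping fields of this illustration, not fields named by the patent.

```java
// Sketch of the "compare incoming size with remaining space" fullness check.
final class FullnessCheck {
    long capacityBytes;  // total space of the write-locked partition (assumed field)
    long usedBytes;      // space already written (assumed field)

    // True when the incoming group does not fit, i.e. the write-locked partition
    // must be marked FULL and the group written to another partition.
    boolean wouldOverflow(long incomingBytes) {
        return incomingBytes > capacityBytes - usedBytes;
    }
}
```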

(A22) If there is no idle cache partition, the current cache is completely full, and the writing process for this group of data ends.

Data reading comprises the following steps. (B) Data reading is performed on the basis of a single cache partition. (B1) Monitor in real time whether a cache partition in the full state exists: if there is a full cache partition, (B11) select one full cache partition and set it to the read-locked state; (B12) read the cached data in that read-locked cache partition, and when the read-locked cache partition has been read completely, set it to the idle state and return to step (B1) to continue reading the next cache partition. If there is no full cache partition, return to step (B1) and check at the next moment whether a full cache partition exists in order to continue reading data.
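
A matching sketch of the read path (B1)-(B12), again using the assumed CachePartition helper from the write-path sketch above; the polling loop and method names are illustrative only.

```java
import java.util.List;

// Sketch of steps (B1)-(B12): poll for a FULL partition, read-lock it,
// drain its data, then return it to IDLE.
final class ReadPath {
    static byte[] readNextGroup(List<CachePartition> partitions) {
        while (true) {
            CachePartition full = null;
            for (CachePartition p : partitions) {          // (B1) look for a FULL partition
                if (p.getState() == PartitionState.FULL) { full = p; break; }
            }
            if (full == null) continue;                    // nothing readable yet, keep polling
            full.setState(PartitionState.READ_LOCKED);     // (B11) read-lock it
            byte[] data = full.drain();                    // (B12) read out the cached data
            full.setState(PartitionState.IDLE);            // reading done: partition is idle again
            return data;
        }
    }
}
```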

In addition, when the entire data writing process ends, if there is a cache partition that is not yet full, that cache partition is set to the full state so that its data can be read. Because the size of the external data cannot be determined, when the data writing process ends within a given time period the received external data may not fill a complete cache partition, or the last segment of data may not fill a complete cache partition; in order to read this partial data, the cache partition into which it was written is also set to the full state.

Before step (B1), the method further includes monitoring in real time whether a read-locked cache partition exists; if so, go directly to step (B12) and read the cached data in the selected read-locked cache partition; if not, go to step (B1) and determine whether a full cache partition exists in order to continue reading data.

To simplify the management of data writing and reading, among all the cache partitions only one write-locked cache partition and one read-locked cache partition are allowed at any given time; that is, the cache performs data writing on at most one cache partition and/or data reading on at most one cache partition at the same time. On the one hand, this makes writing and reading more orderly: only after one cache partition has been completely written or read can the next partition be written or read. On the other hand, it also makes full use of the space of the cache partitions, ensuring that every cache partition is used effectively and avoiding the situation where only part of the space of several cache partitions is used.

On this basis, the caching method of the present invention also defines the order in which the cache partitions are accessed, in order to better manage data writing and reading.

During data writing, the cache partitions are accessed sequentially to monitor whether a write-locked or idle cache partition exists. That is, when the writing thread looks for a write-locked or idle cache partition, it arbitrarily chooses one cache partition as a starting point and then visits the partitions cyclically in order. Specifically, after construction and initialization each cache partition is generally in the idle state. Because only one cache partition can be in the write-locked state at a time, during data writing the writing thread visits the cache partitions in order starting from the starting point: the idle cache partition at the starting point is first set to the write-locked state for data writing; once it is full, the next cache partition is set to the write-locked state and the writing thread moves directly to it to continue writing, until it in turn is full, and so on in a cycle.

Because the timing and volume of data writes cannot be controlled, and in order to handle bursts of incoming data better, each cache partition further contains multiple cache sectors of the same size; the cache sectors are numbered one by one and each contains multiple cache pages of the same size, and the size of each group of written data is set equal to the cache page size. When a write-locked cache partition has been selected for data writing, the groups of data are written in order into the cache pages of the cache sectors: the first group of data is written into the first cache page of the first cache sector, the next group into the second cache page of the first cache sector, and so on until that cache sector is full; the cache sector then updates its internal count to determine the current sector number and automatically writes the unfinished data or the next group of data into the first cache page of the next cache sector, until all cache pages in the cache partition are full. The corresponding steps for judging that the cache partition is full during data writing are as follows: compare the current cache sector number with the maximum sector number of the cache partition; if the current sector number is less than the maximum sector number, the next group of data can be written successfully in this cache partition; if the current sector number equals the maximum sector number, the cache partition is judged full and the method proceeds to step (A1) to write the next group of data into the next cache partition.
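
The sector/page bookkeeping described above can be sketched as simple index arithmetic; the assumption that one group of data fills exactly one page follows the text, but the class and field names are illustrative.

```java
// Sketch of hierarchical writes: pages fill a sector, sectors fill a partition.
final class PartitionLayout {
    final int sectorsPerPartition;  // number of equally sized cache sectors
    final int pagesPerSector;       // number of equally sized cache pages per sector
    int pagesWritten;               // internal count maintained by the partition

    PartitionLayout(int sectors, int pages) {
        this.sectorsPerPartition = sectors;
        this.pagesPerSector = pages;
    }

    // The current sector number is derived from the page count.
    int currentSector() { return pagesWritten / pagesPerSector; }

    // One group of data fills exactly one page; returns false once the current
    // sector number reaches the maximum, i.e. the partition must be marked FULL.
    boolean writeOnePage() {
        if (currentSector() >= sectorsPerPartition) return false;
        pagesWritten++;
        return true;
    }
}
```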

When reading data, the data may be read in the same order as the data write accesses, so that as soon as a cache partition becomes full and is set to the full state, the data reading thread can access it immediately; the cache partition is then set to the read-locked state and its data are read. This saves time and improves read-out efficiency. Alternatively, whether a full cache partition exists can be monitored in real time by traversal or by active reporting from a cache partition once it is full, and data are then read from it.

In addition, considering that a large volume of external data combined with limited cache storage capacity could lead to data loss, in the caching method of the present invention the data reading speed is set to be greater than the data writing speed.

The working process is further explained below by applying the caching method of the present invention to the basic cache model, i.e. the scenario with a single producer and a single consumer. The entire cache is divided into several cache partitions, and access protection of the cache partitions for the producer and the consumer is achieved through simple state locks. After construction and initialization, each cache partition is in the idle state. Before data are written into a cache partition, the method first checks in real time whether an idle cache partition exists; if one exists, the data write operation is performed: the cache partition is set to the write-locked state (write locking is supported only when the cache partition is idle) and the data writing thread writes data into that cache partition. When the cache partition is judged full, it is set to the readable state (i.e. full and unlocked) on the one hand, and on the other hand another free partition in the cache is sought and the data write operation continues cyclically. While data are being written, the data reading thread is also accessing the cache partitions to perform data read operations: it visits the cache partitions in real time to determine whether a readable cache partition exists. When the data reading thread finds that the cache partition it is visiting has become readable, it must first set that partition to the read-locked state before reading data from it; read locking is supported only when the cache partition is in the readable state. If an attempt to apply a read lock or write lock to a cache partition fails, it means that the access positions of the network receiving thread and the data processing thread have met; if a write-lock-related operation fails, it means the cache is full.

The concurrent data caching method of the present invention can also be used in a cache composed of multiple basic cache models. The specific steps are as described above: each basic cache model uses the concurrent data caching method for data writing and reading, which is not repeated here.

When this method is applied to data acquisition in a PET device, the NETTY network framework defines an independent data receiving thread that acts as the producer, and a custom worker thread acts as the consumer, responsible for parsing network data packets, coincidence computation of events, and other processing. The access lock of a cache partition is implemented using the CAS mechanism provided by the JAVA development environment; its basic principle is to implement the setting of a cache partition's state as an atomic operation. Specifically, access to the lock object is an atomic operation: if an operation is atomic, higher layers cannot observe its internal implementation and structure. An atomic operation may consist of one step or several steps, but their order cannot be disturbed and the operation cannot be split and only partially executed. CAS stands for Compare and Set: the value of an object is checked and, if a certain condition is satisfied, the object is set to a new value. The CAS mechanism allows the value of an object to be accessed and updated without thread locks, and its performance is better than schemes based on mutex locks.
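
A minimal sketch of such a CAS-based state lock using java.util.concurrent.atomic; the patent only states that the lock is built on the JAVA CAS mechanism, so the class and method names here are assumptions (PartitionState is the enum sketched earlier).

```java
import java.util.concurrent.atomic.AtomicReference;

// Each state transition is a single atomic compareAndSet, so producer and
// consumer never need a mutex; a failed CAS means the other thread got there
// first (for write locking, it can also mean the cache is full).
final class PartitionStateLock {
    private final AtomicReference<PartitionState> state =
            new AtomicReference<>(PartitionState.IDLE);

    boolean tryWriteLock() {   // only an IDLE partition may be write-locked
        return state.compareAndSet(PartitionState.IDLE, PartitionState.WRITE_LOCKED);
    }
    void markFull() {          // the write-locked partition has been filled
        state.compareAndSet(PartitionState.WRITE_LOCKED, PartitionState.FULL);
    }
    boolean tryReadLock() {    // only a FULL partition may be read-locked
        return state.compareAndSet(PartitionState.FULL, PartitionState.READ_LOCKED);
    }
    void release() {           // reading finished: the partition is idle again
        state.compareAndSet(PartitionState.READ_LOCKED, PartitionState.IDLE);
    }
}
```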

A single cache partition serves as the basic collection unit for event-screening samples, and its capacity can be configured flexibly to adjust the coincidence-event loss rate caused by data fragmentation. To optimize the data processing speed, the Fork/Join parallel computing model provided by the JAVA development environment is used both for network packet parsing and for coincidence screening. Every cache partition has the same capacity, and the capacity can be configured flexibly. In theory, the larger the cache partition the better, since this reduces data loss, but a larger cache partition also places higher demands on physical memory; the cache partition capacity can therefore be chosen according to the size of the physical memory. The Fork/Join parallel computing model follows the Map/Reduce principle: a complex task is divided and conquered, and computing efficiency is improved by exploiting the multi-core, multi-threading capability of modern computers. For example, to compute the sum of N numbers, the N numbers are divided into M parts, M threads execute the computation concurrently, and the M results are then merged into a single result.
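
The summation example in the paragraph above maps directly onto a Fork/Join RecursiveTask. The sketch below follows that example; the threshold value is an arbitrary assumption.

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Fork/Join sketch of "sum N numbers by splitting them into parts": the range
// is split recursively, the halves run in parallel, and the partial sums are
// joined back into a single result.
final class SumTask extends RecursiveTask<Long> {
    private static final int THRESHOLD = 10_000;  // arbitrary split threshold
    private final long[] data;
    private final int from, to;

    SumTask(long[] data, int from, int to) {
        this.data = data; this.from = from; this.to = to;
    }

    @Override
    protected Long compute() {
        if (to - from <= THRESHOLD) {             // small enough: sum directly
            long s = 0;
            for (int i = from; i < to; i++) s += data[i];
            return s;
        }
        int mid = (from + to) >>> 1;              // otherwise fork the two halves
        SumTask left = new SumTask(data, from, mid);
        SumTask right = new SumTask(data, mid, to);
        left.fork();
        return right.compute() + left.join();     // merge the partial sums
    }

    public static void main(String[] args) {
        long[] numbers = new long[1_000_000];
        java.util.Arrays.fill(numbers, 1L);
        long sum = new ForkJoinPool().invoke(new SumTask(numbers, 0, numbers.length));
        System.out.println(sum);                  // prints 1000000
    }
}
```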

The present invention also discloses a cache structure for concurrent data that writes and reads data simultaneously, comprising a data writing thread 100, a data reading thread 200, a data cache module 300, and a partition control module 400.

The data cache module 300 includes multiple cache partitions 310. The partition control module 400 is communicatively connected with the data cache module 300 and controls the working state of each cache partition 310 in the data cache module 300; that is, the partition control module 400 sets the working state of each cache partition 310 to any one of the idle state, write-locked state, full state, and read-locked state. Write locking is supported only when a cache partition 310 is idle, and the data writing thread 100 writes data only when the working state of the cache partition 310 is write-locked; read locking is supported only when a cache partition 310 is full, and the data reading thread 200 reads data only when the working state of the cache partition 310 is read-locked. The partition control module 400 also controls the order in which the cache partitions 310 are accessed. The data writing thread 100 and the data reading thread 200 are each communicatively connected with the cache partitions 310 via the partition control module 400; they access the cache partitions of the data cache module 300 in the access order determined by the partition control module 400 and then perform data write or read operations according to the state of the cache partition they access.

To facilitate data writing and reading, the partition control module 400 handles read and write operations separately; it includes a write lock judgment unit 410 and a read lock judgment unit 420, and among all the cache partitions only one write-locked cache partition and one read-locked cache partition exist at the same time.

The write lock judgment unit 410 is communicatively connected with the data cache module 300 and controls the access order, write locking (setting an idle cache partition to the write-locked state) and write unlocking (setting a fully written, write-locked cache partition to the full state) of the cache partitions 310 in the data cache module 300. The data writing thread 100 is communicatively connected with the write lock judgment unit 410; it accesses the cache partitions 310 in the access order determined by the write lock judgment unit 410 and performs data write operations according to the working state of the currently accessed cache partition 310. In one embodiment, to keep the data input of the cache partitions relatively balanced, the write lock judgment unit 410 uses a customized traversal strategy to regulate the distribution of data in the cache module, namely sequential traversal: any cache partition can be chosen as the starting point, and the cache partitions are then controlled in numbering order. After the cache has been built, the cache partitions are normally all idle. The write lock judgment unit 410 sets the idle cache partition at the starting point to the write-locked state, and the data writing thread 100 accesses that cache partition in the access order determined by the write lock judgment unit 410; since it is write-locked, data writing can begin. When that cache partition is full, the write lock judgment unit 410 sets it to the full state and sets the idle cache partition with the next number to the write-locked state; the data writing thread 100 then continues, as determined by the write lock judgment unit 410, to access the next numbered cache partition and perform the data write operation.

The read lock judgment unit 420 is also communicatively connected with the corresponding data cache module 300 and controls the access order, read locking (setting a full cache partition to the read-locked state) and read unlocking (setting a completely read cache partition to the idle state) of the cache partitions in the data cache module 300. The data reading thread 200 is communicatively connected with the read lock judgment unit 420 to determine, according to the current state of the accessed cache partition, whether to read the cached data. While data are being written, the data reading thread 200 also performs data read operations. The access order for data reading may be the same as the access order for data writing, i.e. the read lock judgment unit 420 makes the data reading thread 200 access the cache partitions in the access order determined by the write lock judgment unit 410. With this arrangement, as soon as a cache partition is full and set to the full state, the data reading thread 200 can access it immediately; the cache partition is then set to the read-locked state and its data are read, which further improves read-out efficiency. Alternatively, the read lock judgment unit 420 may use a traversal order, or the cache partitions may actively report when they are full, in which case the read lock judgment unit 420 determines the access order of the data reading thread 200 from the order in which the cache partitions report.
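
To show how these numbered components might fit together, a hedged skeleton is given below with the reference numerals of Figure 5 in the comments; all class and method names are assumptions of this sketch, since the patent defines roles rather than an API, and CachePartition/PartitionState are the assumed helpers used in the earlier sketches.

```java
import java.util.List;

// Skeleton of the structure of Figure 5: a partition control module (400)
// mediates every access of the data writing thread (100) and data reading
// thread (200) to the cache partitions (310) of the data cache module (300).
final class PartitionControlModule {
    private final List<CachePartition> partitions;  // cache partitions 310

    PartitionControlModule(List<CachePartition> partitions) {
        this.partitions = partitions;
    }

    // Role of the write lock judgment unit 410: hand the writing thread the
    // partition it may write to, write-locking an idle one if necessary.
    CachePartition nextWritable() {
        for (CachePartition p : partitions) {
            if (p.getState() == PartitionState.WRITE_LOCKED) return p;
        }
        for (CachePartition p : partitions) {
            if (p.getState() == PartitionState.IDLE) {
                p.setState(PartitionState.WRITE_LOCKED);
                return p;
            }
        }
        return null;                                 // cache is completely full
    }

    // Role of the read lock judgment unit 420: hand the reading thread the
    // next full partition, read-locking it before it is drained.
    CachePartition nextReadable() {
        for (CachePartition p : partitions) {
            if (p.getState() == PartitionState.FULL) {
                p.setState(PartitionState.READ_LOCKED);
                return p;
            }
        }
        return null;                                 // nothing readable yet
    }
}
```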

Because the data writing process cannot be controlled by the user, and building on the above arrangement, in order to manage data writing effectively, reduce data packet loss, and improve cache utilization, each cache partition 310 is managed hierarchically. It includes a control unit 311 and multiple cache sectors 312; the cache sectors 312 are numbered one by one, and the control unit 311 is communicatively connected with all the cache sectors 312 of the cache partition 310 to control the order in which external data are written to the cache sectors 312 within the cache partition. Further, each cache sector 312 includes a control component 312-1 and multiple cache pages 312-2; the cache pages 312-2 are numbered one by one and the size of each group of written data is set equal to the size of a cache page 312-2, and the control component 312-1 is connected to each cache page 312-2 to control the order in which data are written to the cache pages within each cache sector.

When a cache partition has been set to the write-locked state and has successfully established communication with the data writing thread 100 for a data write operation, the write process is directed a second time inside the cache partition. In one embodiment, the control unit 311 applies sequential writing inside the cache partition as well. When a group of data is written, the data are written in order into the cache pages 312-2 of the cache sectors: the group of data is first written into the first cache page 312-2 of the first cache sector, the next group into the second cache page 312-2 of the first cache sector, and so on until that cache sector is full; the cache sector then updates its internal count to determine the current sector number and automatically writes the unfinished data or the next group of data into the first cache page 312-2 of the next cache sector, until all cache pages 312-2 in the cache partition are full. The corresponding steps for judging that the cache partition is full during data writing are as follows: in each cache partition, the control unit 311 compares the current cache sector number with the maximum sector number of the cache partition; if the current sector number is less than the maximum sector number, the next group of data can be written successfully in this cache partition; if the current sector number equals the maximum sector number, the cache partition is judged full and the next group of data is written into the next cache partition.

As a preferred solution, the data cache module 300 may follow a producer-consumer model.

In theory, the larger the cache partition the better, since this reduces data loss, but a larger cache partition also places higher demands on physical memory. Because the cache capacity is limited, the present invention further provides a cache model for concurrent data, which comprises two or more of the above cache structures for concurrent data. Each cache structure works on the same principle, which is not repeated here. In the above cache model for concurrent data, multiple data writing threads 100 are communicatively connected with external data sources at the same time to perform data write operations, and multiple data reading threads 200 are communicatively connected with external processing devices at the same time, reading out the data inside each cache promptly and sending them to the external processing devices for the next stage of data processing, thereby effectively enhancing the data caching capability.

The above description of the embodiments is intended to enable a person of ordinary skill in the art to understand and use the present invention. A person skilled in the art can obviously and easily make various modifications to these embodiments and apply the general principles described herein to other embodiments without creative effort. Therefore, the present invention is not limited to the above embodiments, and improvements and modifications made by those skilled in the art according to the disclosure of the present invention without departing from the scope of the present invention shall all fall within the protection scope of the present invention.

Claims (11)

Translated from Chinese

1. A caching method for concurrent data, characterized by comprising the following steps: the working state of each cache partition is set to any one of an idle state, a write-locked state, a full state and a read-locked state;
(A) when each group of data is written, (A1) monitoring in real time whether a write-locked cache partition exists: if there is a write-locked cache partition, going directly to step (A2-2) to write the group of data; if there is no write-locked cache partition, (A2) monitoring in real time whether an idle cache partition exists:
(A21) if there is an idle cache partition, (A21-1) selecting an idle cache partition and setting it to the write-locked state; (A21-2) writing the group of data to the write-locked cache partition; (A21-3) determining whether the data were written successfully: if the write succeeded, the writing of the group of data is complete and this data write ends; if the write failed, setting the current cache partition to the full state and returning to step (A1) to continue writing the group of data;
(A22) if there is no idle cache partition, ending the writing of the group of data;
step (B), when data are read, (B1) monitoring in real time whether a cache partition in the full state exists: if there is a full cache partition, (B11) selecting that full cache partition and setting it to the read-locked state; (B12) reading the cached data in the read-locked cache partition, and when the read-locked cache partition has been read completely, setting it to the idle state and returning to step (B1) to continue reading the next cache partition; if there is no full cache partition, returning to step (B1) and continuing data reading.

2. The caching method for concurrent data according to claim 1, characterized in that: among the cache partitions, only one write-locked cache partition and one read-locked cache partition exist at the same time;
preferably, when the entire data writing process ends, if there is a cache partition that is not yet full, that cache partition is set to the full state so that its data can be read.

3. The caching method for concurrent data according to claim 1 or 2, characterized in that: the cache partitions are accessed sequentially to monitor whether a write-locked or idle cache partition exists;
preferably, whether a full cache partition exists is monitored in real time by any one of: traversal, following the same order as the data write accesses, or active reporting by a cache partition once its write is complete.

4. The caching method for concurrent data according to claim 1, characterized in that: data write operations inside a cache partition are managed hierarchically, the cache partition comprises multiple cache sectors of the same size, the cache sectors are numbered one by one and each comprises multiple cache pages of the same size, and the size of each group of written data is set equal to the cache page size; in step (A21-3), determining whether the data were written successfully comprises the following steps: after each group of data is written to a cache sector of the cache partition, the sector count is updated internally to determine the current cache sector number, and the current cache sector number is compared with the maximum sector number of the cache partition: if the current sector number is less than the maximum sector number, it is determined that the next group of data can be written successfully in this cache partition; if the current sector number equals the maximum sector number, the cache partition is determined to be full and the method proceeds to step (A1) to write the next group of data into the next cache partition;
preferably, the data are written into the cache sectors in numbering order, the cache pages are numbered one by one, and when data are written into a cache sector they are written sequentially in the numbering order of the cache pages.

5. The caching method for concurrent data according to claim 1, characterized in that: in step (A21-3), determining that the data were written successfully comprises the following steps: writing the externally sent group of data to the write-locked cache partition; when the write-locked cache partition is determined not to be full, the write succeeds and data writing to this cache partition continues; when the write-locked cache partition is determined to be full or the external data have all been written, setting this cache partition to the full state and returning to step (A1) to continue writing the next group of external data;
further, determining that the write-locked cache partition is full comprises the following steps: when writing externally sent data to the write-locked cache partition, comparing the amount of external data with the remaining space of the write-locked cache partition, and if the amount of external data is greater than the remaining space of the write-locked cache partition, determining that the write-locked cache partition is full; or, if the system reports an error when externally sent data are written to the write-locked cache partition, determining that the write-locked cache partition is full.

6. The caching method for concurrent data according to claim 1, characterized in that: before step (B1), the method further comprises monitoring in real time whether a read-locked cache partition exists; if so, going directly to step (B12) to read the cached data in the read-locked cache partition; if not, going to step (B1);
preferably, the data reading speed is greater than the data writing speed;
preferably, the method is applied to a producer-consumer model.

7. A cache structure for concurrent data, characterized in that: it writes and reads data simultaneously and comprises a group of data writing threads, a group of data reading threads, a group of data cache modules and a partition control module;
the data cache module comprises multiple cache partitions;
the partition control module is communicatively connected with the data cache module to control the working states of the cache partitions and the order in which they are accessed, and the partition control module is configured to set the working state of each cache partition to any one of an idle state, a write-locked state, a full state and a read-locked state, wherein write locking is supported only when the cache partition is in the idle state, data writing is supported when the cache partition is in the write-locked state, read locking is supported only when the cache partition is in the full state, and data reading is supported when the cache partition is in the read-locked state;
the data writing thread and the data reading thread are each communicatively connected with the cache partitions via the partition control module, and the partition control module controls the order in which the data writing thread and the data reading thread access the cache partitions, so that the data writing thread and the data reading thread write and read data according to the working state of the accessed cache partition.

8. The cache structure according to claim 7, characterized in that: each cache partition comprises a control unit and multiple cache sectors, the cache sectors are numbered one by one, and the control unit is communicatively connected with each cache sector to control the order in which data are written to the cache sectors within each cache partition;
further preferably, each cache sector comprises a control component and multiple cache pages, the cache pages are numbered one by one and the size of each group of written data is set equal to the cache page size, and the control component is connected to each of the cache pages to control the order in which data are written to the cache pages within each cache sector.

9. The cache structure according to claim 7, characterized in that: the partition control module comprises a write lock judgment unit and a read lock judgment unit;
The cache structure according to claim 7, wherein the partition control module comprises a write lock judging unit and a read lock judging unit;所述写锁定判断单元与数据缓存模块通信连接,以控制数据缓存模块中缓存分区的访问顺序、写锁定与去写锁定,所述数据写入线程与所述写锁定判断单元通信连接以根据所述写锁定判断单元确定的访问顺序访问的缓存分区,并根据当前访问缓存分区的工作状态进行数据的写入操作;The write lock judging unit is communicatively connected to the data cache module to control the access sequence, write lock and write lock of the cache partitions in the data cache module, and the data writing thread is communicatively connected to the write lock judging unit to Describe the cache partitions accessed in the access sequence determined by the write lock judgment unit, and perform data write operations according to the working status of the currently accessed cache partitions;所述读锁定判断单元与相应的数据缓存模块通信连接,以控制数据缓存模块中缓存分区的访问顺序、读锁定与去读锁定,所述数据读取线程与所述读锁定判断单元通信连接以根据所访问的缓存分区的当前状态判断是否读取缓存数据。The read lock judging unit is communicatively connected with the corresponding data cache module to control the access sequence, read lock and de-read lock of the cache partitions in the data cache module, and the data read thread is communicatively connected with the read lock judging unit to Whether to read cached data is judged according to the current state of the accessed cache partition.10.根据权利要求7所述的缓存结构,其特征在于:同时只存在一组写锁定状态缓存分区与一组读锁定状态缓存分区。10. The cache structure according to claim 7, wherein there are only one set of write-locked state cache partitions and one set of read-locked state cache partitions at the same time.优选的,所述数据缓存模块为生产者与消费者模型。Preferably, the data cache module is a producer and consumer model.11.一种并发数据的缓存模型,其特征在于:同时包括至少两组以上如权利要求7所述的并发数据的缓存结构。11. A cache model for concurrent data, characterized in that it includes at least two or more cache structures for concurrent data according to claim 7.
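For readers who want to see the claimed state machine in executable form, the following is a minimal illustrative sketch in Python. It is not the patented implementation: the names (State, Partition, PartitionPool, write_group, read_full_partition) and the capacity counter are assumptions introduced here; it models only claim 1's partition states, the write steps (A1)-(A22) and the read steps (B1)-(B12); and it assumes a single writer thread and a single reader thread so that, as in claim 2, at most one write-locked and one read-locked partition exist at a time.

import threading
from enum import Enum, auto

class State(Enum):
    IDLE = auto()
    WRITE_LOCKED = auto()
    FULL = auto()
    READ_LOCKED = auto()

class Partition:
    def __init__(self, capacity):
        self.state = State.IDLE      # every partition starts idle
        self.capacity = capacity     # how many data groups it can hold
        self.data = []

class PartitionPool:
    def __init__(self, n_partitions, capacity):
        self.state_lock = threading.Lock()   # guards state transitions only
        self.partitions = [Partition(capacity) for _ in range(n_partitions)]

    def _find(self, state):
        return next((p for p in self.partitions if p.state == state), None)

    def write_group(self, group):
        # Steps (A1)-(A22): returns False when no partition can accept the group.
        while True:
            with self.state_lock:
                part = self._find(State.WRITE_LOCKED)       # (A1)
                if part is None:
                    part = self._find(State.IDLE)           # (A2)
                    if part is None:
                        return False                        # (A22): nowhere to write
                    part.state = State.WRITE_LOCKED         # (A21-1)
            if len(part.data) < part.capacity:              # (A21-2)/(A21-3)
                part.data.append(group)                     # write succeeded
                return True
            with self.state_lock:                           # write failed: partition is full
                part.state = State.FULL
            # loop back to (A1) and retry the same group in another partition

    def read_full_partition(self):
        # Steps (B1)-(B12): drains one full partition, or returns None if none is full.
        with self.state_lock:
            part = self._find(State.FULL)                   # (B1)
            if part is None:
                return None
            part.state = State.READ_LOCKED                  # (B11)
        data, part.data = part.data, []                     # (B12): take the cached data
        with self.state_lock:
            part.state = State.IDLE                         # partition becomes reusable
        return data

# Example use: a producer fills groups, a consumer drains full partitions.
pool = PartitionPool(n_partitions=4, capacity=1024)
pool.write_group(b"event-0")
batch = pool.read_full_partition()   # None until some partition has filled up

In this sketch a partition's buffered data is touched only while it is write-locked (by the writer) or read-locked (by the reader), so the shared lock protects nothing but the brief state transitions; that separation of states is what lets writing and reading proceed concurrently without contending on the buffered data itself.
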
CN201610210432.4A | 2016-04-07 | 2016-04-07 | Concurrent data caching method and device | Active | CN105912479B (en)

Priority Applications (2)

Application Number | Priority Date | Filing Date | Title
CN201610210432.4A CN105912479B (en) | 2016-04-07 | 2016-04-07 | Concurrent data caching method and device
PCT/CN2017/077486 WO2017173919A1 (en) | 2016-04-07 | 2017-03-21 | Concurrent data caching method and structure

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN201610210432.4A CN105912479B (en) | 2016-04-07 | 2016-04-07 | Concurrent data caching method and device

Publications (2)

Publication Number | Publication Date
CN105912479A (en) | 2016-08-31
CN105912479B (en) | 2023-05-05

Family

ID=56745376

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201610210432.4AActiveCN105912479B (en)2016-04-072016-04-07Concurrent data caching method and device

Country Status (2)

Country | Link
CN (1) | CN105912479B (en)
WO (1) | WO2017173919A1 (en)


Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN111367666A (en)*2020-02-282020-07-03深圳壹账通智能科技有限公司 Data reading and writing method and system
CN111913657B (en)*2020-07-102023-06-09长沙景嘉微电子股份有限公司Block data read-write method, device, system and storage medium
CN112416816B (en)*2020-12-082025-01-24上证所信息网络有限公司 A write-one-multiple-read high-concurrency lock-free circular cache and its implementation method
CN115357199B (en)*2022-10-192023-02-10安超云软件有限公司Data synchronization method, system and storage medium in distributed storage system


Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US9223638B2 (en)*2012-09-242015-12-29Sap SeLockless spin buffer
CN103218176B (en)*2013-04-022016-02-24中国科学院信息工程研究所Data processing method and device
CN105912479B (en)*2016-04-072023-05-05合肥锐世数字科技有限公司Concurrent data caching method and device
CN105868123B (en)*2016-04-072018-10-09武汉数字派特科技有限公司A kind of buffer storage and method of concurrent data

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US5872980A (en)*1996-01-251999-02-16International Business Machines CorporationSemaphore access control buffer and method for accelerated semaphore operations
US20110320687A1 (en)*2010-06-292011-12-29International Business Machines CorporationReducing write amplification in a cache with flash memory used as a write cache
CN102298561A (en)*2011-08-102011-12-28北京百度网讯科技有限公司Method for conducting multi-channel data processing to storage device and system and device
CN102325010A (en)*2011-09-132012-01-18浪潮(北京)电子信息产业有限公司 A processing device and method for avoiding data sticking packets
CN103257888A (en)*2012-02-162013-08-21阿里巴巴集团控股有限公司Method and equipment for concurrently executing read and write access to buffering queue
CN102799537A (en)*2012-06-182012-11-28北京空间飞行器总体设计部Management method for dual-port RAM (Random Access Memory) buffer in spacecraft AOS (Advanced Orbiting System)
US20150295859A1 (en)*2012-10-262015-10-15Zte CorporationData caching system and method for ethernet device
US20140281248A1 (en)*2013-03-162014-09-18Intel CorporationRead-write partitioning of cache memory
CN103150149A (en)*2013-03-262013-06-12华为技术有限公司Method and device for processing redo data of database
US9003131B1 (en)*2013-03-272015-04-07Parallels IP Holdings GmbHMethod and system for maintaining context event logs without locking in virtual machine
CN104424133A (en)*2013-08-282015-03-18韦伯斯特生物官能(以色列)有限公司Double buffering with atomic transactions for the persistent storage of real-time data flows
CN103412786A (en)*2013-08-292013-11-27苏州科达科技股份有限公司High performance server architecture system and data processing method thereof
CN103914565A (en)*2014-04-212014-07-09北京搜狐新媒体信息技术有限公司Method and device for inserting data into databases
CN104881258A (en)*2015-06-102015-09-02北京金山安全软件有限公司Buffer concurrent access method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
涂振发 et al.: "面向分布式GIS空间数据的Key-value缓存" (Key-value caching for distributed GIS spatial data), 《武汉大学学报》 (Journal of Wuhan University) *

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
WO2017173919A1 (en)*2016-04-072017-10-12武汉数字派特科技有限公司Concurrent data caching method and structure
CN109213691A (en)*2017-06-302019-01-15伊姆西Ip控股有限责任公司Method and apparatus for cache management
CN109213691B (en)*2017-06-302023-09-01伊姆西Ip控股有限责任公司Method and apparatus for cache management
CN114305470A (en)*2017-11-102022-04-12湖北锐世数字医学影像科技有限公司 Coincidence screening device for all-digital PET
CN110647477A (en)*2018-06-272020-01-03广州神马移动信息科技有限公司Data caching method, device, terminal and computer readable storage medium
CN110647477B (en)*2018-06-272022-02-11阿里巴巴(中国)有限公司Data caching method, device, terminal and computer readable storage medium
CN110874273A (en)*2018-08-312020-03-10阿里巴巴集团控股有限公司Data processing method and device
CN110874273B (en)*2018-08-312023-06-13阿里巴巴集团控股有限公司Data processing method and device
CN109617825B (en)*2018-11-302022-03-25京信网络系统股份有限公司Message processing device, method and communication system
CN109617825A (en)*2018-11-302019-04-12京信通信系统(中国)有限公司 Message processing device, method and communication system
CN109711323B (en)*2018-12-252021-06-15武汉烽火众智数字技术有限责任公司Real-time video stream analysis acceleration method, device and equipment
CN109711323A (en)*2018-12-252019-05-03武汉烽火众智数字技术有限责任公司A kind of live video stream analysis accelerated method, device and equipment
CN112804003A (en)*2021-02-192021-05-14上海剑桥科技股份有限公司Optical module communication-based storage method, system and terminal
CN114035746A (en)*2021-10-282022-02-11中国科学院声学研究所High-sampling-rate data real-time acquisition and storage method and acquisition and storage system
CN114035746B (en)*2021-10-282023-06-16中国科学院声学研究所High sampling rate data real-time acquisition and storage method and acquisition and storage system
CN114217587A (en)*2021-12-152022-03-22之江实验室Real-time comparison and aggregation method for multiple types of data of mimicry executive body

Also Published As

Publication number | Publication date
WO2017173919A1 (en) | 2017-10-12
CN105912479B (en) | 2023-05-05

Similar Documents

Publication | Publication Date | Title
CN105912479B (en)Concurrent data caching method and device
CN105868123B (en)A kind of buffer storage and method of concurrent data
JP7640459B2 (en) Computational Data Storage System
US9444737B2 (en)Packet data processor in a communications processor architecture
US7971029B2 (en)Barrier synchronization method, device, and multi-core processor
US8943507B2 (en)Packet assembly module for multi-core, multi-thread network processors
US20150127880A1 (en)Efficient implementations for mapreduce systems
EP3285187A1 (en)Optimized merge-sorting of data retrieved from parallel storage units
US20120131283A1 (en)Memory manager for a network communications processor architecture
CN107864391B (en)Video stream cache distribution method and device
CN107220348A (en)A kind of method of data capture based on Flume and Alluxio
CN104135496B (en)RPC data transmission methods and system under a kind of homogeneous environment
CN118860290A (en) NVMe write data processing method, terminal and storage medium
Liu et al.The research and analysis of efficiency of hardware usage base on HDFS
CN114116790A (en)Data processing method and device
US11734551B2 (en)Data storage method for speech-related DNN operations
US8341368B2 (en)Automatic reallocation of structured external storage structures
CN112306628B (en)Virtual network function resource management system based on multi-core server
CN116382599B (en)Distributed cluster-oriented task execution method, device, medium and equipment
DE102020133262A1 (en) Workload scheduler for memory allocation
Li et al.Improving spark performance with zero-copy buffer management and RDMA
EP4432087A1 (en)Lock management method, apparatus and system
CN117215897A (en)Hot spot cache dynamic monitoring method, device, equipment and medium
CN102829869B (en)Ground test system for large-aperture static interference imaging spectrometer
CN112698950A (en)Memory optimization method for industrial Internet of things edge equipment

Legal Events

Date | Code | Title | Description
C06 | Publication
PB01 | Publication
C10 | Entry into substantive examination
SE01 | Entry into force of request for substantive examination
CB02 | Change of applicant information

Address after:230000 China (Anhui) pilot Free Trade Zone, Hefei, Anhui Province, the first floor of building C2, national health big data Industrial Park, the intersection of Xiyou road and kongtai Road, Hefei high tech Zone

Applicant after:Hefei Ruishi Digital Technology Co.,Ltd.

Address before:430074 No. 666 High-tech Avenue, Donghu Development Zone, Wuhan City, Hubei Province, B1 R&D Building, Area B, C and D, Wuhan National Biological Industrial Base Project

Applicant before:THE WUHAN DIGITAL PET Co.,Ltd.

CB02 | Change of applicant information
GR01 | Patent grant
GR01 | Patent grant
PE01 | Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: A method and device for caching concurrent data

Granted publication date: 20230505

Pledgee: Hefei Xingtai Technology Micro-loan Co., Ltd.

Pledgor: Hefei Ruishi Digital Technology Co., Ltd.

Registration number: Y2025980010487

PE01 | Entry into force of the registration of the contract for pledge of patent right
