CN112667847A - Data caching method, data caching device and electronic equipment - Google Patents

Data caching method, data caching device and electronic equipment

Info

Publication number
CN112667847A
Authority
CN
China
Prior art keywords
data
level
read
buffers
type
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910983939.7A
Other languages
Chinese (zh)
Inventor
杨耀华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing QIYI Century Science and Technology Co Ltd
Original Assignee
Beijing QIYI Century Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing QIYI Century Science and Technology Co Ltd
Priority to CN201910983939.7A
Publication of CN112667847A
Legal status: Pending (current)

Abstract

The invention provides a data caching method, a data caching device and electronic equipment, wherein the data caching device is provided with an N-level cache, and the data caching method comprises the following steps: acquiring data; if the data is first-type data, caching the data in each level of the N-level cache; if the data is second-type data, caching the data in each level of the N-level cache except the first-level cache; the access speed of the first-level cache is higher than that of the remaining levels. The invention ensures that first-type data is cached quickly and read with high real-time performance and high availability, while also completely preserving every type of data, increasing data safety and reliability. In addition, the capacity of the other cache levels can be expanded to meet rapidly growing data caching demand at a reduced expansion cost.

Description

Data caching method, data caching device and electronic equipment
Technical Field
The invention relates to a data caching method, a data caching device and electronic equipment.
Background
With the rapid development of internet technology, data volume grows continuously and the demand for data caching keeps increasing. Taking video data as an example: to play video data in real time, a large amount of it must be cached in a cache database. When the data volume is small, the cache database can meet this requirement; as the data volume grows, however, it no longer can. When the remaining cache space shrinks, the cache database may trigger automatic memory cleaning, so data is easily lost. Although the storage capacity of the cache database can be increased by horizontal scaling, the expansion cost is high. Existing data caching methods therefore cannot meet the data caching requirement.
Disclosure of Invention
The embodiment of the invention provides a data caching method, a data caching device and electronic equipment, and aims to solve the problem that the existing data caching method cannot meet the data caching requirement.
In order to solve the technical problem, the invention is realized as follows:
in a first aspect, an embodiment of the present invention provides a data caching method, which is applied to a data caching apparatus having an N-level cache, where N is an integer greater than 1; the method comprises the following steps:
acquiring data;
if the data is the first type data, caching the data in each level of cache of the N-level cache respectively;
if the data is the second type of data, caching the data in each level of cache except the first level of cache in the N-level cache respectively;
the access speed of the first-level buffer is higher than that of the rest of the buffers in each level.
In a second aspect, an embodiment of the present invention provides a data caching apparatus, where the apparatus has an N-level buffer, where N is an integer greater than 1; the device comprises:
the acquisition module is used for acquiring data;
the cache module is used for caching the data in each level of cache of the N-level cache if the data is the first type of data; if the data is the second type of data, caching the data in each level of cache except the first level of cache in the N-level cache respectively;
the access speed of the first-level buffer is higher than that of the rest of the buffers in each level.
In a third aspect, an embodiment of the present invention provides a data caching apparatus, including an N-level cache, where N is an integer greater than 1;
a first level buffer in the N-level buffers is used for storing first type data, and each level buffer except the first level buffer in the N-level buffers is used for storing the first type data and second type data;
the access speed of the first-level buffer is higher than that of the rest of the buffers in each level.
In a fourth aspect, an embodiment of the present invention provides an electronic device, including a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, the data caching method in the first aspect of the embodiment of the present invention is implemented.
In a fifth aspect, an embodiment of the present invention provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps in the data caching method in the first aspect of the embodiment of the present invention.
In the embodiment of the invention, the data is cached in a grading way by adopting the multi-level cache, only the first type data is cached in the first level cache with the highest access speed, and all types of data are stored in other levels of cache, so that on one hand, the first type data can be cached in the first level cache to ensure the rapidity of the first type data in caching and the high real-time performance and high availability in reading, and on the other hand, all types of data are completely stored because all types of data are cached in other levels of cache, the risk of data loss is reduced, and the safety and the reliability of the data are increased. In addition, when the data volume is increased, the capacity of other levels of buffers can be expanded to meet the requirement of rapidly increasing data buffers, and compared with the capacity expansion of the first level of buffers, the capacity expansion cost is reduced.
Drawings
Fig. 1 is a schematic diagram of a three-level cache structure according to an embodiment of the present invention;
fig. 2 is a schematic flowchart of a data caching method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of data reading and loading according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a data caching apparatus according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of another data caching apparatus according to an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of a data processing apparatus according to an embodiment of the present invention;
FIG. 7 is a block diagram of another data processing apparatus according to an embodiment of the present invention;
FIG. 8 is a block diagram of another data processing apparatus according to an embodiment of the present invention;
fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The embodiment of the invention provides a data caching device, which comprises N-level caches, wherein N is an integer larger than 1, and the access speed of the first-level cache in the N-level caches is higher than that of the other caches in each level. The first-level buffer is used for storing first-type data, and each level of buffer except the first-level buffer in the N-level buffer is used for storing the first-type data and the second-type data.
Here, the access speed of the first-level buffer is higher than that of the remaining levels of buffers; the first-level buffer has the highest access speed among the N-level buffers and can serve as a cache.
In addition, since the access speed of the first-level buffer is the highest, the storage capacity of the first-level buffer is generally smaller than that of the other buffers in each level, and the capacity expansion cost of the first-level buffer is higher than that of the other buffers in each level. In view of this, in the embodiment of the present invention, the first-level buffer may be used to store the first type of data to ensure the rapidity of the first type of data in buffering, and the high real-time performance and high availability in reading, and the other-level buffers may be used to store the first type of data and the second type of data to ensure that the types of data are stored more completely, so as to reduce the risk of data loss and increase the security and reliability of the data.
In the embodiment of the present invention, the data to be buffered may be divided into first-type data and second-type data. The first-type data needs to be buffered in the first-level buffer, and may therefore be understood as data of higher importance, higher professionalism, or newer creation time, and so on. In contrast, the second-type data may be understood as data of lower importance, lower professionalism, or earlier creation time, and so on. The classification criteria for the first and second types of data are not limited in the embodiments of the present invention. Together, the first-type and second-type data constitute the full data, so each level of buffer other than the first-level buffer in the N-level buffers can be understood as a buffer for the full data.
Generally, the first type of data grows more slowly and the second type grows more rapidly, and the amount of first-type data is generally smaller than the amount of second-type data. Accordingly, the buffer capacity required for the first type of data is generally smaller than that required for the second type. Buffering the first-type data in the first-level buffer therefore reduces the capacity-expansion demand on the first-level buffer; as the data volume increases, generally only the other levels of buffers need to be expanded, which reduces the expansion cost compared with expanding the first-level buffer.
With the rapid development of internet technology, data can be roughly classified into two types: PGC (Professionally Generated Content) data and UGC (User Generated Content) data. Taking video data as an example, video data can be roughly divided into PGC video data and UGC video data. Considering that the amount of UGC video data is growing massively, and that UGC video data is less important and less professional than PGC video data, the PGC video data can be used as the first-type data and the UGC video data as the second-type data. When buffering, only the PGC video data may be buffered in the first-level buffer, while the full video data (i.e., both PGC and UGC video data) is buffered in each of the other levels.
Still taking video data as an example, the currently newest and hottest video data may be used as the first-type data (for example, the 5 million newest and hottest videos), and the other video data as the second-type data. During buffering, only the newest and hottest video data is buffered in the first-level buffer, while the full video data is buffered in each of the other levels. This classification is similar to the LRU (Least Recently Used) caching policy, which evicts the least recently used data in favor of the most recently read data. Such data tends to be read most often, so an LRU-style policy can improve the system's caching performance.
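The LRU-style selection of "newest and hottest" first-type data described above can be sketched as follows. This is an illustrative assumption: the class name `LRUHotSet` and its capacity parameter are hypothetical, not part of the patent, and an `OrderedDict` stands in for whatever bookkeeping a real deployment would use.

```python
from collections import OrderedDict

class LRUHotSet:
    """Tracks the most recently read keys; keys that fall out of the
    set would no longer qualify as first-type ("hot") data."""
    def __init__(self, capacity):
        self.capacity = capacity
        self._keys = OrderedDict()

    def touch(self, key):
        # Move key to the most-recently-used end; evict the LRU key when full.
        self._keys.pop(key, None)
        self._keys[key] = True
        if len(self._keys) > self.capacity:
            self._keys.popitem(last=False)  # drop least recently used

    def is_hot(self, key):
        return key in self._keys
```

A key stays "hot" only as long as it keeps being read, mirroring the eviction behavior the paragraph describes.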
Optionally, the reading sequence of the N-level buffer during data reading is to read data in the first-level buffer, and if data is not read in the first-level buffer, read data in other level buffers. Therefore, when the data exists in the first-level buffer, the data required by the user can be read at the fastest speed, and the data reading speed is improved.
Optionally, in the N-level buffers, the access speeds of the buffers of the respective levels are sequentially reduced, that is, the access speed of the first-level buffer is highest, and the access speeds of the buffers of the subsequent levels are sequentially reduced.
Furthermore, in the N-level buffers, the buffer capacities of the buffers of the respective levels are sequentially increased, that is, the buffer capacity of the first-level buffer is the smallest, and the buffer capacities of the subsequent buffers of the respective levels are sequentially increased.
Optionally, the reading sequence of the N-level buffer during data reading is to read data in the first-level buffer, and if data is not read in the first-level buffer, continue to penetrate to the next-level buffer to read data, and so on. Therefore, the reading sequence of the N-level buffer is matched with the access speed of the N-level buffer, and data required by a user can be read at the fastest speed during data reading, so that the data reading speed is improved.
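The layer-by-layer penetrating read described above can be sketched as a loop over the caches ordered fastest to slowest; plain dicts stand in for the N-level buffers (an assumption for illustration).

```python
def read_through(caches, key):
    """Try each cache level in turn, fastest first; on a miss, penetrate
    to the next (slower) level. Returns (level, value) on a hit and
    (None, None) if no level holds the key."""
    for level, cache in enumerate(caches, start=1):
        if key in cache:
            return level, cache[key]
    return None, None
```

Because the iteration order matches the access-speed order, a hit is always served from the fastest level that holds the data.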
Optionally, N is equal to 3, and the N-level buffers include the first-level buffer, the second-level buffer, and the third-level buffer;
the access speeds of the first-level buffer, the second-level buffer and the third-level buffer are reduced in sequence.
Further, the buffer capacities of the first-level buffer, the second-level buffer and the third-level buffer are sequentially increased.
The embodiment provides a three-level buffer architecture, in which a first type of data can be buffered in three levels of buffers simultaneously, and a second type of data can be buffered in two levels of buffers simultaneously, which meets the requirements of both the buffer cost and the data completeness maintenance.
For example, as shown in fig. 1, the first-level buffer may be a distributed Redis (REmote DIctionary Server) or a Redis cluster; the distributed Redis cluster can be regarded as analogous to a CPU cache and is used for caching data meeting a preset condition. The second-level buffer can be a Couchbase distributed buffer: its stability and real-time performance meet the requirements, its storage capacity is much larger than Redis's, and subsequent capacity expansion is more convenient than with Redis, so it can buffer the full data. The third-level buffer can be a low-cost HiKV distributed buffer; HiKV is similar to Hadoop in that each data block has three replicas, so its safety is guaranteed, and its data real-time performance is not inferior to Hadoop's. It can serve both as the data source during cache penetration and as a permanent buffer for the full data.
Among other things, Redis stores data in a dictionary (key-value) structure and is an open-source, advanced in-memory storage and data structure system that can be used as a database, cache, and message queue broker. It supports data types such as strings, hashes, lists, sets, sorted sets, bitmaps, and HyperLogLogs.
Couchbase resulted from the merger of CouchDB (an open-source document-oriented database management system) technology and Membase; it is a high-performance, highly scalable, and highly available distributed caching system.
The HiKV is a set of distributed KV (key-value) data storage solution, is mainly used for solving the storage and high-performance read-write access of mass KV data, and provides various data consistency models and multi-data center support.
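A minimal sketch of how the three tiers from fig. 1 could be wired together, with in-memory dicts standing in for the real Redis, Couchbase, and HiKV stores. Real deployments would use the respective client SDKs; the class and attribute names here are assumptions.

```python
class ThreeLevelCache:
    """Level 1: Redis cluster (fastest, smallest; first-type data only).
    Level 2: Couchbase (full data; easier to scale out than Redis).
    Level 3: HiKV (permanent, lowest-cost full-data store)."""
    def __init__(self):
        self.redis = {}      # stand-in for the Redis cluster
        self.couchbase = {}  # stand-in for the Couchbase cluster
        self.hikv = {}       # stand-in for HiKV

    @property
    def levels(self):
        # Ordered fastest to slowest, matching the read-penetration order.
        return [self.redis, self.couchbase, self.hikv]
```

Exposing the tiers as an ordered list lets the write and read routines described later treat the hierarchy uniformly.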
The embodiment of the invention also provides a data caching method, which is applied to a data caching device with N-level caches, wherein in the N-level caches, the access speed of the first-level cache is higher than that of the rest of the caches in each level.
As shown in fig. 2, the data caching method includes the following steps:
step 101: data is acquired.
The data acquired in this step is the data that needs to be buffered; its type is not limited, and may be, for example, document data, picture data, audio data, or video data.
In the embodiment of the invention, the type of the acquired data can be judged, and the data is cached in a grading way according to different types of data. If the data is the first type data, go to step 1021; if the data is the second type of data, step 1022 is executed.
Step 1021: and respectively buffering the data in each level buffer of the N-level buffers.
In this step, when the data is the first type of data, the data is buffered in every level of buffer, so that on one hand the data has multiple (two or more) backups, which improves data security. On the other hand, because the data is buffered in the first-level buffer, whose access speed is higher than that of the remaining levels, the data can be buffered quickly and read with high real-time performance and high availability.
Step 1022: and respectively buffering the data in each layer of buffer except the first layer of buffer in the N layers of buffers.
In this step, when the data is the second type of data, the data is buffered in every level of buffer except the first, so that on one hand the data has multiple (two or more) backups, which improves data security. On the other hand, when the data volume increases, the capacity of these other levels of buffers can be expanded, which reduces the expansion cost compared with expanding the first-level buffer.
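Steps 101, 1021, and 1022 can be sketched as a single write routine. `levels` is ordered fastest to slowest and plain dicts stand in for the buffers; the function and constant names are hypothetical.

```python
FIRST_TYPE, SECOND_TYPE = "first", "second"

def cache_data(levels, key, value, data_type):
    """First-type data is written to every level (step 1021); second-type
    data to every level except the first-level buffer (step 1022)."""
    start = 0 if data_type == FIRST_TYPE else 1
    for cache in levels[start:]:
        cache[key] = value
```

The only difference between the two branches is whether the write loop starts at the first-level buffer or skips it.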
In the embodiment of the invention, the data is cached in a grading way by adopting the multi-level cache, only the first type data is cached in the first level cache with the highest access speed, and all types of data are stored in other levels of cache, so that on one hand, the first type data can be cached in the first level cache to ensure the rapidity of the first type data in caching and the high real-time performance and high availability in reading, and on the other hand, all types of data are completely stored because all types of data are cached in other levels of cache, the risk of data loss is reduced, and the safety and the reliability of the data are increased. In addition, when the data volume is increased, the capacity of other levels of buffers can be expanded to meet the requirement of rapidly increasing data buffers, and compared with the capacity expansion of the first level of buffers, the capacity expansion cost is reduced.
Optionally, in the N-level buffers, the access speeds of the buffers of the respective levels are sequentially reduced, that is, the access speed of the first-level buffer is highest, and the access speeds of the buffers of the subsequent levels are sequentially reduced.
Furthermore, in the N-level buffers, the buffer capacities of the buffers of the respective levels are sequentially increased, that is, the buffer capacity of the first-level buffer is the smallest, and the buffer capacities of the subsequent buffers of the respective levels are sequentially increased.
Optionally, N is equal to 3, and the N-level buffers include the first-level buffer, the second-level buffer, and the third-level buffer;
the access speeds of the first-level buffer, the second-level buffer and the third-level buffer are reduced in sequence.
Further, the buffer capacities of the first-level buffer, the second-level buffer and the third-level buffer are sequentially increased.
The embodiment provides a three-level buffer architecture, in which a first type of data can be buffered in three levels of buffers simultaneously, and a second type of data can be buffered in two levels of buffers simultaneously, which meets the requirements of both the buffer cost and the data completeness maintenance.
Optionally, the method further includes:
deleting first data from an ith-level buffer when the first data exists in the ith-level buffer;
the i is an integer greater than or equal to 1 and less than N, and the first data is data which is read in a preset period and has a frequency lower than a preset frequency.
When i is equal to 1, since the access speed of the first-level buffer is fastest, the buffer capacity is correspondingly smaller, and the capacity expansion cost is correspondingly higher, when data with lower activity exists in the data cached in the first-level buffer, in order to improve the utilization rate of the first-level buffer and reduce the cache cost, the data can be deleted from the first-level buffer. Since the other hierarchical buffers cache the data, even if the data is deleted from the first hierarchical buffer, the data is cached in the subsequent hierarchical buffer, and thus, the data can be safely and reliably stored.
When i is not equal to 1, although the storage capacity of the ith-level buffer is large and the capacity expansion is convenient, a large amount of long tail data with low activity may exist in the whole data, and it is not necessary to buffer all the data in the ith-level buffer. In view of this, the inactive long tail data (i.e., the first data) in the i-th level buffer can be deleted, so as to release the buffer capacity of the i-th level buffer as much as possible, improve the utilization rate of the i-th level buffer, and reduce the buffer cost. Even if the data is deleted from the ith-level buffer, the data is still buffered in the buffer of the subsequent level, so that the data can be safely and reliably stored.
Therefore, in the N-level buffer, the N-th level buffer is used as a permanent buffer for buffering the whole data, so as to provide a comprehensive guarantee for data backup, and ensure that all data can be safely and reliably stored. Therefore, the nth-level buffer can be the buffer with the largest buffer capacity and the lowest expansion cost so as to meet the buffer demand of data growth to the greatest extent.
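The deletion of inactive "first data" from levels 1 through N-1, while keeping the Nth level as a permanent buffer, can be sketched like this. The read-frequency bookkeeping (a `read_counts` mapping) is an assumption; the patent does not prescribe how read counts are kept.

```python
def evict_inactive(levels, read_counts, min_reads):
    """Delete keys read fewer than `min_reads` times in the preset period
    from every level except the last; level N is the permanent buffer and
    is never purged, so the data always remains safely stored there."""
    for cache in levels[:-1]:
        stale = [k for k in cache if read_counts.get(k, 0) < min_reads]
        for key in stale:
            del cache[key]
```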
Taking the architecture of the three-level buffer using Redis + Couchbase + HiKV shown in fig. 1 as an example, a Redis cluster is used as a first-level buffer, the stability and real-time performance of the Couchbase can meet the requirements, the buffer capacity is greatly improved compared with the Redis, and the subsequent capacity expansion is more convenient than the Redis, so the Couchbase is selected as a second-level buffer below the Redis cluster. Although Couchbase can be expanded conveniently, considering a large amount of data with inactive long tail, it is not necessary to buffer all data in the Couchbase, and therefore, a low-cost HiKV distributed buffer is continuously introduced below the Couchbase. When the Couchbase has inactive data, the data is deleted from the Couchbase in time, so that the data is permanently cached in HiKV.
Optionally, the method further includes:
receiving a data reading instruction;
reading target data from a first-level buffer, wherein the target data is data required to be read by the data reading instruction;
and if the target data is not read, reading the target data from the next-level buffer until the target data is read.
To better understand the data reading process, the embodiment of the present invention takes a video playing client (e.g., an iQIYI client) reading video data as an example. In this step, the client may receive a data reading instruction input by a user and then call a cache accessor (Cache Access) Application Programming Interface (API) to obtain the video data the instruction requires. Specifically, the client may invoke the cache accessor to read the required data (i.e., the target data) from the first-level cache; if the target data is not found there, the read penetrates to the next-level cache, and so on, until the target data is read or every level of cache has been tried.
In the embodiment of the invention, the data is cached in a grading way by adopting the multi-level buffers, and the data is read by penetrating the multi-level buffers layer by layer according to the ascending order of the grades, so that the data reading speed can be improved, and the real-time property during the data reading is ensured.
Among the data buffered in the N-level buffer, data that has been read is more active than data that has not. Accordingly, to give read data an advantage in read speed, read data may be dynamically loaded according to the reading situation, so that it is buffered in a buffer with a higher access speed. When data is loaded upward, it remains cached in its original buffer, so the data exists in multiple buffers and its security is improved. Specific embodiments of data loading are described below.
The first scheme is as follows: and if the target data is read from the k-level buffer, loading the target data to a k-1-level buffer, wherein k is an integer which is more than 2 and less than or equal to N.
Scheme II: and if the target data is read from the j-level buffer and the target data is the first type data meeting the preset condition, loading the target data into the first-level buffer, wherein j is an integer which is more than 1 and less than or equal to N.
In this embodiment, it is considered that the other hierarchical buffers except the first hierarchical buffer have a large storage capacity and a low expansion cost, and therefore, when the target data is read from the k-th hierarchical buffer, the target data can be unconditionally loaded to the k-1-th hierarchical buffer regardless of whether the target data is the first type of data or the second type of data.
In this embodiment, considering that the storage capacity of the first-level buffer is the smallest and the expansion cost is high, when the first type of data is read from the j-th-level buffer, whether to load the data into the first-level buffer may be determined according to whether the data satisfies a predetermined condition. Here, the preset condition may be understood as a loading condition, and the preset condition or the loading condition may include at least one of: the frequency of reading the data in the j-level buffer is higher than the preset frequency; the number of times the data is read in the j-th level buffer exceeds a predetermined number of times. Further, whether to load the first type data to the first level buffer may be determined according to an LRU algorithm.
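Schemes one and two can be combined into a single read-with-loading routine. The type and loading-condition checks are passed in as callables, since the patent leaves the concrete predicates (e.g., the LRU condition) open; the function and parameter names are hypothetical.

```python
def read_and_load(levels, key, is_first_type, meets_condition):
    """On a hit at level k > 2, unconditionally load the data into level
    k-1 (scheme one). On a hit at any level j > 1, additionally load
    first-type data that satisfies the preset condition into the
    first-level buffer (scheme two). The original copy stays in place."""
    for k, cache in enumerate(levels, start=1):
        if key not in cache:
            continue
        value = cache[key]
        if k > 2:
            levels[k - 2][key] = value   # scheme one: load into level k-1
        if k > 1 and is_first_type(key) and meets_condition(key):
            levels[0][key] = value       # scheme two: promote to level 1
        return value
    return None
```

Note that loading copies the value upward without deleting it from the level where it was found, matching the multi-backup behavior described above.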
Taking the architecture of the three-level buffer using Redis + Couchbase + HiKV shown in fig. 1 as an example, the data reading and data loading processes are specifically exemplified.
As shown in fig. 1 and fig. 3, the Cache Access component serves as the client's cache reading module. When receiving a data reading instruction, the client may call Cache Access to read the corresponding data layer by layer in the order Redis, Couchbase, HiKV. Specifically, Cache Access first tries to read the data from Redis; on a miss it penetrates to the Couchbase layer. If the data is read at the Couchbase layer and its type is PGC, whether it needs to be loaded into the upper-layer Redis can be judged according to the corresponding LRU algorithm. If the data is still not read at the Couchbase layer, the read continues down into the permanently stored HiKV. If the data is read in HiKV, it may be loaded into the second-tier Couchbase with a two-month expiration period. If the read data type is PGC, whether to load it into the upper-layer Redis may again be decided according to the corresponding LRU algorithm.
By combining the above embodiments, the embodiments of the present invention have the following beneficial effects: firstly, the rapidity of the first type data in caching and the high real-time performance and high availability in reading can be ensured; secondly, various types of data can be completely stored, the risk of data loss is reduced, and the safety and reliability of the data are improved; thirdly, the read data can be dynamically loaded, so that the read data is cached in a cache with higher access speed; and fourthly, when the data volume is increased, the capacity of other layers of caches can be expanded to meet the requirement of the rapidly-increased data cache, and compared with the capacity expansion of the first layer of cache, the capacity expansion cost is reduced.
As shown in fig. 4, an embodiment of the invention provides a data caching apparatus 500, where the data caching apparatus 500 has N-level caches, where N is an integer greater than 1; the data caching apparatus 500 includes:
an obtaining module 501, configured to obtain data;
a caching module 502, configured to cache the data in each level of the N-level caches respectively if the data is first type data; if the data is the second type of data, caching the data in each level of cache except the first level of cache in the N-level cache respectively;
the access speed of the first-level buffer is higher than that of the rest of the buffers in each level.
Optionally, as shown in fig. 5, the data caching apparatus 500 further includes:
a deleting module 503, configured to delete the first data from the ith-level buffer when the first data exists in the ith-level buffer;
the i is an integer greater than or equal to 1 and less than N, and the first data is data which is read in a preset period and has a frequency lower than a preset frequency.
Optionally, as shown in fig. 6, the data caching apparatus 500 further includes:
a receiving module 504, configured to receive a data reading instruction;
a reading module 505, configured to read target data from the first-level buffer, where the target data is the data that the data reading instruction needs to read; and, if the target data is not read, to read the target data from the next-level buffer until the target data is read.
Optionally, as shown in fig. 7, the data caching apparatus 500 further includes:
a first loading module 506, configured to load the target data into a (k-1)-th-level buffer if the target data is read from a k-th-level buffer, where k is an integer greater than 2 and less than or equal to N.
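The first loading module can be sketched as a read that copies a hit at level k (k greater than 2) one level up, so the next read finds it sooner. The text's 1-based level k corresponds to the 0-based index k-1 below; the helper name is an assumption.

```python
def read_and_promote(levels, key):
    """Read `key`; on a hit at level k > 2, also load it into level k-1."""
    for idx, level in enumerate(levels):  # idx == k - 1 (0-based)
        if key in level:
            if idx >= 2:  # i.e. k > 2 in the patent's 1-based numbering
                levels[idx - 1][key] = level[key]
            return level[key]
    return None
```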
Optionally, as shown in fig. 8, the data caching apparatus 500 further includes:
a second loading module 507, configured to load the target data into the first-level buffer if the target data is read from a j-th-level buffer and the target data is first-type data meeting a preset condition, where j is an integer greater than 1 and less than or equal to N.
Optionally, the preset condition includes at least one of:
the frequency at which the target data is read in the j-th-level buffer is higher than a preset frequency;
the target data is read in the j-th-level buffer more than a preset number of times.
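The preset condition can be expressed as a simple disjunction of the two tests; the threshold values below are illustrative assumptions, and satisfying either test alone is sufficient.

```python
def should_promote(read_freq, read_count,
                   freq_threshold=100.0, count_threshold=10):
    """True if target data at level j qualifies for loading into level 1:
    it is read more frequently than the preset frequency, OR it has been
    read more than the preset number of times."""
    return read_freq > freq_threshold or read_count > count_threshold
```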
Optionally, N is equal to 3, and the N-level buffers include the first-level buffer, the second-level buffer, and the third-level buffer;
the access speeds of the first-level buffer, the second-level buffer and the third-level buffer are reduced in sequence.
It should be noted that any implementation manner in the data caching method embodiment may be implemented by the data caching apparatus 500 in this embodiment with the same beneficial effects; to avoid repetition, details are not repeated here.
As shown in fig. 9, an embodiment of the present invention further provides an electronic device 800, where the electronic device 800 includes a memory 801, a processor 802, and a computer program stored in the memory 801 and executable on the processor 802; the processor 802 may be communicatively coupled to a data caching apparatus having N-level caches, where N is an integer greater than 1; when the processor 802 executes the computer program, the following steps are implemented:
acquiring data;
if the data is first-type data, caching the data in each level of the N-level caches respectively;
if the data is second-type data, caching the data in each level of the N-level caches except the first-level cache respectively;
wherein the access speed of the first-level buffer is higher than that of the remaining levels of buffers.
In FIG. 9, the bus architecture may include any number of interconnected buses and bridges, linking together one or more processors, represented by the processor 802, and various circuits of memory, represented by the memory 801. The bus architecture may also link together various other circuits, such as peripherals, voltage regulators, and power management circuits; these are well known in the art and are not described further here. The bus interface provides an interface. The processor 802 is responsible for managing the bus architecture and general processing, and the memory 801 may store data used by the processor 802 when executing instructions. In the embodiment of the present invention, the electronic device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted mobile terminal, a wearable device, and the like.
Optionally, when the processor 802 executes the computer program, the following is further implemented:
deleting first data from an i-th-level buffer when the first data exists in the i-th-level buffer;
where i is an integer greater than or equal to 1 and less than N, and the first data is data whose read frequency within a preset period is lower than a preset frequency.
Optionally, when the processor 802 executes the computer program, the following is further implemented:
receiving a data reading instruction;
reading target data from the first-level buffer, where the target data is the data that the data reading instruction needs to read;
and if the target data is not read, reading the target data from the next-level buffer until the target data is read.
Optionally, when the processor 802 executes the computer program, the following is further implemented:
and if the target data is read from a k-th-level buffer, loading the target data into a (k-1)-th-level buffer, where k is an integer greater than 2 and less than or equal to N.
Optionally, when the processor 802 executes the computer program, the following is further implemented:
and if the target data is read from a j-th-level buffer and the target data is first-type data meeting the preset condition, loading the target data into the first-level buffer, where j is an integer greater than 1 and less than or equal to N.
Optionally, the preset condition includes at least one of:
the frequency at which the target data is read in the j-th-level buffer is higher than a preset frequency;
the target data is read in the j-th-level buffer more than a preset number of times.
Optionally, N is equal to 3, and the N-level buffers include the first-level buffer, the second-level buffer, and the third-level buffer;
the access speeds of the first-level buffer, the second-level buffer and the third-level buffer are reduced in sequence.
It should be noted that any implementation manner in the data caching method embodiment may be implemented by the electronic device 800 in this embodiment with the same beneficial effects; details are not repeated here.
An embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored. When executed by a processor, the computer program implements each process of the data caching method embodiment, or each process of the data processing method embodiment, and achieves the same technical effects; to avoid repetition, details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
In the several embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other ways. For example, the above-described device embodiments are merely illustrative, and for example, the division of the units is only one type of logical function division, and other division manners may be available in actual implementation, for example, a plurality of units or components may be combined or integrated into another device, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may be physically included alone, or two or more units may be integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
The integrated unit implemented in the form of a software functional unit may be stored in a computer-readable storage medium. The software functional unit is stored in a storage medium and includes several instructions that enable a computer device (which may be a personal computer, a server, or a network device) to execute some of the steps of the methods according to various embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (12)

1. A data caching method, applied to a data caching apparatus having N-level buffers, where N is an integer greater than 1, the method comprising:
acquiring data;
if the data is first-type data, caching the data in each level of the N-level buffers respectively;
if the data is second-type data, caching the data in each level of the N-level buffers except the first-level buffer respectively;
wherein the access speed of the first-level buffer is higher than the access speed of the remaining levels of buffers.

2. The method according to claim 1, further comprising:
when first data exists in an i-th-level buffer, deleting the first data from the i-th-level buffer;
wherein i is an integer greater than or equal to 1 and less than N, and the first data is data whose read frequency within a preset period is lower than a preset frequency.

3. The method according to claim 1 or 2, further comprising:
receiving a data reading instruction;
reading target data from the first-level buffer, the target data being the data to be read by the data reading instruction;
if the target data is not read, reading the target data from the next-level buffer until the target data is read.

4. The method according to claim 3, further comprising:
if the target data is read from a k-th-level buffer, loading the target data into a (k-1)-th-level buffer, where k is an integer greater than 2 and less than or equal to N.

5. The method according to claim 3, further comprising:
if the target data is read from a j-th-level buffer and the target data is first-type data satisfying a preset condition, loading the target data into the first-level buffer, where j is an integer greater than 1 and less than or equal to N.

6. The method according to claim 5, wherein the preset condition comprises at least one of the following:
the frequency at which the target data is read in the j-th-level buffer is higher than a preset frequency;
the number of times the target data is read in the j-th-level buffer exceeds a preset number.

7. The method according to claim 1 or 2, wherein N equals 3, and the N-level buffers comprise the first-level buffer, a second-level buffer, and a third-level buffer;
the access speeds of the first-level buffer, the second-level buffer, and the third-level buffer decrease in sequence.

8. A data caching apparatus having N-level buffers, where N is an integer greater than 1, the apparatus comprising:
an acquisition module, configured to acquire data;
a caching module, configured to cache the data in each level of the N-level buffers respectively if the data is first-type data, and to cache the data in each level of the N-level buffers except the first-level buffer respectively if the data is second-type data;
wherein the access speed of the first-level buffer is higher than the access speed of the remaining levels of buffers.

9. A data caching apparatus, comprising N-level buffers, where N is an integer greater than 1;
the first-level buffer of the N-level buffers is configured to store first-type data, and each level of the N-level buffers except the first-level buffer is configured to store the first-type data and second-type data;
wherein the access speed of the first-level buffer is higher than the access speed of the remaining levels of buffers.

10. The apparatus according to claim 9, wherein N equals 3, and the N-level buffers comprise the first-level buffer, a second-level buffer, and a third-level buffer;
the access speeds of the first-level buffer, the second-level buffer, and the third-level buffer decrease in sequence.

11. An electronic device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor; wherein the processor, when executing the computer program, implements the data caching method according to any one of claims 1 to 7.

12. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the steps of the data caching method according to any one of claims 1 to 7.
CN201910983939.7A (published as CN112667847A) — priority date 2019-10-16, filing date 2019-10-16 — Data caching method, data caching device and electronic equipment — status: Pending

Priority Applications (1)

CN201910983939.7A (CN112667847A) — priority date 2019-10-16, filing date 2019-10-16 — Data caching method, data caching device and electronic equipment


Publications (1)

CN112667847A — published 2021-04-16

Family

ID=75400687

Family Applications (1)

CN201910983939.7A (CN112667847A) — priority date 2019-10-16, filing date 2019-10-16 — Data caching method, data caching device and electronic equipment — status: Pending

Country Status (1)

CN: CN112667847A

Cited By (3)

* Cited by examiner, † Cited by third party
CN114036190A* — priority date 2021-10-28, published 2022-02-11 — Wuhan Fiberhome Technical Services Co., Ltd. — Cache control method, device, equipment and readable storage medium
CN114051162A* — priority date 2022-01-12, published 2022-02-15 — Feihu Information Technology (Tianjin) Co., Ltd. — Caching method and device based on play records
CN118093455A* — priority date 2024-04-23, published 2024-05-28 — Beijing Biren Technology Development Co., Ltd. — Data loading method, data loading device, processor and electronic device

Citations (6)

* Cited by examiner, † Cited by third party
US20080133836A1* — priority date 2006-11-30, published 2008-06-05 — Magid Robert M — Apparatus, system, and method for a defined multilevel cache
CN107438837A* — priority date 2015-04-29, published 2017-12-05 — Google — Data cache
CN107451146A* — priority date 2016-05-31, published 2017-12-08 — Beijing Jingdong Shangke Information Technology Co., Ltd. — Method of reading data using multi-level caches and multi-level cache device for caching data
CN108132958A* — priority date 2016-12-01, published 2018-06-08 — Alibaba Group Holding Ltd. — Multi-level cache data storage, query, scheduling and processing method and device
CN109492020A* — priority date 2018-11-26, published 2019-03-19 — Beijing Knownsec Information Technology Co., Ltd. — Data caching method, device, electronic equipment and storage medium
CN109977129A* — priority date 2019-03-28, published 2019-07-05 — China United Network Communications Group Co., Ltd. — Multi-stage data caching method and equipment




Legal Events

PB01 — Publication
SE01 — Entry into force of request for substantive examination
RJ01 — Rejection of invention patent application after publication (application publication date: 2021-04-16)
