CN110941449A - Cache block processing method and device and processor chip - Google Patents


Info

Publication number
CN110941449A
CN110941449A
Authority
CN
China
Prior art keywords
data
read
cache block
latched
target cache
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911118842.6A
Other languages
Chinese (zh)
Inventor
张喆鹏
赵云峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
New H3C Semiconductor Technology Co Ltd
Original Assignee
New H3C Semiconductor Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by New H3C Semiconductor Technology Co Ltd
Priority to CN201911118842.6A
Publication of CN110941449A
Legal status: Pending


Abstract

The application provides a Cache block processing method and apparatus, and a processor chip. The method is applied to a Cache included in a processor chip, where the Cache includes at least one Cache block, and comprises the following steps: when it is determined that data to be read needs to be latched and the target Cache block caching the data to be read is not locked, locking the target Cache block; timing with a timer corresponding to the target Cache block; and unlocking the target Cache block when the timer reaches a preset timing time. The unlocking process thus requires no involvement of the processor in the processor chip, which saves processing time and improves the processing efficiency of the processor.

Description

Cache block processing method and device and processor chip
Technical Field
The present application relates to the field of network communication technologies, and in particular, to a Cache block processing method and apparatus, and a processor chip.
Background
Processor chips commonly integrate a processor and a Cache. Because the Cache can be read much faster than ordinary memory, it is used to cache data from memory. When the processor reads data, it can read it directly from the Cache, which improves the processing performance of the processor.
The storage capacity of the Cache is limited. Once it is filled, subsequently received data replaces existing data in the Cache. When the processor accesses replaced data again, that data must be re-cached from memory. For data that needs frequent access, repeatedly re-caching it from memory after each replacement degrades the processing performance of the processor.
For this reason, a Cache latch function is added: after data that needs frequent access has been cached, the Cache block holding it is locked. Data in a locked Cache block cannot be replaced, which improves the processor's efficiency in handling that data.
When the data no longer needs to be latched, the processor issues an unlock instruction to unlock the Cache block holding it. This unlocking process requires processor involvement, which still affects the processing performance of the processor to some extent.
Summary
In view of this, the present application provides a Cache block processing method, an apparatus and a processor chip, so as to improve the processing performance of a processor in the processor chip.
To this end, the present application provides the following technical solutions:
in a first aspect, the present application provides a Cache block processing method, which is applied to a Cache included in a processor chip, where the Cache includes at least one Cache block, and the method includes:
when it is determined that data to be read needs to be latched and a target Cache block caching the data to be read is not locked, locking the target Cache block;
timing by applying a timer corresponding to the target Cache block;
and when the timer reaches the preset timing time, unlocking the target Cache block.
Optionally, the determining that the data to be read needs to be latched includes:
receiving a data reading instruction sent by a processor in the processor chip, wherein the data reading instruction comprises a storage address of the data to be read in a memory;
and if the preset address range to be latched comprises the storage address of the data to be read, determining that the data to be read needs to be latched, wherein the address range to be latched comprises the storage addresses of all the data to be latched in the memory.
Optionally, before locking the target Cache block, the method further includes:
and if the storage address of the data to be read does not hit any Cache block, reading the data to be read from a memory and caching the data to be read to the target Cache block.
Optionally, the method further includes:
and restarting a timer corresponding to a target Cache block when it is determined that the data to be read needs to be latched and the target Cache block caching the data to be read is locked.
Optionally, the method further includes:
and when determining that the data to be read needs to be latched and a target Cache block caching the data to be read is locked, forbidding to restart a timer corresponding to the target Cache block.
In a second aspect, the present application provides a Cache block processing apparatus, which is applied to a Cache included in a processor chip, where the Cache includes at least one Cache block, and the apparatus includes:
the locking unit is used for locking a target Cache block when it is determined that data to be read needs to be latched and the target Cache block caching the data to be read is not locked;
the timing unit is used for timing by applying a timer corresponding to the target Cache block;
and the unlocking unit is used for unlocking the target Cache block when the timer reaches the preset timing time.
Optionally, the apparatus further comprises:
the receiving unit is used for receiving a data reading instruction sent by a processor in the processor chip, wherein the data reading instruction comprises a storage address of the data to be read in a memory;
the determining unit is configured to determine that the data to be read needs to be latched if a preset address range to be latched includes a storage address of the data to be read, where the address range to be latched includes storage addresses of all data to be latched in a memory.
Optionally, the apparatus further comprises:
and the Cache unit is used for reading the data to be read from a memory and caching the data to be read to the target Cache block if the storage address of the data to be read does not hit any Cache block.
Optionally, the apparatus further comprises:
and the restarting unit is used for restarting the timer corresponding to the target Cache block when the data to be read needs to be latched and the target Cache block caching the data to be read is locked.
Optionally, the apparatus further comprises:
and the forbidding unit is used for forbidding to restart the timer corresponding to the target Cache block when the data to be read needs to be latched and the target Cache block caching the data to be read is locked.
In a third aspect, the present application provides a processor chip, where the processor chip includes a processor and a Cache, and the Cache is used to implement the above Cache block processing method.
In a fourth aspect, the present application provides a Cache, where the Cache implements the above Cache block processing method.
As can be seen from the above description, in the present application, after the Cache block is locked, the locking duration of the Cache block is timed by using the timer, and when the preset timing time is reached, the Cache block is automatically unlocked. The unlocking process does not need the participation of a processor in a processor chip, can effectively save the processing time of the processor, and improves the processing performance of the processor.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed for describing the embodiments are briefly introduced below. The drawings described below show only some embodiments of the present application; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart of a Cache block processing method according to an embodiment of the present disclosure;
FIG. 2 is a flow chart illustrating an implementation of determining whether data to be read needs to be latched according to an embodiment of the present disclosure;
fig. 3 is a schematic structural diagram of a Cache block processing apparatus according to an embodiment of the present application;
fig. 4 is a schematic diagram of a hardware structure of a processor chip according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
The terminology used in the embodiments of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the embodiments of the present application. As used in the examples of this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in the embodiments of the present application to describe various information, the information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the embodiments of the present application. The word "if" as used herein may be interpreted as "when" or "upon" or "in response to a determination", depending on the context.
The embodiment of the application provides a Cache block processing method applied to a Cache. According to the method, after a Cache block is locked, the locking duration of the Cache block is timed by using a timer, and when the preset timing time is reached, the Cache block is automatically unlocked. The unlocking process does not need the participation of a processor in a processor chip, can effectively save the processing time of the processor, and improves the processing performance of the processor.
In order to make the objects, technical solutions and advantages of the embodiments of the present application more apparent, the embodiments of the present application will be described in detail with reference to the accompanying drawings and specific embodiments.
Referring to fig. 1, a flowchart of a Cache block processing method according to an embodiment of the present application is shown. The flow is applied to the Cache included in the processor chip.
The specific type of processor chip is not limited in this application. For one embodiment, the Processor chip may be a Network Processor (NP) chip.
The Cache provides fast data caching. The processor chip uses the Cache to cache data from memory; when the processor in the processor chip needs to read data, it can read it directly and quickly from the Cache, which improves the processing performance of the processor.
The Cache includes at least one Cache block. A Cache block may also be referred to as a Cache line (Cache line). Each Cache block may store a certain amount of data. For example, a Cache block may store 256 bytes of data.
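As an illustration only (not part of the patent), the relationship between a memory address and a 256-byte Cache block can be sketched as follows; the function name and the Python form are assumptions for clarity:

```python
# Hypothetical sketch: with 256-byte Cache blocks, the low 8 bits of an
# address select the byte within a block, and the remaining high bits
# form the tag identifying which 256-byte memory region the block holds.
BLOCK_SIZE = 256  # bytes per Cache block, as in the example above

def split_address(addr: int) -> tuple[int, int]:
    """Split a memory address into (tag, offset) for a 256-byte block."""
    offset = addr % BLOCK_SIZE   # byte position inside the block
    tag = addr // BLOCK_SIZE     # identifies the 256-byte region
    return tag, offset

# Address 2001H falls in the region tagged 20H (bytes 2000H-20FFH).
tag, offset = split_address(0x2001)
```

This matches the tag example used later in the description, where tag value 20H corresponds to storage addresses 2000H to 20FFH.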
As shown in fig. 1, the process may include the following steps:
step 101, when it is determined that data to be read needs to be latched and a target Cache block caching the data to be read is not locked, locking the target Cache block.
The storage capacity of the Cache is small, and the storage capacity of the memory is large. Once the storage space of the Cache is filled, the data read from the memory subsequently replaces the existing data in the Cache. In order to ensure that some frequently used key data are not replaced, the Cache block where the key data are located needs to be locked.
Before locking, it is first determined whether the data to be read needs to be locked, that is, whether the data to be read is critical data. The process of determining whether the data to be read needs to be latched is described below, and is not described herein for the time being.
And when it is determined that the data to be read needs to be latched and a target Cache block caching the data to be read is not locked, locking the target Cache block. Here, it is to be understood that the target Cache block is named for convenience of description and is not meant to be limiting.
The target Cache block is locked through the steps, and the data in the target Cache block cannot be replaced.
And step 102, timing by using a timer corresponding to the target Cache block.
In the application, a corresponding timer is configured for each Cache block.
In one example, after the target Cache block is locked in step 101, a timer corresponding to the target Cache block is started to start timing.
And 103, unlocking the target Cache block when the timer reaches the preset timing time.
In this step, when the timer reaches the preset timing time, it indicates that the data in the target Cache block does not need to be frequently used any more, and the target Cache block may be unlocked and the Cache space of the target Cache block may be released. I.e., allow the data in the target Cache block to be replaced.
Thus, the flow shown in fig. 1 is completed.
As can be seen from the flow shown in fig. 1, in the embodiment of the present application, after the Cache block is locked, the locking duration of the Cache block is timed by using a timer, and when the preset timing time is reached, the Cache block is automatically unlocked. The unlocking process does not need the participation of a processor in a processor chip, can effectively save the processing time of the processor, and improves the processing performance of the processor.
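The three steps above can be sketched as a small software simulation. This is an illustrative model under assumed names (`CacheBlock`, `latch`, `tick`), not the patent's hardware implementation:

```python
# Illustrative simulation: a Cache block that locks when latched (step 101),
# counts time on a per-block timer (step 102), and unlocks automatically
# when the timer reaches the preset time (step 103), with no processor-
# issued unlock instruction involved.
class CacheBlock:
    def __init__(self, timeout_ms: int):
        self.timeout_ms = timeout_ms  # preset timing time
        self.locked = False
        self.timer_ms = 0             # per-block timer

    def latch(self):
        if not self.locked:           # step 101: lock if not already locked
            self.locked = True
            self.timer_ms = 0         # start the corresponding timer

    def tick(self, elapsed_ms: int):
        if self.locked:               # step 102: timer runs while locked
            self.timer_ms += elapsed_ms
            if self.timer_ms >= self.timeout_ms:
                self.locked = False   # step 103: automatic unlock

blk = CacheBlock(timeout_ms=10)
blk.latch()
blk.tick(5)   # still locked at 5 ms
blk.tick(5)   # timer reaches 10 ms, block unlocks automatically
```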
The process of determining whether data to be read needs to be latched is described below. Referring to fig. 2, an implementation flow for determining whether data to be read needs to be latched is shown in the embodiment of the present application.
As shown in fig. 2, the process may include the following steps 201 and 203.
Step 201, receiving a data reading instruction sent by a processor in a processor chip.
When the processor needs to read data, it issues a data reading instruction to the Cache. The data reading instruction includes the storage address, in memory, of the data to be read.
And the Cache receives a data reading instruction issued by the processor and acquires a storage address of data to be read, which is included in the data reading instruction.
In step 202, if the preset address range to be latched includes the storage address of the data to be read, it is determined that the data to be read needs to be latched.
In the embodiment of the application, critical data that needs frequent use can be stored in a designated storage space in the memory. The address range corresponding to this designated storage space is the address range to be latched, and it is configured in the Cache in advance. For example, if data 1 to data 512 are critical data that need frequent use and are fixedly stored in a designated storage space whose address range is 0000H to 01FFH, then this range (0000H to 01FFH) can be configured in the Cache in advance as the address range to be latched.
After the Cache obtains the storage address of the data to be read in the memory through step 201, the storage address is compared with a locally preset address range to be latched.
If the address range to be latched includes the storage address of the data to be read, the data to be read is critical data that needs frequent use; it is therefore determined that the data needs to be latched, and the target Cache block where it resides is locked through step 101. For example, suppose the storage address of the data to be read obtained by the Cache is 0050H, and it is compared with the locally preset address range to be latched, 0000H to 01FFH. Since that range includes the storage address 0050H, the data to be read is determined to be critical data that needs frequent use and therefore needs to be latched.
If the address range to be latched does not include the storage address of the data to be read, the data to be read is not critical data that needs frequent use; it is therefore determined that the data does not need to be latched, and the locking operation of step 101 is not performed. For example, suppose the storage address of the data to be read obtained by the Cache is 2001H, and it is compared with the locally preset address range to be latched, 0000H to 01FFH. Since that range does not include the storage address 2001H, the data to be read is determined not to be critical data that needs frequent use and therefore does not need to be latched.
Thus, the flow shown in fig. 2 is completed.
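The latch decision of steps 201 and 202 can be sketched as follows, reusing the example range 0000H to 01FFH; the function and constant names are illustrative assumptions, not terms from the patent:

```python
# Sketch of the latch decision: an address needs latching exactly when it
# falls inside the preconfigured to-be-latched range (0000H-01FFH here).
LATCH_RANGE = range(0x0000, 0x0200)   # 0000H to 01FFH inclusive

def needs_latch(addr: int) -> bool:
    """Return True if the memory address falls in the latch range."""
    return addr in LATCH_RANGE

# As in the examples above: 0050H lies inside the range, 2001H outside.
hit = needs_latch(0x0050)    # True: data must be latched
miss = needs_latch(0x2001)   # False: normal (replaceable) caching
```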
As can be seen from the flow shown in fig. 2, in the embodiment of the present application, the Cache can determine whether the data to be read needs to be latched by itself, and lock a target Cache block where the data to be read is located when it is determined that the data needs to be latched. The locking process does not need the processor to issue the locking instruction, so that the processing time of the processor can be further saved, and the processing performance of the processor is improved.
As an embodiment, when the Cache determines that the data to be read needs to be latched, it needs to further determine whether the data to be read is cached in the Cache. Specifically, the Cache obtains a storage address of the data to be read in the memory through step 201, and matches a tag (tag) attribute of each Cache block based on the storage address.
Here, it should be noted that each Cache block has a corresponding tag attribute, which records the memory storage addresses of the data currently cached in that block. For example, if the tag attribute value of Cache block 1 is 20H and a Cache block can hold 256 bytes, the tag value (20H) indicates that data with storage addresses 2000H to 20FFH is cached in Cache block 1.
If the storage address of the data to be read hits the tag attribute value of the target Cache block, it indicates that the storage address of the data to be read hits the target Cache block, and the data to be read is cached in the Cache, step 101 may be executed to lock the target Cache block. For example, if the memory address of the data to be read is 2001H, and the tag attribute value of the Cache block 1 is 20H (corresponding to the memory addresses 2000H to 20FFH), it indicates that the memory address of the data to be read hits the tag attribute value of the Cache block 1, that is, the memory address of the data to be read hits the Cache block 1, and the current data to be read is cached in the Cache block 1.
If the storage address of the data to be read does not hit the tag attribute value of any Cache block, the data to be read is not currently cached in the Cache; the data is then read from memory and cached into a target Cache block, which is subsequently locked through step 101.
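The tag lookup described above can be sketched as follows; the dictionary representation and function name are illustrative assumptions, not the patent's hardware structure:

```python
# Sketch of the tag match: each block's tag records which 256-byte region
# it caches (tag 20H covers addresses 2000H-20FFH, as in the example).
BLOCK_SIZE = 256

def find_hit(block_tags: dict[int, int], addr: int):
    """Return the id of the block whose tag matches addr's region, else None."""
    tag = addr // BLOCK_SIZE
    for block_id, block_tag in block_tags.items():
        if block_tag == tag:
            return block_id
    return None

tags = {1: 0x20}                 # Cache block 1 holds the region tagged 20H
hit = find_hit(tags, 0x2001)     # address 2001H hits Cache block 1
miss = find_hit(tags, 0x3000)    # miss: data must be fetched from memory
```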
As one embodiment, when the Cache determines that the data to be read needs to be latched and a target Cache block caching the data to be read is locked, the locking operation does not need to be repeatedly executed. At this time, the timer corresponding to the target Cache block may be restarted, i.e., timing is restarted. And if the instruction for accessing the data to be read is not received again within the preset time, unlocking the target Cache block.
For example, the preset time is 10 ms. Data 1 has been latched in Cache block 1. The timer corresponding to the Cache block 1 is denoted as timer 1. Timer 1 has currently timed 5 ms. If the Cache receives an instruction for reading the data 1 by the processor at the moment, the Cache does not repeatedly execute the locking operation on the Cache block 1 because the data 1 is latched. At this time, the timer 1 is restarted, and the timer 1 restarts counting time. And if the Cache does not receive the instruction for reading the data 1 sent by the processor any more before the timer 1 reaches 10ms, unlocking the Cache block 1.
As an embodiment, when the Cache determines that the data to be read needs to be latched and a target Cache block caching the data to be read is locked, restarting of a timer corresponding to the target Cache block is prohibited.
The difference between this embodiment and the previous one is only that a repeated read of already-latched data does not restart the timer for the target Cache block. That is, multiple reads of the same latched data do not trigger the timer to count again; it continues counting from where it is, so the latch duration of the data is fixed.
For example, the preset time is 10 ms. Data 1 has been latched in Cache block 1. Cache block 1 corresponds to timer 1. Timer 1 has currently timed 5 ms. If the Cache receives an instruction for reading the data 1 by the processor at the moment, the Cache does not repeatedly execute the locking operation on the Cache block 1 because the data 1 is latched. At the same time, timer 1 is prohibited from restarting, and timer 1 continues to count on a 5ms basis. When the timer 1 reaches 10ms, the Cache block 1 is unlocked.
Here, it should be noted that, in the present embodiment, the preset time may be set according to the data usage duration in the actual application scenario. For example, if data 1 to data 3 are data to be latched, the duration of use of data 1 is usually 5ms, the duration of use of data 2 is usually 6ms, and the duration of use of data 3 is usually 8ms, the timing time may be set to 10ms, so as to meet the duration requirement of use of each data.
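The two embodiments differ only in whether a repeated read of latched data restarts the block's timer. A minimal sketch contrasting the two policies, with illustrative names and the 10 ms timeout from the examples above:

```python
# Sketch of the two restart policies for a re-read of latched data.
# restart_on_reread=True models the first embodiment (timer restarts);
# False models the second (timer keeps counting, fixed latch duration).
def remaining_lock_ms(elapsed_ms: int, timeout_ms: int,
                      reread: bool, restart_on_reread: bool) -> int:
    """Time left before auto-unlock after a possible re-read at elapsed_ms."""
    if reread and restart_on_reread:
        return timeout_ms                    # timer restarts from zero
    return max(timeout_ms - elapsed_ms, 0)   # timer continues counting

# With a 10 ms timeout and a re-read arriving at the 5 ms mark:
with_restart = remaining_lock_ms(5, 10, True, True)      # 10 ms remain
without_restart = remaining_lock_ms(5, 10, True, False)  # 5 ms remain
```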
In order to describe the method provided by the embodiment of the present application, the following describes the apparatus provided by the embodiment of the present application:
referring to fig. 3, a schematic structural diagram of an apparatus provided in an embodiment of the present application is shown. The device includes: lockingunit 301,timing unit 302 and unblockunit 303, wherein:
thelocking unit 301 is configured to lock a target Cache block that caches data to be read when it is determined that the data to be read needs to be latched and the target Cache block is not locked;
atiming unit 302, configured to apply a timer corresponding to the target Cache block to perform timing;
and the unlockingunit 303 is configured to unlock the target Cache block when the timer reaches a preset timing time.
As an embodiment, the apparatus further comprises:
the receiving unit is used for receiving a data reading instruction sent by a processor in the processor chip, wherein the data reading instruction comprises a storage address of the data to be read in a memory;
the determining unit is configured to determine that the data to be read needs to be latched if a preset address range to be latched includes a storage address of the data to be read, where the address range to be latched includes storage addresses of all data to be latched in a memory.
As an embodiment, the apparatus further comprises:
and the Cache unit is used for reading the data to be read from a memory and caching the data to be read to the target Cache block if the storage address of the data to be read does not hit any Cache block.
As an embodiment, the apparatus further comprises:
and the restarting unit is used for restarting the timer corresponding to the target Cache block when the data to be read needs to be latched and the target Cache block caching the data to be read is locked.
As an embodiment, the apparatus further comprises:
and the forbidding unit is used for forbidding to restart the timer corresponding to the target Cache block when the data to be read needs to be latched and the target Cache block caching the data to be read is locked.
The description of the apparatus shown in fig. 3 is thus completed. In the embodiment of the application, after the Cache block is locked, the locking duration of the Cache block is timed by using a timer, and when the preset timing time is reached, the Cache block is automatically unlocked. The unlocking process does not need the participation of a processor in a processor chip, can effectively save the processing time of the processor, and improves the processing performance of the processor.
The following describes a processor chip provided in an embodiment of the present application:
referring to fig. 4, a hardware structure diagram of a processor chip according to an embodiment of the present disclosure is shown. The processor chip may include aprocessor 401 and aCache 402. Theprocessor 401 and Cache402 may communicate via a data bus 403. Among other things, the Cache402 may perform the Cache block processing methods described above.
An embodiment of the present application further provides a Cache, for example, the Cache402 in fig. 4, where the Cache402 executes the above-described Cache block processing method.
This completes the description of the processor chip shown in FIG. 4.
The above description is only a preferred embodiment of the present application, and should not be taken as limiting the present application, and any modifications, equivalents, improvements, etc. made within the spirit and principle of the present application shall be included in the scope of the present application.

Claims (12)

CN201911118842.6A (priority 2019-11-15, filed 2019-11-15): Cache block processing method and device and processor chip. Status: Pending. Publication: CN110941449A (en).

Priority Applications (1)

Application Number: CN201911118842.6A
Priority Date / Filing Date: 2019-11-15 / 2019-11-15
Title: Cache block processing method and device and processor chip
Publication: CN110941449A (en)

Publications (1)

Publication Number: CN110941449A (en)
Publication Date: 2020-03-31

Family

ID=69907802

Family Applications (1)

Application Number: CN201911118842.6A (Pending)
Priority Date / Filing Date: 2019-11-15 / 2019-11-15
Title: Cache block processing method and device and processor chip

Country Status (1)

Country: CN
Publication: CN110941449A (en)


Patent Citations (16)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US20020062424A1 (en)* | 2000-04-07 | 2002-05-23 | Nintendo Co., Ltd. | Method and apparatus for software management of on-chip cache
CN1997015A (en)* | 2006-11-24 | 2007-07-11 | Huawei Technologies Co., Ltd. | Cache application method and device, and file transfer system
US20140181375A1 (en)* | 2012-12-20 | 2014-06-26 | Kabushiki Kaisha Toshiba | Memory controller
US9367472B2 (en)* | 2013-06-10 | 2016-06-14 | Oracle International Corporation | Observation of data in persistent memory
US9645825B2 (en)* | 2015-01-15 | 2017-05-09 | Texas Instruments Deutschland GmbH | Instruction cache with access locking
CN107810486A (en)* | 2015-06-26 | 2018-03-16 | Microsoft Technology Licensing LLC | Locking operand values of an instruction group for atomic execution
CN105138587A (en)* | 2015-07-31 | 2015-12-09 | Xiaomi Inc. | Data access method, apparatus and system
CN105550156A (en)* | 2015-12-02 | 2016-05-04 | Zhejiang Dahua Technology Co., Ltd. | Time synchronization method and device
US10073697B2 (en)* | 2015-12-11 | 2018-09-11 | International Business Machines Corporation | Handling unaligned load operations in a multi-slice computer processor
CN107479860A (en)* | 2016-06-07 | 2017-12-15 | Huawei Technologies Co., Ltd. | Processor chip and instruction cache prefetching method
CN110312997A (en)* | 2016-12-15 | 2019-10-08 | Optimum Semiconductor Technologies Inc. | Implementing atomic primitives using cache line locking
EP3555752A1 (en)* | 2016-12-15 | 2019-10-23 | Optimum Semiconductor Technologies Inc. | Implementing atomic primitives using cache line locking
CN109196473A (en)* | 2017-02-28 | 2019-01-11 | Huawei Technologies Co., Ltd. | Cache management method, cache manager, shared cache and terminal
CN107391041A (en)* | 2017-07-28 | 2017-11-24 | Zhengzhou Yunhai Information Technology Co., Ltd. | Data access method and device
CN109933543A (en)* | 2019-03-11 | 2019-06-25 | Zhuhai Jieli Technology Co., Ltd. | Cache data locking method, device and computer equipment
CN110147386A (en)* | 2019-04-16 | 2019-08-20 | Ping An Technology (Shenzhen) Co., Ltd. | Data caching method, device and computer equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Mao Decao et al.: "Embedded Systems: Using Open Source Code and StrongARM/XScale Processors", 31 October 2003 *


Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
RJ01 | Rejection of invention patent application after publication (application publication date: 2020-03-31)
