CN112000296A - Performance optimization system in full flash memory array - Google Patents

Performance optimization system in full flash memory array

Info

Publication number
CN112000296A
Authority
CN
China
Prior art keywords
request
write
read
data block
raid5
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010887080.2A
Other languages
Chinese (zh)
Other versions
CN112000296B (en)
Inventor
徐晗
郦伟
宋珺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Institute of Computer Technology and Applications
Original Assignee
Beijing Institute of Computer Technology and Applications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Institute of Computer Technology and Applications
Priority to CN202010887080.2A
Publication of CN112000296A
Application granted
Publication of CN112000296B
Status: Active
Anticipated expiration

Abstract

The invention relates to a performance optimization system in a full flash memory array, belonging to the technical field of computer storage. By studying the working principle of the solid-state disk and the limitations of flash-based solid-state disks, the invention analyzes the current state of research on applying RAID technology to solid-state disk arrays. On this basis, the access characteristics exhibited by application loads are analyzed and, drawing on the respective advantages of RAID5 and RAID1, a hybrid array (Hybrid-RAID) that mixes RAID5 and RAID1 layouts according to load request size is proposed. Hybrid-RAID lays out data according to the read/write intensity of user requests and the request size, and responds to external requests by combining the preferences of the underlying RAID5 and RAID1 areas for different loads. Finally, performance tests on real devices show that, compared with the traditional disk arrays RAID5 and RAID1, the Hybrid-RAID scheme reduces average response time by 40.5% and 35.6% respectively.

Description

Performance optimization system in full flash memory array
Technical Field
The invention belongs to the technical field of computer storage, and particularly relates to a performance optimization system in a full flash memory array.
Background
With the advent of the big data era, the demands of big data storage and data analysis pose great challenges to enterprise storage departments, which require high concurrency, high capacity, diversified storage modes and efficient storage management capabilities. A conventional mechanical hard disk (Hard Disk Drive, HDD) is composed of mechanical components such as a motor, platters and a head swing arm, and positions accesses by moving a magnetic head over a rotating platter; the motion of these components consumes at least 95% of the access time, severely limiting the response time of the disk. As intensive applications increase, storage systems based on traditional mechanical hard disks can no longer meet enterprise-level storage requirements, and the Solid State Drive (SSD) based on flash memory chips has brought revolutionary progress to storage systems and become a powerful substitute for the traditional hard disk. In contrast to the construction of the HDD, the SSD is composed of electronic chips and circuit boards and accesses any position of the drive quickly and accurately through integrated circuits rather than a physically rotating platter, so it can provide much higher I/O performance.
Although the solid-state disk has advantages such as low latency, low energy consumption, good shock resistance, high reliability, no noise and small size, it also has some non-negligible defects. First, because a solid-state disk based on NAND flash storage media cannot be overwritten in place, an out-of-place update mode must be adopted: when data on a certain page of a flash memory unit is updated, the new data is written into a page of an erased block, the page holding the old data is marked as invalid, and that page is left to be erased later. A pair of program and erase operations on a flash cell is called a P/E (Program/Erase) cycle, and the number of P/E cycles each flash cell can sustain has an upper limit; when the accumulated erase count reaches a certain value, the solid-state disk is worn out and data on the disk may be lost, ending the service life of the solid-state disk. Statistically, SLC flash endures on the order of one hundred thousand erase cycles, MLC ranges from three thousand to ten thousand, and TLC offers at most about one thousand. This limited number of erase cycles is easily exhausted, especially in applications where write operations are relatively intensive. Therefore, the endurance and life cycle of the solid-state disk are severely limited by the number of times the flash memory chips can be erased.
With the proliferation of application data, a single solid-state disk can no longer meet the high demands of current storage systems due to limitations in performance, capacity, reliability and so on. RAID (Redundant Array of Independent Disks) was designed around disk access characteristics, with the main aim of alleviating the performance and capacity limitations of a single disk. Disk array technology combines multiple solid-state disks into a solid-state disk array, which can satisfy users' parallel access demands on big data and provide a large-capacity storage space, while redundant data can be introduced to provide different degrees of reliability guarantee. Therefore, applying RAID technology to solid-state disk arrays is an effective way to increase the capacity and performance of storage systems. However, because the solid-state disk and the magnetic disk have different device characteristics, their operation modes and access characteristics also differ greatly. For example, the magnetic disk performs in-place updates and needs no garbage collection, whereas the solid-state disk performs out-of-place updates and, besides normal read/write operations, also requires garbage collection. RAID technology was designed around the magnetic disk and does not take the characteristics of flash memory into account, so directly applying existing RAID technology to solid-state disks not only fails to optimize performance but actually reduces the performance of the storage system and shortens the service life of the solid-state disk. Therefore, a new data layout must be redesigned around the device characteristics of the solid-state disk to achieve optimal performance.
The read/write unit of a traditional disk is a 512 B sector, whereas the read/write unit of a flash-based solid-state disk is a page, generally 2 KB to 8 KB, and flash memory requires an erase operation to reclaim invalid data pages, with the block as the erase unit. Existing operating systems and upper-layer applications are designed around the magnetic disk, and because the solid-state disk and the magnetic disk have different storage characteristics, upper-layer applications cannot operate the solid-state disk directly and must rely on specific management software; this is the main function of the Flash Translation Layer (FTL). As shown in FIG. 1, the FTL implements the same access interface as a disk, hiding the characteristics of flash memory access and emulating the solid-state disk as a traditional disk that has only read and write operations. The FTL consists of an address mapping module, a garbage collection module and a wear leveling module: the address mapping module maintains the mapping table between logical addresses of the file system layer and physical addresses of flash pages, the garbage collection module erases and reclaims invalid data blocks, and the wear leveling module balances erase counts within the solid-state disk to prevent some blocks from wearing out too quickly and ending the service life of the solid-state disk.
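For illustration only, the following minimal page-mapping FTL sketch (Python; the class, constants and sizes are assumptions, not the FTL of any particular device) shows how the three modules interact: the mapping dictionary plays the role of the address mapping module, garbage_collect reclaims the block with the most invalid pages, and choosing the least-erased block for new writes stands in for wear leveling.

# Minimal page-mapping FTL sketch (illustrative only; names and sizes are assumptions).
PAGES_PER_BLOCK = 64
NUM_BLOCKS = 16

class SimpleFTL:
    def __init__(self):
        self.mapping = {}                                   # address mapping: logical page -> (block, page)
        self.erase_count = [0] * NUM_BLOCKS                 # per-block wear statistics
        self.next_free = [0] * NUM_BLOCKS                   # next writable page in each block
        self.invalid = [set() for _ in range(NUM_BLOCKS)]   # pages waiting to be erased

    def write(self, lpn, data=None):
        # Out-of-place update: the old copy, if any, becomes an invalid page.
        if lpn in self.mapping:
            old_blk, old_pg = self.mapping[lpn]
            self.invalid[old_blk].add(old_pg)
        blk = self._pick_block()                            # wear leveling: least-erased block with room
        pg = self.next_free[blk]
        self.next_free[blk] += 1
        self.mapping[lpn] = (blk, pg)                       # address mapping module updates its table

    def _pick_block(self):
        free = [b for b in range(NUM_BLOCKS) if self.next_free[b] < PAGES_PER_BLOCK]
        if not free:
            free = [self.garbage_collect()]
        return min(free, key=lambda b: self.erase_count[b])

    def garbage_collect(self):
        # Pick the block with the most invalid pages and erase it (a real FTL would
        # first relocate the victim's still-valid pages and update their mappings).
        victim = max(range(NUM_BLOCKS), key=lambda b: len(self.invalid[b]))
        self.invalid[victim].clear()
        self.next_free[victim] = 0
        self.erase_count[victim] += 1                       # one more P/E cycle consumed
        return victim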
The flash translation layer inside the solid-state disk encapsulates the flash chips; although it shields the access characteristics of flash, it also hides the advantages of flash. Because of this encapsulation, a storage system built on a solid-state disk array cannot fully exploit the parallelism between and within the flash chips, and the black-box design hinders performance improvements in new storage software systems. For example, a key-value database (KV database), a database storage system built on key-value pairs, is generally established on top of a file system; if an ordinary solid-state disk is used as the underlying device, the storage device is a black box to the upper-layer application and its internal activity is unknown, so the device characteristics cannot be exploited for optimization. With deeper knowledge of the internal logic of the solid-state disk and the characteristics of flash memory, the internal logic of the solid-state disk can be controlled from the host side and combined with the actual business logic, so that the device characteristics of the solid-state disk are fully exploited and the limitations of the traditional disk are overcome. An open-channel solid-state disk is essentially a simplified solid-state disk that contains only NAND flash chips and a controller, where the controller contains no flash translation layer (FTL). Its main principle is to expose information about the underlying hardware to upper-layer applications, so that users can customize an efficient flash translation layer according to their own needs. The open-channel SSD not only provides a transparent white-box design and customization capability but also provides a standardized platform, and has become a new direction for the development of future SSDs.
In recent years, solid-state disk technology has matured and products are numerous; since solid-state disks provide a disk-style access interface and are compatible with existing disk-based software, disk array technology has been widely applied to building solid-state disk arrays. Although disk array technology has many advantages such as large capacity, high reliability and high performance, directly applying it to flash-based solid-state disks raises many problems: the flash-based solid-state disk has read/write operations like a conventional disk, but also needs garbage collection operations. When a chip in the solid-state disk performs garbage collection, all requests received by that chip are blocked and no external service can be provided until the operation completes. The irregular triggering of garbage collection not only causes system performance to fluctuate but also degrades the quality of service of the whole system, so the execution efficiency of garbage collection directly affects the performance of the solid-state disk storage system. To address this problem, Kim and Oral et al. proposed a coordinated Global Garbage Collection (GGC) strategy for solid-state disk arrays: when one solid-state disk in the array performs garbage collection, GGC forces all other member disks to perform garbage collection as well, shortening the garbage collection time of each disk and thereby reducing the impact on overall performance [1]. The disadvantage of this approach is that when all disks in the array are undergoing garbage collection, the entire array cannot provide service, which is unacceptable for applications that require reliable service over long periods.
Disclosure of Invention
Technical problem to be solved
The technical problem to be solved by the invention is: how to optimize the performance of a solid-state-disk-based storage system, so as to provide an effective solution for developing and constructing next-generation flash memory systems.
(II) technical scheme
In order to solve the above technical problem, the present invention provides a performance optimization system in a full flash memory array. The system is used for implementing performance optimization in the full flash memory array and comprises a RAID controller and a storage device, wherein the RAID controller comprises a load identification module, a request redirection module and a mapping table;
the load identification module is used for, after receiving an I/O request, predicting and marking the read/write intensity attribute of the data block to be processed in the I/O request according to the read/write access proportion in the data block's history record;
the request redirection module is used for processing a user's I/O request and, according to the read/write intensity attribute of the data block and the size of the I/O request, redirecting the I/O request to a physical storage partition in the storage device based on the mapping table;
the mapping table is used for recording the data redirected to the storage device.
Preferably, the mapping table is stored in a non-volatile storage medium of the RAID controller.
Preferably, the storage device is composed of a plurality of solid-state disks, each solid-state disk is divided into a RAID5 storage area and a RAID1 storage area, the sizes of the two areas can be allocated according to requirements, and the read service provided by the RAID5 area is better than that provided by the RAID1 area.
Preferably, the request redirection module is specifically configured to redirect the I/O request and rearrange it according to the read/write intensity attribute of the data block and the size of the I/O request; wherein, when the data block is read-intensive data, it is serviced by the RAID5 area; when the data block is write-intensive data, whether it is serviced by the RAID5 area or the RAID1 area is determined according to the size of the I/O request; and if the data block is mixed data, it is serviced directly by the RAID1 area.
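To make this three-way placement rule concrete, here is a minimal sketch (Python; the function name choose_area and the threshold parameter are assumptions for illustration, not the patented implementation). It maps a block's intensity classification and the request size to a target area, with the large-write threshold treated as a parameter.

# Illustrative placement rule for Hybrid-RAID (names and threshold are assumptions).
def choose_area(block_class: str, request_size: int, large_write_threshold: int) -> str:
    """Return 'RAID5' or 'RAID1' for a request touching a classified data block."""
    if block_class == "read-intensive":
        return "RAID5"                      # RAID5 area offers the better read service
    if block_class == "write-intensive":
        # Large writes fill whole stripes cheaply on RAID5; small writes go to RAID1.
        return "RAID5" if request_size >= large_write_threshold else "RAID1"
    return "RAID1"                          # mixed blocks are serviced by RAID1 directly

# Example: 4-disk array, 64 KB chunks -> 3 data chunks per stripe = 192 KB threshold.
print(choose_area("write-intensive", 256 * 1024, 192 * 1024))  # -> RAID5
print(choose_area("write-intensive", 64 * 1024, 192 * 1024))   # -> RAID1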
Preferably, when the load identification module identifies the read/write intensity attribute, the granularity of the data is a factor that affects both the accuracy of identification and the system overhead; the read/write intensity attribute of a data block is judged from its historical access record, wherein a 5-bit bitmap is used to record the proportion of reads and writes among the first 64 accesses to each data block.
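The exact encoding of the 5-bit record is not spelled out here, so the sketch below (Python; BlockStats and its thresholds are assumptions for illustration) simply keeps per-block read and write counters capped at 64 observations and treats a counter reaching 32 as the "full" condition described later in this disclosure.

# Illustrative load-identification bookkeeping (1 MB granularity; names are assumptions).
from collections import defaultdict

WINDOW = 64            # first 64 accesses are recorded per data block
FULL_AT = WINDOW // 2  # a counter reaching 32 makes the record "full"

class BlockStats:
    def __init__(self):
        self.reads = 0
        self.writes = 0

    def record(self, is_write: bool):
        if self.reads + self.writes < WINDOW:   # only the first 64 accesses count
            if is_write:
                self.writes += 1
            else:
                self.reads += 1

    def is_full(self) -> bool:
        return self.reads >= FULL_AT or self.writes >= FULL_AT

    def classify(self) -> str:
        """'read-intensive', 'write-intensive', or 'mixed' once the record is full."""
        if not self.is_full():
            return "unknown"
        if self.reads >= FULL_AT and self.writes < FULL_AT:
            return "read-intensive"
        if self.writes >= FULL_AT and self.reads < FULL_AT:
            return "write-intensive"
        return "mixed"

stats_table = defaultdict(BlockStats)   # keyed by 1 MB block number (LBA // block size)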
Preferably, when the data block is write-intensive data, the request redirection module needs to judge whether the request is a large or small write according to the stripe size of the current RAID5, where the specific threshold depends on the stripe size and the number of member disks in the array.
Preferably, if the storage device is a disk array composed of four solid-state disks and the block (chunk) size is 64 KB, the request redirection module is specifically configured to judge the request as a large write when its size is greater than or equal to 192 KB, and as a small write otherwise; if the request is a large write, it is still redirected to the RAID5 area, and if it is a small write, it is redirected to the RAID1 area; when a large write request is sent to the RAID5 area, performance is best when the request size is exactly equal to the stripe size of the array.
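The 192 KB figure follows from the stripe geometry: with four member disks, one chunk per stripe holds parity, so a full RAID5 stripe carries three data chunks of 64 KB each. The short check below recomputes that threshold for arbitrary configurations (the helper name full_stripe_data_size is an assumption for illustration).

# Full-stripe data size for RAID5: (member disks - 1) data chunks per stripe.
def full_stripe_data_size(num_disks: int, chunk_size_kb: int) -> int:
    return (num_disks - 1) * chunk_size_kb

assert full_stripe_data_size(4, 64) == 192   # the 192 KB large-write threshold above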
Preferably, the request redirection module is configured to maintain, through the mapping table, the relationship between upper-layer access addresses and lower-layer storage device addresses when performing redirection. For a received write request, it first judges whether the I/O request carries a write-intensive mark, and if not, it issues the write request directly to the RAID1 area; if the mark exists, it goes on to judge the state of the load identification bitmap. The load identification bitmap records the numbers of read and write accesses to a data block and is considered filled when the read or write count accumulates to a certain value: because a 5-bit bitmap is used to record the read/write proportion of a data block over its first 64 accesses, the bitmap entry is considered full when the read count or the write count reaches half of the total recorded accesses, namely 32. When judging the state of the load identification bitmap, if the bitmap entry is not full, the write counter is simply incremented by 1 and the request is issued to the RAID1 area; if the bitmap entry is full, the mapping table is queried first; if it hits, the write request is sent directly to the hit area, otherwise it goes on to judge whether the current request is a large write. If it is a large write, it may be redirected to the RAID5 area; otherwise it is redirected to the RAID1 area and the mapping relation is inserted into the mapping table at the same time. Similarly to the write request, for a read request it first judges whether the data block carries a read-intensive mark; if it does, the read request is redirected directly to the RAID5 area with its better read performance, and if not, the state of the load identification bitmap is judged. If the bitmap entry is not full, the read counter is first incremented by 1 and the request is then redirected to the RAID1 area; when the bitmap entry is full, the load identification module judges whether the data block is read-intensive, and if not, the request is issued to the RAID1 area; if it is, it goes on to judge whether a mapping relation for the data exists in the mapping table; if the mapping relation exists in the mapping table, the storage location of the data is obtained directly from the mapping table; otherwise the RAID5 area is selected, and finally the request is issued to the mapped area.
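The write path of this scheme can be summarized in code. The sketch below (Python; handle_write, mapping_table and the other names are hypothetical, the large-write threshold reuses the 192 KB example configuration, and stats is assumed to be the BlockStats record from the earlier sketch) follows the same decision order: write-intensive mark, bitmap fullness, mapping-table hit, then request size.

# Illustrative write-request redirection (decision order follows the text; names assumed).
LARGE_WRITE_THRESHOLD = 192 * 1024          # full-stripe size of the example 4-disk array

mapping_table = {}                          # block id -> area it was redirected to

def handle_write(block_id: int, size: int, write_intensive_mark: bool,
                 stats: "BlockStats") -> str:
    if not write_intensive_mark:            # no mark: issue directly to RAID1
        return "RAID1"
    if not stats.is_full():                 # bitmap entry not full yet
        stats.record(is_write=True)         # bump the write counter
        return "RAID1"
    if block_id in mapping_table:           # mapping-table hit: reuse the recorded area
        return mapping_table[block_id]
    if size >= LARGE_WRITE_THRESHOLD:       # large write: full-stripe friendly, keep on RAID5
        return "RAID5"
    mapping_table[block_id] = "RAID1"       # small write: redirect and remember the mapping
    return "RAID1"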
Preferably, the system is a solid state disk based storage system.
The invention also provides a performance optimization method in the full flash memory array realized by the system.
(III) advantageous effects
The invention analyzes the current state of research on applying RAID technology to solid-state disk arrays by studying the working principle of the solid-state disk and the limitations of flash-based solid-state disks. On this basis, the access characteristics exhibited by application loads are analyzed and, drawing on the respective advantages of RAID5 and RAID1, a hybrid array (Hybrid-RAID) that mixes RAID5 and RAID1 layouts according to load request size is proposed. Hybrid-RAID lays out data according to the read/write intensity of user requests and the request size, and responds to external requests by combining the preferences of the underlying RAID5 and RAID1 areas for different loads. Finally, performance tests on real devices show that, compared with the traditional disk arrays RAID5 and RAID1, the Hybrid-RAID scheme reduces average response time by 40.5% and 35.6% respectively.
Drawings
FIG. 1 is a schematic diagram of a conventional SSD system;
FIG. 2 is a schematic diagram of the system of the present invention;
FIG. 3 is a flow diagram of write request processing implemented in accordance with the system of the present invention;
FIG. 4 is a flow chart of read request processing implemented based on the system of the present invention.
Detailed Description
In order to make the objects, contents, and advantages of the present invention clearer, the following detailed description of the embodiments of the present invention will be made in conjunction with the accompanying drawings and examples.
The invention relates to a performance optimization system in a full flash memory array. Because of the limitations of mechanical rotation, the traditional magnetic disk has become the main performance bottleneck of large-scale storage systems. Flash-based solid-state disks are widely used in personal computers and enterprise-class storage systems because of their excellent performance and low power consumption. However, since the solid-state disk has device characteristics different from those of the magnetic disk, simply applying conventional storage software to the solid-state disk cannot fully exploit its performance advantages. If existing disk array technology is applied directly to the solid-state disk, the adverse effect of the device's garbage collection is aggravated, the performance of the solid-state disk is reduced, and its service life is shortened. Likewise, if a key-value storage system is deployed directly on the solid-state disk, software and hardware cannot be co-optimized to achieve the best performance. The flash-based solid-state disk has limitations such as poor small-write performance and a limited number of erase cycles, so disk array technology cannot simply be applied to it. Therefore, the invention optimizes the performance of a solid-state-disk-based storage system and provides an effective solution for developing and constructing next-generation flash memory systems. The invention rearranges and optimizes the solid-state disk array storage system according to the characteristics of flash memory, and by testing on an open-channel SSD storage platform, uses the characteristics of flash memory to optimize the storage system, achieving good results.
As shown in FIG. 2, the improved solid-state-disk-based storage system provided by the present invention is used for implementing performance optimization in a full flash memory array and comprises two parts, namely a RAID controller and a storage device, where the RAID controller includes a load identification module, a request redirection module and a mapping table.
the system is mainly in butt joint with the existing RAID controller module through a load identification module and a request redirection module. The load identification module is used for predicting and marking the read-write intensive attribute of the data block to be processed according to the read-write access proportion in the data block historical record. The request redirection module is used for processing the I/O request of the user and redirecting the upper application request to the physical storage partition with the optimal response according to the reading and writing intensive attribute of the data block and the size of the I/O request. In addition, the RAID controller is also provided with a mapping table, and the data redirected to the solid state disk is recorded by adopting a data structure of a 'B-tree', so that the system overhead caused by query is reduced. The mapping table is stored in a nonvolatile storage medium of the RAID controller, so that data can be guaranteed not to be lost when the system is powered off or crashed. The storage device part of the system is composed of a plurality of solid-state disks, each solid-state disk is divided into two RAIS storage areas of RAID5 and RAID1 according to requirements, and the sizes of the two RAIS storage areas can be distributed according to requirements.
After the system receives an I/O request, the load identification module first predicts and marks the read/write intensity attribute of the data block in the I/O request according to the read/write access proportion in the block's history record, and then the request redirection module redirects the I/O request, rearranging it according to the read/write intensity attribute of the data block and the size of the I/O request. When the data block is read-intensive, it is serviced by the RAID5 area, which offers better read performance; when the data block is write-intensive, whether it is serviced by the RAID5 area or the RAID1 area is determined by the size of the I/O request; and if the data block is mixed data, it is serviced directly by the RAID1 area.
In terms of data layout, a "lazy update" mode is adopted: in the early stage of system operation, new data is not yet inserted into the mapping table and is laid out directly in the RAID5 area; after the request volume in the system reaches a certain value, the load identification module judges the read/write intensity of data blocks from their historical access records, at which point small-write-intensive data blocks in the RAID5 area are migrated to the RAID1 area and inserted into the mapping table, thereby optimizing writes.
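A minimal sketch of this lazy-update policy might look like the following (Python; the warm-up threshold and the function name maybe_migrate are assumptions, and mapping_table, LARGE_WRITE_THRESHOLD and BlockStats reuse names from the earlier sketches): new blocks stay on RAID5 and unmapped until enough requests have been observed, after which small-write-intensive blocks are migrated to RAID1 and recorded in the mapping table.

# Illustrative lazy-update migration (threshold and helper names are assumptions).
WARMUP_REQUESTS = 100_000        # how many requests before migration decisions start

total_requests = 0

def maybe_migrate(block_id: int, stats: "BlockStats", request_size: int) -> None:
    """Migrate a small-write-intensive block from RAID5 to RAID1 once the system is warm."""
    global total_requests
    total_requests += 1
    if total_requests < WARMUP_REQUESTS:
        return                                    # early stage: data stays on RAID5, unmapped
    if stats.classify() == "write-intensive" and request_size < LARGE_WRITE_THRESHOLD:
        if mapping_table.get(block_id) != "RAID1":
            # (data movement from the RAID5 area to the RAID1 area would happen here)
            mapping_table[block_id] = "RAID1"     # record the new home in the mapping table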
In practical applications, the read/write intensity attribute of a given data block changes over time, so judging the read/write intensity attribute of data blocks is a challenge for the system. When identifying read/write intensity, the granularity of the data is an important factor affecting both the accuracy of identification and the system overhead: the smaller the granularity, the higher the accuracy but the larger the bookkeeping overhead; the larger the granularity, the lower the accuracy but the smaller the overhead. Therefore, in practice a trade-off should be made according to the specific situation. The invention uses 1 MB blocks as the granularity, judges the read/write intensity attribute of a data block mainly from its historical access record, and uses a 5-bit bitmap to record the proportion of reads and writes among the first 64 accesses to each data block.
After the read/write intensity attribute has been judged, the request redirection module must combine the proportion of read and write accesses recorded in the bitmap, the request size, and the request preferences of RAID5 and RAID1 to make the redirection choice. When the data block is read-intensive, it is redirected directly to the RAID5 area, because the read performance of RAID5 is very high. When the data block is write-intensive, whether the request counts as a large or small write must be judged from the stripe size of the current RAID5, with the threshold depending on the stripe size and the number of member disks in the array; for example, for a disk array composed of four solid-state disks with a chunk size of 64 KB, a request is judged to be a large write when its size is greater than or equal to 192 KB, and a small write otherwise. If the request is a large write, it is still redirected to the RAID5 area, and if it is a small write, it is redirected to the RAID1 area. When a large write request is sent to the RAID5 area, performance is best when the request size is exactly equal to the stripe size of the array, which fully exploits RAID5's full-stripe write advantage, reduces the write amplification caused by frequent parity updates, and improves the parallel write capability of the array. When the data block is mixed data, it is finally redirected to the RAID1 area, whose small-write performance is relatively good, because the random write performance of the RAID5 algorithm is poor.
The request redirection module is mainly responsible for processing user requests, and maintains the relationship between upper-layer access addresses and lower-layer storage device addresses through the mapping table. For a received write request, the request redirection module first judges whether the I/O request carries a write-intensive mark, and if not, it issues the write request directly to the RAID1 area; if the mark exists, it goes on to judge the state of the load identification bitmap. The load identification bitmap records the numbers of read and write accesses to a data block and is considered filled when the read or write count accumulates to a certain value. The invention uses a 5-bit bitmap to record the read/write counts of a data block, covering the read/write proportion of its first 64 accesses; when the read count or the write count of the data block reaches half of the total recorded accesses, namely 32, the bitmap entry is considered full. When judging the state of the load identification bitmap, if the bitmap entry is not full, the write counter is simply incremented by 1 and the request is issued to the RAID1 area; if the bitmap entry is full, the mapping table is queried first; if it hits, the write request is sent directly to the hit area, otherwise it goes on to judge whether the current request is a large write. If it is a large write, it may be redirected to the RAID5 area; otherwise it is redirected to the RAID1 area and the mapping relation is inserted into the mapping table at the same time. Because the probability of a write-intensive data block being redirected is low and the resulting mapping table modifications are few, this layout scheme does not impose a large overhead on the system, as shown in FIG. 3. Similarly to a write request, for a read request the system first judges whether the data block carries a read-intensive mark. If it does, the read request is redirected directly to the RAID5 area with its better read performance; if not, the state of the load identification bitmap is judged. If the bitmap entry is not full, the read counter is first incremented by 1 and the request is then redirected to the RAID1 area. When the bitmap entry is full, the load identification module judges whether the data block is read-intensive; if not, the request is issued to the RAID1 area; if it is, the system goes on to judge whether a mapping relation for the data exists in the mapping table. If the mapping relation exists in the mapping table, the storage location of the data is obtained directly from the mapping table; if not, the RAID5 area is selected, and finally the request is issued to the mapped area, as shown in FIG. 4.
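The read path described above and in FIG. 4 can be sketched in the same style as the earlier write-path example (Python; handle_read and the other names are hypothetical, mapping_table reuses the dictionary from the write-path sketch, and stats is assumed to be the BlockStats record from the load-identification sketch).

# Illustrative read-request redirection (decision order follows the description; names assumed).
def handle_read(block_id: int, read_intensive_mark: bool, stats: "BlockStats") -> str:
    if read_intensive_mark:                     # marked read-intensive: go straight to RAID5
        return "RAID5"
    if not stats.is_full():                     # bitmap entry not full yet
        stats.record(is_write=False)            # bump the read counter
        return "RAID1"
    if stats.classify() != "read-intensive":    # full but not read-intensive
        return "RAID1"
    if block_id in mapping_table:               # mapping relation exists: use recorded location
        return mapping_table[block_id]
    return "RAID5"                              # otherwise choose the RAID5 area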
It can be seen that the present invention provides a hybrid data layout strategy for a flash memory array (Hybrid-RAID) to improve the performance of a solid-state disk storage system. Hybrid-RAID divides the solid-state disk array into a RAID5 area and a RAID1 area, and rearranges data according to the preferences of RAID5 and RAID1 for different load request sizes and read/write intensity attributes, thereby reducing the wear on the solid-state storage media caused by frequent updates of parity information and improving the performance of the solid-state disk storage system.
The above description is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, several modifications and variations can be made without departing from the technical principle of the present invention, and these modifications and variations should also be regarded as the protection scope of the present invention.

Claims (10)

8. The system of claim 5, wherein the request redirection module is configured to maintain, through the mapping table, the relationship between upper-layer access addresses and lower-layer storage device addresses when performing redirection; for a received write request, it first judges whether the I/O request carries a write-intensive mark, and if not, it issues the write request directly to the RAID1 area; if the mark exists, it goes on to judge the state of the load identification bitmap; the load identification bitmap records the numbers of read and write accesses to a data block and is considered filled when the read or write count accumulates to a certain value: because a 5-bit bitmap is used to record the read/write proportion of a data block over its first 64 accesses, the bitmap entry is considered full when the read count or the write count reaches half of the total recorded accesses, namely 32; when judging the state of the load identification bitmap, if the bitmap entry is not full, the write counter is simply incremented by 1 and the request is issued to the RAID1 area; if the bitmap entry is full, the mapping table is queried first; if it hits, the write request is sent directly to the hit area, otherwise it goes on to judge whether the current request is a large write; if it is a large write, it may be redirected to the RAID5 area; otherwise it is redirected to the RAID1 area and the mapping relation is inserted into the mapping table at the same time; similarly to the write request, for a read request it first judges whether the data block carries a read-intensive mark; if it does, the read request is redirected directly to the RAID5 area with its better read performance, and if not, the state of the load identification bitmap is judged; if the bitmap entry is not full, the read counter is incremented by 1 and the request is then redirected to the RAID1 area; when the bitmap entry is full, the load identification module judges whether the data block is read-intensive, and if not, the request is issued to the RAID1 area; if it is, it goes on to judge whether a mapping relation for the data exists in the mapping table; if the mapping relation exists in the mapping table, the storage location of the data is obtained directly from the mapping table; otherwise the RAID5 area is selected, and finally the request is issued to the mapped area.
CN202010887080.2A, filed 2020-08-28: Performance optimization system in full flash memory array (Active; granted as CN112000296B)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202010887080.2A | 2020-08-28 | 2020-08-28 | Performance optimization system in full flash memory array (CN112000296B)

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202010887080.2A | 2020-08-28 | 2020-08-28 | Performance optimization system in full flash memory array (CN112000296B)

Publications (2)

Publication Number | Publication Date
CN112000296A | 2020-11-27
CN112000296B (en) | 2024-04-09

Family

ID=73465410

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202010887080.2A | Performance optimization system in full flash memory array (Active; granted as CN112000296B) | 2020-08-28 | 2020-08-28

Country Status (1)

Country | Link
CN (1) | CN112000296B (en)


Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US20030177309A1 (en)* | 2002-03-14 | 2003-09-18 | Kang Dong Jae | Coherence preservation method of duplicated data in RAID subsystem
CN102521068A (en)* | 2011-11-08 | 2012-06-27 | 华中科技大学 | Reconstructing method of solid-state disk array
CN103699457A (en)* | 2013-09-26 | 2014-04-02 | 深圳市泽云科技有限公司 | Method and device for restoring disk arrays based on stripping
CN103902474A (en)* | 2014-04-11 | 2014-07-02 | 华中科技大学 | Mixed storage system and method for supporting solid-state disk cache dynamic distribution
CN104536903A (en)* | 2014-12-25 | 2015-04-22 | 华中科技大学 | Mixed storage method and system for conducting classified storage according to data attributes
CN104809075A (en)* | 2015-04-20 | 2015-07-29 | 电子科技大学 | Solid recording device and method for accessing in real time and parallel processing
CN104778077A (en)* | 2015-04-27 | 2015-07-15 | 华中科技大学 | High-speed extranuclear graph processing method and system based on random and continuous disk access
CN105045540A (en)* | 2015-08-28 | 2015-11-11 | 厦门大学 | Data layout method of solid-state disk array
CN108121503A (en)* | 2017-08-08 | 2018-06-05 | 鸿秦(北京)科技有限公司 | A kind of NandFlash address of cache and block management algorithm
US20190114258A1 (en)* | 2017-10-16 | 2019-04-18 | Fujitsu Limited | Storage control apparatus and method of controlling garbage collection
CN108763099A (en)* | 2018-04-18 | 2018-11-06 | 华为技术有限公司 | Startup method, apparatus, electronic equipment and the storage medium of system
CN109144411A (en)* | 2018-07-24 | 2019-01-04 | 中国电子科技集团公司第三十八研究所 | Data center's hybrid magnetic disc array and its data dynamic migration strategy
US20200042235A1 (en)* | 2018-08-01 | 2020-02-06 | EMC IP Holding Company LLC | Fast input/output in a content-addressable storage architecture with paged metadata
CN110413537A (en)* | 2019-07-25 | 2019-11-05 | 杭州电子科技大学 | A flash conversion layer and conversion method for hybrid solid-state hard drives

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN113806083A (en)* | 2021-09-06 | 2021-12-17 | 杭州迪普科技股份有限公司 | Method and device for processing aggregation stream data
CN113806083B (en)* | 2021-09-06 | 2023-07-25 | 杭州迪普科技股份有限公司 | Method and device for processing aggregate flow data

Also Published As

Publication number | Publication date
CN112000296B (en) | 2024-04-09

Similar Documents

Publication | Title
Yang et al. | Garbage collection and wear leveling for flash memory: Past and future
US8751731B2 (en) | Memory super block allocation
CN108121503B (en) | NandFlash address mapping and block management method
CN106681654B (en) | Mapping table loading method and memory storage apparatus
CN101727295B (en) | Data writing and reading method based on virtual block flash memory address mapping
US20140156966A1 (en) | Storage control system with data management mechanism of parity and method of operation thereof
US20100250826A1 (en) | Memory systems with a plurality of structures and methods for operating the same
CN106548789A (en) | Method and apparatus for operating stacked tile type magnetic recording equipment
KR101480424B1 (en) | Apparatus and method for optimization for improved performance and enhanced lifetime of hybrid flash memory devices
CN102981969A (en) | Method for deleting repeated data and solid hard disc thereof
KR20220077573A (en) | Memory system and operation method thereof
US11620072B2 (en) | Memory management method, memory control circuit unit and memory storage apparatus
CN106775453B (en) | A kind of construction method mixing storage array
CN115857793A (en) | A hard disk scanning method and device
CN102867046A (en) | Solid state disk based database optimization method and system
CN112000296B (en) | Performance optimization system in full flash memory array
Li et al. | Latency aware page migration for read performance optimization on hybrid ssds
CN102402396B (en) | Composite storage device and its composite storage media controller and addressing method
KR101070511B1 (en) | Solid state drive controller and method for operating of the solid state drive controller
CN101576851B (en) | Storage unit configuration method and applicable storage medium
US12254183B2 (en) | Storage device including non-volatile memory device and operating method of storage device
US20250217036A1 (en) | Key-value storage method and system
Yao | On Suppressing Tail Latency via Synergizing the Erase Duality of Emerging Bit-Alterable NAND Flash
박지성 | Performance and Lifetime Optimizations for Large-Capacity NAND Storage Systems
Iaculo et al. | Introduction to ssd

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
