HK1185170B - Enhancing the lifetime and performance of flash-based storage - Google Patents

Enhancing the lifetime and performance of flash-based storage

Info

Publication number
HK1185170B
HK1185170B (application HK13112391.5A)
Authority
HK
Hong Kong
Prior art keywords
data
flash
cache
storage device
based storage
Prior art date
Application number
HK13112391.5A
Other languages
Chinese (zh)
Other versions
HK1185170A (en)
Inventor
Ky. Srinivasan
Original Assignee
Microsoft Technology Licensing, LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Microsoft Technology Licensing, LLC
Publication of HK1185170A
Publication of HK1185170B


Description

Enhancing the lifetime and performance of flash-based storage
Technical Field
The present invention relates to enhancing the life and performance of flash-based storage.
Background
Data storage hardware has changed in recent years, making flash-based storage much more common. Rotating media, such as hard disk drives and optical disk drives, are increasingly being replaced by flash-based storage, such as Solid State Disk (SSD) drives, which have fewer or no moving parts. Solid state disks are much more stable and are not subject to many of the environmental conditions that are detrimental to earlier media. For example, rotating media are particularly susceptible to vibration, such as can occur when a mobile computing device containing rotating media is dropped. Flash-based storage also typically has a much faster access time, and each region of the storage can be accessed with uniform latency. Rotating media exhibit different speed characteristics based on how close the data is stored to the central spindle, since data farther from the spindle passes under the read head faster. SSDs, on the other hand, take a fixed amount of time to access a given memory location and have no traditional seek time (the time to move the read head to a given location on rotating media).
Unfortunately, SSDs do introduce new restrictions on how they can be read, written, and in particular erased. Typical flash-based storage can only erase one block at a time, although non-overlapping bits within a block can be set at any time. In a typical computing system, the operating system writes a first set of data to an SSD page, and if the user or system modifies the data, the operating system either rewrites the entire page or some portion of the data to a new location, or erases the entire block and rewrites the entire contents of the page. SSD lifetime is determined by the average number of times a block can be erased before that region of the drive can no longer maintain data integrity (or at least cannot be effectively erased and rewritten). Repeated erasing of blocks and rewriting of pages by the operating system only hastens the SSD's expiration.
Several techniques have been introduced to help SSDs last longer. For example, many drives now internally perform wear leveling, where the drive's firmware selects where data is stored in a manner that keeps each block erased approximately the same number of times. This means the drive will not fail because one area is over-used while another goes unused (which over time could cause the drive to appear to shrink or to fail completely). In addition, the TRIM command was introduced into the Advanced Technology Attachment (ATA) standard to allow the operating system to tell the SSD which data blocks are no longer in use so that the SSD can decide when to erase them. Historically, disk drives of all types have not known which blocks are in use. This is because the operating system writes data and then typically only sets a flag at the file system level when the data is deleted. Because the drive is typically not aware of the file system, the drive cannot distinguish between blocks that are in use by the file system and blocks that are no longer in use because the file system has marked their data as deleted. The TRIM command provides this information to the drive so that the drive can reclaim unused blocks.
While these techniques are helpful, they still rely on the drive to primarily manage itself and do not provide sufficient communication between the drive and the operating system to allow intelligent decisions to be made outside of the drive to extend its life. Writes today occur at a frequency determined by the application, and thus incorporate no knowledge of how the device is written over time and across applications. Applications may repeatedly modify and save the same data block, resulting in multiple erase and rewrite cycles on flash-based devices and correspondingly more wear on the device.
Disclosure of Invention
A storage management system that decouples application write requests from write requests to flash-based storage devices is described herein. By placing a software intelligence layer between the application's request to write data and the storage device, the system is able to make more efficient decisions about when and where to write data, which reduces wear on the storage device and improves its performance. Wear is a problem for SSDs and other flash-based storage devices, which makes data identification and placement techniques important both for extending the life of the flash memory used by SSDs and for improving performance. An application has a set of performance characteristics and writes data at a frequency suitable for the application, but this is not necessarily efficient for the hardware. By analyzing how the application is using the data, the system can strategically place the data in the storage device, or even avoid using the storage device for certain operations, to minimize wear. One technique for doing so is to create an in-memory cache that acts as a buffer between application requests and the storage hardware. The system may provide what appears to be write access to a flash-based storage device, but in practice the device may not be written until the volatility of the data subsides. By caching writes in memory, the system can reduce the number of writes to the flash-based device. Thus, the storage management system leverages the operating system's knowledge of how data has been and will be used in order to place data on a flash-based storage device in a more efficient manner.
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Drawings
FIG. 1 is a block diagram that illustrates components of the storage management system, in one embodiment.
FIG. 2 is a flow diagram that illustrates processing of the storage management system to process a write request from a software application, in one embodiment.
FIG. 3 is a flow diagram that illustrates processing of the storage management system to flush data to a flash-based storage device, in one embodiment.
FIG. 4 is a block diagram that illustrates operational components of the storage management system at a high level, in one embodiment.
Detailed Description
A storage management system that decouples application write requests from write requests to flash-based storage devices is described herein. By placing a software intelligence layer between the application's request to write data and the storage device, the system is able to make more efficient decisions about when and where to write data, which reduces wear on the storage device and improves its performance. Wear is a problem for SSDs and other flash-based storage devices, which makes data identification and placement techniques important both for extending the life of the flash memory used by SSDs and for improving performance. An application has a set of performance characteristics and writes data at a frequency suitable for the application, but this is not necessarily efficient for the hardware. The operating system has a vantage point that includes information about the requests of the application (or applications) and the profile of the hardware being used for storage.
Wear leveling in Solid State Drives (SSDs) is used to reclaim memory and extend the life of flash-based storage devices. Without wear leveling, heavily written locations would wear out quickly while other locations might rarely be used. By analyzing how the application is using the data, the system can strategically place the data in the storage device, or even avoid using the storage device for certain operations, to minimize wear. One technique for doing so is to create an in-memory cache that acts as a buffer between application requests and the storage hardware. The cache acts like copy-on-write (COW) technology, in which multiple parties are given access to shared data that each believes is a private copy, but no private copy is made until one party modifies the data. Similarly, the system may provide what appears to be write access to a flash-based storage device, but in practice may not write to the device until the volatility of the data subsides. For example, consider an application that reads a file into memory, modifies the file every few seconds for a period of minutes, and then closes the file. Under the common flash-based storage paradigm, each write can involve consuming a new block or repeatedly erasing a block to write the modifications. By caching these writes in memory, the system can reduce the number of writes to one when the file is closed, where the single write incorporates all the changes made by the application. In this way, the flash-based device has only one block erased, rather than multiple erases over time. Thus, the storage management system leverages the operating system's knowledge of how data has been and will be used in order to place data on a flash-based storage device in a more efficient manner.
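The write-coalescing behavior described above can be illustrated with a minimal sketch. All names here (`FlashDevice`, `WriteCache`) are illustrative, not taken from the patent; the device model simply counts block writes as a proxy for wear:

```python
class FlashDevice:
    """Stand-in for a flash-based device that counts block writes (a wear proxy)."""
    def __init__(self):
        self.blocks = {}
        self.write_count = 0

    def write_block(self, lba, data):
        self.blocks[lba] = data
        self.write_count += 1


class WriteCache:
    """Buffers application writes; the device is only touched at flush time."""
    def __init__(self, device):
        self.device = device
        self.dirty = {}          # lba -> latest data for that block

    def write(self, lba, data):
        self.dirty[lba] = data   # a later write simply supersedes an earlier one

    def flush(self):
        # One device write per dirty block, regardless of how many
        # application writes targeted that block.
        for lba, data in self.dirty.items():
            self.device.write_block(lba, data)
        self.dirty.clear()


dev = FlashDevice()
cache = WriteCache(dev)
for i in range(100):             # application modifies the same block 100 times
    cache.write(7, f"revision {i}".encode())
cache.flush()                    # file "closed": a single flush
print(dev.write_count)           # -> 1: one device write absorbed 100 app writes
```

As in the file-editing example above, a hundred application writes collapse into a single device write carrying the final contents.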
As another example, consider modifications made sequentially to two adjacent or contiguous storage locations stored in the same block of flash memory. If the operating system follows the application's requests and writes a first set of modifications to the first location followed by a second set of modifications to the second location, this may result in consuming two new blocks, or erasing the existing block twice, to write the modifications. By caching the separate writes made by the application and observing that these writes affect the same block of storage, the system can reduce the number of new blocks used or the number of erasures by flushing the cache and committing both operations in the same action. As another example, many applications write temporary files, and during the time that an application creates a file, modifies it several times, and finally deletes it, a significant amount of wear may occur on flash-based devices, which the system can reduce or avoid altogether.
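The adjacent-write example above can be made concrete by mapping logical pages to erase blocks. The geometry here (4 pages per erase block) is an assumed figure for illustration only:

```python
PAGES_PER_BLOCK = 4  # assumed geometry, purely illustrative

def erase_block(page):
    """Map a logical page number to the erase block containing it."""
    return page // PAGES_PER_BLOCK

def erases_if_flushed_together(pages):
    """Block erases needed if all pending pages are committed in one action."""
    return len({erase_block(p) for p in pages})

def erases_if_flushed_separately(pages):
    """Worst case: each write erases its containing block anew."""
    return len(pages)

pending = [8, 9]                              # adjacent pages, both in block 2
print(erases_if_flushed_separately(pending))  # -> 2
print(erases_if_flushed_together(pending))    # -> 1
```

Observing that both writes land in the same erase block is exactly what lets the cache commit them in one action.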
The storage management system provides a strategy for enhancing the lifetime of flash-based memory devices that reduces device wear and enhances the read/write performance of the devices. Because the latency of reading and writing from a system or other memory is typically less than the average latency of flash-based storage, writes to and reads from a cache in memory are faster than reads/writes to flash-based devices. In some embodiments, the system implements the following policies: managing writes to flash devices in operating system device drivers so that all users of flash-based devices (e.g., applications, file systems, operating system services, etc.) can benefit significantly from these improvements. This also allows the system to make cross-application improvements to performance, such as when two applications access the same data.
After taking control of the flash-based device, a device driver or other embodiment of the storage management system checks the geometry and capabilities of the storage device and allocates an in-memory cache for the device. The cache is used to retire writes directed to the flash device and can respond to read requests. Thus, for example, if logical block number X is to be modified on the flash device, the system directs the modification of logical block X to the in-memory cache managed by the driver. In essence, the driver implements a "copy-on-write" policy for the managed flash device by caching writes in the memory cache. The driver maintains sufficient state to retrieve the correct logical block on the read side. When a read is issued, the driver checks whether the block is currently cached; if so, the read is satisfied from the cache; if not, the read is satisfied from the flash memory device. The cached writes are eventually synchronized with the data on the flash device to ensure write persistence, but the number and frequency of writes to the flash device may differ from the number and frequency of application requests to write to it. The system carefully manages this synchronization to 1) improve the availability, performance, and persistence of the data and 2) minimize wear on the flash memory device.
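The read-side logic described above reduces to a simple lookup order: check the cache first, fall back to the device. This sketch (names are illustrative) shows why the cached copy must shadow the device copy:

```python
class CachedFlash:
    """Minimal sketch of the driver's copy-on-write read/write paths."""
    def __init__(self, device_blocks):
        self.device = device_blocks   # lba -> bytes, contents on the flash device
        self.cache = {}               # lba -> bytes, pending (newer) writes

    def write(self, lba, data):
        self.cache[lba] = data        # copy-on-write: the device is not touched

    def read(self, lba):
        # A cached copy is newer than the on-device copy by construction,
        # so it must take precedence to keep reads consistent.
        if lba in self.cache:
            return self.cache[lba]
        return self.device[lba]


store = CachedFlash({3: b"old"})
store.write(3, b"new")
print(store.read(3))    # -> b'new': the cache shadows the stale device copy
print(store.device[3])  # -> b'old': the device has not been written yet
```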
The cached writes are periodically synchronized back to the flash device to ensure that writes are not lost in the event of a system crash. Since the size of the write cache is typically small compared to the capacity of the flash memory device, the system can also synchronize when the free blocks available in the write cache are exhausted.
FIG. 1 is a block diagram that illustrates components of the storage management system, in one embodiment. The system 100 includes a flash-based storage device 110, a device interface component 120, a data cache component 130, a cache write component 140, a cache read component 150, a cache flush component 160, an application interface component 170, and a fault management component 180. Each of these components is discussed in further detail herein.
The flash-based storage device 110 is a storage device that includes at least some flash-based non-volatile memory. Flash-based memory devices may include SSDs, Universal Serial Bus (USB) drives, storage built on motherboards, storage built into mobile smartphones, flash storage combined with traditional rotating storage media, and other forms of storage. Flash-based memory devices typically include NAND or NOR flash memory, but may include other forms of non-volatile Random Access Memory (RAM). Flash-based storage devices are typically characterized by fast access times, block-based erasures, and a limited number of non-overlapping writes that can be performed per page. A flash drive that is no longer able to be written to is said to have expired or failed.
The device interface component 120 provides an interface between the other components of the system 100 and the flash-based storage device 110. Device interface component 120 may utilize one or more operating system Application Programming Interfaces (APIs) to access the storage device and may use one or more protocols, such as Serial ATA (SATA), Parallel ATA (PATA), USB, or other protocols. Component 120 may also understand one or more proprietary or device-specific protocols supported by one or more devices or firmware that allow system 100 to retrieve additional information describing the available storage locations and layout of flash-based storage device 110.
The data caching component 130 provides in-memory caching of data that an application requests to write to the device. The system 100 may or may not flush the data in the cache to the device 110, or may select a particular opportunity to do so, depending on the decisions of the cache flush component 160 as further described herein. The data caching component 130 provides an intermediate location where data an application has requested to store can be held without yet incurring the penalties of writing the data to flash-based storage, while the system 100 waits to determine whether incurring those penalties is actually necessary, and while the system takes action to reduce them by reducing the number of writes or erases to flash-based storage. At the same time, data held in the cache can be accessed with lower latency than data written to the flash-based storage device 110.
The cache write component 140 receives a request to write data to the flash-based storage device 110 and writes the data to the data cache component 130. The cache write component 140 can determine whether a particular block of the flash-based storage device 110 has already been stored in the cache and can perform bookkeeping to store information in a data structure indicating that new data for the particular block of the device 110 is available in the cache. The cache write component 140 can quickly write data to the cache and can then respond to the requesting application with low latency so that the application can continue with other tasks instead of waiting for the typically slower storage request. Thus, the cache write component 140 decouples applications from the specific device capabilities and parameters of the flash-based storage device 110.
The cache read component 150 receives a request to read data from the flash-based storage device 110 and reads the data from the data cache component 130 or the flash-based storage device 110. Because the cache acts as an intermediary between the device 110 and the application, it is possible that the cache contains information that is more up-to-date than the information stored on the device 110. Because read operations typically rely on receiving the most recent copy of the data, the cache read component 150 determines whether a more recent copy of the data is available in the cache and, if so, services the read request from the cache. If a particular block is not available in the cache, component 150 retrieves the data from the device and optionally caches it in case the application (or another application) subsequently attempts to re-read or modify (i.e., write) the same location. Servicing requests from the cache is faster, so taking these actions improves application performance while reducing wear on the flash-based storage device 110.
The cache flush component 160 copies data stored in the data cache component 130 to the flash-based storage device 110. A flush is the operation by which data that has been held in the cache is eventually reconciled with non-volatile storage so that it is available even in the event of a power failure and across computing sessions that may be separated by long periods. The cache flush component 160 may use various techniques and processes known in the art to determine when to flush the cache. The cache may be flushed because it is full, because data has remained in the cache for more than a threshold amount of time (and is thus at higher risk of loss), because newer data is available that would benefit more from using the cache, and so on. The system 100 may apply any of these techniques to manage the operation of the cache to obtain the desired performance and wear benefits. In some embodiments, system 100 provides an option that an administrator can configure to alter the tradeoff between the improved performance and the data safety obtained by the cache flush component 160.
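Two of the flush criteria named above (cache full; data resident past an age threshold) can be sketched as a single policy check. The structure of the entries and the specific thresholds are assumptions for illustration:

```python
import time

def should_flush(entries, capacity, max_age_s, now=None):
    """Decide whether the cache flush component should act.

    entries: dict mapping logical block -> (data, time_cached).
    capacity: maximum number of cached blocks before a flush is forced.
    max_age_s: residency threshold, since long-cached data is at higher
               risk of loss on a power failure.
    """
    now = time.monotonic() if now is None else now
    if len(entries) >= capacity:                       # cache full
        return True
    # Any entry resident longer than the threshold triggers a flush.
    return any(now - t > max_age_s for _, t in entries.values())


entries = {7: (b"x", 100.0)}
print(should_flush(entries, capacity=8, max_age_s=5.0, now=101.0))  # -> False
print(should_flush(entries, capacity=8, max_age_s=5.0, now=110.0))  # -> True (aged out)
full = dict.fromkeys(range(8), (b"x", 100.0))
print(should_flush(full, capacity=8, max_age_s=5.0, now=101.0))     # -> True (full)
```

An administrator-facing knob, as the text suggests, would simply adjust `max_age_s`: lower values favor data safety, higher values favor fewer device writes.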
The application interface component 170 provides an interface between other components of the system 100 and applications that request reading from and writing to the flash-based storage device 110. The application interface component 170 may interoperate with the operating system as a kernel or user mode device driver. The operating system generally provides one or more APIs for writing data to the storage device, and the component 170 may intercept these requests to the flash-based storage device 110 to enable the techniques described herein to be applied to more efficiently use the device 110. System 100 may operate on any number of flash-based storage devices simultaneously attached to a computing platform. For example, a platform may include multiple flash-based hard drives, USB devices, etc., each of which may be used to store data.
The fault management component 180 optionally handles data access and/or movement to or from the flash-based storage device 110 when the device is near its wear limit or when a fault occurs. Whenever an intermediate storage location, such as the cache described herein, is introduced for data, the likelihood of data loss due to a power failure or other failure increases. Component 180 can assist a user in moving data to a less worn area of device 110, or in retrieving data from outside device 110, to avoid data loss. For example, if a file has not been accessed for seven years, component 180 may suggest that the user allow system 100 to move the file out of a lightly worn location so that other, more important data can be written there. Similarly, component 180 can assist a user in locating files that can be easily replaced (e.g., operating system files that can be reloaded from a compact disc) and that can be deleted or moved to make room for harder-to-replace data files within the less worn areas of the device 110.
The computing device on which the storage management system is implemented may include a central processing unit, memory, input devices (e.g., keyboard and pointing devices), output devices (e.g., display devices), and storage devices (e.g., disk drives or other non-volatile storage media). The memory and storage devices are computer-readable storage media that may be encoded with computer-executable instructions (e.g., software) that implement or enable the system. In addition, the data structures and message structures may be stored on a computer-readable storage medium. Any computer readable media claimed herein includes only those media that fall within the statutory patentable category. The system may also include one or more communication links over which data may be communicated. Various communication links may be used, such as the Internet, a local area network, a wide area network, a point-to-point dial-up connection, a cellular telephone network, and so forth.
Embodiments of the system may be implemented in various operating environments that include personal computers, server computers, hand-held or laptop devices, multiprocessor systems, microprocessor-based systems, programmable consumer electronics, digital cameras, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, set top boxes, systems on a chip (SOCs), and so on. The computer system may be a cellular telephone, personal digital assistant, smart phone, personal computer, programmable consumer electronics, digital camera, or the like.
The system may be described in the general context of computer-executable instructions, such as program modules, executed by one or more computers or other devices. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. Typically, the functionality of the program modules may be combined or distributed as desired in various embodiments.
FIG. 2 is a flow diagram that illustrates processing of the storage management system to process a write request from a software application, in one embodiment.
Beginning in block 210, the system receives a request from an application to write data to a flash-based storage device. The request may originate from a user request received by a software application and then received by an operating system in which the storage management system is implemented as a file system driver or other component for managing data storage on the flash-based device. The received request may include some information about the data, such as the location in the file system where the data will be stored, and may give some information about the purpose, access frequency and access type (read/write) required for the data. For example, if data is being written to a location in the file system reserved for a temporary file, the system may predict that the data will be written frequently for a short period of time and then deleted. Similarly, if a file is opened with the "delete on close" flag set, the system may conclude that the file will be used briefly and then deleted.
Continuing to block 220, the system determines the storage device to which the received request is directed. The system determines whether the target storage device is flash-based and would benefit from in-memory caching and techniques performed by the system. Many computer systems include a combination of flash-based and non-flash-based devices, and some devices combine flash-based portions with rotating or other media. The device may provide information to the operating system via basic input/output system (BIOS) information or Extended Firmware Interface (EFI), which the system may query to determine the physical characteristics of the device.
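The patent does not name a specific query mechanism, but one concrete way to perform the flash-based check in block 230 (on Linux, as an assumed example alongside the BIOS/EFI queries mentioned above) is the kernel's per-device `rotational` attribute, which reports 0 for SSDs and other non-rotating media:

```python
from pathlib import Path

def is_non_rotational(dev_name):
    """Return True if Linux reports the block device as non-rotational
    (typical of flash-based devices), False if rotational, or None if
    the attribute cannot be read (non-Linux system or unknown device)."""
    path = Path("/sys/block") / dev_name / "queue" / "rotational"
    try:
        return path.read_text().strip() == "0"
    except OSError:
        return None

# For a real SSD at /dev/sda this would typically return True;
# for an unknown device name it degrades gracefully:
print(is_non_rotational("no_such_device"))  # -> None
```

Returning `None` rather than guessing lets the caller fall back to the conventional write path of block 230 when the device's physical characteristics cannot be determined.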
Continuing to decision block 230, if the system determines that the storage device is flash-based, the system continues to block 240, else the system completes and processes the write request in a conventional manner. The system may even perform other conventional caching techniques for non-flash-based storage devices. The techniques described herein are compatible with other storage performance enhancement policies and may be coupled with these policies to further improve the performance of the storage device, whether the device is flash-based or not.
Continuing to block 240, the system identifies an in-memory cache for collecting data requested by the application to be written to the storage device. If the cache does not already exist, the system may allocate in-memory cache at this stage to handle current and subsequent data write requests. A cache may be an area in RAM or other memory to which a system can write data without incurring a wear-out or other penalty of immediately writing the data to a flash-based storage device. The caches described herein operate between applications and storage devices and may act independently of and in addition to other caches such as a hardware cache in the device itself, a processor cache of a CPU of a computer system, and so forth. The cache may collect multiple write requests and service read requests before flushing data from the cache to the flash-based storage device.
Continuing to block 250, the system writes the data received with the request to the identified in-memory cache without writing the data to the flash-based storage device. The system may also update metadata or other data structures that indicate which data associated with the flash-based storage is currently held in the cache. At some point, the system flushes the data from the in-memory cache to the flash-based storage device, or determines that writing the data is unnecessary because the data has been deleted or is no longer needed. By combining multiple write requests to a particular block of flash-based memory, the system avoids excessive wear on flash-based devices and improves device performance. At the same time, any request to read write data or data in the same block (if the entire block is cached) can be serviced much faster due to the lower latency of the cache memory.
Continuing to block 260, the system returns a write result to the requesting application indicating success or failure of the write operation to the in-memory cache. Typically, the write will be successful and the application will receive the result in less time than writing the data directly to flash-based storage due to the lower latency of the cache in memory. This allows the application to continue with other production work and improve performance even if the cache cannot avoid any write operations to the flash-based storage at a later time.
Continuing to block 270, the system collects the additional write operations in the in-memory cache before writing data from the first write request and the at least one additional write operation to the flash-based storage device. In this manner, the system reduces the number of write requests to the flash-based storage device and decouples the timing and frequency of application write requests from device write requests. This improves application performance and reduces wear on flash-based storage devices. After block 270, these steps conclude.
FIG. 3 is a flow diagram that illustrates processing of the storage management system to flush data to a flash-based storage device, in one embodiment.
Beginning in block 310, the system receives a plurality of requests from one or more applications to store data on a flash-based storage device and caches the data in an intermediate cache without storing the data on the flash-based storage device while servicing the requests. The system may receive requests over a period of time using a process as described with reference to FIG. 2, cache data as each request is received, and then determine an appropriate time to flush the data in the cache to the flash-based storage device.
Continuing to block 320, the system identifies cached data in the intermediate cache and determines whether the cached data is to be written to the flash-based storage device. The system may use various criteria to determine when to write data stored in the intermediate cache to the flash-based storage device. For example, the system may flush the cache or a portion thereof when the cache is full, after particular data has resided in the cache for a threshold time, upon receipt of more recent data that is more likely to be referenced than earlier cached data, and so forth.
Continuing to decision block 330, if the system determines that at least one cache flush criterion has been met, the system continues to block 340; otherwise the system completes and waits for a cache flush criterion to be met. The system may check whether the criteria are satisfied when some event occurs, such as a write request arriving at a full cache or the expiration of a timer set to check the state of the cache. The system uses stored bookkeeping information describing the data in the cache to determine whether the caching criteria have been met. The bookkeeping information may include identification information (e.g., addresses) of the blocks of the flash-based storage device residing in the cache, how long particular data has been stored in the cache, whether an application is still accessing a portion of the cache, and so forth.
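The bookkeeping described above can be sketched as a small per-block record. The field and function names are illustrative assumptions; the point is that the three questions the text raises (which device block, how long resident, still in active use) each map to one field:

```python
from dataclasses import dataclass

@dataclass
class CacheRecord:
    lba: int           # address of the block on the flash-based device
    data: bytes
    cached_at: float   # when the block entered the cache
    last_access: float # when an application last read or wrote it

def residency(record, now):
    """How long the block has been resident in the cache."""
    return now - record.cached_at

def recently_used(record, now, window_s):
    """Whether an application appears to still be accessing this block."""
    return now - record.last_access <= window_s


rec = CacheRecord(lba=42, data=b"payload", cached_at=100.0, last_access=130.0)
print(residency(rec, now=160.0))                      # -> 60.0
print(recently_used(rec, now=160.0, window_s=60.0))   # -> True
```

A flush decision at block 330 would then scan these records: old but idle blocks are good flush candidates, while recently used blocks may be worth keeping cached.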
Continuing to block 340, the system identifies the flash-based storage device to which the received requests are directed, wherein the flash-based storage device acts as non-volatile storage for the data in the intermediate cache. In some cases, the system may manage multiple flash-based storage devices and a separate cache for each device. In such a case, the system may make flush decisions for each cache simultaneously or independently. Each device may be identified by a drive letter, path, or other identifying information assigned to the device by the operating system or other software.
Continuing to block 350, the system writes at least some of the data stored in the intermediate cache to the identified flash-based storage device. After storing the data, the system may or may not flush the cache. For example, if the system determines that the application is likely to continue reading or writing data, saving the data in cache will allow faster access to the data and further optimize the number and frequency of writes to the flash-based storage device. On the other hand, if other data contends for use of the cache, the system may flush the cache and allow other data to use the cache. Data writes to the flash-based storage device occur after the original write operation to store the data to the intermediate cache has been completed. This results in application writes being decoupled from device writes and allows the system to reduce the number of writes and the resulting wear on the device. For frequently written or temporary data, this can result in greatly reduced device wear and improved performance. After block 350, these steps conclude.
FIG. 4 is a block diagram that illustrates operational components of the storage management system at a high level, in one embodiment. An intelligent flash driver 420 implementing a storage management system is located between flash device 410 and one or more applications 430. Flash driver 420 intercepts requests from application 430 to read from and write to flash device 410. Flash driver 420 redirects write requests to in-memory cache 440 and services read requests from flash device 410 or in-memory cache 440, depending on where the requested data is most readily available. From time to time, flash driver 420 flushes the data stored in in-memory cache 440 by writing that data to flash device 410. Typically, the number of requests from intelligent flash driver 420 to write data to flash device 410 will be less than the number of write requests from application 430. Thus, intelligent flash driver 420 reduces the burden and wear on flash device 410.
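The request routing performed by the intelligent flash driver can be sketched as follows. The `read_block` device interface and all names are assumptions for illustration, not the actual driver.

```python
class IntelligentFlashDriver:
    """Sketch of the interposition of FIG. 4: the driver sits between
    the applications and the flash device, redirecting writes to an
    in-memory cache and serving reads from wherever the requested data
    is most readily available."""

    def __init__(self, device):
        self.device = device      # assumed to expose read_block(addr)
        self.cache = {}           # stands in for in-memory cache 440

    def write(self, addr, data):
        # Write requests are redirected to the in-memory cache; the
        # device is not touched on the application's write path.
        self.cache[addr] = data

    def read(self, addr):
        # Serve from the cache when the block is resident (it holds the
        # newest data), otherwise fall through to the flash device.
        if addr in self.cache:
            return self.cache[addr]
        return self.device.read_block(addr)
```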
In some embodiments, the storage management system operates in conjunction with conventional wear leveling techniques for flash-based storage devices. Frequent writes are managed using block remapping, which determines which block in a flash-based storage device to write in a manner that balances writes to each block. The system may apply the same techniques, but in conjunction with the in-memory caching described herein, this will eventually still perform fewer writes to the flash-based device, so that the overall effect is a lower average wear on the flash-based device. Similarly, the same may operate with other caches, wear leveling, or other performance enhancements.
In some embodiments, the storage management system applies logging or similar techniques to ensure that data written to the cache can be recovered or reliably rolled back in the event of a failure. The window between writing data to the cache and flushing the data from the cache to the flash-based storage device is typically small, but a failure may still occur within it. In that case, the system may use logging or other techniques to place the file system in a known state and recover the data where possible, rather than losing data or leaving data in an inconsistent state.
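One way such logging could work is an append-only journal that is forced to stable storage before each write result is returned, and replayed after a failure. The JSON record format and the function names here are illustrative assumptions, not the disclosed mechanism.

```python
import json
import os


def journal_write(log_path, addr, data_hex):
    """Append one cached write to a persistent journal and force it to
    stable storage, so cached-but-unflushed data survives a crash."""
    with open(log_path, "a") as log:
        log.write(json.dumps({"addr": addr, "data": data_hex}) + "\n")
        log.flush()
        os.fsync(log.fileno())    # durable before the write result returns


def replay_journal(log_path):
    """After a failure, rebuild the cache contents from the journal.
    Later records for a block supersede earlier ones."""
    recovered = {}
    with open(log_path) as log:
        for line in log:
            rec = json.loads(line)
            recovered[rec["addr"]] = rec["data"]
    return recovered
```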
In some embodiments, the storage management system uses bulk transfers to flush data from the cache to the flash-based storage. In other words, even if the application writes several consecutive blocks in separate write requests, the system is able to transfer all data written to the cache to the flash-based storage device using a bulk transfer that copies consecutive blocks in a single request. This may result in better data organization and less fragmentation on flash-based storage devices.
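The coalescing step behind such bulk transfers can be sketched as follows: sort the dirty block addresses and group maximal runs of consecutive blocks, so that each run becomes a single transfer request. The function name and return shape are assumptions for illustration.

```python
def coalesce_runs(dirty_addrs):
    """Group dirty block addresses into maximal runs of consecutive
    blocks, returning (start_block, block_count) pairs so each run can
    be flushed to the device in one bulk transfer."""
    runs = []
    for addr in sorted(dirty_addrs):
        if runs and addr == runs[-1][-1] + 1:
            runs[-1].append(addr)     # extends the current consecutive run
        else:
            runs.append([addr])       # starts a new run
    return [(run[0], len(run)) for run in runs]
```

For example, blocks 1, 2, and 3 written in three separate application requests become a single three-block transfer, while isolated blocks remain single-block transfers.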
From the foregoing, it will be appreciated that specific embodiments of the storage management system have been described herein for purposes of illustration, but that various modifications may be made without deviating from the spirit and scope of the invention. Accordingly, the invention is not limited except as by the appended claims.
FIG. 1: Storage management system, comprising a flash-based storage device, a device interface component, a data cache component, a cache write component, a cache read component, a cache flush component, an application interface component, and a fault management component.
FIG. 2: Processing application write requests: receive application write request; determine target device; flash-based? (Y = yes, N = no); identify intermediate cache; write data to cache; return write result to application; collect subsequent writes in cache; done.
FIG. 3: Flushing cache: receive multiple application write requests; identify cached data; flush criteria met? (Y = yes, N = no); identify flash-based storage; write cached data to storage; done.
FIG. 4: Application, intelligent flash driver, in-memory cache, flash memory device.

Claims (15)

1. A computer-implemented method of processing write requests from a software application directed to a flash-based storage device, the method comprising:
receiving (210) a request from the application to write data to the flash-based storage device;
determining (220) a storage device to which the received request is directed;
when it is determined (230) that the storage device is flash-based, identifying (240) an in-memory cache for collecting data requested by an application to be written to the storage device, the in-memory cache being separate from the flash-based storage device;
writing (250) data received with the request to the identified in-memory cache without writing the data to the flash-based storage device;
returning (260) a write result to the requesting application indicating success or failure of a write operation to the in-memory cache; and
collecting (270) at least one additional write operation in the in-memory cache prior to sending data from the write request and the additional write operation to the flash-based storage device,
wherein the foregoing steps are performed by at least one processor.
2. The method of claim 1, wherein receiving the request comprises receiving the request in response to a user action in a software application, wherein the software application calls an operating system Application Programming Interface (API) to store data on the flash-based storage device.
3. The method of claim 1, wherein receiving the request comprises receiving information related to the data, including an indication of an expected frequency of access for the data.
4. The method of claim 1, wherein determining the storage device comprises determining whether a target storage device is flash-based and would benefit from caching in memory for write requests.
5. The method of claim 1, wherein determining the storage device comprises reading information provided by a basic input/output system (BIOS) or an Extended Firmware Interface (EFI) to determine a physical characteristic of the device.
6. The method of claim 1, wherein identifying an in-memory cache comprises determining that the cache does not yet exist and allocating the in-memory cache to process current and subsequent data write requests.
7. The method of claim 1, wherein identifying an in-memory cache comprises identifying a region in Random Access Memory (RAM) to which data can be written without incurring a wear or other penalty of immediately writing the data to the flash-based storage device.
8. The method of claim 1, wherein the cache collects a plurality of write requests and services any received read requests prior to flushing the data from the cache to the flash-based storage device.
9. The method of claim 1, wherein writing the data to the cache comprises updating metadata that indicates which data associated with the flash-based storage is currently held in the cache.
10. The method of claim 1, further comprising flushing the data from the in-memory cache to the flash-based storage device or determining that writing the data is unnecessary because the data has been deleted.
11. The method of claim 1, wherein writing data to the cache and later flushing the data combines multiple write requests to a particular block of flash-based memory, thereby avoiding excessive wear on the flash-based device and improving device performance.
12. The method of claim 1, wherein returning a write result comprises providing the result to the application in less time than writing the data directly to the flash-based storage due to a lower latency of a cache in the memory.
13. A computer system for enhancing the life and performance of flash-based storage, the system comprising:
a flash-based storage device (110) comprising at least some flash-based non-volatile memory;
a device interface component (120) that provides an interface between other components of the system and the flash-based storage device;
a data cache component (130) that provides an in-memory cache of data requested by an application to be written to the flash-based storage device, the in-memory cache being separate from the flash-based storage device;
a cache write component (140) that receives a request to write data to the flash-based storage device and instead writes the data to the data cache component;
a cache read component (150) that receives a request to read data from the flash-based storage device and determines whether the requested data can be retrieved from the data cache component without accessing the flash-based storage device;
a cache flush component (160) that copies data stored in the data cache component to the flash-based storage device (110) at a time decoupled from an application request to write data; and
an application interface component (170) that provides an interface between other components of the system and one or more applications that request reading from and writing to the flash-based storage device.
14. The system of claim 13, wherein the data caching component provides an intermediate location in which data requested to be stored by an application can be saved without incurring the penalties of writing the data to the flash-based storage, and at the same time the system reduces these penalties by reducing the number of writes/erases to the flash-based storage.
15. The system of claim 13, wherein the cache write component determines whether a particular block of the flash-based storage device is already stored in the cache and stores information in a data structure indicating that new data for the particular block of the device is available in the cache.
Application HK13112391.5A (priority 2012-04-02, filed 2013-11-04): Enhancing the lifetime and performance of flash-based storage, HK1185170B (en)

Applications Claiming Priority (1)

Application Number | Priority Date
US13/437,006 | 2012-04-02

Publications (2)

Publication Number | Publication Date
HK1185170A (en) | 2014-02-07
HK1185170B (en) | 2018-03-02

