Detailed Description
FIG. 1A illustrates a method 110 for extending memory lifetime, according to one embodiment. As shown, the spare space of the memory is increased. See operation 112. In addition, the lifetime of the memory is extended utilizing the increased spare space. See operation 114.
In the context of this specification, the lifetime of a memory may include any duration during which the memory exhibits any desired degree of availability. For example, in various embodiments, such lifetimes may include, but are not limited to, expected lifetimes, actual lifetimes, estimated lifetimes, and the like. Further, availability may relate to any parameter related to availability, such as the percentage of components (e.g., blocks, cells, etc.) that are still operational, the reliability of the memory or its components, and/or other parameters related thereto.
Further, in the context of this specification, spare space of a memory refers to any space (e.g., blocks, units, etc.) available in the memory. Further, in various embodiments, the memory may include, but is not limited to, mechanical memory devices (e.g., disk drives, etc.), solid state memory devices (e.g., Dynamic Random Access Memory (DRAM), flash memory, etc.), and/or any other memory device. Where the memory includes flash memory, the flash memory may include, but is not limited to, Single Level Cell (SLC) devices, multi-level cell (MLC) devices, NOR flash memory, NAND flash memory, MLC NAND flash memory, SLC NAND flash memory, and the like. In one embodiment, the nonvolatile memory device may include at least one of a one-bit-per-cell NOR flash memory, a multi-bit-per-cell NOR flash memory, a one-bit-per-cell NAND flash memory, a multi-bit-per-cell NAND flash memory, a phase change memory, a resistive memory, a carbon nanotube memory, and an electromigration memory.
More exemplary information will be described below regarding various alternative architectures and features with which the foregoing architecture may or may not be implemented, depending upon the needs of the user. For example, the foregoing techniques may be used in a scheme to guarantee or extend memory life. It should be particularly noted that the information given below is for illustrative purposes and should not be construed as limiting in any way. Any of the following features may be optionally incorporated with or without the inclusion of other features.
FIG. 1B illustrates a memory module 150 having an extended lifetime, in accordance with one embodiment. Alternatively, the memory module 150 may be implemented to perform the method 110 of FIG. 1A. Of course, the memory module 150 may be implemented in any desired environment. It is noted that the foregoing definitions apply to the entire content of this specification.
As shown, the number of spare blocks 160 of memory module 150 is increased. The life of the memory module 150 is extended due to the increased number of spare blocks 160 of the memory module 150. In one embodiment, the number of spare blocks 160 of memory may be increased by compressing the data stored in the memory module 150. Such compression may include lossless compression (e.g., Burrows-Wheeler, Lempel-Ziv (LZ), LZ77, LZ78, etc.), or in some embodiments lossy compression (e.g., lossy prediction code, lossy transform code, etc.).
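The compression-based approach above may be sketched as follows. The sketch assumes, purely for illustration, a 4 KiB block size and zlib as the lossless compressor; any lossless algorithm from the list above would serve equally:

```python
import zlib

BLOCK_SIZE = 4096  # assumed flash block size, for illustration only

def blocks_needed(n_bytes):
    """Number of whole blocks required to hold n_bytes."""
    return -(-n_bytes // BLOCK_SIZE)  # ceiling division

def spare_blocks_gained(data):
    """Blocks freed by storing zlib-compressed data instead of raw data."""
    compressed = zlib.compress(data)
    return blocks_needed(len(data)) - blocks_needed(len(compressed))

# Highly redundant data compresses well, freeing blocks for spare space.
data = b"log entry: status OK\n" * 2000  # ~42 KB of repetitive data
gained = spare_blocks_gained(data)
```

As the sketch suggests, the number of blocks gained depends entirely on how compressible the stored data is, which is why the compression rate may change over time.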
In the case of adding spare blocks 160 using compression, the compression rate may change. In another embodiment, the number of spare blocks 160 of memory may be increased by deleting duplicate data stored in the memory module 150.
Alternatively, the number of spare blocks 160 of memory may be increased by both compressing the data stored in the memory module 150 and deleting the duplicate data stored in the memory module 150. In this case, the remaining data may be compressed after the duplicate data is deleted. Of course, it is equally possible to compress the data first and then delete the duplicate data.
In another embodiment, spare space may be increased by detecting deleted data and reclaiming the space occupied by the deleted data as spare space. In various embodiments, knowledge of the deleted data may be communicated from the host or a RAID controller, or discovered by the disk controller from the data contained on the disks. In another embodiment, any combination of compression, de-duplication, and reclamation of deleted files may be used to increase the amount of spare space.
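A minimal sketch of the de-duplication idea, assuming content hashes (here SHA-256) are used to identify duplicate blocks; the function names and block granularity are illustrative only:

```python
import hashlib

def deduplicate(blocks):
    """Replace duplicate blocks with references to a single physical copy.

    Returns (unique_blocks, index_map): index_map[i] gives the position in
    unique_blocks holding the contents of logical block i."""
    unique, index_map, seen = [], [], {}
    for blk in blocks:
        digest = hashlib.sha256(blk).digest()
        if digest not in seen:
            seen[digest] = len(unique)
            unique.append(blk)
        index_map.append(seen[digest])
    return unique, index_map

blocks = [b"alpha", b"beta", b"alpha", b"gamma", b"beta"]
unique, index_map = deduplicate(blocks)
# Five logical blocks now occupy only three physical blocks; two become spare.
```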
As a particular example, compression and/or de-duplication and/or reclamation of deleted data may be used to convert a certain amount of memory in the memory module 150 into spare space. In this case, the storage cost attributable to the spare space can be reduced accordingly. For example, the data 156 may be written to flash pages 158 contained in the plurality of memory blocks 152.
The data 156 may be compressed to increase the number of spare blocks 160. In one embodiment, extending the life of memory using the spare blocks 160 may be implemented in conjunction with guaranteeing the life of memory.
Alternatively, the end-of-life times of the blocks of the memory module 150 may be equalized. For example, spare blocks 160 may be selected so as to equalize the end-of-life times of the blocks of the memory module 150. In this case, different blocks of the memory module 150 may be used to store data, thereby equalizing the end-of-life times of the blocks of the memory module 150.
Various operations that reduce the lifetime of the memory may be controlled in order to extend the lifetime, according to different embodiments to be described. In the context of this specification, these operations may refer to write operations, erase operations, program operations, and/or any other operations that can reduce the aforementioned lifetime. Further, it is noted that although the present embodiment describes spare space in a memory in terms of memory blocks, such embodiments are equally contemplated as being within the scope of any spare space of a memory (e.g., memory cells, etc.).
FIG. 1C illustrates a method 100 for delaying operations that reduce memory lifetime, according to one embodiment. Alternatively, the method 100 may be implemented in the context of the details of FIGS. 1A and 1B. Of course, the method 100 may be performed in any desired environment. It is noted that the foregoing definitions apply to the entire content of this specification.
As shown, at least one characteristic related to memory lifetime is determined. See operation 102. In the context of this description, in various embodiments, the characteristic relating to the lifetime determined in operation 102 may include a time period, a rate of operations that reduce the lifetime of the memory, a total allowed number of operations that reduce the lifetime of the memory, a lifetime duration, and the like. Further, in one exemplary embodiment, given the total allowed number of operations and a selected or expected lifetime as previously described, the maximum average rate of operations, in units of number of operations per time period, can be directly calculated. Of course, this exemplary aspect is given for illustrative purposes only, as any other lifetime-related characteristic may equally be determined, for reasons that will soon become apparent.
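The direct calculation mentioned above is simple division; a sketch follows, with hypothetical figures (100,000 allowed lifetime-reducing operations over a five-year expected lifetime):

```python
def max_average_rate(total_allowed_ops, expected_lifetime_periods):
    """Maximum average number of lifetime-reducing operations per period
    that still allows the memory to reach the expected lifetime."""
    return total_allowed_ops / expected_lifetime_periods

# Hypothetical figures: 100,000 allowed operations, 5-year (1825-day) lifetime.
rate = max_average_rate(100_000, 5 * 365)  # ~54.8 operations per day
```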
To this end, at least one operation that shortens the lifetime of the memory is delayed based on the characteristic. See operation 104. Such a delay may be performed in any manner that is a function of at least a portion of the lifetime-related characteristic determined in operation 102. In the context of the present specification, the aforementioned delay of an operation is considered to include the case of delaying only a portion of the operation. For example, where an operation includes multiple components, such delay may be applied to one or more (or all) portions of such operation.
In one embodiment, the operation is delayed by delaying the command that initiates the operation. For example, in response to a command to write or erase, execution of the command may be delayed. Of course, in other embodiments, the operation itself may be delayed. With this design, such a delay of one or more operations that would shorten the life of the memory may result in at least a partial reduction in the extent of such shortening.
Additional exemplary information will be given below regarding various alternative architectures and features with which the foregoing architecture may or may not be implemented, depending on the needs of the user. For example, the delay may be managed in a number of different ways using a number of different techniques, examples of which are given below.
FIG. 2 illustrates a technique 200 for delaying operations that reduce memory lifetime, in accordance with another embodiment. Alternatively, the technique 200 may be implemented within the context of the details of FIGS. 1A-1C. Of course, the technique 200 may also be implemented in any desired environment. It is noted that the foregoing definitions apply throughout this specification.
As shown, the technique 200 considers a total number of operations 202 that result in the memory exhibiting a minimum degree of availability and a minimum expected lifetime 204 of the memory. From these data points, a maximum average operating rate 206 to reach the minimum expected life 204 may be calculated.
In use, the number of lifetime-reducing operations may be monitored over time. If at any time, the number of such operations over time exceeds the maximum average operating rate 206, any excess operations (which contribute to exceeding the rate) may be delayed by a calculated amount, a predetermined amount of time, or adaptively delayed based on an earlier or predicted rate of lifetime-reducing operations in the manner presented. In one embodiment, this predetermined amount of time may be a time that results in the maximum average operating rate 206 not being exceeded.
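One way to realize such a delay is sketched below, with a hypothetical throttle that spaces operations out so the running average never exceeds the maximum average operating rate (all names are illustrative):

```python
class RateThrottle:
    """Delay lifetime-reducing operations whenever admitting them now would
    push the running average above the maximum average operating rate."""

    def __init__(self, max_avg_rate):
        self.max_avg_rate = max_avg_rate  # allowed operations per time unit
        self.ops = 0                      # operations admitted so far

    def request(self, now):
        """Return 0.0 if the operation may proceed at time `now`, else the
        number of time units to delay it so the average rate is respected."""
        earliest = self.ops / self.max_avg_rate  # earliest allowed time
        delay = max(0.0, earliest - now)
        self.ops += 1
        return delay
```

For example, at 2 operations per second, the first request at t=0 proceeds immediately, while an immediate second request is delayed 0.5 s, the exact amount needed to keep the average at the maximum rate.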
In various embodiments, the operations that require the delay (and the length of the delay itself) may be determined based on a variety of factors. For example, in one embodiment, the delay may be based on the application that initiated the operation. In this embodiment, operations initiated by applications with low priority may be delayed without (when possible) delaying operations initiated by applications with high priority.
Of course, other embodiments are contemplated in which the delay is managed in operation in a manner independent of the application. For example, regardless of the initial application, the delay may be applied to all operations of a particular type (e.g., erase operations, etc.). Of course, embodiments including hybrid approaches are also contemplated.
Also, it is contemplated that the delayed operations may include operations, or patterns of operations, that result in an abnormally short lifetime. In one embodiment, only these patterns may be delayed. For example, the operating pattern of a virus or a rogue application may be detected, and only operations of this pattern delayed.
FIG. 3 illustrates a time interval based technique 300 for delaying operations that reduce memory lifetime, in accordance with yet another embodiment. Alternatively, the present technique 300 may be implemented to perform the method 100 of FIG. 1C and/or further perform the method of the context of the technique 200 of FIG. 2. Of course, the technique 300 may be performed in any desired environment. It should also be noted once again that the foregoing definitions apply throughout this specification.
Similar to the technique of FIG. 2, the technique 300 considers a total number of operations 302 that result in the memory exhibiting a minimum degree of availability and a minimum expected lifetime 304 of the memory. From these data points, a maximum average operating rate 306 to reach the minimum expected life 304 may be calculated. In use, the number of lifetime-reducing operations may be monitored over time.
If at any time, the number of such operations over time exceeds the maximum average operating rate 306, then any excess operations need not necessarily be delayed unconditionally (as with the technique 200 of FIG. 2). Instead, such excess operations may be conditionally delayed based on a time interval during which the operation is initiated. For example, such time intervals may include, but are not limited to, a period of a day, a day of a week, a month of a year, and the like. In additional embodiments, the time interval may be adaptively or dynamically adjusted to an optimal period. For example, such adaptive or dynamic adjustment may be based on a histogram of the frequency of lifetime-reducing operations over the subintervals of the interval, or the like.
For example, in the manner shown, if the excess amount of operations is determined on Monday, Tuesday, Wednesday, Thursday, etc., it may be appreciated (e.g., expected) that the amount of operations that may be determined on subsequent Friday, Saturday, and Sunday may be reduced. Thus, rather than unconditionally delaying such an excess number of operations, they may be performed immediately, based on the probability that the average operating rate (taking into account the entire week) will not exceed the maximum average operating rate 306. Of course, if this is not indicated, some delay or the like will occur in the following week. While the above example gives days in a week, other more "macro" embodiments are contemplated in which fluctuations in memory usage are taken into account over the range of weeks of a month, months of a year, etc.
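A sketch of this interval-budget reasoning, with hypothetical figures (a weekly interval and a maximum average rate of 100 operations per day):

```python
def must_delay(ops_in_interval, interval_days, max_avg_rate):
    """True when the interval's operation budget is exhausted, so further
    lifetime-reducing operations should be delayed to the next interval."""
    return ops_in_interval >= max_avg_rate * interval_days

# Heavy weekday use (150 ops/day Mon-Thu = 600 ops) proceeds undelayed,
# because the weekly budget (100 ops/day * 7 days = 700 ops) still has room.
```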
In additional embodiments, the conditional delay of an operation is generalized so as to be based not on an interval, but on a historical usage record of the memory and/or even a predicted usage of the memory. In such embodiments, any desired statistical analysis may be performed using historical data to predict future use, more accurately identify situations where delaying operations need not necessarily occur, and the like.
FIG. 4 illustrates an integration-based technique 400 for delaying operations that reduce memory lifetime, in accordance with yet another embodiment. Alternatively, the technique 400 may be implemented to perform the method 100 of FIG. 1C and/or further in the context of the techniques 200 and 300 of FIGS. 2-3. Of course, the technique 400 may be performed in any desired environment. It should also be noted once again that the foregoing definitions apply throughout this specification.
Similar to the previous techniques, the technique 400 considers a total number of operations 402 that result in the memory exhibiting a minimum degree of availability and a minimum expected lifetime 404 of the memory. From these data points, a maximum average operating rate 406 to reach the minimum expected lifetime 404 may be calculated. In use, the number of lifetime-reducing operations may be monitored over time.
If at any time, the number of such operations over time exceeds the maximum average operating rate 406, then any excess operations need not necessarily be delayed unconditionally (as with the technique 200 of FIG. 2). Instead, such excess operations may be conditionally delayed based on an integration function that reflects memory usage. Specifically, the integral of the difference between the total rate of lifetime-reducing operations over time and the maximum average operating rate 406 may be calculated up to the current state. To this end, the aforementioned delay need not necessarily occur if such integration information indicates that such operations will not cause the maximum average operating rate 406 to be exceeded.
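The integration function described above may be sketched as a running accumulator of operations performed minus operations permitted since the last update; a negative value represents banked headroom that lets bursts through undelayed (all names are hypothetical):

```python
class IntegralThrottle:
    """Track the integral of (observed operation rate - maximum average
    operating rate); operations are delayed only while the integral is
    positive, so earlier idle periods earn headroom for later bursts."""

    def __init__(self, max_avg_rate):
        self.max_avg_rate = max_avg_rate
        self.integral = 0.0   # net excess operations accumulated so far
        self.last_t = 0.0

    def record(self, now, ops):
        """Account for `ops` lifetime-reducing operations at time `now`;
        return True if they should be delayed."""
        self.integral += ops - self.max_avg_rate * (now - self.last_t)
        self.last_t = now
        return self.integral > 0
```

With a maximum average rate of 10 operations per time unit, a light first period banks headroom, a burst in the second period is flagged for delay, and a subsequent idle stretch restores the balance.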
FIG. 5 illustrates a system 500 for delaying operations that reduce memory lifetime when an expected lifetime duration exceeds an estimated lifetime duration, in accordance with another embodiment. Alternatively, the system 500 may be implemented to perform the method 100 of FIG. 1C and/or further optionally in combination with any of the techniques of FIGS. 2-4. Of course, the system 500 may also be used in any desired manner.
As shown, a storage system 503 is included that includes a plurality of storage devices 530 and 540. At least one storage bus 502 connects at least one controller 511 and at least one computer 501. In various embodiments, the storage bus 502 may include, but is not limited to, a Serial Advanced Technology Attachment (SATA) bus, a Serial Attached SCSI (SAS) bus, a Fibre Channel bus, a memory bus interface, a flash memory bus, a NAND flash memory bus, an Integrated Drive Electronics (IDE) bus, an Advanced Technology Attachment (ATA) bus, a Consumer Electronics (CE) bus, a Universal Serial Bus (USB) bus, a smart card bus, a multimedia card (MMC) bus, and the like. Thus, the controller 511 can interface between a system (e.g., the computer 501) and a secondary memory (e.g., at least one of the storage devices 530, 540). Further included is at least one apparatus 510 for extending the life of memory associated with the storage devices 530, 540.
As shown, the apparatus 510 includes the controller 511 coupled to the storage devices 530, 540 via a plurality of corresponding buses 521, 522, respectively. To execute commands received from the computer 501 via the storage bus 502, the controller 511 uses the buses 521, 522 to control and exchange data with the storage devices 530, 540. Each storage device 530, 540 includes at least one module or block 531, 532, 533, 541, 542, 543 for storing data. Further, at least a portion of the aforementioned commands are lifetime-reducing commands that have a negative impact on at least one module or block 531, 532, 533, 541, 542, 543. In use, the apparatus 510 may be used to extend the life of the storage devices 530, 540 despite such lifetime-reducing commands.
To achieve this, the controller 511 is connected to a lifetime estimator module 514 via a corresponding bus 512. The apparatus 510 further includes a time module 517 coupled to the lifetime estimator module 514 via a bus 518, the time module providing a current time. In use, the lifetime estimator module 514 is arranged to receive the commands sent from the computer 501 to the controller 511 via the storage bus 502. Further, the lifetime estimator module 514 calculates an estimated lifetime under the assumption that the command received over the bus 512 is executed.
With continued reference to fig. 5, the lifetime estimator module 514 is coupled to a throttling module 516 via a bus 515. The lifetime estimator module 514 uses the bus 515 to send the estimated lifetime of the command currently being executed by the controller 511 to the throttle module 516. In one embodiment, the currently executed command may be the same as the command received by the lifetime estimator module 514 via the bus 512, and may also be the same as the command received by the controller 511 from the computer 501 via the storage bus 502.
The current time module 517 is also coupled to the throttle module 516 via the bus 518. Thus, the current time module 517 also communicates the current time to the throttle module 516. In one embodiment, the current time module 517 may be implemented as a simple counter or the like that increments at constant time intervals.
The throttling module 516 is further coupled to the life expectancy module 520 via a bus 519 and to the controller 511 via a bus 513. In use, the expected lifetime module 520 is used to store the expected lifetime. With this design, the throttling module 516 may be configured to communicate information to the controller 511 over the bus 513 to instruct the controller 511 to delay execution of the current command.
In one embodiment, the throttling module 516 of the apparatus 510 may operate such that execution of the current command is delayed until its effect on the lifetime is such that the estimated lifetime is greater than or equal to the expected lifetime stored in the expected lifetime module 520. In one embodiment, if the estimated lifetime received via the bus 515 is shorter than the expected lifetime received via the bus 519, the function of the throttling module 516 reduces to providing a delay signal to the controller 511.
In another embodiment, the above-described functionality of the controller 511, the lifetime estimator module 514, and the throttling module 516 may be applied to a set of commands received within a predetermined time interval. This arrangement allows the system 500 to meet the required lifetime without unnecessarily throttling short bursts of lifetime-reducing commands. For example, by selecting the time interval as one day, this technique allows the system 500 to provide higher instantaneous performance for lifetime-shortening commands, because during some times of day (e.g., at night, etc.) the frequency of lifetime-shortening commands is reduced compared to their average frequency.
In an alternative embodiment, consistency may be maintained over time. As an example of a coherency method, if the lifetime-shortening command A is delayed, all commands that depend on the data of A or the resulting value of executing command A (whether or not the lifetime is shortened) are also delayed.
In another embodiment, time may be replaced with various approximations of time, such as the powered-on time of a disk. In this case, when one or more of the storage devices 530, 540 are turned off, the time counter does not count, even though real time continues to advance; this may unnecessarily degrade performance. In another embodiment, the computer 501, a RAID controller, and/or other device may provide additional information to increase the accuracy of the tracked time. For example, the computer 501, software, and/or the controller may provide information regarding the time during which the system 500 was shut down to address the problem.
In another embodiment, the system 500 may be equipped with internal storage device redundancy capabilities for reduced cost and improved performance. In this embodiment, data may be moved between the various storage devices 530, 540 according to any characteristic associated with their lifetimes (e.g., see operation 102 of FIG. 1C, etc.). For example, consider the case of a first storage device 530 that contains a set of data having a higher rewrite frequency relative to the data in a second storage device 540. In this case, the data may be moved from the first storage device 530 to the second storage device 540 after a predetermined amount of time, during garbage collection, as part of wear leveling, or upon another event determined by the system, and the first storage device 530, or one or more of its blocks/modules 531, 532, 533, may then be used to store data having a low write frequency, or may no longer be used.
To this end, lifetime-consuming use is properly allocated among the storage devices, to avoid one storage device failing prematurely relative to the other storage devices of the group. Of course, the present techniques may be applied not only to different storage devices, but also to portions thereof. To this end, the lifetime of any memory component may be managed in this manner.
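A sketch of such lifetime-aware data placement, assuming only that a rewrite frequency is tracked per device; the device names and the threshold are hypothetical:

```python
def plan_migration(devices, threshold):
    """Suggest moving frequently rewritten data sets off devices whose
    rewrite frequency exceeds `threshold`, onto the least-worn device.

    `devices` maps device name -> rewrite frequency (writes per day).
    Returns a list of (source, destination) moves."""
    coolest = min(devices, key=devices.get)  # device with lowest frequency
    return [(name, coolest) for name, freq in devices.items()
            if freq > threshold and name != coolest]

# A hot device 530 offloads its frequently rewritten data onto device 540.
moves = plan_migration({"dev530": 5000, "dev540": 200}, threshold=1000)
```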
In any case, the controller 511 may be configured to reduce and/or allocate write operations. With this feature, the life of suitable storage devices 530, 540 may be extended. An exemplary method of performing this technique will be given in the description of fig. 6.
FIG. 6 illustrates a method 600 for delaying operations that reduce memory lifetime when an expected lifetime duration exceeds an estimated lifetime duration, in accordance with another embodiment. Alternatively, the method 600 may be performed using the system 500 of FIG. 5 and/or further optionally in combination with any of the techniques of FIGS. 1-4. Of course, the method 600 may be used in any desired manner. And the foregoing definitions apply throughout the present specification.
Initially, at operation 601, the controller (e.g., the controller 511 of FIG. 5, etc.) waits for a command issued by a computer (e.g., the computer 501, etc.) to at least one storage device (e.g., the storage devices 530, 540, etc.). Once the controller receives the command in operation 602, the method proceeds to decision 603, where the controller determines whether the command received in operation 602 is a lifetime-reducing command (e.g., an erase operation, a write operation, etc.). If it is determined in decision 603 that the currently received command is not a lifetime-shortening command, then the command is simply processed. See operation 607.
On the other hand, if it is determined in decision 603 that the currently received command is indeed a lifetime shortening command, then an estimated lifetime is calculated by a lifetime estimator module (e.g., lifetime estimator module 514, etc.) based on the command received in operation 602, the previous lifetime, and the current time (e.g., by time module 517, etc.). See operation 604. In one embodiment, the previous age may represent a previous state of the age estimator module. In another embodiment, the previous lifetime may be obtained by measuring one or more characteristics of at least one memory device.
Next, in any case, the life estimated by the life estimator module is provided to a throttling module (e.g., throttling module 516, etc.). In decision 605, the throttle module determines that throttling is necessary if the estimated lifetime received from the lifetime estimator is shorter than the expected lifetime sent to the throttle module. If throttling is necessary, the method 600 proceeds to operation 606 by delaying (e.g., throttling, etc.) the life shortening command. However, if the estimated lifetime is not shorter than the expected lifetime, the method 600 proceeds to operation 607 described above.
Specifically, in operation 606, the throttling module may throttle execution of the life shortening command using the controller. In one embodiment, throttling is achieved by delaying execution of the lifetime-shortening command using the controller until the lifetime estimated by the lifetime estimator is longer than or equal to the expected lifetime.
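The decision flow just described may be condensed into a few lines; `estimate_lifetime` stands in for the lifetime estimator module, and the dictionary-based command representation is hypothetical:

```python
def handle_command(cmd, estimate_lifetime, expected_lifetime, now):
    """Pass non-shortening commands through; throttle a lifetime-shortening
    command whenever executing it would leave the estimated lifetime below
    the expected lifetime (cf. decisions 603 and 605 of FIG. 6)."""
    if not cmd["shortens_lifetime"]:
        return "execute"                      # operation 607
    estimated = estimate_lifetime(cmd, now)   # operation 604
    if estimated < expected_lifetime:         # decision 605
        return "delay"                        # operation 606
    return "execute"                          # operation 607
```

For instance, a read-like command is executed immediately, while a write whose estimated lifetime (5) falls short of the expected lifetime (10) is delayed.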
In another embodiment, the throttling is determined during a predetermined time period and applied to commands in a subsequent predetermined time period. In this embodiment, a limit value as to how much the lifetime can be shortened in a predetermined time interval may be applied. In a further embodiment, a limit value as to how much the lifetime can be shortened in a time interval may be determined in one or more preceding time intervals. In another embodiment, the throttling may be determined based on an analysis of a plurality of pending operations, allowing operations that do not reduce lifetime to be performed before lifetime-reducing operations or operations that depend on the lifetime-reducing operations.
With this design, a data storage system for controlling the lifetime-shortening operation to ensure a desired minimum lifetime can be provided. Therefore, the influence of the lifetime shortening operation on such a minimum expected lifetime can be estimated, and the frequency of the lifetime shortening operation can be adaptively controlled.
FIG. 7 shows a graphical user interface 700 for measuring memory lifetime, according to another embodiment. Alternatively, the graphical user interface 700 may be implemented within the scope of the functionality and structure of fig. 1-6. Of course, the graphical user interface 700 may be used in any desired environment. Also, it should be noted that the foregoing definitions are applicable throughout this specification.
As shown, various indicia may be displayed that reflect at least one characteristic associated with memory lifetime. In one embodiment, the characteristic may be that determined in operation 102 of FIG. 1C. Of course, this lifetime-related characteristic may include any desired characteristic that is at least partially related to the lifetime of the memory. For example, in the context of the system 500 of FIG. 5, the characteristic may be obtained by the controller 511 from any of the foregoing modules, processed, and/or simply passed to the computer 501, which may in turn display the associated indicia under the control of a software application (e.g., a plug-in, etc.).
For example, in one embodiment, the aforementioned indicia may include a gauge 702 for indicating an amount of life reserved for one or more memories. In this embodiment, the gauge 702 may indicate the amount of total memory life as a function of the number of life shortening operations performed over time. In another embodiment, the aforementioned indicia may include an estimate 705 for indicating life based on an inference of previous usage and assuming that throttling operations are discontinued.
In another embodiment, the aforementioned indicia may include a warning message 704 indicating that a minimal amount of lifetime remains for one or more memories. For example, the lifetime may be estimated from historical usage data of the memory. With this design, the user can be reminded to replace the memory or the like within a predetermined period of time. Of course, other embodiments are contemplated in which any desired indicia may be used to report various information related to memory lifetime.
FIG. 8 illustrates a method 800 for reducing write operations in a memory using difference information, in accordance with another embodiment. Alternatively, the method 800 may or may not be performed in conjunction with the functions and structures of FIGS. 1-7. Of course, the method 800 may be performed in any desired environment, and it should be noted that the foregoing definitions apply throughout this specification.
As shown, a write operation performed on data stored in memory is identified. See operation 802. In the context of this specification, such a write operation may include any operation that results in data stored in the memory being modified. Further, such a write operation may be identified in any desired manner, such as by intercepting the write command associated with the operation, intercepting the write operation itself, and so forth.
As shown in operation 804, a difference between the result of the write operation and the data stored in the memory is determined. In the context of the present specification, the aforementioned difference may reflect, at least in part, any difference between a first state of the data stored in memory and a second state to result from the write operation identified in operation 802.
In another embodiment, a difference between data stored in memory is determined. For example, a new modified version of a file may be created and written to a new location in memory, such that the difference between the data at the different locations in memory may be determined. Alternatively, the location of the data may be determined from a hash, a Bloom filter, or the like. To this end, in one exemplary embodiment where different instances of the same data are written to different locations of the memory, the determined difference may include the location of the data, and not necessarily the data itself.
In one embodiment, the difference information associated with the difference may be stored in memory (e.g., the same memory in which the data is stored, etc.). In another embodiment, the difference information may instead be stored in a separate buffer, various embodiments of which will be described in detail later. It is noted that the difference information may include any information that at least partially describes the difference determined in operation 804. As will become apparent in later-described embodiments, the difference information may, in one embodiment, be stored using an instruction set. As described below, in various embodiments, the instruction set may be adaptively changed and/or dynamically expanded.
To this end, write operations may be reduced using the difference information. See operation 806. With this design, such a reduction in write operations may optionally extend the lifetime of the memory.
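One hypothetical encoding of such difference information records only the changed byte ranges of a block, so that a small edit need not rewrite the whole block; the format and function names below are illustrative only, not the instruction set disclosed later:

```python
def diff_ranges(old, new):
    """Compute difference information as (offset, replacement-bytes) pairs
    covering only the byte ranges where old and new disagree."""
    assert len(old) == len(new)  # same-size blocks, for simplicity
    diffs, i = [], 0
    while i < len(old):
        if old[i] != new[i]:
            j = i
            while j < len(old) and old[j] != new[j]:
                j += 1
            diffs.append((i, new[i:j]))
            i = j
        else:
            i += 1
    return diffs

def apply_diffs(old, diffs):
    """Reconstruct the new block from the old block plus the differences."""
    buf = bytearray(old)
    for offset, chunk in diffs:
        buf[offset:offset + len(chunk)] = chunk
    return bytes(buf)

old = b"hello world"
new = b"hello worms"
diffs = diff_ranges(old, new)  # only 2 changed bytes recorded, not all 11
```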
More illustrative information will now be set forth regarding various optional architectures and features, which may or may not be implemented per the desires of the user. For example, an exemplary system implementing one manner of reducing write operations based on the difference information will be presented. It should be strongly noted that the following information is set forth for illustrative purposes and should not be construed as limiting in any manner. Any of the following features may be optionally incorporated with or without the other features described.
FIG. 9 illustrates a system 900 for reducing write operations in memory, in accordance with another embodiment. As an option, the system 900 may be implemented to carry out the method 800 of FIG. 8 and/or may further optionally incorporate any of the methods or techniques of FIGS. 1-7. Of course, the system 900 may be used in any desired manner. Also, the aforementioned definitions apply throughout the present description.
As shown, the system 900 includes a computer 901 coupled to a storage device 930 via an input/output (I/O) bus 902, in a manner that will soon be described. The I/O bus 902 includes a read channel 903 and a write channel 904. The storage device 930 includes a plurality of storage blocks 931, 932, 933, which are written to and read by the computer 901.
For reasons that will soon become apparent, a predetermined portion 934 of each storage block 931, 932, 933 may be allocated to store difference information reflecting any changes made by the computer 901 to data stored in a spare portion 935 of the corresponding storage block 931, 932, 933. In various embodiments, the size of the predetermined portion 934 may be user-configurable. Further, the difference information stored therein may take any form.
Table 1 illustrates one possible format for representing an instance of the difference information (multiple instances of which may be stored in each predetermined portion 934 of the storage blocks 931, 932, 933).
TABLE 1
| Operation code | Source start address | Size | Data |
| End | N/A | N/A | N/A |
| Replace | <Address> | <Length in bytes> | <Replacement data> |
| Move up | <Address> | <Length in bytes> | <Start address of data to be moved> |
| Move down | <Address> | <Length in bytes> | <Start address of data to be moved> |
| Insert | <Address> | <Length in bytes> | <Data to be inserted> |
| Delete | <Address> | <Length in bytes> | N/A |
In the present embodiment, the operation code may represent an operation to be performed on the data stored in the spare portion 935 of the corresponding storage block 931, 932, 933. Examples of such operations may include, but are not limited to, end, replace, move up, move down, delete, insert, and/or any other operation. As an option, each of the operations may be represented by a corresponding code for concise representation (e.g., "001" for replace, "010" for move up, etc.).
Further, the source start address and the size may point to, and indicate the size of (respectively), the data stored in the spare portion 935 of the corresponding storage block 931, 932, 933 that is the subject of the operation. In the case of an operation that involves replacement/modification of data or the like, the data itself may be stored as part of the difference information. As an option, a compression algorithm may be applied to the difference information for more efficient storage. Further, in the case of an operation that involves moving data, the source address of the data may be specified instead of the data itself, since the data is already stored in the original storage block.
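One possible serialization of a single Table 1 record is sketched below. The 3-bit opcodes for replace ("001") and move up ("010") follow the example given above; the remaining code assignments and the field widths (1-byte opcode, 4-byte address, 2-byte size) are assumptions made for illustration only.

```python
# Illustrative encoding of one Table 1 difference record:
# opcode | source start address | size | data.
import struct

OPCODES = {"end": 0b000, "replace": 0b001, "move_up": 0b010,
           "move_down": 0b011, "insert": 0b100, "delete": 0b101}

def encode_record(op: str, addr: int = 0, data: bytes = b"") -> bytes:
    # 1-byte opcode, 4-byte big-endian address, 2-byte size, then the data
    return struct.pack(">BIH", OPCODES[op], addr, len(data)) + data

rec = encode_record("replace", addr=0x40, data=b"new")
print(rec.hex())  # → "010000004000036e6577"
```

A "move" record would carry the source address of the data to be moved in the data field, rather than the data itself, consistent with the table above.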
In another embodiment, new operations may be adaptively created. For example, a repeating sequence of first operations may be replaced with a new second operation that describes the sequence of first operations. In this way, new operations may be adaptively created such that the system 900 optimally adapts to new applications.
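The adaptive-operation idea can be sketched as a simple run-collapsing pass over a list of records. The "repeat" macro operation and all names here are hypothetical, chosen only to illustrate replacing a repeating sequence of a first operation with a new second operation that describes it.

```python
# Hypothetical sketch: a run of identical difference records is collapsed
# into one new ("repeat", count, record) macro record.

def collapse_repeats(ops):
    """ops is a list of (opcode, args...) tuples; runs of an identical
    record become a single ("repeat", count, record) macro."""
    out, i = [], 0
    while i < len(ops):
        j = i
        while j < len(ops) and ops[j] == ops[i]:
            j += 1
        run = j - i
        out.append(("repeat", run, ops[i]) if run > 1 else ops[i])
        i = j
    return out

ops = [("delete", 0x10)] * 3 + [("insert", 0x20)]
print(collapse_repeats(ops))
# → [('repeat', 3, ('delete', 16)), ('insert', 32)]
```

Collapsing three identical delete records into one macro record shrinks the difference information that must be stored, which is the motivation for adaptively extending the instruction set.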
Of course, the data structure of Table 1 is set forth for illustrative purposes only and should not be construed as limiting in any manner. For example, an instance of the difference information may include only the data being replaced (without any sophisticated commands).
An apparatus 910 is further provided for reducing write operations in the memory. The apparatus 910 includes a merge memory 920 having a plurality of merge buffers 921, 922, 923. In one embodiment, each merge buffer 921, 922, 923 may be of a predetermined size (e.g., 4 KB, etc.), which may be associated with a minimum-sized portion of each storage block 931, 932, 933 that is capable of being written in a single operation. Further, in various embodiments, the merge buffers 921, 922, 923 may include on-chip memory, external memory, DRAM, SRAM, and the like.
In use, each merge buffer 921, 922, 923 holds instances of the difference information (e.g., see Table 1) for a corresponding one of the storage blocks 931, 932, 933. In other words, instances of the difference information for the first storage block 931 are held in the first merge buffer 921, instances of the difference information for the second storage block 932 are held in the second merge buffer 922, instances of the difference information for the third storage block 933 are held in the third merge buffer 923, and so on.
The apparatus 910 further includes an update module 912 coupled to the merge memory 920 via a bus 914, for writing the difference information stored in the merge buffers 921, 922, 923 to the corresponding storage blocks 931, 932, 933. In one embodiment, such a write operation may be initiated upon at least one of the merge buffers 921, 922, 923 being filled with instances of the difference information (thus constituting a minimum write size for the appropriate one of the storage blocks 931, 932, 933). To accomplish the write operation, the update module 912 is coupled to the storage device 930 via a bus 915. Further, an output of the update module 912 is coupled to the I/O bus 902 via the read channel 903.
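The fill-then-flush behavior of a merge buffer can be sketched as follows. The 4 KB (4096-byte) capacity matches the example given above; the class and method names, and the `write_diff_area` interface of the storage block, are assumptions of this sketch.

```python
# A minimal sketch of one merge buffer (e.g., buffer 921): difference
# records accumulate in RAM, and a single device write occurs only once
# the buffer reaches the minimum writable size of the storage block.

class MergeBuffer:
    def __init__(self, block, capacity=4096):
        self.block = block          # storage block this buffer serves
        self.capacity = capacity    # minimum single-write size
        self.records = bytearray()

    def append(self, record: bytes) -> bool:
        """Add one difference record; return True if a flush occurred."""
        self.records += record
        if len(self.records) >= self.capacity:
            self.flush()
            return True
        return False

    def flush(self):
        # one device write for many accumulated difference records
        self.block.write_diff_area(bytes(self.records))
        self.records.clear()
```

Many small host-side updates thus cost the storage device only one physical write, which is precisely how the apparatus reduces write operations.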
Still yet, a difference calculation module 911 is coupled to the update module 912 via the read channel 903, to the I/O bus 902 via the write channel 904, and further to the merge memory 920 via a bus 913. In use, the difference calculation module 911 is capable of reading data from the storage device 930 and reconstructing a current state of the data using the difference information from the associated storage blocks 931, 932, 933 and/or the merge buffers 921, 922, 923.
The difference calculation module 911 is further capable of identifying differences between the current state of the data and the state that would result from a write operation (initiated by the computer 901) by first reconstructing the current state (similar to the read operation described above), and of writing data to the storage device 930 by filling the merge buffers 921, 922, 923 with the appropriate instance(s) of the difference information for updating the relevant storage blocks 931, 932, 933. More information regarding such read and write operations will be set forth during the description of FIGS. 10 and 11.
In various embodiments, the difference calculation module 911 may use any desired technique for identifying the aforementioned differences. For example, various string-matching algorithms, data motion estimation techniques, and the like may be used. In additional embodiments, the differences may be determined on a byte-by-byte basis.
Further, the difference calculation may involve any one or more of the following: finding byte strings to insert, finding byte strings to delete, finding byte strings to replace, finding byte strings to copy, determining whether a byte string is updated by adding values, finding a duplicate of a storage block and creating a reference thereto, finding a block split, finding a block merge, and the like.
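In software, one illustrative way to realize such a difference calculation is with Python's `difflib.SequenceMatcher`, whose opcodes correspond directly to the replace, delete, and insert operations listed above. The use of `difflib`, and the `(op, address, size, data)` record shape, are choices of this sketch rather than requirements of the embodiment.

```python
# Sketch of the difference calculation: compute replace/delete/insert
# records between an old and a new data state, and apply them back.
from difflib import SequenceMatcher

def compute_diff(old: bytes, new: bytes):
    ops = []
    for tag, i1, i2, j1, j2 in SequenceMatcher(None, old, new).get_opcodes():
        if tag != "equal":
            ops.append((tag, i1, i2 - i1, new[j1:j2]))  # (op, addr, size, data)
    return ops

def apply_diff(old: bytes, ops):
    out = bytearray(old)
    for tag, addr, size, data in reversed(ops):  # right-to-left keeps offsets valid
        out[addr:addr + size] = data
    return bytes(out)

old = b"the quick brown fox"
new = b"the quick red fox"
ops = compute_diff(old, new)
assert apply_diff(old, ops) == new  # reconstruction round-trips
```

Applying the records from the highest address downward means earlier insertions and deletions cannot invalidate the addresses of records that have not yet been applied.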
FIG. 10 illustrates a method 1000 for reading memory utilizing difference information, in accordance with one embodiment. As an option, the method 1000 may be carried out using the system 900 of FIG. 9 and/or in conjunction with any of the techniques of FIGS. 1-8, as desired. Of course, the method 1000 may be used in any desired manner. Also, the aforementioned definitions apply throughout the present description.
As shown, the method 1000 may begin with operation 1001 by reading blocks (e.g., blocks 931, 932, 933 of FIG. 9, etc.) from memory (e.g., storage device 930, etc.) at the request of a computer (e.g., computer 901, etc.). The read storage block data is sent to an update module (e.g., update module 912, etc.). Next, in response to the read operation, the difference information is read from the merge buffers (e.g., merge buffers 921, 922, 923, etc.) corresponding to the memory blocks (associated with the computer request), and/or from the memory blocks themselves. See operation 1002. The appropriate source of difference information may depend on whether the requested information has been written from the merge buffer into the corresponding memory block at the time of the read request. Alternatively, the difference information may be interspersed between data in the flash memory. Furthermore, differences relating to particular data may be divided into one or more groups.
Next, in operation 1003, the update module applies the differences reflected in the difference information from operation 1002 to the corresponding block read in operation 1001. Thereafter, the data reconstructed in operation 1003 is sent to the computer via a read channel (e.g., read channel 903, etc.). See operation 1004.
In various embodiments, the aforementioned data read operation may include mapping a logical storage block number to a physical storage block number. Also, method 1000 may further provide error detection and correction in conjunction with read operations. Error detection and correction of the read data may further include a reread operation to attempt to recover the data and to relocate the recovered data to another storage location. For example, the relocation of the recovery data may include a logical storage block transfer and/or based on error rate information of the candidate storage blocks.
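The read flow of operations 1001-1003 can be sketched as a short orchestration routine. The storage interface (`read`, `read_diff_area`), the dictionary of merge buffers, and the injected `apply_diffs` function are all hypothetical names introduced for this sketch; which difference source applies depends, as noted above, on whether the records have already been written back to the block.

```python
# Sketch of method 1000: read the raw block, gather difference records from
# the block's difference area and/or its merge buffer, then reconstruct.

def read_block(block_id, storage, merge_buffers, apply_diffs):
    raw = storage.read(block_id)                 # operation 1001
    diffs = storage.read_diff_area(block_id)     # operation 1002:
    diffs += merge_buffers.get(block_id, [])     # ...block first, then buffer
    return apply_diffs(raw, diffs)               # operation 1003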
FIG. 11 illustrates a method 1100 for writing memory utilizing difference information, in accordance with one embodiment. As an option, the method 1100 may be carried out using the system 900 of FIG. 9 and/or may further optionally incorporate any of the techniques of FIGS. 1-8 and 10, as desired. Of course, the method 1100 may be implemented in any desired manner. Also, the aforementioned definitions apply throughout the present description.
Similar to the read method 1000 of FIG. 10, the method 1100 may begin with operation 1101 by reading a storage block (e.g., blocks 931, 932, 933 of FIG. 9, etc.) of memory (e.g., storage device 930, etc.), this time prompted by a write request of a computer (e.g., computer 901, etc.). The read storage block data is then sent to an update module (e.g., update module 912, etc.). Next, in operation 1102, difference information is read from the merge buffer (e.g., merge buffers 921, 922, 923, etc.) corresponding to the storage block (associated with the computer request), and/or from the storage block itself. Next, in operation 1103, the update module applies the differences reflected in the difference information from operation 1102 to the corresponding block read in operation 1101, thereby reconstructing the current state of the data to be written.
Thereafter, the data reconstructed in operation 1103 is sent to a difference calculation module (e.g., difference calculation module 911, etc.) and compared with the data state that would result from performing the write operation requested by the computer. See operation 1104. In this way, differences between the reconstructed data and the data state resulting from performing the write operation may be identified. In one embodiment, such differences may be caused by an application (running on the computer) updating the data. The updates may include, but are not limited to, replacing byte strings, inserting byte strings, deleting byte strings, copying byte strings, and the like.
In operation 1105, difference information associated with the differences calculated in operation 1104 is appended to the appropriate merge buffer corresponding to each block for which at least one difference was calculated in operation 1104. Such appending may be accomplished by writing to the end of the merge buffer in the merge memory. In one embodiment, the appending may further involve decompressing the merge buffer, appending the data, and recompressing the merge buffer. As an option, merge buffer memory may be reallocated to the merge buffers on demand.
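The decompress-append-recompress variant of operation 1105 can be sketched with `zlib`. The choice of zlib is illustrative only; the description does not name a particular compression algorithm, and the string-shaped records are placeholders.

```python
# Sketch of appending a record to a compressed merge buffer:
# decompress, append the new difference record, recompress.
import zlib

def append_compressed(buffer: bytes, record: bytes) -> bytes:
    plain = zlib.decompress(buffer) if buffer else b""
    return zlib.compress(plain + record)

buf = b""
buf = append_compressed(buf, b"replace@0x40:new")
buf = append_compressed(buf, b"delete@0x80")
assert zlib.decompress(buf) == b"replace@0x40:newdelete@0x80"
```

Keeping the buffer compressed trades CPU time on each append for more difference records held per 4 KB buffer before a flush is forced.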
In alternative embodiments, the difference information may be stored as operations that describe a function performed on the data (e.g., writing, etc.). For example, the difference information may reflect changes made by operations performed on the B-tree and thus may represent differences related to the operations. The B-tree is optionally used by databases, mail servers, file systems, etc.
Next, in decision 1106, it is determined whether any of the merge buffers is full. If no merge buffer is full, the method 1100 proceeds to operation 1110. If, on the other hand, at least one merge buffer is full, the method 1100 proceeds to operation 1107, in which the contents of any full merge buffer are appended to the difference information of the corresponding storage block. In addition, such full merge buffers are emptied (for reuse, etc.), as indicated in operation 1112.
It is further determined whether the difference information is full. See decision 1114. If it is determined that the difference information is not full, the method 1100 proceeds to operation 1110. If, however, it is determined that the difference information is full, the changes in the difference information are applied to the data. See operation 1116. Further, as indicated in operation 1118, the data block with the changes applied is written, and the old data is discarded. Also, as indicated in operation 1120, the difference information is cleared. In this way, a data storage system may be provided that utilizes differences between written and existing data to reduce write operations and to distribute write operations among the storage blocks, thereby improving the reliability of block-based storage.
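The two cascaded decisions 1106 and 1114 can be sketched together as a maintenance routine. The 4096-byte difference-area capacity, the `block`/`buffer` interfaces, and all names are assumptions of this sketch, made only to show the order of the operations above.

```python
# Sketch of decisions 1106-1120: a full merge buffer spills into the
# block's difference area; a full difference area triggers one rewrite
# of the block with all accumulated changes applied.

def maintain(block, buffer, diff_area_capacity=4096):
    if buffer.full():                                # decision 1106
        block.diff_area += buffer.records            # operation 1107
        buffer.clear()                               # operation 1112
    if len(block.diff_area) >= diff_area_capacity:   # decision 1114
        block.data = block.apply(block.diff_area)    # operation 1116
        block.rewrite()                              # operation 1118 (old data discarded)
        block.diff_area = b""                        # operation 1120
```

The block is physically rewritten only when both levels of buffering have filled, which is what spreads and reduces the write traffic reaching each storage block.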
In various embodiments, the memory to which the foregoing embodiments relate may include mechanical storage devices (e.g., disk drives including SATA disk drives, SAS disk drives, fibre channel disk drives, IDE disk drives, ATA disk drives, CE disk drives, USB disk drives, smart card disk drives, MMC disk drives, etc.) and/or non-mechanical storage devices (e.g., semiconductor-based devices, etc.). The non-mechanical memory may comprise, for example, volatile or non-volatile memory. In various embodiments, the non-volatile memory device may include flash memory (e.g., one-bit per cell NOR flash memory, multi-bit per cell NOR flash memory, one-bit per cell NAND flash memory, multi-level multi-bit per cell NAND flash memory, bulk flash memory, etc.). Although various examples of memories are given herein, it should be noted that the various principles are applicable to any type of memory for which the lifetime may be reduced by various operations performed thereon.
FIG. 12 illustrates an exemplary system 1200 that can implement the various structures and/or functions of the previous various embodiments. For example, exemplary system 1200 may represent a computer as described in some of the previous embodiments. Also, various devices set forth above may be part of system 1200.
As shown, the system 1200 is configured to include at least one main processor 1201 coupled to a communication bus 1202. The system 1200 also includes a main memory 1204. Control logic (software) and data are stored in the main memory 1204, which may take the form of Random Access Memory (RAM).
The system 1200 also includes a graphics processor 1206 and a display 1208, such as a computer monitor. The system 1200 may also include secondary storage 1210. The secondary storage 1210 may include, for example, a hard disk drive and/or a removable storage drive, the latter representing a floppy disk drive, a magnetic tape drive, an optical disk drive, etc. The removable storage drive reads from and/or writes to a removable storage module in a well-known manner.
Computer programs, or computer logic control algorithms, may be stored in the main memory 1204 and/or the secondary storage 1210. The computer programs, when executed, enable the system 1200 to perform various functions. Memory 1204, storage 1210, and/or any other storage may be examples of computer-readable media.
In one embodiment, the structures and/or functions of the various previous figures may be implemented within the context of the main processor 1201, the graphics processor 1206, the secondary storage 1210, an integrated circuit (not shown) having at least some of the functionality of the main processor 1201 and the graphics processor 1206, a chipset (i.e., a set of integrated circuits designed to operate and sold as a module or the like for performing the associated functions), and/or any other integrated circuit associated therewith.
Also, the structures and functionality of the various figures described above may be implemented in the context of a general-purpose computer system, a circuit board system, a game console system for entertainment purposes, a dedicated system, and/or any other desired system. For example, the system 1200 may take the form of a desktop computer, a laptop computer, and/or any other type of logic circuitry. Also, the system 1200 may take the form of various other devices including, but not limited to, Personal Digital Assistant (PDA) devices, mobile telephone devices, televisions, and the like.
Further, although not shown, system 1200 can be connected to a network for communication purposes (e.g., a telecommunications network, a Local Area Network (LAN), a wireless network, a Wide Area Network (WAN) such as the Internet, a peer-to-peer network, a wired network, etc.).
While various embodiments have been described above, it should be understood that they have been presented by way of example only, and not limitation. Thus, the breadth and scope of a preferred embodiment should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims appended hereto and their equivalents.