RELATED APPLICATIONS

This application is related to U.S. application Ser. No. 15/847,652 filed Dec. 19, 2017 and U.S. application Ser. No. 15/847,693 filed Dec. 19, 2017, which are incorporated herein by reference for all purposes.
BACKGROUND

Field of the Invention

This invention relates to creating snapshots of a storage volume.
Background of the Invention

In many contexts, it is helpful to be able to return a database to an original state or some intermediate state. In this manner, changes to software or other database configuration parameters may be tested without fear of corrupting critical data.
The systems and methods disclosed herein provide an improved approach for creating snapshots of a database and returning to a previous snapshot.
BRIEF DESCRIPTION OF THE DRAWINGS

In order that the advantages of the invention will be readily understood, a more particular description of the invention briefly described above will be rendered by reference to specific embodiments illustrated in the appended drawings. Understanding that these drawings depict only typical embodiments of the invention and are not therefore to be considered limiting of its scope, the invention will be described and explained with additional specificity and detail through use of the accompanying drawings, in which:
FIG. 1 is a schematic block diagram of a network environment for implementing methods in accordance with an embodiment of the present invention;
FIG. 2 is a process flow diagram of a method for coordinating snapshot creation with compute nodes and storage nodes in accordance with an embodiment of the present invention;
FIG. 3 is a schematic diagram illustrating the storage of data within a storage node in accordance with an embodiment of the present invention;
FIG. 4 is a process flow diagram of a method for processing write requests in a storage node in accordance with an embodiment of the present invention;
FIG. 5 is a process flow diagram of a method for processing a snapshot instruction by a storage node in accordance with an embodiment of the present invention;
FIG. 6 is a process flow diagram of a method for performing garbage collection on segments in accordance with an embodiment of the present invention;
FIG. 7 is a process flow diagram of a method for reading data from a snapshot in accordance with an embodiment of the present invention;
FIG. 8 is a process flow diagram of a method for cloning a snapshot in accordance with an embodiment of the present invention;
FIG. 9 illustrates a snapshot hierarchy created in accordance with an embodiment of the present invention;
FIG. 10 is a process flow diagram of a method for rolling back to a prior snapshot in accordance with an embodiment of the present invention;
FIG. 11 illustrates the snapshot hierarchy of FIG. 9 as modified according to the method of FIG. 10 in accordance with an embodiment of the present invention;
FIG. 12 is a process flow diagram of a method for reading from a clone snapshot in accordance with an embodiment of the present invention;
FIG. 13 is a process flow diagram of a method for deleting snapshots in accordance with an embodiment of the present invention;
FIGS. 14A and 14B are diagrams illustrating approaches for the processing of IOPs on a hybrid node in accordance with an embodiment of the present invention;
FIG. 15A is a process flow diagram of a method for processing write IOPs on a hybrid node in accordance with an embodiment of the present invention;
FIG. 15B is a process flow diagram of a method for processing read IOPs on a hybrid node in accordance with an embodiment of the present invention;
FIG. 16 is a process flow diagram of a method for annotating data with encoding tags in accordance with an embodiment of the present invention; and
FIG. 17 is a schematic block diagram of an example computing device suitable for implementing methods in accordance with embodiments of the invention.
DETAILED DESCRIPTION

Referring to FIG. 1, the methods disclosed herein may be performed using the illustrated network environment 100. The network environment 100 includes a storage manager 102 that coordinates the creation of snapshots of storage volumes and maintains records of where snapshots are stored within the network environment 100. In particular, the storage manager 102 may be connected by way of a network 104 to one or more storage nodes 106, each storage node having one or more storage devices 108, e.g. hard disk drives, flash memory, or other persistent or transitory memory. The network 104 may be a local area network (LAN), wide area network (WAN), or any other type of network including wired, wireless, fiber optic, or any other type of network connections.
One or more compute nodes 110 are also coupled to the network 104 and host user applications that generate read and write requests with respect to storage volumes managed by the storage manager 102 and stored within the memory devices 108 of the storage nodes 106.
The methods disclosed herein ascribe certain functions to the storage manager 102, storage nodes 106, and compute nodes 110. The methods disclosed herein are particularly useful for large scale deployments including large amounts of data distributed over many storage nodes 106 and accessed by many compute nodes 110. However, the methods disclosed herein may also be implemented using a single computer implementing the functions ascribed herein to some or all of the storage manager 102, storage nodes 106, and compute nodes 110.
Referring to FIG. 2, the illustrated method 200 may be performed in order to invoke the creation of a new snapshot. Other than a current snapshot, which is still subject to change, a snapshot captures the state of a storage volume at a moment in time and is preferably not altered in response to subsequent writes to the storage volume.
The method 200 includes receiving, by the storage manager 102, a request to create a new snapshot for a storage volume. A storage volume as referred to herein may be a virtual storage volume that may be divided into individual slices. For example, storage volumes as described herein may be 1 TB in size and be divided into 1 GB slices. In general, a slice and its snapshot are stored on a single storage node 106, whereas a storage volume may have the slices thereof stored by multiple storage nodes 106.
The request received at step 202 may be received from a human operator or generated automatically, such as according to a backup scheduler executing on the storage manager 102 or some other computing device. The subsequent steps of the method 200 may be executed in response to receiving 202 the request.
The method 200 may include transmitting 204 a quiesce instruction to all compute nodes 110 that are associated with the storage volume, e.g. all compute nodes 110 that have pending write requests to the storage volume. In some embodiments, the storage manager 102 may store a mapping of compute nodes 110 to a particular storage volume used by the compute nodes 110. Accordingly, step 204 may include sending 204 the quiesce instruction to all of these compute nodes. Alternatively, the instruction may be transmitted 204 to all compute nodes 110 and include an identifier of the storage volume. The compute nodes 110 may then suppress any write instructions referencing that storage volume.
The quiesce instruction instructs the compute nodes 110 that receive it to suppress 206 transmitting write requests to the storage nodes 106 for the storage volume referenced by the quiesce instruction. The quiesce instruction may further cause the compute nodes 110 that receive it to report 208 to the storage manager 102 when no write requests are pending for that storage volume, i.e. all write requests issued to one or more storage nodes 106 and referencing slices of that storage volume have been acknowledged by the one or more storage nodes 106.
In response to receiving the report of step 208 from one or more compute nodes, e.g. all compute nodes that are mapped to the storage volume that is the subject of the snapshot request of step 202, the storage manager 102 transmits 210 an instruction to the storage nodes 106 associated with the storage volume to create a new snapshot of that storage volume. Step 210 may further include transmitting 210 an instruction to the compute nodes 110 associated with the storage volume to commence issuing write commands to the storage nodes 106 associated with the storage volume. In some embodiments, the instruction of step 210 may include an identifier of the new snapshot. Accordingly, subsequent input/output operations (IOPs) transmitted 214 from the compute nodes may reference that snapshot identifier. Likewise, the storage node 106 may associate the snapshot identifier with data subsequently written to the storage volume, as described in greater detail below.
In response to receiving 210 the instruction to create a new snapshot, each storage node 106 finalizes 212 segments associated with the current snapshot, which may include performing garbage collection, as described in greater detail below. In addition, subsequent IOPs received by the storage node may also be processed 216 using the new snapshot as the current snapshot, as is also described in greater detail below.
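The coordination of the method 200 can be summarized as a simple control loop. The following Python sketch uses toy in-memory stand-ins for the compute nodes 110 and storage nodes 106; the class and method names (ComputeNode, StorageNode, create_snapshot, and so on) are illustrative assumptions rather than part of the disclosed system.

```python
class ComputeNode:
    """Toy stand-in for a compute node 110."""
    def __init__(self):
        self.quiesced = set()
        self.current_snapshot = {}

    def quiesce(self, volume_id):          # step 206: suppress writes to the volume
        self.quiesced.add(volume_id)

    def wait_until_idle(self, volume_id):  # step 208: report when no writes are pending
        return True                        # toy model: nothing is ever in flight

    def resume(self, volume_id, snapshot_id):
        self.quiesced.discard(volume_id)
        self.current_snapshot[volume_id] = snapshot_id  # step 214: tag later IOPs

class StorageNode:
    """Toy stand-in for a storage node 106."""
    def __init__(self):
        self.snapshots = {}

    def new_snapshot(self, volume_id, snapshot_id):     # steps 210-212
        self.snapshots.setdefault(volume_id, []).append(snapshot_id)

def create_snapshot(volume_id, snapshot_id, compute_nodes, storage_nodes):
    for c in compute_nodes:
        c.quiesce(volume_id)                             # step 204
    for c in compute_nodes:
        c.wait_until_idle(volume_id)                     # step 208
    for s in storage_nodes:
        s.new_snapshot(volume_id, snapshot_id)           # step 210
    for c in compute_nodes:
        c.resume(volume_id, snapshot_id)                 # step 214

computes, storages = [ComputeNode()], [StorageNode()]
create_snapshot("vol-1", 2, computes, storages)
assert storages[0].snapshots["vol-1"] == [2] and not computes[0].quiesced
```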
Referring to FIG. 3, the method by which slices are allocated, reassigned, written to, and read from may be understood with respect to the illustrated data storage scheme. The data of the storage scheme may be stored in transitory or persistent memory of the storage node 106, such as in the storage devices 108.
For each logical volume, the storage manager 102 may store and maintain a volume map 300. For each slice in the logical volume, the volume map may include an entry including a node identifier 302 identifying the storage node 106 to which the slice is assigned and an offset 304 within the logical volume at which the slice begins. In some embodiments, slices are assigned both to a storage node 106 and a specific storage device hosted by the storage node 106. Accordingly, the entry may further include a disk identifier of the storage node 106 referencing the specific storage device to which the slice is assigned.
The remaining data structures of FIG. 3 are stored on each storage node 106. The storage node 106 may store a slice map 308. The slice map 308 may include entries including a local slice identifier 310 that uniquely identifies each slice of the storage node 106, e.g. each slice of each storage device hosted by the storage node 106. The entry may further include a volume identifier 312 that identifies the logical volume to which the local slice identifier 310 is assigned. The entry may further include the offset 304 within the logical volume of the slice of the logical volume assigned to the storage node 106.
In some embodiments, an entry in the slice map 308 is created for a slice of the logical volume only after a write request is received that references the offset 304 for that slice. This further supports the implementation of overprovisioning such that slices may be assigned to a storage node 106 in excess of its actual capacity since the slice is only tied up in the slice map 308 when it is actually used.
The storage node 106 may further store and maintain a segment map 314. The segment map 314 includes entries either including or corresponding to a particular physical segment identifier (PSID) 316. For example, the segment map 314 may be in an area of memory such that each address in that area corresponds to one PSID 316, such that the entry does not actually need to include the PSID 316. The entries of the segment map 314 may further include a slice identifier 310 that identifies a local slice of the storage node 106 to which the PSID 316 has been assigned. The entry may further include a virtual segment identifier (VSID) 318. As described in greater detail below, each time a segment is assigned to a logical volume and a slice of a logical volume, it may be assigned a VSID 318 such that the VSIDs 318 increase in value monotonically in order of assignment. In this manner, the most recent PSID 316 assigned to a logical volume and slice of a logical volume may easily be determined by the magnitude of the VSIDs 318 mapped to the PSIDs 316. In some embodiments, VSIDs 318 are assigned in a monotonically increasing series for all segments assigned to a volume ID 312. In other embodiments, each offset 304 and its corresponding slice ID 310 is assigned VSIDs separately, such that each slice ID 310 has its own corresponding series of monotonically increasing VSIDs 318 assigned to segments allocated to that slice ID 310.
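To make the relationship between PSIDs 316, slice IDs 310, and monotonically increasing VSIDs 318 concrete, the following is a minimal Python sketch of a per-slice VSID allocator over an in-memory segment map 314. The class names, field defaults, and eight-segment size are illustrative assumptions, not the claimed layout.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SegmentMapEntry:
    """One entry of the segment map 314, keyed implicitly by PSID 316."""
    slice_id: Optional[int] = None      # slice ID 310
    vsid: Optional[int] = None          # VSID 318
    snapshot_id: Optional[int] = None   # snapshot ID 340
    data_offset: int = 0                # data offset 320 (grows from the first end)
    metadata_offset: int = 0            # metadata offset 322 (grows from the second end)

class SegmentMap:
    def __init__(self, num_segments):
        self.entries = [SegmentMapEntry() for _ in range(num_segments)]
        self._next_vsid = {}            # per-slice monotonically increasing counters

    def allocate(self, psid, slice_id, snapshot_id):
        """Assign a free PSID to a slice; VSIDs increase monotonically per slice."""
        vsid = self._next_vsid.get(slice_id, 0) + 1
        self._next_vsid[slice_id] = vsid
        self.entries[psid] = SegmentMapEntry(slice_id, vsid, snapshot_id)
        return vsid

# Example: the most recently allocated segment for a slice has the highest VSID.
smap = SegmentMap(num_segments=8)
smap.allocate(psid=3, slice_id=0, snapshot_id=1)
smap.allocate(psid=5, slice_id=0, snapshot_id=1)
latest = max((e for e in smap.entries if e.slice_id == 0), key=lambda e: e.vsid)
assert latest.vsid == 2
```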
The entries of the segment map 314 may further include a data offset 320 for the PSID 316 of that entry. As described in greater detail below, when data is written to a segment it may be written at a first open position from a first end of the segment. Accordingly, the data offset 320 may indicate the location of this first open position in the segment. The data offset 320 for a segment may therefore be updated each time data is written to the segment to indicate where the new first open position is.
The entries of the segment map 314 may further include a metadata offset 322. As described in detail below, for each write request written to a segment, a metadata entry may be stored in that segment at a first open position from a second end of the segment opposite the first end. Accordingly, the metadata offset 322 in an entry of the segment map 314 may indicate a location of this first open position of the segment corresponding to the entry.
Each PSID 316 corresponds to a physical segment 324 on a device hosted by the storage node 106. As shown, data payloads 326 from various write requests are written to the physical segment 324 starting from a first end (left) of the physical segment. The physical segment may further store index pages 328 such that index pages are written starting from a second end (right) of the physical segment 324.
Each index page 328 may include a header 330. The header 330 may be coded data that enables identification of a start of an index page 328. The entries of the index page 328 each correspond to one of the data payloads 326 and are written in the same order as the data payloads 326. Each entry may include a logical block address (LBA) 332. The LBA 332 indicates an offset within the logical volume to which the data payload corresponds. The LBA 332 may indicate an offset within a slice of the logical volume. For example, inasmuch as the PSID 316 is mapped to a slice ID 310 that is mapped to an offset 304 within a particular volume ID 312 by maps 308 and 314, an LBA 332 within the slice may be mapped to the corresponding offset 304 to obtain a fully resolved address within the logical volume.
In some embodiments, the entries of the index page 328 may further include a physical offset 334 of the data payload 326 corresponding to that entry. Alternatively, or additionally, the entries of the index page 328 may include a size 336 of the data payload 326 corresponding to the entry. In this manner, the offset to the start of a data payload 326 for an entry may be obtained by adding up the sizes 336 of previously written entries in the index pages 328.
The metadata offset 322 may point to the last index page 328 (furthest from the right in the illustrated example) and may further point to the first open entry in the last index page 328. In this manner, for each write request, the metadata entry for that request may be written to the first open position in the last index page 328. If all of the index pages 328 are full, a new index page 328 may be created and stored at the first open position from the second end and the metadata for the write request may be added at the first open position in that index page 328.
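The layout of a physical segment 324 can be illustrated with a toy model in which payloads grow from the first end while metadata entries are kept in write order, and the start of a payload is recovered by summing the sizes 336 of earlier entries when no physical offset 334 is stored. The tiny segment size, the assumed per-entry metadata size, and the class name below are illustrative assumptions only.

```python
SEGMENT_SIZE = 64        # bytes; deliberately tiny for illustration
METADATA_ENTRY_SIZE = 8  # assumed space consumed per (LBA, size) entry

class PhysicalSegment:
    """Toy model of a physical segment 324: payloads grow from the first end,
    metadata entries (LBA 332, size 336) accumulate from the second end."""
    def __init__(self):
        self.buf = bytearray(SEGMENT_SIZE)
        self.data_offset = 0     # first open position from the first end (offset 320)
        self.index = []          # stand-in for index pages 328, kept in write order

    def write(self, lba, payload):
        room = SEGMENT_SIZE - METADATA_ENTRY_SIZE * (len(self.index) + 1)
        if self.data_offset + len(payload) > room:
            raise ValueError("segment full")   # caller would allocate a new PSID
        self.buf[self.data_offset:self.data_offset + len(payload)] = payload
        self.index.append((lba, len(payload)))
        self.data_offset += len(payload)

    def read_latest(self, lba):
        """Find the last-written entry for an LBA; its start address is the sum of
        the sizes of all earlier entries when no physical offset 334 is stored."""
        offset, found = 0, None
        for entry_lba, size in self.index:
            if entry_lba == lba:
                found = (offset, size)
            offset += size
        if found is None:
            return None
        start, size = found
        return bytes(self.buf[start:start + size])

seg = PhysicalSegment()
seg.write(lba=10, payload=b"AAAA")
seg.write(lba=11, payload=b"BBBB")
seg.write(lba=10, payload=b"CCCC")      # newer data for LBA 10
assert seg.read_latest(10) == b"CCCC"   # the last-written entry wins
```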
The storage node 106 may further store and maintain a block map 338. A block map 338 may be maintained for each logical volume and/or for each slice offset of each logical volume, e.g. for each local slice ID 310 which is mapped to a slice offset and logical volume by the slice map 308. The block map 338 may include entries corresponding to each LBA 332 within the logical volume or slice of the logical volume. The entries may include the LBA 332 itself or may be stored at a location within the block map corresponding to an LBA 332.
The entry for each LBA 332 may include the PSID 316 identifying the physical segment 324 to which a write request referencing that LBA was last written. In some embodiments, the entry for each LBA 332 may further indicate the physical offset 334 within that physical segment 324 to which the data for that LBA was written. Alternatively, the physical offset 334 may be obtained from the index pages 328 of that physical segment. As data is written to an LBA 332, the entry for that LBA 332 may be overwritten to indicate the physical segment 324 and physical offset 334 within that segment 324 to which the most recent data was written.
In embodiments implementing multiple snapshots for a volume and slice of a volume, the segment map 314 may additionally include a snapshot ID 340 identifying the snapshot to which the PSID 316 has been assigned. In particular, each time a segment is allocated to a volume and slice of a volume, the current snapshot identifier for that volume and slice of a volume will be included as the snapshot ID 340 for that PSID 316.
In response to an instruction to create a new snapshot for a volume and slice of a volume, the storage node 106 will store the new current snapshot identifier, e.g. increment the previously stored current snapshot ID 340, and subsequently allocated segments will include the current snapshot ID 340. PSIDs 316 that are not filled and are allocated to the previous snapshot ID 340 may no longer be written to. Instead, they may be finalized or subject to garbage collection (see FIGS. 5 and 6).
FIG. 4 illustrates a method 400 for executing write instructions by a storage node 106, such as write instructions received from an application executing on a compute node 110.
The method 400 includes receiving 402 a write request. The write request may include payload data, payload data size, and an LBA as well as fields such as a slice identifier, a volume identifier, and a snapshot identifier. Where a slice identifier is included, the LBA may be an offset within the slice, otherwise the LBA may be an address within the storage volume.
The method 400 may include evaluating 404 whether a PSID 316 is allocated to the snapshot referenced in the write request and whether the physical segment 324 corresponding to the PSID 316 ("the current segment") has space for the payload data. In some embodiments, as write requests are performed with respect to a PSID 316, the amount of data written as data 326 and index pages 328 may be tracked, such as by way of the data offset 320 and metadata offset 322 pointers. Accordingly, if the amount of previously-written data 326 and allocated index pages 328, plus the size of the payload data and its corresponding metadata entry, exceeds the capacity of the current segment, the current segment may be determined to be full at step 404.
If the current segment is determined 404 to be full, the method 400 may include allocating 406 a new PSID 316 as the current PSID 316 and its corresponding physical segment 324 as the current segment for the snapshot referenced in the write request. In some embodiments, the status of PSIDs 316 of the physical storage devices 108 may be flagged in the segment map 314 as allocated or free as a result of allocation and garbage collection, which is discussed below. Accordingly, a free PSID 316 may be identified in the segment map 314 and flagged as allocated.
The segment map 314 may also be updated 408 to include a slice ID 310 and snapshot ID 340 mapping the current PSID 316 to the snapshot ID, volume ID 312, and offset 304 included in the write request. Upon allocation, the current PSID 316 may also be mapped to a VSID (virtual segment identifier) 318 that will be a number higher than previous VSIDs 318 such that the VSIDs increase monotonically, subject, of course, to the size limit of the field used to store the VSID 318. However, the size of the field may be sufficiently large that it is not limiting in most situations.
The method 400 may include writing 410 the payload data to the current segment. As described above, this may include writing 410 payload data 326 to the free location closest to the first end of the current segment.
The method 400 may further include writing 412 a metadata entry to the current segment. This may include writing the metadata entry (LBA, size) to the first free location closest to the second end of the current segment. Alternatively, this may include writing the metadata entry to the first free location in an index page 328 that has room for it or creating a new index page 328 located adjacent to a previous index page 328. Steps 410, 412 may include updating one or more pointers or tables that indicate an amount of space available in the physical segment, such as a pointer 320 to the first free address closest to the first end and a pointer 322 to the first free address closest to the second end, which may be the first free address before the last index page 328 and/or the first free address in the last index page. In particular, these pointers may be maintained as the data offset 320 and metadata offset 322 in the segment map 314 for the current PSID 316.
The method 400 may further include updating 416 the block map 338 for the current snapshot. In particular, for each LBA 332 referenced in the write request, an entry in the block map 338 for that LBA 332 may be updated to reference the current PSID 316. A write request may write to a range of LBAs 332. Accordingly, the entry for each LBA 332 in that range may be updated to refer to the current PSID 316.
Updating the block map 338 may include evaluating 414 whether an entry for a given LBA 332 referenced in the write request already exists in the block map 338. If so, then that entry is overwritten 418 to refer to the current PSID 316. If not, an entry is added 416 to the block map 338 that maps the LBA 332 to the current PSID 316. In this manner, the block map 338 only references LBAs 332 that are actually written to, which may be less than all of the LBAs 332 of a storage volume or slice. In other embodiments, the block map 338 is of fixed size and includes an entry for each LBA 332 regardless of whether it has been written to previously. The block map 338 may also be updated to include the physical offset 334 within the current segment to which the data 326 from the write request was written.
In some embodiments, the storage node 106 may execute multiple write requests in parallel for the same LBA 332. Accordingly, it is possible that a later write can complete first and update the block map 338 whereas a previous write request to the same LBA 332 completes later. The data of the previous write request is therefore stale and the block map 338 should not be updated.
Suppressing updating of the block map 338 may be achieved by using the VSIDs 318 and physical offset 334. When executing a write request for an LBA, the VSID 318 mapped to the segment 324 and the physical offset 334 to which the data is to be, or was, written may be compared to the VSID 318 and offset 334 corresponding to the entry in the block map 338 for the LBA 332. If the VSID 318 mapped in the segment map 314 to the PSID 316 in the entry of the block map 338 corresponding to the LBA 332 is higher than the VSID 318 of the write request, then the block map 338 will not be updated. Likewise, if the VSID 318 corresponding to the PSID 316 in the block map 338 is the same as the VSID 318 for the write request and the physical offset 334 in the block map 338 is higher than the offset 334 to which the data of the write request is to be or was written, the block map 338 will not be updated for the write request.
As a result of steps 414-418, the block map 338 only lists the PSID 316 where the valid data for a given LBA 332 is stored. Accordingly, only the index pages 328 of the physical segment 324 mapped to the PSID 316 listed in the block map 338 need be searched to find the data for a given LBA 332. In instances where the physical offset 334 is stored in the block map 338, no searching is required.
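The block map update of steps 414-418, together with the stale-write guard described above, can be sketched as follows. The dictionary-based block map 338 and the helper name update_block_map are simplifications assumed for illustration, not the claimed implementation.

```python
from dataclasses import dataclass

@dataclass
class BlockMapEntry:
    psid: int      # PSID 316 of the segment holding the valid data
    vsid: int      # VSID 318 of that segment (from the segment map 314)
    offset: int    # physical offset 334 within the segment

block_map = {}     # LBA 332 -> BlockMapEntry (stand-in for the block map 338)

def update_block_map(lba, psid, vsid, offset):
    """Steps 414-418 of FIG. 4 plus the stale-write guard: only record the write
    if no newer write (higher VSID, or same VSID at a higher offset) has
    already been recorded for this LBA."""
    entry = block_map.get(lba)
    if entry is not None:
        if entry.vsid > vsid:
            return False               # a later write already landed in a newer segment
        if entry.vsid == vsid and entry.offset > offset:
            return False               # a later write already landed further into the same segment
    block_map[lba] = BlockMapEntry(psid, vsid, offset)
    return True

# Two parallel writes to LBA 7; the one at the higher offset of the same
# segment completes first, so the earlier (stale) write must not win.
assert update_block_map(lba=7, psid=3, vsid=5, offset=8192) is True
assert update_block_map(lba=7, psid=3, vsid=5, offset=4096) is False
assert block_map[7].offset == 8192
```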
FIG. 5 illustrates a method 500 executed by a storage node 106 in response to the new snapshot instruction of step 210 for a storage volume. The method 500 may be executed in response to an explicit instruction to create a new snapshot or in response to a write request that includes a new snapshot ID 340. The method 500 may also be executed with respect to a current snapshot that is still being addressed by new write requests. For example, the method 500 may be executed periodically or be triggered based on usage.
The method 500 may include allocating 502 a new PSID 316 and its corresponding physical segment 324 as the current PSID 316 and current segment for the storage volume, e.g., for a slice ID 310 corresponding to a volume ID 312 and offset 304 included in the new snapshot instruction or the write request referencing the new snapshot ID 340. Allocating 502 a new segment may include updating 504 an entry in the segment map 314 that maps the current PSID 316 to the snapshot ID 340 and a slice ID 310 corresponding to a volume ID 312 and offset 304 included in the new snapshot instruction.
As noted above, when a PSID 316 is allocated, the VSID 318 for that PSID 316 may be a number higher than all VSIDs 318 previously assigned to that volume ID 312, and possibly to that slice ID 310 (where slices have separate series of VSIDs 318). The snapshot ID 340 of the new snapshot may be included in the new snapshot instruction or the storage node 106 may simply assign a new snapshot ID that is the previous snapshot ID 340 plus one.
The method 500 may further include finalizing 506 and performing garbage collection with respect to PSIDs 316 mapped to one or more previous snapshot IDs 340 for the volume ID 312 in the segment map 314, e.g., PSIDs 316 assigned to the snapshot ID 340 that was the current snapshot immediately before the new snapshot instruction was received.
FIG. 6 illustrates a method 600 for finalizing and performing garbage collection with respect to segments mapped to a snapshot ID 340 ("the subject snapshot"), which may be the current snapshot or a previous snapshot. The method 600 may include marking 602 as valid the latest-written data for an LBA 332, i.e. the data in the PSID 316 having the highest VSID 318 in the segment map 314 and to which data was written for that LBA 332. Marking 602 data as valid may include making an entry in a separate table that lists the location of valid data or metadata entries in a given physical segment 324, or setting a flag in the metadata entries stored in the index pages 328 of a physical segment 324, e.g., a flag that indicates that the data referenced by that metadata is invalid or valid.
Note that the block map 338 records the PSID 316 for the latest version of the data written to a given LBA 332. Accordingly, any references to that LBA 332 in the physical segment 324 of a PSID 316 mapped to a lower-numbered VSID 318 may be marked 604 as invalid. For the physical segment 324 of the PSID 316 in the block map 338 for a given LBA 332, the last metadata entry for that LBA 332 may be found and marked as valid, i.e. the last entry referencing the LBA 332 in the index page 328 that is the last index page 328 including a reference to the LBA 332. Any other references to the LBA 332 in the physical segment 324 may be marked 604 as invalid. Note that the physical offset 334 for the LBA 332 may be included in the block map 338, so all metadata entries not corresponding to that physical offset 334 may be marked as invalid.
The method 600 may then include processing 606 each segment ID S of the PSIDs 316 mapped to the subject snapshot according to steps 608-620. In some embodiments, the processing of step 606 may exclude a current PSID 316, i.e. the last PSID 316 assigned to the subject snapshot. As described below, garbage collection may include writing valid data from a segment to a new segment. Accordingly, step 606 may commence with the PSID 316 having the lowest-valued VSID 318 for the subject snapshot. As any segments 324 are filled according to the garbage collection process, they may also be evaluated to be finalized or subject to garbage collection as described below.
The method 600 may include evaluating 608 whether garbage collection is needed for the segment ID S. This may include comparing the amount of valid data in the physical segment 324 for the segment ID S to a threshold. For example, if only 40% of the data stored in the physical segment 324 for the segment ID S has been marked valid, then garbage collection may be determined to be necessary. Other thresholds may be used, such as a value between 30% and 80%. In other embodiments, the amount of valid data is compared to the size of the physical segment 324, e.g., the segment ID S is determined to need garbage collection if the amount of valid data is less than X% of the size of the physical segment 324, where X is a value between 30 and 80, such as 40.
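The decision of step 608 reduces to a ratio test. A minimal sketch follows, assuming an illustrative 32 MiB segment size and the 40% threshold from the example above (both values are assumptions for illustration).

```python
GC_THRESHOLD = 0.40   # illustrative threshold: 40% valid data, per the example above

def needs_garbage_collection(valid_bytes, segment_size, threshold=GC_THRESHOLD):
    """Step 608 of FIG. 6: a segment is a garbage-collection candidate when the
    fraction of its capacity holding valid data falls below the threshold."""
    return valid_bytes / segment_size < threshold

# A 32 MiB segment (assumed size) with only 10 MiB of valid data would be
# collected; one with 20 MiB of valid data would instead be finalized (step 610).
assert needs_garbage_collection(10 * 2**20, 32 * 2**20) is True
assert needs_garbage_collection(20 * 2**20, 32 * 2**20) is False
```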
If garbage collection is determined 608 not to be needed, the method 600 may include finalizing 610 the segment ID S. Finalizing may include flagging the segment ID S in the segment map 314 as full and no longer available to be written to. This flag may be stored in another table that lists finalized PSIDs 316.
If garbage collection is determined 608 to be needed, then the method 600 may include writing 612 the valid data to a new segment. For example, the valid data may be written to a current PSID 316, i.e. the most-recently allocated PSID 316 for the subject snapshot, until its corresponding physical segment 324 is full. If there is no room in the physical segment 324 for the current PSID 316, step 612 may include assigning a new PSID 316 as the current PSID 316 for the subject snapshot. The valid data, or remaining valid data, may then be written to the physical segment 324 corresponding to the current PSID 316 for the subject snapshot.
Note that writing 612 the valid data to the new segment may be processed in the same manner as for any other write request (see FIG. 4) except that the snapshot ID used will be the snapshot ID 340 of the subject snapshot, which may not be the current snapshot ID. In particular, the manner in which the new PSID 316 is allocated to the subject snapshot may be performed in the same manner described above with respect to steps 406-408 of FIG. 4. Likewise, the manner in which the valid data is written to the current segment may be performed in the same manner as for steps 410-412 of FIG. 4. In some embodiments, writing of valid data to a new segment as part of garbage collection may also include updating the block map with the new location of the data for an LBA 332, such as according to steps 414-418 of FIG. 4. When the physical segment 324 of the current PSID 316 is found to be full, it may itself be subject to the process 600 by which it is finalized or subjected to garbage collection.
After the valid data is written to a new segment, the method 600 may further include freeing 614 the PSID S in the segment map 314, e.g., marking the entry in the segment map 314 corresponding to PSID S as free.
The process of garbage collection may be simplified for PSIDs 316 that are associated with the subject snapshot in the segment map 314 but are not listed in the block map 338 with respect to any LBA 332. The physical segments 324 of such PSIDs 316 do not store any valid data. Entries for such PSIDs 316 in the segment map 314 may therefore simply be deleted and marked as free in the segment map 314.
FIG. 7 illustrates a method 700 that may be executed by a storage node 106 in response to a read request. The read request may be received from an application executing on a compute node 110. The read request may include such information as a snapshot ID, volume ID (and/or slice ID), LBA, and size (e.g. number of 4 KB blocks to read).
The following steps of the method 700 may be initially executed using the snapshot ID 340 included in the read request as "the subject snapshot," i.e., the snapshot that is currently being processed to search for requested data. The method 700 includes receiving 702 the read request by the storage node 106 and identifying 704 one or more PSIDs 316 in the segment map 314 assigned to the subject snapshot and searching 706 the metadata entries for these PSIDs 316 for references to the LBA 332 included in the read request.
The searching of step 706 may be performed in order of decreasing VSID 318, i.e. such that the metadata entries for the last allocated PSID 316 are searched first. In this manner, if reference to the LBA 332 is found, the metadata of any previously-allocated PSIDs 316 does not need to be searched.
Searching 706 the metadata for a PSID 316 may include searching one or more index pages 328 of the physical segment 324 corresponding to the PSID 316. As noted above, one or more index pages 328 are stored at the second end of the physical segment 324 and entries are added to the index pages 328 in the order they are received. Accordingly, the last-written metadata including the LBA 332 in the last index page 328 (furthest from the second end of the physical segment 324) in which the LBA 332 is found will correspond to the valid data for that LBA 332. To locate the data 326 corresponding to the last-written metadata for the LBA 332 in the physical segment 324, the sizes 336 for all previously-written metadata entries may be summed to find a start address in the physical segment 324 for the data 326. Alternatively, if the physical offset 334 is included, then the data 326 corresponding to the metadata may be located without summing the sizes 336.
If reference to the LBA 332 is found 708 in the physical segment 324 for any of the PSIDs 316 allocated to the subject snapshot, the data 326 corresponding to the last-written metadata entry including that LBA 332 in the physical segment 324 mapped to the PSID 316 having the highest VSID 318 of all PSIDs 316 in which the LBA is found will be returned 710 to the application that issued the read request.
If the LBA 332 is not found in the metadata entries for any of the PSIDs 316 mapped to the subject snapshot, the method 700 may include evaluating 712 whether the subject snapshot is the earliest snapshot for the storage volume of the read request on the storage node 106. If so, then the data requested is not available to be read and the method 700 may include returning 714 a "data not found" message or otherwise indicating to the requesting application that the data is not available.
If an earlier snapshot than the subject snapshot is present for the storage volume on the storage node 106, e.g., there exists at least one PSID 316 mapped to a snapshot ID 340 that is lower than the snapshot ID 340 of the subject snapshot, then the immediately preceding snapshot ID 340 will be set 716 to be the subject snapshot and processing will continue at step 704, i.e. the PSIDs 316 mapped to the subject snapshot will be searched for the LBA 332 in the read request as described above.
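Steps 704-716 amount to a nested search: the segments of the subject snapshot in decreasing VSID 318 order, then progressively earlier snapshots. The following sketch assumes consecutive positive integer snapshot IDs and flattens the index pages 328 into per-segment lists of (LBA, payload) pairs; the representation and function name are illustrative assumptions only.

```python
def read_lba(segment_map, segments, snapshot_id, lba):
    """Sketch of FIG. 7 (steps 704-716): search the subject snapshot's segments in
    order of decreasing VSID 318, then fall back to earlier snapshots.
    `segment_map` maps PSID -> (snapshot ID 340, VSID 318); `segments` maps PSID
    to a list of (LBA, payload) pairs in write order."""
    for snap in range(snapshot_id, 0, -1):                       # step 716: earlier snapshots
        psids = [p for p, (s, _) in segment_map.items() if s == snap]
        for psid in sorted(psids, key=lambda p: segment_map[p][1], reverse=True):
            # Step 706: the last-written entry for the LBA in a segment is the valid one.
            for entry_lba, payload in reversed(segments[psid]):
                if entry_lba == lba:
                    return payload                               # step 710
    return None                                                  # step 714: "data not found"

segment_map = {1: (1, 1), 2: (1, 2), 3: (2, 3)}                  # PSID -> (snapshot ID, VSID)
segments = {1: [(100, b"old")], 2: [(100, b"new")], 3: [(200, b"x")]}
assert read_lba(segment_map, segments, snapshot_id=2, lba=100) == b"new"   # found in snapshot 1
assert read_lba(segment_map, segments, snapshot_id=2, lba=999) is None
```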
The method 700 is particularly suited for reading data from snapshots other than the current snapshot that is currently being written to. In the case of a read request from the current snapshot, the block map 338 may map each LBA 332 to the PSID 316 in which the valid data for that LBA 332 is written. Accordingly, for such embodiments, step 704 may include retrieving the PSID 316 for the LBA 332 in the read request from the block map 338 and only searching 706 the metadata corresponding to that PSID 316. Where the block map 338 stores a physical offset 334, the data is retrieved from that physical offset within the physical segment 324 of the PSID 316 mapped to the LBA 332 of the read request.
In some embodiments, the block map 338 may be generated for a snapshot other than the current snapshot in order to facilitate executing read requests, such as where a large number of read requests are anticipated, in order to reduce latency. This may include searching the index pages 328 of the segments 324 allocated to the subject snapshot and its preceding snapshots to identify, for each LBA 332 to which data has been written, the PSID 316 having the highest VSID 318 of the PSIDs 316 having physical segments 324 storing data written to that LBA 332. This PSID 316 may then be written to the block map 338 for that LBA 332. Likewise, the physical offset 334 of the last-written data for that LBA 332 within the physical segment 324 for that PSID 316 may be identified as described above (e.g., as described above with respect to steps 704-716).
Referring to FIG. 8, in some instances it may be beneficial to clone a storage volume. This may include capturing a current state of a principal copy of a storage volume and making changes to it without affecting the principal copy of the storage volume. For purposes of this disclosure a "principal copy" or "principal snapshot" of a storage volume refers to an actual production copy that is part of a series of snapshots that is considered by the user to be the current, official, or most up-to-date copy of the storage volume. In contrast, a clone snapshot is a snapshot created for experimentation or evaluation but changes to it are not intended by the user to become part of the production copy of the storage volume. Stated differently, only one snapshot may be a principal snapshot with respect to an immediately preceding snapshot, independent of the purpose of the snapshot. Any other snapshots that are immediate descendants of the immediately preceding snapshot are clone snapshots.
The illustrated method 800 may be executed by the storage manager 102 and one or more storage nodes 106 in order to implement this functionality. The method 800 may include receiving 802 a clone instruction and executing the remaining steps of the method 800 in response to the clone instruction. The clone instruction may be received by the storage manager 102 from a user or be generated according to a script or other program executing on the storage manager 102 or a remote computing device in communication with the storage manager 102.
The method 800 may include recording 804 a clone branch in a snapshot tree. For example, referring to FIG. 9, in some embodiments, for each snapshot that is created for a storage volume, the storage manager 102 may create a node S1-S5 in a snapshot hierarchy 900. In response to a clone instruction, the storage manager 102 may create a clone snapshot and branch to a node A1 representing the clone snapshot. In the illustrated example, a clone instruction was received with respect to the snapshot of node S2. This resulted in the creation of a clone snapshot represented by node A1 that branches from node S2. Note that node S3 and its descendants are also connected to node S2 in the hierarchy.
In some embodiments, the clone instruction may specify which snapshot the clone snapshot is of. In other embodiments, the clone snapshot may be inferred to be a snapshot of the current snapshot. In such embodiments, a new principal snapshot may be created and become the current snapshot. The previous snapshot will then be finalized and be subject to garbage collection as described above. The clone will then branch from the previous snapshot. In the illustrated example, if node S2 represented the current snapshot, then a new snapshot represented by node S3 would be created. The snapshot of node S2 would then be finalized and subject to garbage collection, and the clone snapshot represented by A1 would be created and node A1 would be added to the hierarchy as a descendent of node S2.
In some embodiments, the clone node A1, and possibly its descendants A2 to A4 (representing subsequent snapshots of the clone snapshot), may be distinguished from the nodes S1 to S5 representing principal snapshots, such as by means of a flag, a classification of the connection between the node A1 and node S2 that is its immediate ancestor, or by storing data defining node A1 in a separate data structure.
Following creation of a clone snapshot, other principal snapshots of the storage volume may be created and represented in the hierarchy by one or more nodes S2 to S5. A clone may be created of any of these snapshots and represented by additional clone nodes. In the illustrated example, node B1 represents a clone snapshot of the snapshot represented by node S4. Subsequent snapshots of the clone snapshot are represented by nodes B1 to B3.
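The hierarchy of FIG. 9 can be modeled as a small tree in which each node records whether it represents a principal or clone snapshot. The sketch below rebuilds the S1-S5/A1/B1 portion of the example; the SnapshotNode class and helper function are illustrative assumptions, not part of the disclosed system.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class SnapshotNode:
    """One node of a snapshot hierarchy such as FIG. 9."""
    name: str
    is_clone: bool = False              # distinguishes clone nodes (A*, B*) from principal nodes (S*)
    parent: Optional["SnapshotNode"] = None
    children: List["SnapshotNode"] = field(default_factory=list)

def add_snapshot(parent, name, is_clone=False):
    node = SnapshotNode(name, is_clone, parent)
    parent.children.append(node)
    return node

# Rebuild part of FIG. 9: S1-S5 principal, A1 cloned from S2, B1 cloned from S4.
s1 = SnapshotNode("S1")
s2 = add_snapshot(s1, "S2")
s3 = add_snapshot(s2, "S3")
a1 = add_snapshot(s2, "A1", is_clone=True)   # clone branch recorded at step 804
s4 = add_snapshot(s3, "S4")
b1 = add_snapshot(s4, "B1", is_clone=True)
s5 = add_snapshot(s4, "S5")

# Only one immediate descendant of a node may be a principal snapshot; the rest are clones.
assert sum(not c.is_clone for c in s2.children) == 1
```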
Referring again to FIG. 8, the creation of a clone snapshot on the storage node 106 may be performed in the identical manner as for any other snapshot, such as according to the methods of FIGS. 2 through 6. In particular, one or more segments may be allocated 806 to the clone snapshot on storage nodes 106 storing slices of the cloned storage volume and mapped to the clone snapshot. IOPs referencing the clone snapshot may be executed 808, such as according to the method 400 of FIG. 4.
In some instances, it may be desirable to store a clone snapshot on a different storage node 106 than the principal snapshots. Accordingly, the method 800 may include allocating 806 segments to the clone snapshot on the different storage node 106. This may be invoked by sending a new snapshot instruction referencing the clone snapshot (i.e., an identifier of the clone snapshot) to the different storage node 106 and instructing one or more compute nodes 110 to route IOPs for the clone snapshot to the different storage node 106.
The storage manager 102 may store, in each node of the hierarchy, data identifying one or more storage nodes 106 that store data for the snapshot represented by that node of the hierarchy. For example, each node may store or have associated therewith one or more identifiers of storage nodes 106 that store a particular snapshot ID for a particular volume ID. The node may further map one or more slice IDs (e.g., slice offsets) of a storage volume to one storage node 106 storing data for that slice ID and the snapshots for that slice ID.
Referring to FIG. 10, one of the benefits of snapshots is the ability to capture the state of a storage volume such that it can be restored at a later time. FIG. 10 illustrates a method 1000 for rolling back a storage volume to a previous snapshot, particularly for a storage volume having one or more clone snapshots.
The method 1000 includes receiving 1002, by the storage manager 102, an instruction to roll back a storage volume to a particular snapshot SN. The method 1000 may then include processing 1004 each snapshot that is represented by a descendent node of the node representing snapshot SN in the snapshot hierarchy, i.e. snapshots SN+1 to SMAX, where SMAX is the last principal snapshot that is a descendent of snapshot SN (each a "descendent snapshot"). For each descendent snapshot, processing 1004 may include evaluating 1006 whether the descendent snapshot is an ancestor of a node representing a clone snapshot. If not, then the storage manager 102 may instruct all storage nodes 106 storing segments mapped to the descendent snapshot to free 1008 these segments, i.e. delete entries from the segment map referencing the descendent snapshot and mark corresponding PSIDs 316 as free in the segment map 314.
If the descendent snapshot is found 1006 to be an ancestor of a clone snapshot, then step 1008 is not performed and the snapshot and any segments allocated to it are retained.
FIG. 11 illustrates the snapshot hierarchy following execution of the method 1000 with respect to the snapshot represented by node S3. As is apparent, snapshot S5 has been removed from the hierarchy and any segments corresponding to it will have been freed on one or more storage nodes 106.
However, since node S4 is an ancestor of clone node B1, it is not removed and segments corresponding to it are not freed on one or more storage nodes in response to the roll back instruction. Inasmuch as each snapshot contains only data written to the storage volume after it was created, previous snapshots may be required to recreate the storage volume. Accordingly, the snapshots of nodes S3 to S1 are needed to create the snapshot of the storage volume corresponding to node B1.
Subsequent principal snapshots of the storage volume will be added as descendants of the node to which the storage volume was rolled back. In the illustrated example, a new principal snapshot is represented by node S6 that is an immediate descendent of node S3. Node S4 is only present due to clone node B1 and therefore may itself be classified as a clone node in the hierarchy in response to the rollback instruction of step 1002.
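The rollback rule of the method 1000, i.e., free a descendent snapshot only if it is not an ancestor of a clone node, can be sketched over a simple parent table for the hierarchy of FIG. 9; the table representation and helper names below are assumptions for illustration.

```python
# Parent relationships and clone flags for the hierarchy of FIG. 9.
parents = {"S2": "S1", "S3": "S2", "A1": "S2", "S4": "S3", "B1": "S4", "S5": "S4"}
clones = {"A1", "B1"}

def descendants(node):
    """All nodes below `node` in the hierarchy."""
    return [n for n, p in parents.items() if p == node] + \
           [d for n, p in parents.items() if p == node for d in descendants(n)]

def rollback(target):
    """Sketch of FIG. 10: free every descendent snapshot of the rollback target
    unless it is an ancestor of a clone node (step 1006), in which case it is
    retained because the clone still depends on its data."""
    freed = []
    for snap in descendants(target):
        if snap in clones:
            continue                               # clone snapshots themselves are untouched
        keeps_clone_alive = any(c in descendants(snap) for c in clones)
        if not keeps_clone_alive:
            freed.append(snap)                     # step 1008: segments freed on storage nodes
    return freed

# Rolling back to S3 frees S5 but retains S4, which is an ancestor of clone B1.
assert rollback("S3") == ["S5"]
```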
Note that FIG. 11 is a simple representation of a hierarchy. There could be any number of clone snapshots, clones of clone snapshots and descendent snapshots of any of these snapshots represented by nodes of a hierarchy. Accordingly, to roll back to a particular snapshot of a clone, the method 1000 is the same, except that descendants of the clone snapshot are treated the same as principal snapshots and clones of any of these descendants are treated the same as a clone snapshot.
Referring to FIG. 12, the illustrated method 1200 may be used to execute a read request with respect to a storage volume that is represented by a hierarchy generated as described above with respect to FIGS. 8 through 11. The illustrated method 1200 may also be executed with respect to a storage volume that includes only principal snapshots that are distributed across multiple storage nodes, i.e., all the segments corresponding to snapshots of the same slice of the storage volume are not located on the same storage node 106. In that case, the hierarchy stored on the storage manager 102 stores the location of the segments for each snapshot and therefore enables them to be located.
The method 1200 may be executed by a storage node 106 ("the current storage node") with information retrieved from the storage manager 102 as noted below. The method 1200 may include receiving 1202 a read request, which may include such information as a snapshot ID, volume ID (and/or slice ID), LBA, and size (e.g. number of 4 KB blocks to read).
Note that the read request may be issued by an application executing on a compute node 110. The compute node 110 may determine which storage node 106 to transmit the read request to using information from the storage manager 102. For example, the compute node 110 may transmit a request to obtain an identifier for the storage node 106 storing data for a particular slice and snapshot of a storage volume. The storage manager 102 may then obtain an identifier and/or address for the storage node 106 storing that snapshot and slice of the storage volume from the hierarchical representation of the storage volume and return it to the requesting compute node 110. For example, the storage manager 102 may retrieve this information from the node in the hierarchy representing the snapshot included in the read request.
In response to the read request, the current storage node performs the algorithm illustrated by subsequent steps of the method 1200. In particular, the method 1200 may include identifying 1204 segments assigned to the snapshot ID of the read request ("the subject snapshot") in the segment map 314.
The method 1200 may include searching 1206 the metadata of the segments identified in step 1204 for the LBA of the read request. If the LBA is found, the data from the highest numbered segment having the LBA in its metadata is returned, i.e. the data that corresponds to the last-written metadata entry including the LBA.
If the LBA is not found in any of the segments mapped to the subject snapshot, then the method 1200 may include evaluating 1212 whether the subject snapshot is the earliest snapshot on the current storage node. If not, then processing continues at step 1204 with the previous snapshot set 1214 as the subject snapshot.
Steps 1204-1214 may be performed in the same manner as for steps 704-714 of the method 700, including the various modifications and variations described above with respect to the method 700.
In contrast to the method 700, if the LBA is not found in any of the segments corresponding to the subject snapshot for any of the snapshots evaluated, then the method 1200 may include requesting 1216 a location, e.g. a storage node identifier, where an earlier snapshot for the volume ID or slice ID is stored. In response to this request, the storage manager 102 determines an identifier of a storage node 106 storing the snapshot corresponding to the immediate ancestor of the earliest snapshot stored on the current storage node in the hierarchy. The storage manager 102 may determine an identifier of the storage node 106 relating to the immediate-ancestor snapshot and that stores data for the slice ID and volume ID of the read request as recorded for the nearest ancestor node, in the hierarchy, of the node corresponding to the earliest snapshot stored on the current storage node.
If the current storage node is found 1218 to store the earliest snapshot for the storage volume ID and/or slice ID of the read request, then the storage manager 102 may report this fact to the storage node, which will then return 1220 a message indicating that the requested LBA is not available for reading, such as in the same manner as step 714 of the method 700.
If another storage node stores an earlier snapshot for the volume ID and/or slice ID of the read request, then the read request may be transmitted 1222 to this next storage node by either the current storage node or the storage manager 102. The processing may then continue at step 1202 with the next storage node as the current storage node. The read request transmitted at step 1222 may have a snapshot ID set to the latest snapshot ID for the storage volume ID and/or slice ID of the original read request.
The method 1200 may be performed repeatedly across multiple storage nodes 106 until the earliest snapshot is encountered or the LBA of the read request is located.
FIG. 13 illustrates a method 1300 for deleting snapshots. The method 1300 may include receiving 1302, by the storage manager 102, an instruction to delete a snapshot ("the subject snapshot") for a storage volume ("the subject volume"). The instruction may be received from a user or from a script or other scheduling program that deletes snapshots after a certain amount of time or when they are otherwise no longer needed.
In response, the storage manager 102 flags 1304 the subject snapshot as deleted in the snapshot hierarchy for the subject volume. The instruction of step 1302 may include an identifier of the subject snapshot and subject volume. For example, in the hierarchy of FIG. 9, snapshot S1 may be deleted. Accordingly, the hierarchy as shown in FIG. 9 would remain unchanged except that an annotation would be associated with the hierarchy indicating that S1 is now deleted.
The storage manager 102 then transmits 1306 an instruction to delete the snapshot to all implicated storage nodes. For example, as shown in FIG. 3, the volume map 300 for the subject volume may indicate the node 302 on which a slice having a given address (offset 304) is stored. Accordingly, the instruction may be transmitted to the storage node 106 corresponding to each node ID 302 mapped to the subject volume. The instruction may include identifiers of the subject snapshot and subject volume.
Upon receiving the instruction, each storage node 106 that receives it may update 1308 its segment map 314 as stored in memory without updating a persistent copy of the segment map 314 stored on a storage device 108 (e.g., hard disk drive (HDD), solid state drive (SSD)) of that storage node 106. In this manner, the delete instruction does not impair production IOP processing by the storage node 106 on the storage device 108.
Updating 1308 the segment map 314 may include removing reference to the deleted snapshot. For example, suppose snapshots are designated S(i), i=1 to N, with N being the number of snapshots and S(N) being the latest snapshot. If an instruction is received to delete S(M), M&lt;N, then all references to snapshot S(M) in the segment map 314 in memory will be changed to S(M+1), or the earliest non-deleted snapshot following S(M). Accordingly, each PSID 316 including a snapshot ID 340 corresponding to S(M) will be changed such that the snapshot ID 340 references S(M+1). The persistent copy of the segment map 314 will still refer to S(M) in the entries corresponding to those same PSIDs 316.
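The in-memory remapping of step 1308 amounts to a single pass over the segment map 314. A minimal sketch follows; the dictionary representation and helper name are illustrative assumptions.

```python
def apply_snapshot_delete(segment_map, deleted, remaining_snapshots):
    """Sketch of step 1308: remap, in the in-memory segment map 314 only, every
    reference to the deleted snapshot to the earliest non-deleted snapshot that
    follows it; the persistent copy on the storage device 108 is left untouched.
    `segment_map` maps PSID 316 -> snapshot ID 340."""
    replacement = min(s for s in remaining_snapshots if s > deleted)   # S(M+1)
    for psid, snap in segment_map.items():
        if snap == deleted:
            segment_map[psid] = replacement
    return segment_map

# Snapshot 2 is deleted: its segments are re-labelled as belonging to snapshot 3.
in_memory = {0: 1, 1: 2, 2: 2, 3: 3}   # PSID -> snapshot ID
assert apply_snapshot_delete(in_memory, deleted=2, remaining_snapshots=[1, 3]) == {0: 1, 1: 3, 2: 3, 3: 3}
```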
In the event that the storage node 106 crashes or otherwise is found 1310 to be restarted, the segment map 314 in memory will be lost. In response to detecting 1310 restarting, the storage node 106 will therefore request 1312 the snapshot hierarchy from the storage manager 102, which then transmits 1314 the snapshot hierarchy to the storage node 106.
In response to receiving the snapshot hierarchy, the storage node 106 then reads the persistent copy of the segment map 314 from its storage location on the storage device 108 into memory. The storage node 106 again updates 1316 the segment map 314 in memory without updating the persistent copy of the segment map 314. The updating may be performed in the same manner as for step 1308, with references to any snapshots that are flagged as deleted in the snapshot hierarchy being changed as described above with respect to step 1308.
Multiple snapshots may have been deleted prior to restarting being detected 1310. However, the process is the same: all references to a deleted snapshot S(M) in the segment map 314 in memory will be changed to S(M+1), or the earliest non-deleted snapshot following S(M). In this case a "non-deleted" snapshot is a snapshot that is not flagged as deleted in the snapshot hierarchy.
As described above, garbage collection (see FIG. 6) is performed for snapshots. As described above, segments that have little valid data may have that valid data written to a new segment and then marked as free in the segment map 314. As a result of this process, it can be expected that the segments referencing a deleted snapshot will eventually all be marked as free.
Accordingly, the storage node 106 may periodically evaluate 1318 the persistent copy of the segment map 314. In the event that all segments referring to a deleted snapshot are found 1318 to have been freed, either with or without reallocation, then the storage node 106 may notify 1320 the storage manager 102, such as by transmitting an identifier of the deleted snapshot and its corresponding storage volume to the storage manager 102 with a message indicating that it is no longer referenced.
Whether a segment referencing the deleted snapshot has been freed may be determined by comparing the VSIDs 318 of the segment maps. If the Slice ID 310 and VSID 318 of a PSID 316 entry corresponding to the deleted snapshot in the persistent copy of the segment map 314 do not match both the Slice ID 310 and VSID 318 of the entry for the same PSID 316 in the segment map 314 in memory, then that PSID 316 has been freed and reallocated. Of course, if the entry in memory is flagged as free for a PSID 316, then this clearly indicates that the segment has been garbage collected and is no longer allocated to the deleted snapshot.
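The comparison of steps 1318-1320 can be sketched as follows, with each segment map entry reduced to a (slice ID 310, VSID 318, snapshot ID 340) tuple or None when free; this compact representation is an illustrative assumption.

```python
def deleted_snapshot_fully_freed(persistent_map, memory_map, deleted_snapshot):
    """Sketch of steps 1318-1320: the deleted snapshot is no longer referenced once
    every PSID that the persistent segment map still attributes to it has either
    been freed or reallocated (slice ID / VSID no longer match) in memory."""
    for psid, entry in persistent_map.items():
        if entry is None or entry[2] != deleted_snapshot:
            continue                                 # not a segment of the deleted snapshot
        mem = memory_map.get(psid)
        if mem is None:
            continue                                 # flagged free in memory: garbage collected
        if (mem[0], mem[1]) == (entry[0], entry[1]):
            return False                             # same slice ID and VSID: still allocated
    return True                                      # safe to notify the storage manager (step 1320)

persistent = {0: (7, 12, 2), 1: (7, 13, 3)}          # PSID 0 belonged to deleted snapshot 2
assert deleted_snapshot_fully_freed(persistent, {0: None, 1: (7, 13, 3)}, 2) is True
assert deleted_snapshot_fully_freed(persistent, {0: (7, 12, 3), 1: (7, 13, 3)}, 2) is False
```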
In response, the storage manager 102 deletes 1322 reference to the deleted snapshot from the snapshot hierarchy for the storage volume identified in the notification of step 1320. Using the example of FIG. 9, where S1 is deleted, the hierarchy would be updated to remove reference to it such that S2 is the oldest snapshot in the hierarchy.
Where a snapshot that is deleted has a clone snapshot as a descendent, then the deleted snapshot may become a clone node in a branch including the clone snapshot but not be deleted, as discussed above. If a snapshot is deleted that is the only non-clone ancestor of a clone node, then the deleted snapshot and any descendent clone nodes are no longer connected to the snapshot hierarchy and may be treated as a separate snapshot hierarchy. For example, if S1 and S2 were to be deleted in FIG. 9, A1 to A4 would no longer have any connection to snapshots S3-S5 and would be unaffected by subsequent changes to the original snapshot hierarchy.
Note that the only disk writes required for deletion of a snapshot on the storage node are those that would occur during normal operation as a result of garbage collection. Accordingly, deletion of a snapshot does not significantly interfere with processing of production IOPs.
Referring to FIG. 14A, in some instances a compute node 110 also operates as a storage node 106. This may be the case where network latency must be reduced. Accordingly, a storage node 106 may be required to process IOPs that are generated locally and those that are received over a network from a remote compute node 110.
In the illustrated configuration 1400a, a disk virtualization manager (DVM) 1402 executes the functions ascribed to a storage node 106 in the above-described methods. The DVM 1402 may be implemented as a daemon executing on the storage node that is invoked by a kernel in response to procedure calls referencing it, including remote procedure calls (RPCs) from remote compute nodes 110.
Inasmuch as the DVM 1402 is configured as a network service, local IOPs may be routed in a manner such that the DVM 1402 processes them in the same manner as IOPs received as RPCs. For example, an IOP from a locally executing application may be sent to a network buffer 1406 of the storage node 106 and be addressed to an IO (input-output) module 1404 executing on the storage node 106, such as a daemon process.
The IO module 1404 determines that the IOP is for the local DVM 1402 and copies the IOP to memory 1408 of the network stack of a kernel space 1410 in the form of an RPC addressed to the DVM 1402. The IO module 1404 and DVM 1402 may operate in user application space 1412. The kernel then processes the RPC from memory 1408 and routes it to the DVM 1402, which then processes the IOP by executing a read or write operation, such as according to the methods described above.
A response to the IOP may be copied tomemory1414 in the network stack inkernel space1410, such as in the form of a RPC addressed to theIO module1404. TheIO module1404 receives the response and then returns it to the network stack ofkernel space1410 addressed to the application from which it was received.
If theIO module1404 receives an IOP for aremote storage node106, theIO module1404 may transmit the IOP to theremote storage node106 as an RPC transmitted through the network stack of thekernel space1410.
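The following is a minimal Go sketch of the routing just described forconfiguration1400a, with ordinary function values standing in for the kernel network stack and RPC transport. Every IOP, whether local or remote, is handled through the same RPC-shaped path. The names (ioModule, dvm, iop) are assumptions for illustration only.

package main

import "fmt"

type iop struct {
	Dest    string // storage node the IOP is addressed to
	Op      string // "read" or "write"
	Payload []byte
}

type dvm struct{ node string }

// serveRPC plays the role of the DVM daemon handling an RPC that arrived via
// the network stack, whether issued by a local or a remote compute node.
func (d *dvm) serveRPC(req iop) string {
	return fmt.Sprintf("node %s executed %s of %d bytes", d.node, req.Op, len(req.Payload))
}

type ioModule struct {
	local     string
	localDVM  *dvm
	remoteRPC func(iop) string // stands in for an RPC to a remote storage node
}

// route mimics the IO module: even a local IOP is wrapped as an RPC so the
// DVM processes local and remote requests identically.
func (m *ioModule) route(req iop) string {
	if req.Dest == m.local {
		// Local IOP still takes the RPC-shaped path through "kernel memory".
		return m.localDVM.serveRPC(req)
	}
	return m.remoteRPC(req)
}

func main() {
	m := &ioModule{
		local:     "node-a",
		localDVM:  &dvm{node: "node-a"},
		remoteRPC: func(r iop) string { return "forwarded to " + r.Dest },
	}
	fmt.Println(m.route(iop{Dest: "node-a", Op: "write", Payload: []byte("data")}))
	fmt.Println(m.route(iop{Dest: "node-b", Op: "read"}))
}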
As is apparent, this approach is complex and requires various intermediate steps in order to simulate an RPC addressed to theDVM1402, even though the application issuing the IOP is executing on thesame storage node106 as theDVM1402.
FIG. 14B illustrates analternative approach1400bfor implementing ahybrid storage node106 that also functions as acompute node110. In this approach, theIO module1404 andDVM1402 are components of asingle process1416 that may operate as a daemon or other persistent service executing on thestorage node106.
Themodules1404,1402 may communicate with one another by means of library function calls and by way of sharedmemory1418 inkernel space1410. Alocal application1420 executing in user space will then issue IOPs to thenetwork buffer1406 inkernel space1410, which will be addressed to theIO module1404 of theunified process1416. IOPs addressed to aremote storage node106 may be transmitted to thatstorage node106 by means of anRPC1420 issued by the kernel in response to receiving the IOP from theapplication1420 or as instructed by theIO module1404.
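As a simple illustration of the shared-memory handoff inapproach1400b, the following Go sketch stages a payload in a shared buffer and then invokes the DVM side with an ordinary function call, avoiding any copy through the network stack. The buffer structure and function names are assumptions for illustration only.

package main

import (
	"fmt"
	"sync"
)

// sharedMemory stands in for the shared region in kernel space used by the
// IO module and DVM module of the unified process.
type sharedMemory struct {
	mu   sync.Mutex
	data map[string][]byte
}

func (s *sharedMemory) put(key string, payload []byte) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.data[key] = payload
}

func (s *sharedMemory) get(key string) []byte {
	s.mu.Lock()
	defer s.mu.Unlock()
	return s.data[key]
}

func main() {
	shm := &sharedMemory{data: map[string][]byte{}}

	// IO module side: stage the payload, then call the DVM side directly.
	shm.put("vol1/lba42", []byte("payload"))
	dvmExecute := func(key string) { // stands in for the DVM library entry point
		fmt.Printf("DVM read %d bytes from shared memory for %s\n", len(shm.get(key)), key)
	}
	dvmExecute("vol1/lba42")
}

Because both modules live in one process, the handoff is a function call plus one buffer write, rather than two copies through the network stack as in theapproach1400a.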
FIGS. 15A and 15B illustratemethods1500a,1500bshowing details of theapproach1400b.FIG. 15A illustrates a method for processing a write IOP using theIO module1404 andDVM module1402 of theunified process1416. Themethod1500aincludes receiving1502 an IOP by theIO module1404 from anapplication1420 operating in user space1412, such as by way of thenetwork buffer1406 fromkernel space1410. In other approaches, the application may address an IOP to theIO module1404 through some other process inkernel space1410 or user space1412.
TheIO module1404 determines1504 a destination of the IOP, such as in the form of an IP address, storage node identifier, or other addressing information. If the destination is found1506 not to be local, theIO module1404 transmits the IOP to theDVM module1402 of theremote storage node106 addressed by the IOP. TheDVM module1402 of the remote storage node may be part of aunified process1416 on that node or may be implemented according to theapproach1400a. TheIO module1404 may transmit the IOP by generating anRPC1508 inkernel space1410 that is transmitted by the kernel to theremote storage node106.
If the destination is found1506 to be local, payload data from the write IOP may be written1510 to the sharedmemory1418 inkernel space1410. The payload data is the data requested to be written to persistent storage on thestorage node106 by the IOP.
Themethod1500afurther includes invoking1512 a library function call to theDVM module1402 of theunified process1416. The library function call may be made to theDVM module1402 directly through theunified process1416 executing in user space1412 and therefore does not require transmitting information through the network stack inkernel space1410.
TheDVM module1402 receives1514 the library call and, in response, executes1516 the IOP using the payload data stored in the sharedmemory1418. The function call may include data from the write IOP sufficient to identify the location to which the payload is to be written and may include the write IOP itself, other than the payload data. Executing1516 the function call may include writing the payload data to the location referenced by the write IOP according to themethod400 or using any approach for processing write commands using any disk virtualization approach known in the art. Accordingly, the write IOP may include data sufficient to identify the location to write the data according to themethod400 or whichever disk virtualization approach is used.
TheDVM module1402 may then invoke1518 a function call to theIO module1404 within theunified process1416 indicating a result of the IOP, e.g. an acknowledgment of successful completion, an error message, or some other message. TheIO module1404 receives this function call and, in response, returns1520 the response to theapplication1420 either directly or by way of thenetwork buffer1406 inkernel space1410.
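The following Go sketch is a hedged illustration of the write path of themethod1500a: the IO module checks the destination, forwards remote IOPs as RPCs, and handles local IOPs by placing the payload in shared memory and making a direct function call into the DVM module of the same process. All identifiers are assumptions for illustration, not the actual implementation.

package main

import "fmt"

type writeIOP struct {
	Dest    string
	Volume  string
	LBA     uint64
	Payload []byte
}

type unifiedProcess struct {
	localNode string
	shared    map[uint64][]byte     // stands in for shared memory in kernel space
	sendRPC   func(writeIOP) string // stands in for an RPC to a remote storage node
}

// dvmWrite is the DVM module's entry point, invoked as a library call; it
// takes the payload out of shared memory and "persists" it (steps 1514-1516).
func (p *unifiedProcess) dvmWrite(iop writeIOP) string {
	data := p.shared[iop.LBA]
	return fmt.Sprintf("wrote %d bytes to %s@%d", len(data), iop.Volume, iop.LBA)
}

// handleWrite is the IO module's entry point (steps 1502-1512, 1518-1520).
func (p *unifiedProcess) handleWrite(iop writeIOP) string {
	if iop.Dest != p.localNode {
		return p.sendRPC(iop) // steps 1506-1508: not local, forward as an RPC
	}
	p.shared[iop.LBA] = iop.Payload // step 1510: payload into shared memory
	result := p.dvmWrite(iop)       // step 1512: direct library call, no network stack
	return "ack: " + result         // steps 1518-1520: return the result to the application
}

func main() {
	p := &unifiedProcess{
		localNode: "node-a",
		shared:    map[uint64][]byte{},
		sendRPC:   func(w writeIOP) string { return "forwarded to " + w.Dest },
	}
	fmt.Println(p.handleWrite(writeIOP{Dest: "node-a", Volume: "vol1", LBA: 42, Payload: []byte("hello")}))
}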
FIG. 15B illustrates anexample method1500bfor processing a read IOP using theapproach1400b. In themethod1500b, a read IOP is received and processed according to steps1502-1508 in the same manner as a write IOP.
If the read IOP is found1506 to be local, a function call is again invoked1512 to theDVM module1402. Inasmuch as a read IOP may not contain a significant amount of data, any writing to the sharedmemory1418 may be omitted in this case. In other embodiments, some or all of the data of the read IOP is written to the sharedmemory1418.
TheDVM module1402 receives1514 and executes1516 the function call as for themethod1500a. For themethod1500b, the function call may include data from the read IOP sufficient to identify the data to be read or may include the read IOP itself. Executing1516 the function call may include reading the data referenced by the read IOP according to themethod700 or using any approach for processing read commands using any disk virtualization approach known in the art. Accordingly, the read IOP may include data sufficient to identify the location from which to read data according to themethod700 or whichever disk virtualization approach is used.
TheDVM module1402 may then write1522 the payload data read atstep1516 to the sharedmemory1418 and invoke1518 a function call within theunified process1416 to theIO module1404. For themethod1500b, thefunction call1518 may indicate a result of executing1516 the read IOP, such as a message indicating success, an error, or some other information.
In response to receiving the function call ofstep1518, theIO module1404 returns1524 a response to theapplication1420 that issued the read IOP, which may include the payload data as read from the sharedmemory1418 if the read command was successful. Where the read command was not successful, theIO module1404 may forward the status message fromstep1518 to theapplication1420. Returning1524 the response may include transmitting the response directly to theapplication1420, by way of thenetwork buffer1406 inkernel space1410, or by some other process executing inkernel space1410.
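As a hedged Go sketch of the read path of themethod1500b, the DVM side below executes the read, places the payload in shared memory, and signals the IO side with a function call; the IO side then returns either the payload or the status to the application. The identifiers are assumptions for illustration only.

package main

import (
	"errors"
	"fmt"
)

type readIOP struct {
	Volume string
	LBA    uint64
}

type readResult struct {
	err error // status passed in the function call of step 1518
}

type node struct {
	store  map[uint64][]byte // stands in for persistent storage on the node
	shared []byte            // stands in for the shared memory region
}

// dvmRead executes the read IOP (steps 1514-1516, 1522): it looks up the data
// and writes the payload into shared memory for the IO module to pick up.
func (n *node) dvmRead(iop readIOP) readResult {
	data, ok := n.store[iop.LBA]
	if !ok {
		return readResult{err: errors.New("LBA not found")}
	}
	n.shared = append(n.shared[:0], data...)
	return readResult{}
}

// ioRead is the IO module side (steps 1512, 1518, 1524): invoke the DVM via a
// function call, then return either the payload from shared memory or the
// status message to the application.
func (n *node) ioRead(iop readIOP) ([]byte, error) {
	res := n.dvmRead(iop)
	if res.err != nil {
		return nil, res.err
	}
	return n.shared, nil
}

func main() {
	n := &node{store: map[uint64][]byte{7: []byte("stored data")}}
	if data, err := n.ioRead(readIOP{Volume: "vol1", LBA: 7}); err == nil {
		fmt.Printf("application received %q\n", data)
	}
}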
Referring toFIG. 16, in some instances, data may be encoded in some form prior to being written to astorage device108. This encoding may be encryption, compression, addition of error correction codes, or any other type of encoding known in the art. Inasmuch as a storage volume may be in use over an extended period of time, an encoding protocol may change during its use. Accordingly, earlier stored data may use a different protocol than later stored data. The illustratedmethod1600 may be used to make possible these changes in encoding while still enabling recovery of data.
The illustratedmethod1600 may be preceded by an instruction to astorage node106 to use a particular encoding protocol for a storage volume. The encoding protocol may be for encryption, compression, error correction, or some other purpose. This instruction may be received from thestorage manager102, such as in response to a user instruction to use a particular encoding protocol for a particular purpose.
The illustratedmethod1600 is described as being performed by anIO module1404 andDVM module1402 that may be implemented according to the approach ofFIG. 14A or 14B. Likewise, the distribution of actions between theIO module1404 andDVM module1402 is exemplary only; the actions may be performed by a single component or by a different component executing on astorage node106. Accordingly, steps relating to communication among these components may be omitted in such embodiments.
Themethod1600 may include receiving1602 a write IOP from an application executing locally or on aremote compute node110. TheIO module1404 determines1604 one or more current encoding protocols specified for the storage volume referenced in the write IOP (e.g., encryption, compression, error correction).
TheIO module1404 then encodes1606 the payload data from the write IOP according to the one or more protocols determined atstep1604, which may include one or more of encrypting, compressing, and adding error correction codes. TheIO module1404 transmits1608 the write IOP to theDVM module1402 along with tags indicating the encoding protocols executed atstep1606.
TheDVM module1402 then executes1610 the write IOP using the encoded payload data, i.e. writes the encoded payload data to an address included in the write IOP according to any method known in the art or according to any of the methods described above, such as themethod400 ofFIG. 4.
TheDVM module1402 further adds1612 the tags, or data representing the tags, transmitted1608 with the write IOP to the metadata entry for the write IOP. In particular, as shown inFIG. 3, each write IOP may result in creation of a metadata entry in anindex page328 for eachLBA332 referenced in the write IOP. According to themethod1600, this metadata entry for eachLBA332 in anindex page328 will also include the tags, or a representation of the data indicated by the tags, that identify the encoding protocols used to encode the payload data written to thatLBA332. Accordingly, there may be one or more tags depending on the protocols used, such as an encryption protocol tag, a compression protocol tag, an error correction code tag, or any other tag sufficient to identify an encoding protocol.Step1612 may be performed as part ofstep412 of themethod400 or at a different point in the execution of a write IOP.
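The following Go sketch illustrates, under stated assumptions, the write path of themethod1600: the payload is encoded with the volume's current protocols and the resulting protocol tags are recorded alongside the metadata entry for the LBA so the data can be decoded later even if the volume's protocols change. Only gzip compression is shown, and the structures are illustrative, not the actual index page format.

package main

import (
	"bytes"
	"compress/gzip"
	"fmt"
)

type metadataEntry struct {
	LBA  uint64
	Tags []string // encoding protocols, in the order they were applied
}

type volume struct {
	protocols []string                 // current encoding protocols for the volume
	metadata  map[uint64]metadataEntry // stands in for index page entries
	segments  map[uint64][]byte        // stands in for segment storage
}

// writeIOP encodes the payload per the volume's current protocols (step 1606),
// stores the encoded data (step 1610), and records the tags (step 1612).
func (v *volume) writeIOP(lba uint64, payload []byte) error {
	encoded := payload
	for _, p := range v.protocols {
		switch p {
		case "gzip":
			var buf bytes.Buffer
			w := gzip.NewWriter(&buf)
			if _, err := w.Write(encoded); err != nil {
				return err
			}
			if err := w.Close(); err != nil {
				return err
			}
			encoded = buf.Bytes()
		default:
			return fmt.Errorf("unknown protocol %q", p)
		}
	}
	v.segments[lba] = encoded
	v.metadata[lba] = metadataEntry{LBA: lba, Tags: append([]string(nil), v.protocols...)}
	return nil
}

func main() {
	v := &volume{
		protocols: []string{"gzip"},
		metadata:  map[uint64]metadataEntry{},
		segments:  map[uint64][]byte{},
	}
	_ = v.writeIOP(9, bytes.Repeat([]byte("abc"), 100))
	fmt.Println(v.metadata[9].Tags, len(v.segments[9]), "bytes stored")
}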
Steps1614-1626 illustrate an example approach for processing read IOPs with respect to data that has been encoded and written according to steps1602-1612.
TheIO module1404 of thestorage node106 receives1614 a read IOP from a local application or aremote compute node110. TheIO module1404 transmits1616 the read IOP to theDVM module1402, which then executes1618 the read IOP and retrieves payload data referenced by the read IOP using any method for executing read IOPs, such as according to themethod700 ofFIG. 7.
TheDVM module1402 further retrieves1620 the one or more tags from the metadata entry for the data read atstep1618, i.e. the metadata entry for theLBA332 referenced by the read IOP. TheDVM module1402 then transmits1622 the payload data and one or more tags to theIO module1404, which decodes1624 the payload data using the protocols indicated by the tags to recover the payload data as it was prior to encoding atstep1606. The decoding protocols may be applied in the reverse of the order in which the encoding protocols were applied atstep1606. Accordingly, the ordering of the tags as stored in the metadata may indicate the order in which protocols were applied atstep1606 such that the corresponding decoding protocols may be performed in the correct reverse order. The decoded data may then be returned1626 to the application that issued the read IOP atstep1614.
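As a hedged sketch of the decode side of steps1614-1626, the Go example below applies the decoding protocols implied by the stored tags in reverse order of encoding before the payload is returned. Only gzip is shown, and all names are assumptions for illustration.

package main

import (
	"bytes"
	"compress/gzip"
	"fmt"
	"io"
)

// decode applies the decoding protocols implied by tags in reverse order of
// encoding, recovering the payload as it was before step 1606.
func decode(encoded []byte, tags []string) ([]byte, error) {
	data := encoded
	for i := len(tags) - 1; i >= 0; i-- {
		switch tags[i] {
		case "gzip":
			r, err := gzip.NewReader(bytes.NewReader(data))
			if err != nil {
				return nil, err
			}
			out, err := io.ReadAll(r)
			if err != nil {
				return nil, err
			}
			if err := r.Close(); err != nil {
				return nil, err
			}
			data = out
		default:
			return nil, fmt.Errorf("unknown protocol %q", tags[i])
		}
	}
	return data, nil
}

func main() {
	// Encode once so there is something to decode.
	var buf bytes.Buffer
	w := gzip.NewWriter(&buf)
	w.Write([]byte("payload data"))
	w.Close()

	decoded, err := decode(buf.Bytes(), []string{"gzip"})
	fmt.Println(string(decoded), err)
}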
FIG. 17 is a block diagram illustrating anexample computing device1700.Computing device1700 may be used to perform various procedures, such as those discussed herein. Thestorage manager102,storage nodes106, and computenodes110 may have some or all of the attributes of thecomputing device1700.
Computing device1700 includes one or more processor(s)1702, one or more memory device(s)1704, one or more interface(s)1706, one or more mass storage device(s)1708, one or more Input/output (I/O) device(s)1710, and adisplay device1730 all of which are coupled to abus1712. Processor(s)1702 include one or more processors or controllers that execute instructions stored in memory device(s)1704 and/or mass storage device(s)1708. Processor(s)1702 may also include various types of computer-readable media, such as cache memory.
Memory device(s)1704 include various computer-readable media, such as volatile memory (e.g., random access memory (RAM)1714) and/or nonvolatile memory (e.g., read-only memory (ROM)1716). Memory device(s)1704 may also include rewritable ROM, such as Flash memory.
Mass storage device(s)1708 include various computer readable media, such as magnetic tapes, magnetic disks, optical disks, solid-state memory (e.g., Flash memory), and so forth. As shown inFIG. 17, a particular mass storage device is ahard disk drive1724. Various drives may also be included in mass storage device(s)1708 to enable reading from and/or writing to the various computer readable media. Mass storage device(s)1708 include removable media1726 and/or non-removable media.
I/O device(s)1710 include various devices that allow data and/or other information to be input to or retrieved fromcomputing device1700. Example I/O device(s)1710 include cursor control devices, keyboards, keypads, microphones, monitors or other display devices, speakers, printers, network interface cards, modems, lenses, CCDs or other image capture devices, and the like.
Display device1730 includes any type of device capable of displaying information to one or more users ofcomputing device1700. Examples ofdisplay device1730 include a monitor, display terminal, video projection device, and the like.
Interface(s)1706 include various interfaces that allowcomputing device1700 to interact with other systems, devices, or computing environments. Example interface(s)1706 include any number of different network interfaces1720, such as interfaces to local area networks (LANs), wide area networks (WANs), wireless networks, and the Internet. Other interface(s) include user interface1718 andperipheral device interface1722. The interface(s)1706 may also include one or more peripheral interfaces such as interfaces for printers, pointing devices (mice, track pad, etc.), keyboards, and the like.
Bus1712 allows processor(s)1702, memory device(s)1704, interface(s)1706, mass storage device(s)1708, I/O device(s)1710, anddisplay device1730 to communicate with one another, as well as other devices or components coupled tobus1712.Bus1712 represents one or more of several types of bus structures, such as a system bus, PCI bus, IEEE 1394 bus, USB bus, and so forth.
For purposes of illustration, programs and other executable program components are shown herein as discrete blocks, although it is understood that such programs and components may reside at various times in different storage components ofcomputing device1700, and are executed by processor(s)1702. Alternatively, the systems and procedures described herein can be implemented in hardware, or a combination of hardware, software, and/or firmware. For example, one or more application specific integrated circuits (ASICs) can be programmed to carry out one or more of the systems and procedures described herein.
In the above disclosure, reference has been made to the accompanying drawings, which form a part hereof, and in which is shown by way of illustration specific implementations in which the disclosure may be practiced. It is understood that other implementations may be utilized and structural changes may be made without departing from the scope of the present disclosure. References in the specification to “one embodiment,” “an embodiment,” “an example embodiment,” etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to affect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
Implementations of the systems, devices, and methods disclosed herein may comprise or utilize a special purpose or general-purpose computer including computer hardware, such as, for example, one or more processors and system memory, as discussed herein. Implementations within the scope of the present disclosure may also include physical and other computer-readable media for carrying or storing computer-executable instructions and/or data structures. Such computer-readable media can be any available media that can be accessed by a general purpose or special purpose computer system. Computer-readable media that store computer-executable instructions are computer storage media (devices). Computer-readable media that carry computer-executable instructions are transmission media. Thus, by way of example, and not limitation, implementations of the disclosure can comprise at least two distinctly different kinds of computer-readable media: computer storage media (devices) and transmission media.
Computer storage media (devices) includes RAM, ROM, EEPROM, CD-ROM, solid state drives (“SSDs”) (e.g., based on RAM), Flash memory, phase-change memory (“PCM”), other types of memory, other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store desired program code means in the form of computer-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer.
An implementation of the devices, systems, and methods disclosed herein may communicate over a computer network. A “network” is defined as one or more data links that enable the transport of electronic data between computer systems and/or modules and/or other electronic devices. When information is transferred or provided over a network or another communications connection (either hardwired, wireless, or a combination of hardwired or wireless) to a computer, the computer properly views the connection as a transmission medium. Transmissions media can include a network and/or data links, which can be used to carry desired program code means in the form of computer-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer. Combinations of the above should also be included within the scope of computer-readable media.
Computer-executable instructions comprise, for example, instructions and data which, when executed at a processor, cause a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions. The computer executable instructions may be, for example, binaries, intermediate format instructions such as assembly language, or even source code. Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the described features or acts described above. Rather, the described features and acts are disclosed as example forms of implementing the claims.
Those skilled in the art will appreciate that the disclosure may be practiced in network computing environments with many types of computer system configurations, including, an in-dash vehicle computer, personal computers, desktop computers, laptop computers, message processors, hand-held devices, multi-processor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, mobile telephones, PDAs, tablets, pagers, routers, switches, various storage devices, and the like. The disclosure may also be practiced in distributed system environments where local and remote computer systems, which are linked (either by hardwired data links, wireless data links, or by a combination of hardwired and wireless data links) through a network, both perform tasks. In a distributed system environment, program modules may be located in both local and remote memory storage devices.
Further, where appropriate, functions described herein can be performed in one or more of: hardware, software, firmware, digital components, or analog components. For example, one or more application specific integrated circuits (ASICs) can be programmed to carry out one or more of the systems and procedures described herein. Certain terms are used throughout the description and claims to refer to particular system components. As one skilled in the art will appreciate, components may be referred to by different names. This document does not intend to distinguish between components that differ in name, but not function.
It should be noted that the sensor embodiments discussed above may comprise computer hardware, software, firmware, or any combination thereof to perform at least a portion of their functions. For example, a sensor may include computer code configured to be executed in one or more processors, and may include hardware logic/electrical circuitry controlled by the computer code. These example devices are provided herein for purposes of illustration and are not intended to be limiting. Embodiments of the present disclosure may be implemented in further types of devices, as would be known to persons skilled in the relevant art(s).
At least some embodiments of the disclosure have been directed to computer program products comprising such logic (e.g., in the form of software) stored on any computer useable medium. Such software, when executed in one or more data processing devices, causes a device to operate as described herein.
While various embodiments of the present disclosure have been described above, it should be understood that they have been presented by way of example only, and not limitation. It will be apparent to persons skilled in the relevant art that various changes in form and detail can be made therein without departing from the spirit and scope of the disclosure. Thus, the breadth and scope of the present disclosure should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents. The foregoing description has been presented for the purposes of illustration and description. It is not intended to be exhaustive or to limit the disclosure to the precise form disclosed. Many modifications and variations are possible in light of the above teaching. Further, it should be noted that any or all of the aforementioned alternate implementations may be used in any combination desired to form additional hybrid implementations of the disclosure.