US6907507B1 - Tracking in-progress writes through use of multi-column bitmaps - Google Patents

Tracking in-progress writes through use of multi-column bitmaps

Info

Publication number
US6907507B1
Authority
US
United States
Prior art keywords
memory
data
map
state
bits
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime, expires
Application number
US10/326,432
Inventor
Oleg Kiselev
Anand A. Kekre
John A. Colgrove
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Arctera Us LLC
Original Assignee
Veritas Operating Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Veritas Operating Corp
Priority to US10/326,432
Assigned to VERITAS SOFTWARE CORPORATION: ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: KEKRE, ANAND A., COLGROVE, JOHN A., KISELEV, OLEG
Assigned to VERITAS OPERATING CORPORATION: ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: VERITAS SOFTWARE CORPORATION
Priority to US11/068,545 (US7089385B1)
Publication of US6907507B1
Application granted
Assigned to SYMANTEC OPERATING CORPORATION: CHANGE OF NAME (SEE DOCUMENT FOR DETAILS). Assignors: VERITAS OPERATING CORPORATION
Assigned to VERITAS US IP HOLDINGS LLC: ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: SYMANTEC CORPORATION
Assigned to BANK OF AMERICA, N.A., AS COLLATERAL AGENT: SECURITY INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: VERITAS US IP HOLDINGS LLC
Assigned to WILMINGTON TRUST, NATIONAL ASSOCIATION, AS COLLATERAL AGENT: SECURITY INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: VERITAS US IP HOLDINGS LLC
Assigned to VERITAS TECHNOLOGIES LLC: MERGER AND CHANGE OF NAME (SEE DOCUMENT FOR DETAILS). Assignors: VERITAS TECHNOLOGIES LLC, VERITAS US IP HOLDINGS LLC
Assigned to WILMINGTON TRUST, NATIONAL ASSOCIATION, AS NOTES COLLATERAL AGENT: SECURITY INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: VERITAS TECHNOLOGIES LLC
Assigned to VERITAS US IP HOLDINGS, LLC: TERMINATION AND RELEASE OF SECURITY IN PATENTS AT R/F 037891/0726. Assignors: WILMINGTON TRUST, NATIONAL ASSOCIATION, AS COLLATERAL AGENT
Adjusted expiration
Assigned to ACQUIOM AGENCY SERVICES LLC, AS ASSIGNEE: ASSIGNMENT OF SECURITY INTEREST IN PATENT COLLATERAL. Assignors: BANK OF AMERICA, N.A., AS ASSIGNOR
Assigned to ARCTERA US LLC: ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: VERITAS TECHNOLOGIES LLC
Assigned to BANK OF AMERICA, N.A., AS COLLATERAL AGENT: SECURITY INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: ARCTERA US LLC
Assigned to WILMINGTON TRUST, NATIONAL ASSOCIATION, AS COLLATERAL AGENT: PATENT SECURITY AGREEMENT. Assignors: ARCTERA US LLC
Assigned to VERITAS TECHNOLOGIES LLC: RELEASE BY SECURED PARTY (SEE DOCUMENT FOR DETAILS). Assignors: WILMINGTON TRUST, NATIONAL ASSOCIATION, AS NOTES COLLATERAL AGENT
Assigned to VERITAS TECHNOLOGIES LLC (F/K/A VERITAS US IP HOLDINGS LLC): RELEASE BY SECURED PARTY (SEE DOCUMENT FOR DETAILS). Assignors: ACQUIOM AGENCY SERVICES LLC, AS COLLATERAL AGENT
Status: Expired - Lifetime (Current)


Abstract

Disclosed is a method and apparatus for tracking in-progress writes to a data volume and a copy thereof using a multi-column bit map. The method can be implemented in a computer system and, in one embodiment, includes creating a data volume in a first memory, and creating a copy of the data volume in a second memory. In response to the computer system receiving a request to write first data to the data volume, the computer system switches the states of first and second bits of a map entry in a memory device, wherein the states of the first and second bits are switched using a single write access to the memory device.

Description

BACKGROUND OF THE INVENTION
Many businesses rely on large-scale data processing systems for storing and processing their data. Insurance companies, banks, brokerage firms, etc., rely heavily on data processing systems. Often the viability of a business depends on the reliability of access to data contained within its data processing system. As such, businesses seek reliable ways to consistently protect their data processing systems and the data contained therein from natural disasters, acts of terrorism, or computer hardware and/or software failures.
Businesses must be prepared to eliminate or minimize data loss and recover quickly with usable data after an unpredicted event such as a hardware or software failure in the data processing system. When an unexpected event occurs that causes loss of data, businesses often recreate the data using backup copies of their data made on magnetic storage tape. Restoring data from a backup tape is typically a time-consuming process that often results in a substantial loss of business opportunity. Business opportunity is lost because the data processing system cannot process new data transactions while data is being recreated from backup copies. Further, restoration from tape usually involves loss of data. For most businesses, this kind of data loss and down time is unacceptable. Mirrored data volume and virtual point-in-time (PIT) backup technologies offer better solutions for full and accurate restoration of business critical data. Mirrored data volume and virtual PIT technologies not only minimize or eliminate data loss, but also enable rapid recovery of a data processing system when compared to conventional bulk data transfer from sequential media.
FIG. 1 illustrates an exemplary data processing system 10 that employs mirrored data volume and virtual PIT backup technology. Data processing system 10 includes a host node 12 coupled to data storage systems 14-18. Data storage systems 14-18 include data memories 24-28, respectively. As will be more fully described below, data memory 24 stores a primary data volume, data memory 26 stores a virtual PIT backup copy of the primary data volume, and data memory 28 stores a mirrored copy of the primary data volume. Data processing system 10 shown in FIG. 1 and its description herein should not be considered prior art to the invention disclosed and/or claimed.
Host node 12 may take form in a computer system (e.g., a server computer system) that receives and processes requests to read or write data to the primary data volume stored within memory 24. In response to receiving these requests, host node 12 generates read or write-data transactions for reading or writing data to the primary data volume within data memory 24. It is noted that host node 12 is capable of accessing each of the memories 24-28 or memory internal to host node 12 for reading or writing data to the data volumes stored therein.
Each of the data memories 24-28 may include several memory devices such as arrays of magnetic or optical discs. The primary data volume may be distributed across several memory devices of memory 24. Likewise, the virtual PIT copy of the primary data volume is distributed across several memory devices of data memory 26, and the mirrored copy of the primary data volume is distributed across several memory devices of data memory 28. Each of the data memories 24-28 may include n_max physical memory blocks into which data can be stored. More particularly, data memory 24 includes n_max memory blocks allocated by host node 12 for storing data of the primary data volume, data memory 26 includes n_max memory blocks allocated to store the virtual PIT backup copy of the primary data volume, and data memory 28 includes n_max memory blocks allocated to store the mirrored copy of the primary data volume. Corresponding memory blocks in data memories 24-28 are equal in size. Thus, memory block 1 of data memory 24 is equal in size to memory block 1 of data memories 26 and 28.
Host node 12 creates the virtual PIT backup copy of the primary data volume according to the methods described in copending U.S. patent application Ser. No. 10/143,059, filed May 10, 2002, entitled "Method and Apparatus for Creating a Virtual Data Copy," and U.S. patent application Ser. No. 10/254,753, filed Sep. 25, 2002, entitled "Method and Apparatus for Restoring a Corrupted Data Volume," each of which is incorporated herein by reference in its entirety. These references describe how host node 12 reads and writes data to the virtual PIT copies. Additionally, the above-referenced applications describe how the virtual PIT backup copy can be converted to a real PIT backup copy using a background copying process executed by host node 12.
Host node 12 creates the virtual PIT backup copy of the primary data volume when host node 12 receives or internally generates a command to create a PIT copy of the primary data volume. Initially (i.e., before any virtual PIT copy is created in data memory 26), data memory 26 contains no data. When host node 12 creates the first virtual PIT backup copy in memory 26, host node 12 creates a pair of valid/modified (VM) maps, such as maps 30 and 32 shown in FIG. 2. FIG. 2 also shows a third map 34, which will be more fully described below. VM maps 30 and 32 correspond to the primary data volume and the virtual PIT copy thereof, respectively, stored within data memories 24 and 26, respectively. Hence, VM maps 30 and 32 will be referred to as primary VM map 30 and PIT backup copy VM map 32.
Each of the VM maps 30 and 32 includes n_max entries, each entry having two bits. Each entry of primary VM map 30 corresponds to a respective block of data memory 24, while each entry of the virtual PIT backup copy VM map 32 corresponds to a respective block of data memory 26. The first and second bits in each entry of VM maps 30 and 32 are designated Vn and Mn, respectively. Vn of each entry, depending on its state (i.e., logical 0 or logical 1), indicates whether block n of the associated memory contains valid data. Vn of the primary VM map 30 indicates whether a corresponding memory block n in memory 24 stores valid data. For example, when set to logical 1, V2 of VM map 30 indicates that block 2 of data memory 24 contains valid data of the primary data volume, and when set to logical 0, V2 of the VM map 30 indicates that block 2 of data memory 24 contains no valid data of the primary data volume. Vn of the PIT backup copy VM map 32 indicates whether memory block n in memory 26 stores a valid point-in-time copy of data in block n of memory 24. For example, V2 of VM map 32, when set to logical 1, indicates that block 2 of memory 26 contains a valid copy of data that existed in block 2 of memory 24 at the time the PIT backup copy was first created or at the time the PIT backup copy was last refreshed. Copending U.S. application Ser. No. 10/326,427, filed Dec. 19, 2002, entitled "Instant Refresh of a Data Volume," describes one method of refreshing a PIT backup copy and is incorporated herein by reference in its entirety. V2 of VM map 32, when set to logical 0, indicates that block 2 of memory 26 does not contain a valid copy of data. The Vn bit of PIT backup copy VM map 32 is used to determine when data in block n of memory 24 is to be copied to block n of memory 26. More particularly, when host node 12 generates a write-data transaction for writing data to block n of memory 24, host node 12 checks the status of Vn in PIT backup copy VM map 32. If Vn is set to logical 0, then the PIT backup copy in memory 26 lacks a valid copy of data of block n in memory 24. Before data can be written to block n of memory 24 in accordance with the write-data transaction, data in block n of memory 24 must first be copied to block n of memory 26. If, on the other hand, Vn is set to logical 1, data can be written to block n of memory 24 in accordance with the write-data transaction without first copying data to block n of memory 26.
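As a concrete illustration of this copy-on-write check, the following C sketch models the Vn test with in-memory arrays. The block size, entry count, and the helper name write_primary_block are illustrative assumptions, not structures taken from the patent.

```c
#include <stdint.h>
#include <string.h>

#define BLOCK_SIZE 4096
#define NMAX       1024

static uint8_t pit_valid[NMAX];            /* Vn bits of PIT backup copy VM map 32 */
static uint8_t primary[NMAX][BLOCK_SIZE];  /* blocks of data memory 24 */
static uint8_t pit_copy[NMAX][BLOCK_SIZE]; /* blocks of data memory 26 */

/* Before overwriting block n of the primary volume, preserve the old
 * data in the PIT copy if the PIT copy does not yet hold a valid
 * point-in-time version of that block (Vn == 0). */
static void write_primary_block(unsigned n, const uint8_t *data)
{
    if (!pit_valid[n]) {
        memcpy(pit_copy[n], primary[n], BLOCK_SIZE); /* copy old data first */
        pit_valid[n] = 1;                            /* Vn := 1 */
    }
    memcpy(primary[n], data, BLOCK_SIZE);            /* now safe to overwrite */
}
```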
Mn in each entry of VM map 30 and PIT copy VM map 32, depending upon its state, indicates whether data within a respective block n of the corresponding memory has been modified (or is new) since the time the PIT backup copy was first created or last refreshed. For example, when set to logical 1, M3 of the primary VM map 30 indicates that block 3 of data memory 24 contains data which has been modified since the time the PIT backup copy was first created or last refreshed. When set to logical 0, M3 of the primary VM map 30 indicates that data has not been modified or written to block 3 of data memory 24 since the time the PIT backup copy was first created or last refreshed.
When VM maps 30 and 32 are first created by host node 12, each entry of PIT VM map 32 is set to logical 0, thus indicating that data memory 26 contains no valid or modified data. For purposes of explanation, it is presumed that each block of data memory 24 contains valid data of the primary volume. Accordingly, Vn of each entry within primary VM map 30 is initially set to logical 1. Lastly, Mn of each entry in primary VM map 30 is initially set to logical 0.
As noted above, data memory 28 stores a mirrored copy of the primary data volume. A mirrored copy is considered a real-time copy of the primary data volume. Each time host node 12 writes data to the primary data volume stored in memory 24 via a write-data transaction, host node 12 also generates a transaction to write the same data to the mirrored copy stored within memory 28. Thus, the mirrored copy, as its name implies, closely tracks the changes to the primary data volume. The mirrored copy within memory 28 provides an immediate backup data source in the event that data storage system 14 fails as a result of hardware and/or software failure. If data storage system 14 suddenly becomes unusable or inaccessible, host node 12 can service client computer system read or write requests using the mirrored volume within data memory 28.
Host node 12 is also subject to hardware and/or software failure. In other words, host node 12 may crash. When host node 12 recovers from a crash, host node 12 expects the contents of the primary data volume to be consistent with the contents of the mirrored copy. Unfortunately, host node 12 may crash after a write-data transaction is completed to data memory 24, but before the mirrored copy can be updated with the same data of the write-data transaction. If this happens, the contents of the primary data volume stored in memory 24 will be inconsistent with the mirrored copy in memory 28 when host node 12 recovers from its crash.
Host node 12 uses the dirty region (DR) map 34 shown in FIG. 2 to safeguard against inconsistencies between memories 24 and 28 after crash recovery. DR map 34 includes n_max entries corresponding to the n_max memory blocks in data memories 24 and 28. Each entry includes one bit which, when set to logical 0 or logical 1, indicates whether the respective memory blocks in data memories 24 and 28 are the subject of an in-progress write-data transaction. For example, host node 12 sets DR2 to logical 1 before host node 12 writes data to memory block 2 in data memories 24 and 28. Setting or clearing a bit in DR map 34 requires a write (i.e., an I/O operation) to the memory that stores DR map 34. After the write transaction is completed to memory block 2 of data memories 24 and 28, host node 12 switches the state of DR2 back to logical 0 using another write operation.
If host node 12 crashes, the contents of DR map 34 are preserved. After recovering from the crash, host node 12 copies the contents of memory blocks in data memory 24 to respective memory blocks in data memory 28, or vice versa, for each respective DRn bit set to logical 1 in DR map 34. After these memory blocks are copied, host node 12 is ensured that the primary data volume of data memory 24 is consistent with the mirrored copy of data memory 28.
Host node 12 is burdened by having to generate two separate write operations to update VM map 30 and DR map 34 with each write-data transaction to the primary data volume. More particularly, with each write-data transaction, host node 12 must generate a first write operation to update VM map 30 and a second write operation to update DR map 34. Additionally, a substantial amount of time is needed to complete the separate write operations to VM map 30 and DR map 34. To illustrate, FIG. 3 is a flow chart illustrating operational aspects of modifying or writing data of the primary volume within data memory 24 in accordance with a write-data transaction. More particularly, when host node 12 receives a request to write data to the primary volume from one of the client computer systems, host node 12 generates a write-data transaction to write data to block n of memory 24. In step 42, host node 12 accesses VM map 32 to determine whether Vn is set to logical 1. If Vn of VM map 32 is set to logical 0, then host node 12, as shown in step 44, copies the contents of block n of memory 24 to the corresponding block n in memory 26. Thereafter, in step 46, host node 12 sets Vn of VM map 32 to logical 1. Before host node 12 writes or modifies data of block n of memory 24 that stores a portion of the primary volume and block n of memory 28 that stores a portion of the mirrored copy, host node 12 must set Mn and DRn of primary VM map 30 and DR map 34, respectively, to logical 1. It is noted that Mn and DRn are in separate tables, and thus have separate addresses within memory. Accordingly, host node 12 must generate separate writes or I/O operations to set Mn and DRn to logical 1. The time it takes, for example, for the read/write head (not shown) to travel across a magnetic disc to reach the appropriate sector that stores Mn of VM map 30, followed by the time it takes for the read/write head to travel across the magnetic disc to reach another sector that stores DRn of DR map 34, may be substantial. Once Mn and DRn are set to logical 1 in steps 50 and 52, host node 12 writes data to block n in memory 24 as shown in step 54 and writes data to block n of memory 28.
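The cost described in this paragraph can be made concrete with a short sketch: because Mn lives in VM map 30 and DRn lives in DR map 34, the two bookkeeping updates are writes to two separately addressed regions and therefore two separate I/O operations. The file descriptor, offsets, and byte-per-bit layout below are assumptions chosen for illustration only.

```c
#include <stdint.h>
#include <sys/types.h>
#include <unistd.h>

/* Assumed layout: one byte per map bit, both maps on one device/file
 * opened by the caller, at different offsets (hence different sectors). */
static int   map_fd;
static off_t vm_map_off;   /* start of VM map 30 */
static off_t dr_map_off;   /* start of DR map 34 */

/* Prior-art update path: setting Mn and DRn takes two separate writes,
 * each potentially incurring its own seek. */
static int set_modified_and_dirty(unsigned n)
{
    uint8_t one = 1;

    if (pwrite(map_fd, &one, 1, vm_map_off + n) != 1)  /* first I/O: Mn  */
        return -1;
    if (pwrite(map_fd, &one, 1, dr_map_off + n) != 1)  /* second I/O: DRn */
        return -1;
    return 0;
}
```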
SUMMARY OF THE INVENTION
Disclosed is a method and apparatus for tracking in-progress writes to a data volume and a copy thereof using a multi-column bit map. The method can be implemented in a computer system and, in one embodiment, includes creating a data volume in a first memory, and creating a copy of the data volume in a second memory. In response to the computer system receiving a request to write first data to the data volume, the computer system switches the states of first and second bits of a map entry in a memory device, wherein the states of the first and second bits are switched using a single write access to the memory device.
BRIEF DESCRIPTION OF THE DRAWINGS
The present invention may be better understood, and its numerous objects, features, and advantages made apparent to those skilled in the art by referencing the accompanying drawings.
FIG. 1 is a block diagram of a data processing system;
FIG. 2 is a block diagram of VM and DR maps created by the host node shown in FIG. 1;
FIG. 3 is a flow chart illustrating operational aspects of writing or modifying data in the primary data volume and the mirrored copy thereof, after creation of a virtual PIT backup copy of the primary data volume;
FIG. 4 is a block diagram of a data processing system employing one embodiment of the present invention;
FIG. 5 is a block diagram of VM/DR and VM maps created by the host node shown in FIG. 4; and
FIG. 6 is a flow chart illustrating operational aspects of writing or modifying data in the primary data volume and the mirrored copy thereof of the data processing system shown in FIG. 4, after creation of a virtual PIT backup copy of the primary data volume.
The use of the same reference symbols in different drawings indicates similar or identical items.
DETAILED DESCRIPTION
The present invention relates to an apparatus and method for tracking in-progress write I/Os through the use of multi-column bit maps. FIG. 4 illustrates (in block diagram form) relevant components of a data processing system 60 employing one embodiment of the present invention. Data processing system 60 is similar to data processing system 10 of FIG. 1. Data processing system 60 includes a host node 62 coupled to data storage systems 64-68. It is noted that the term "coupled devices" should not be limited to devices coupled directly together by a communication link. Two devices may be coupled together even though communication occurs through an intervening device.
Data storage systems 64-68 include memories 74-78, respectively. Memory 74 stores one or more primary data volumes. For purposes of explanation, memory 74 stores one primary data volume, it being understood that the present invention will not be limited thereto. Memory 76 stores a point-in-time (PIT) backup copy of the primary data volume, whether real or virtual, while memory 78 stores a mirrored copy of the primary data volume.
The primary data volume (and the PIT backup and mirrored copies thereof) is a collection of files that store data. While it is said that files store data, in reality data is stored in physical blocks of memory 74 which are allocated to the files by host node 62. Host node 62 may take form in a computer system (e.g., a server computer system) that includes a data storage management system 70 in the form of software instructions executing on one or more data processors (not shown). Data storage management system 70 may include a file system (not shown) and a system (not shown) for managing the distribution of volume data across several memory devices. Volume Manager™ provided by Veritas Corporation of Mountain View, Calif., is an exemplary system for managing the distribution of volume data across several memory devices. Host node 62 generates read or write-data transactions in response to host node 62 receiving requests from client computer systems to read or write data to the primary data volume.
As noted, memories 74-78 store the primary data volume, the PIT backup copy of the primary data volume, and the mirrored copy of the primary data volume, respectively. Data memories 74-78 may take form in one or more dynamic or static random access memories, one or more arrays of magnetic or optical data storage disks, or combinations thereof. Data memories 74-78 should not be limited to the foregoing hardware components; rather, data memories 74-78 may take form in any hardware, software, or combination of hardware and software in which data may be accessed and persistently stored. Data memories 74-78 may take form in a complex construction of several hardware components operating under the direction of software. The memories may take form in mirrored hardware. It is further noted that the present invention may find use with many types of redundancy/reliability systems. For example, the present invention may be used with Redundant Array of Independent Disks (RAID) systems. Moreover, the present invention should not be limited to use in connection with the host node of a data storage network. The present invention may find use in a storage switch or in any of many distinct appliances that can be used with a data storage system.
Each of the data memories 74-78 includes n_max physical blocks of memory into which data can be stored. It is noted that any or all of the memories 74-78 may have more than n_max memory blocks. However, the first n_max blocks in memories 74-78 are allocated by host node 62 for storing the primary data volume, the PIT backup copy of the primary data volume, and the mirrored copy of the primary data volume, respectively. Corresponding memory blocks in memories 74-78 are equal in size. Thus, memory block 1 of memory 74 is equal in size to memory block 1 in memories 76 and 78. Each of the memory blocks within memory 74 may be equal in size to each other. Alternatively, the memory blocks in memory 74 may vary in size.
Host node 62 can access each of the blocks in memories 74-78 in response to generating read or write-data transactions. The read or write-data transactions are generated by host node 62 in response to host node 62 receiving read or write requests from client computer systems coupled thereto. The primary data volume stored in memory 74 is the "working" volume of data processing system 60. Thus, when client computer systems request read or write access to data, the access is directed to the primary data volume. The PIT backup copy of the primary data volume acts as a safeguard against data corruption of the primary data volume due to a software or operator error. The mirrored copy within memory 78 provides an alternative source of data in the event that data storage system 64 is rendered inaccessible to host node 62 due to hardware and/or software failure.
Host node 62 creates a virtual PIT backup copy of the primary data volume by creating a primary VM/DR map 80 and a PIT VM map 82, shown in FIG. 5. These maps may be stored within one of the memories 74-78, in a memory within host node 62, or in a memory (not shown) attached to host node 62. The contents of each of the maps 80 and 82 are accessible by host node 62 using a read or write operation. Maps 80 and 82, like maps 30 and 32 described above, include n_max entries. Each entry of primary VM/DR map 80 corresponds to a respective block of data memories 74 and 78, while each entry of VM map 82 corresponds to a respective block of data memory 76. VM map 82 is substantially similar to map 32, while VM/DR map 80 is substantially similar to a combination of VM map 30 and DR map 34.
Each entry in primary VM/DR map 80 consists of three bits designated Vn, Mn, and DRn. Vn in the primary VM/DR map, depending on its state, indicates whether a corresponding block in memory 74 contains valid primary volume data. For example, when set to logical 1, V2 of the primary VM/DR map 80 indicates that block 2 of memory 74 contains valid primary volume data, and when set to logical 0, V2 of primary VM/DR map 80 indicates that block 2 of data memory 74 contains no valid primary volume data. V2 of VM map 82, when set to logical 1, indicates that block 2 of memory 76 contains a valid copy of data that existed in block 2 of memory 74 at the time the PIT backup copy was first created or at the time the PIT backup copy was last refreshed. V2 of VM map 82, when set to logical 0, indicates that block 2 of memory 76 does not contain a valid copy of data.
Mn in each entry of primary VM/DR map 80, depending upon its state, indicates whether data has been written to or modified in the corresponding block n of memory 74 since the last time the PIT backup copy of the primary data volume was created or refreshed. For example, when set to logical 1, M3 of primary VM/DR map 80 indicates that data has been written to block 3 of memory 74 since the last time the PIT backup copy was refreshed. Likewise, when set to logical 0, M3 of the primary VM/DR map 80 indicates that data has not been written or modified in block 3 of memory 74 since the last time the PIT backup copy was refreshed.
DRn in each entry, depending on its state, indicates whether the corresponding block n of memories 74 and 78 is the subject of an in-progress write-data transaction. For example, when set to logical 1, DR5 of VM/DR map 80 indicates that memory block 5 of memories 74 and 78 is the subject of an in-progress write-data transaction. When set to logical 0, DR5 of primary VM/DR map 80 indicates that memory block 5 of memories 74 and 78 is not the subject of an in-progress write-data transaction.
Multiple bits in entries of primary VM/DR map 80 can be set by host node 62 using a single write operation to the memory that stores VM/DR map 80. In one embodiment, multiple bits of an entry can be updated with a single write operation since the bits are stored adjacent to each other in the memory that stores VM/DR map 80. More particularly, host node 62 can write a multi-bit binary value to an entry of VM/DR map 80 using a single write operation, thereby updating or changing one or more bits of the entry. For example, suppose V3, M3, and DR3 of entry 3 of VM/DR map 80 are initially set to logical 1, logical 0, and logical 0, respectively. In other words, entry 3 of VM/DR map 80 stores binary value "100," where the most significant bit (i.e., the leftmost bit) corresponds to V3 and the least significant bit (i.e., the rightmost bit) corresponds to DR3. Suppose further that host node 62 generates a write-data transaction for writing data to block 3 of memory 74. Before or after host node 62 writes data to block 3, host node 62 can write the binary value "111" to entry 3 of VM/DR map 80, thereby changing the state of M3 and DR3 from logical 0 to logical 1 (V3 is maintained at logical 1 even though V3 may be overwritten), using a single write operation. In contrast, host node 12, described in the background section above, requires separate operations to update M3 of VM map 30 and DR3 of DR map 34, where one operation changes the state of M3 of VM map 30 while the other operation changes the state of DR3 of DR map 34. It is noted that host node 62 in this example could write the binary value "x11" to entry 3 of VM/DR map 80 instead of the binary value "111." In this alternative embodiment, V3 could be masked from the write operation so that it is not overwritten by the binary value "x." Indeed, any one of the Vn, Mn, or DRn bits can be masked when a multi-bit binary value is written to VM/DR map 80.
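A minimal C sketch of this single-write update follows, assuming one byte per map entry with V, M, and DR packed into the low three bits; this particular layout is chosen here for illustration, as the text only requires the bits of an entry to be adjacent in the memory that stores the map.

```c
#include <stdint.h>

#define V_BIT  0x4   /* Vn: valid         */
#define M_BIT  0x2   /* Mn: modified      */
#define DR_BIT 0x1   /* DRn: dirty region */

static uint8_t vmdr_map[1024];   /* primary VM/DR map 80, one byte per entry */

/* Set Mn and DRn with a single store to entry n. Vn is carried over from
 * the old entry value, so the store behaves like writing "111" while
 * preserving V; a masked write of "x11" would have the same effect. */
static void mark_write_in_progress(unsigned n)
{
    vmdr_map[n] = (uint8_t)((vmdr_map[n] & V_BIT) | M_BIT | DR_BIT);
}
```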
In yet another alternative embodiment, multiple multi-bit entries in VM/DR map 80 can be modified by host node 62 via a single write operation. For example, suppose host node 62 generates a write-data transaction for writing data to block 4 of memory 74 after host node 62 writes data to block 3 of memories 74 and 78. Further suppose that entries 3 and 4 of VM/DR map 80 store binary values "111" and "100," respectively, at the time host node 62 generates the write-data transaction for writing data to block 4 of memory 74. DR3 is set to logical 1, thus indicating that block 3 in memories 74 and 78 is the subject of a write-data transaction when, in fact, it is not. VM/DR map 80 can be updated, before or after data is modified in block 4 of memories 74 and 78 in accordance with the new write-data transaction, by writing binary values "110" and "111" to entries 3 and 4, respectively, via a single write operation, rather than two write operations where the first operation writes binary value "110" to entry 3 and the second operation writes binary value "111" to entry 4. Entries 3 and 4 can be updated with a single write operation since entries 3 and 4 of map 80 are adjacent to each other in the memory that stores VM/DR map 80. It is noted that V3, V4, and M3 are not updated in the sense that the values of these bits do not change after the single write operation to VM/DR map 80. After the single write operation, DR3 is properly updated to logical 0, thus indicating that block 3 in memories 74 and 78 is no longer subject to a write-data transaction, and M4 is updated to logical 1, thus indicating that data of block 4 in memory 74 has been modified.
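The adjacent-entry batching can be sketched the same way: under the same illustrative one-byte-per-entry layout, entries n and n+1 occupy contiguous bytes, so the "110"/"111" update lands in one contiguous two-byte write.

```c
#include <stdint.h>
#include <string.h>

#define M_BIT  0x2
#define DR_BIT 0x1

static uint8_t vmdr_map[1024];   /* primary VM/DR map 80, one byte per entry */

/* Clear DRn of entry n (its write completed) and set Mn+1/DRn+1 of entry
 * n+1 (the next write starting) with a single contiguous write. */
static void finish_block_start_next(unsigned n)
{
    uint8_t pair[2];

    pair[0] = (uint8_t)(vmdr_map[n] & ~(unsigned)DR_BIT);   /* entry n:   "110" */
    pair[1] = (uint8_t)(vmdr_map[n + 1] | M_BIT | DR_BIT);  /* entry n+1: "111" */
    memcpy(&vmdr_map[n], pair, sizeof pair); /* adjacent entries, one write */
}
```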
Operational aspects of tracking in-progress writes are best understood with reference to the flow chart of FIG. 6, which shows operational aspects of one embodiment of the present invention. Tracking in-progress writes begins after host node 62 receives a request to write data to the primary data volume. Host node 62 identifies the block n of memory 74 that is the target of the received write request. Host node 62 generates a write-data transaction for writing data to block n in step 90. Host node 62, in step 92, accesses VM map 82 to determine the state of Vn. If Vn is set to logical 0, then host node 62 copies the data of block n in memory 74 to block n of memory 76 and subsequently sets Vn of VM map 82 to logical 1 with a write operation to the memory that stores map 82. If Vn of VM map 82 is determined to be set to logical 1 in step 92, or after Vn is set to logical 1 in step 96, the process proceeds to step 100 where host node 62 sets the Mn and DRn bits of VM/DR map 80 using a single write operation to the memory that stores VM/DR map 80. Thereafter, in steps 102 and 104, block n of memories 74 and 78 is updated with the write data of the request in step 90. It is noted that steps 102 and 104 can be reversed in order or performed in parallel. Eventually, when block n of memories 74 and 78 has been updated with the new data, host node 62, in synchronous fashion, accesses entry n of VM/DR map 80 and sets the DRn bit to logical 0, thus indicating that the write-data transaction to block n of memories 74 and 78 has completed and block n of memory 74 stores valid and modified data.
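Putting the steps together, here is a hedged C sketch of the FIG. 6 write path under the same illustrative one-byte-per-entry layout; all names, sizes, and the in-memory representation are assumptions made for the example.

```c
#include <stdint.h>
#include <string.h>

#define BLKSZ  4096
#define V_BIT  0x4
#define M_BIT  0x2
#define DR_BIT 0x1

static uint8_t pit_valid[1024];        /* Vn bits of VM map 82 */
static uint8_t vmdr[1024];             /* primary VM/DR map 80 */
static uint8_t mem74[1024][BLKSZ];     /* primary data volume  */
static uint8_t mem76[1024][BLKSZ];     /* PIT backup copy      */
static uint8_t mem78[1024][BLKSZ];     /* mirrored copy        */

static void handle_write_request(unsigned n, const uint8_t *data)
{
    if (!pit_valid[n]) {               /* steps 92-96: copy-on-write */
        memcpy(mem76[n], mem74[n], BLKSZ);
        pit_valid[n] = 1;
    }
    /* Step 100: set Mn and DRn together with one write to the map. */
    vmdr[n] = (uint8_t)((vmdr[n] & V_BIT) | M_BIT | DR_BIT);
    memcpy(mem74[n], data, BLKSZ);     /* step 102: update primary */
    memcpy(mem78[n], data, BLKSZ);     /* step 104: update mirror  */
    /* Both copies now match: clear DRn so recovery will not resync. */
    vmdr[n] = (uint8_t)(vmdr[n] & ~(unsigned)DR_BIT);
}
```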
Because only one write operation is needed to update the Mn and DRn bits of primary VM/DR map 80, the amount of processing overhead on host node 62 is reduced when compared to host node 12 of FIG. 1. Moreover, because the write operation of step 100 shown in FIG. 6 requires a single access to a physical address within the memory that stores the primary VM/DR map 80, the amount of time needed to perform the Mn and DRn updates is substantially reduced.

Claims (16)

5. The method of claim 1, further comprising:
creating a map in the memory device, wherein the map comprises a plurality of entries, each entry comprising first, second and third bits;
wherein each entry corresponds to a respective memory block in the first memory;
wherein each first bit, when set to a first or second state, indicates whether its respective memory block of the first memory stores valid data;
wherein each second bit, when set to a first or second state, indicates whether its respective memory block of the first memory stores data which has been modified since creation of the data volume copy in the second memory;
wherein each third bit, when set to a first or second state, indicates whether its respective memory block of the first memory will be subject to a write operation; and
wherein the pair of bits are the second and third bits, respectively, of one entry of the map.
10. The computer readable medium of claim 7, wherein the method further comprises:
creating a map in the memory device, wherein the map comprises a plurality of entries, each entry comprising first, second, and third bits;
wherein each entry corresponds to a respective memory block in the first memory;
wherein each first bit, when set to a first or second state, indicates whether its respective memory block of the first memory stores valid data;
wherein each second bit, when set to a first or second state, indicates whether its respective memory block of the first memory stores data which has been modified since creation of the data volume copy in the second memory;
wherein each third bit, when set to a first or second state, indicates whether its respective memory block of the first memory will be subject to a write operation; and
wherein the pair of bits are the second and third bits, respectively, of one entry of the map.
11. A computer readable medium comprising instructions executable by a computer system, wherein the computer system performs a method in response to executing the instructions, the method comprising:
creating a data volume in a first memory coupled to the computer system;
creating a mirror of the data volume, wherein the mirror is created in a second memory coupled to the computer system;
creating a map in a memory device, wherein the map comprises a plurality of entries, wherein each entry comprises first and second bits;
wherein each first bit, when set to a first or second state, indicates whether data has been written to a respective block of the first memory since allocation of memory for the map;
wherein each second bit, when set to a first or second state, indicates whether respective blocks in the first and second memories will be subject to separate in-progress write operations.
14. A computer readable medium comprising instructions executable by a computer system, wherein the computer system performs a method in response to executing the instructions, the method comprising:
creating a data volume, a portion of which is created in a first memory block;
creating a copy of the data volume, a portion of which is created in a second memory block;
creating a map in a memory device, wherein the map comprises a plurality of entries, wherein one of the plurality of entries comprises first and second bits, wherein the first bit, depending on its state, indicates whether data has been written to the first memory block since allocation of memory for the map, wherein the second bit, depending on its state, indicates that the data contents of the first memory block should be overwritten with the data contents of the second memory block or vice versa;
switching the states of the first and second bits with a single write access to the memory device.
US10/326,432 | 2002-12-19 | 2002-12-19 | Tracking in-progress writes through use of multi-column bitmaps | Expired - Lifetime | US6907507B1 (en)

Priority Applications (2)

Application Number | Priority Date | Filing Date | Title
US10/326,432 (US6907507B1) | 2002-12-19 | 2002-12-19 | Tracking in-progress writes through use of multi-column bitmaps
US11/068,545 (US7089385B1) | 2002-12-19 | 2005-02-28 | Tracking in-progress writes through use of multi-column bitmaps

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
US10/326,432 (US6907507B1) | 2002-12-19 | 2002-12-19 | Tracking in-progress writes through use of multi-column bitmaps

Related Child Applications (1)

Application Number | Relation | Priority Date | Filing Date | Title
US11/068,545 | Continuation (US7089385B1) | 2002-12-19 | 2005-02-28 | Tracking in-progress writes through use of multi-column bitmaps

Publications (1)

Publication Number | Publication Date
US6907507B1 (en) | 2005-06-14

Family

ID=34632682

Family Applications (2)

Application Number | Status | Priority Date | Filing Date | Title
US10/326,432 | Expired - Lifetime (US6907507B1) | 2002-12-19 | 2002-12-19 | Tracking in-progress writes through use of multi-column bitmaps
US11/068,545 | Expired - Lifetime (US7089385B1) | 2002-12-19 | 2005-02-28 | Tracking in-progress writes through use of multi-column bitmaps

Family Applications After (1)

Application Number | Status | Priority Date | Filing Date | Title
US11/068,545 | Expired - Lifetime (US7089385B1) | 2002-12-19 | 2005-02-28 | Tracking in-progress writes through use of multi-column bitmaps

Country Status (1)

Country | Link
US (2) | US6907507B1 (en)

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US20040220939A1 (en)* | 2003-03-26 | 2004-11-04 | Miller Wayne Eugene | Methods and systems for management of system metadata
US20040236915A1 (en)* | 2001-11-20 | 2004-11-25 | Hitachi, Ltd. | Multiple data management method, computer and storage device therefor
US20040260873A1 (en)* | 2003-06-17 | 2004-12-23 | Hitachi, Ltd. | Method and apparatus for managing replication volumes
US20060161808A1 (en)* | 2005-01-18 | 2006-07-20 | Burkey Todd R | Method, apparatus and program storage device for providing intelligent copying for faster virtual disk mirroring
US20070220223A1 (en)* | 2006-03-17 | 2007-09-20 | Boyd Kenneth W | Remote copying of updates to primary and secondary storage locations subject to a copy relationship
US7308546B1 (en)* | 2002-12-20 | 2007-12-11 | Symantec Operating Corporation | Volume restoration using an accumulator map
US7406487B1 (en)* | 2003-08-29 | 2008-07-29 | Symantec Operating Corporation | Method and system for performing periodic replication using a log
US7415488B1 (en)* | 2004-12-31 | 2008-08-19 | Symantec Operating Corporation | System and method for redundant storage consistency recovery
US20080288546A1 (en)* | 2007-05-16 | 2008-11-20 | Janet Elizabeth Adkins | Method and system for handling reallocated blocks in a file system
US7493458B1 (en)* | 2004-09-15 | 2009-02-17 | Emc Corporation | Two-phase snap copy
US7617259B1 (en)* | 2004-12-31 | 2009-11-10 | Symantec Operating Corporation | System and method for managing redundant storage consistency at a file system level
US7627727B1 (en) | 2002-10-04 | 2009-12-01 | Symantec Operating Corporation | Incremental backup of a data volume
US7650533B1 (en)* | 2006-04-20 | 2010-01-19 | Netapp, Inc. | Method and system for performing a restoration in a continuous data protection system
EP2211267A2 (en) | 2009-01-23 | 2010-07-28 | Infortrend Technology, Inc. | Method and apparatus for performing volume replication using unified architecture
US8010758B1 (en)* | 2005-05-20 | 2011-08-30 | Symantec Operating Corporation | System and method for performing secondary site synchronization based on a single change map
WO2015057240A1 (en) | 2013-10-18 | 2015-04-23 | Hitachi Data Systems Engineering UK Limited | Target-driven independent data integrity and redundancy recovery in a shared-nothing distributed storage system
US9021296B1 (en) | 2013-10-18 | 2015-04-28 | Hitachi Data Systems Engineering UK Limited | Independent data integrity and redundancy recovery in a storage system
US9069784B2 (en) | 2013-06-19 | 2015-06-30 | Hitachi Data Systems Engineering UK Limited | Configuring a virtual machine
JP2017182368A (en)* | 2016-03-30 | 2017-10-05 | NEC Corporation | Information processing system, storage device, information processing method, and program
US11782610B2 (en)* | 2020-01-30 | 2023-10-10 | Seagate Technology Llc | Write and compare only data storage
US12105982B1 (en)* | 2023-03-17 | 2024-10-01 | Dell Products L.P. | Techniques for optimized data resynchronization between replication sites

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US8645647B2 (en)* | 2009-09-02 | 2014-02-04 | International Business Machines Corporation | Data storage snapshot with reduced copy-on-write
US10712953B2 (en) | 2017-12-13 | 2020-07-14 | International Business Machines Corporation | Management of a data written via a bus interface to a storage controller during remote copy operations

Citations (20)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US5155824A (en) | 1989-05-15 | 1992-10-13 | Motorola, Inc. | System for transferring selected data words between main memory and cache with multiple data words and multiple dirty bits for each address
US5497483A (en) | 1992-09-23 | 1996-03-05 | International Business Machines Corporation | Method and system for track transfer control during concurrent copy operations in a data processing storage subsystem
US5506580A (en) | 1989-01-13 | 1996-04-09 | Stac Electronics, Inc. | Data compression apparatus and method
US5532694A (en) | 1989-01-13 | 1996-07-02 | Stac Electronics, Inc. | Data compression apparatus and method using matching string searching and Huffman encoding
US5574874A (en) | 1992-11-03 | 1996-11-12 | Tolsys Limited | Method for implementing a checkpoint between pairs of memory locations using two indicators to indicate the status of each associated pair of memory locations
US5649152A (en) | 1994-10-13 | 1997-07-15 | Vinca Corporation | Method and system for providing a static snapshot of data stored on a mass storage system
US5778395A (en) | 1995-10-23 | 1998-07-07 | Stac, Inc. | System for backing up files from disk volumes on multiple nodes of a computer network
US5835953A (en) | 1994-10-13 | 1998-11-10 | Vinca Corporation | Backup system that takes a snapshot of the locations in a mass storage device that has been identified for updating prior to updating
US5907672A (en) | 1995-10-04 | 1999-05-25 | Stac, Inc. | System for backing up computer disk volumes with error remapping of flawed memory addresses
US6141734A (en) | 1998-02-03 | 2000-10-31 | Compaq Computer Corporation | Method and apparatus for optimizing the performance of LDxL and STxC interlock instructions in the context of a write invalidate protocol
US6189079B1 (en) | 1998-05-22 | 2001-02-13 | International Business Machines Corporation | Data copy between peer-to-peer controllers
US6282610B1 (en) | 1997-03-31 | 2001-08-28 | Lsi Logic Corporation | Storage controller providing store-and-forward mechanism in distributed data storage system
US6341341B1 (en) | 1999-12-16 | 2002-01-22 | Adaptec, Inc. | System and method for disk control with snapshot feature including read-write snapshot half
US6353878B1 (en) | 1998-08-13 | 2002-03-05 | Emc Corporation | Remote control of backup media in a secondary storage subsystem through access to a primary storage subsystem
US6434681B1 (en) | 1999-12-02 | 2002-08-13 | Emc Corporation | Snapshot copy facility for a data storage system permitting continued host read/write access
US6460054B1 (en) | 1999-12-16 | 2002-10-01 | Adaptec, Inc. | System and method for data storage archive bit update after snapshot backup
US20030041220A1 (en) | 2001-08-23 | 2003-02-27 | Pavel Peleska | System and method for establishing consistent memory contents in redundant systems
US6564301B1 (en) | 1999-07-06 | 2003-05-13 | Arm Limited | Management of caches in a data processing apparatus
US6591351B1 (en) | 2000-05-25 | 2003-07-08 | Hitachi, Ltd. | Storage system making possible data synchronization confirmation at time of asynchronous remote copy
US6785789B1 (en)* | 2002-05-10 | 2004-08-31 | Veritas Operating Corporation | Method and apparatus for creating a virtual data copy

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
JPH0743676B2 (en) | 1988-03-11 | 1995-05-15 | Hitachi, Ltd. | Back-up data dump control method and device
US5263154A (en) | 1992-04-20 | 1993-11-16 | International Business Machines Corporation | Method and system for incremental time zero backup copying of data
US6269431B1 (en) | 1998-08-13 | 2001-07-31 | Emc Corporation | Virtual storage and block level direct access of secondary storage for recovery of backup data
US6757797B1 (en) | 1999-09-30 | 2004-06-29 | Fujitsu Limited | Copying method between logical disks, disk-storage system and its storage medium
US6792518B2 (en) | 2002-08-06 | 2004-09-14 | Emc Corporation | Data storage system having mata bit maps for indicating whether data blocks are invalid in snapshot copies

Patent Citations (22)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US5506580A (en) | 1989-01-13 | 1996-04-09 | Stac Electronics, Inc. | Data compression apparatus and method
US5532694A (en) | 1989-01-13 | 1996-07-02 | Stac Electronics, Inc. | Data compression apparatus and method using matching string searching and Huffman encoding
US5155824A (en) | 1989-05-15 | 1992-10-13 | Motorola, Inc. | System for transferring selected data words between main memory and cache with multiple data words and multiple dirty bits for each address
US5497483A (en) | 1992-09-23 | 1996-03-05 | International Business Machines Corporation | Method and system for track transfer control during concurrent copy operations in a data processing storage subsystem
US5574874A (en) | 1992-11-03 | 1996-11-12 | Tolsys Limited | Method for implementing a checkpoint between pairs of memory locations using two indicators to indicate the status of each associated pair of memory locations
US6085298A (en) | 1994-10-13 | 2000-07-04 | Vinca Corporation | Comparing mass storage devices through digests that are representative of stored data in order to minimize data transfer
US5835953A (en) | 1994-10-13 | 1998-11-10 | Vinca Corporation | Backup system that takes a snapshot of the locations in a mass storage device that has been identified for updating prior to updating
US6073222A (en) | 1994-10-13 | 2000-06-06 | Vinca Corporation | Using a virtual device to access data as it previously existed in a mass data storage system
US5649152A (en) | 1994-10-13 | 1997-07-15 | Vinca Corporation | Method and system for providing a static snapshot of data stored on a mass storage system
US5907672A (en) | 1995-10-04 | 1999-05-25 | Stac, Inc. | System for backing up computer disk volumes with error remapping of flawed memory addresses
US5778395A (en) | 1995-10-23 | 1998-07-07 | Stac, Inc. | System for backing up files from disk volumes on multiple nodes of a computer network
US6282610B1 (en) | 1997-03-31 | 2001-08-28 | Lsi Logic Corporation | Storage controller providing store-and-forward mechanism in distributed data storage system
US6141734A (en) | 1998-02-03 | 2000-10-31 | Compaq Computer Corporation | Method and apparatus for optimizing the performance of LDxL and STxC interlock instructions in the context of a write invalidate protocol
US6189079B1 (en) | 1998-05-22 | 2001-02-13 | International Business Machines Corporation | Data copy between peer-to-peer controllers
US6353878B1 (en) | 1998-08-13 | 2002-03-05 | Emc Corporation | Remote control of backup media in a secondary storage subsystem through access to a primary storage subsystem
US6564301B1 (en) | 1999-07-06 | 2003-05-13 | Arm Limited | Management of caches in a data processing apparatus
US6434681B1 (en) | 1999-12-02 | 2002-08-13 | Emc Corporation | Snapshot copy facility for a data storage system permitting continued host read/write access
US6341341B1 (en) | 1999-12-16 | 2002-01-22 | Adaptec, Inc. | System and method for disk control with snapshot feature including read-write snapshot half
US6460054B1 (en) | 1999-12-16 | 2002-10-01 | Adaptec, Inc. | System and method for data storage archive bit update after snapshot backup
US6591351B1 (en) | 2000-05-25 | 2003-07-08 | Hitachi, Ltd. | Storage system making possible data synchronization confirmation at time of asynchronous remote copy
US20030041220A1 (en) | 2001-08-23 | 2003-02-27 | Pavel Peleska | System and method for establishing consistent memory contents in redundant systems
US6785789B1 (en)* | 2002-05-10 | 2004-08-31 | Veritas Operating Corporation | Method and apparatus for creating a virtual data copy

Cited By (45)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US7010650B2 (en)* | 2001-11-20 | 2006-03-07 | Hitachi, Ltd. | Multiple data management method, computer and storage device therefor
US20040236915A1 (en)* | 2001-11-20 | 2004-11-25 | Hitachi, Ltd. | Multiple data management method, computer and storage device therefor
US7627727B1 (en) | 2002-10-04 | 2009-12-01 | Symantec Operating Corporation | Incremental backup of a data volume
US7308546B1 (en)* | 2002-12-20 | 2007-12-11 | Symantec Operating Corporation | Volume restoration using an accumulator map
US7743227B1 (en) | 2002-12-20 | 2010-06-22 | Symantec Operating Corporation | Volume restoration using an accumulator map
US20040220939A1 (en)* | 2003-03-26 | 2004-11-04 | Miller Wayne Eugene | Methods and systems for management of system metadata
US7216253B2 (en)* | 2003-03-26 | 2007-05-08 | Pillar Data Systems, Inc. | Methods and systems for management of systems metadata
US20070180316A1 (en)* | 2003-03-26 | 2007-08-02 | Miller Wayne E | Methods and systems for management of system metadata
US7343517B2 (en)* | 2003-03-26 | 2008-03-11 | Pillar Data Systems, Inc. | Systems for managing of system metadata and methods for recovery from an inconsistent copy set
US7302536B2 (en)* | 2003-06-17 | 2007-11-27 | Hitachi, Ltd. | Method and apparatus for managing replication volumes
US20040260873A1 (en)* | 2003-06-17 | 2004-12-23 | Hitachi, Ltd. | Method and apparatus for managing replication volumes
US7406487B1 (en)* | 2003-08-29 | 2008-07-29 | Symantec Operating Corporation | Method and system for performing periodic replication using a log
US7493458B1 (en)* | 2004-09-15 | 2009-02-17 | Emc Corporation | Two-phase snap copy
US7415488B1 (en)* | 2004-12-31 | 2008-08-19 | Symantec Operating Corporation | System and method for redundant storage consistency recovery
US7617259B1 (en)* | 2004-12-31 | 2009-11-10 | Symantec Operating Corporation | System and method for managing redundant storage consistency at a file system level
US20060161808A1 (en)* | 2005-01-18 | 2006-07-20 | Burkey Todd R | Method, apparatus and program storage device for providing intelligent copying for faster virtual disk mirroring
US8010758B1 (en)* | 2005-05-20 | 2011-08-30 | Symantec Operating Corporation | System and method for performing secondary site synchronization based on a single change map
US7603581B2 (en)* | 2006-03-17 | 2009-10-13 | International Business Machines Corporation | Remote copying of updates to primary and secondary storage locations subject to a copy relationship
US20070220223A1 (en)* | 2006-03-17 | 2007-09-20 | Boyd Kenneth W | Remote copying of updates to primary and secondary storage locations subject to a copy relationship
US7650533B1 (en)* | 2006-04-20 | 2010-01-19 | Netapp, Inc. | Method and system for performing a restoration in a continuous data protection system
US20080288546A1 (en)* | 2007-05-16 | 2008-11-20 | Janet Elizabeth Adkins | Method and system for handling reallocated blocks in a file system
US20100011035A1 (en)* | 2007-05-16 | 2010-01-14 | International Business Machines Corporation | Method and System for Handling Reallocated Blocks in a File System
US7702662B2 (en)* | 2007-05-16 | 2010-04-20 | International Business Machines Corporation | Method and system for handling reallocated blocks in a file system
US8190657B2 (en) | 2007-05-16 | 2012-05-29 | International Business Machines Corporation | Method and system for handling reallocated blocks in a file system
EP2211267A2 (en) | 2009-01-23 | 2010-07-28 | Infortrend Technology, Inc. | Method and apparatus for performing volume replication using unified architecture
EP2434403A1 (en) | 2009-01-23 | 2012-03-28 | Infortrend Technology, Inc. | Method and apparatus for performing volume replication using unified architecture
US20100191927A1 (en)* | 2009-01-23 | 2010-07-29 | Infortrend Technology, Inc. | Method and Apparatus for Performing Volume Replication Using Unified Architecture
CN101826044B (en) | 2009-01-23 | 2013-06-26 | Infortrend Technology, Inc. | Method and device for duplicating data volumes under a single architecture
US8645648B2 (en) | 2009-01-23 | 2014-02-04 | Infortrend Technology, Inc. | Method and apparatus for performing volume replication using unified architecture
EP2211267A3 (en)* | 2009-01-23 | 2010-10-13 | Infortrend Technology, Inc. | Method and apparatus for performing volume replication using unified architecture
CN103336729B (en) | 2009-01-23 | 2016-12-28 | Infortrend Technology, Inc. | Method and device for duplicating data volumes under a single architecture
US9483204B2 (en) | 2009-01-23 | 2016-11-01 | Infortrend Technology, Inc. | Method and apparatus for performing volume replication using unified architecture
US9304821B2 (en) | 2013-06-19 | 2016-04-05 | Hitachi Data Systems Engineering UK Limited | Locating file data from a mapping file
US9069784B2 (en) | 2013-06-19 | 2015-06-30 | Hitachi Data Systems Engineering UK Limited | Configuring a virtual machine
US9110719B2 (en) | 2013-06-19 | 2015-08-18 | Hitachi Data Systems Engineering UK Limited | Decentralized distributed computing system
WO2015057240A1 (en) | 2013-10-18 | 2015-04-23 | Hitachi Data Systems Engineering UK Limited | Target-driven independent data integrity and redundancy recovery in a shared-nothing distributed storage system
US9235581B2 (en) | 2013-10-18 | 2016-01-12 | Hitachi Data Systems Engineering UK Limited | Data configuration and migration in a cluster system
EP2953025A1 (en) | 2013-10-18 | 2015-12-09 | Hitachi Data Systems Engineering UK Limited | Target-driven independent data integrity and redundancy recovery in a shared-nothing distributed storage system
US9430484B2 (en) | 2013-10-18 | 2016-08-30 | Hitachi, Ltd. | Data redundancy in a cluster system
EP2953026A1 (en) | 2013-10-18 | 2015-12-09 | Hitachi Data Systems Engineering UK Limited | Target-driven independent data integrity and redundancy recovery in a shared-nothing distributed storage system
US9021296B1 (en) | 2013-10-18 | 2015-04-28 | Hitachi Data Systems Engineering UK Limited | Independent data integrity and redundancy recovery in a storage system
JP2017182368A (en)* | 2016-03-30 | 2017-10-05 | NEC Corporation | Information processing system, storage device, information processing method, and program
US11782610B2 (en)* | 2020-01-30 | 2023-10-10 | Seagate Technology Llc | Write and compare only data storage
US12105982B1 (en)* | 2023-03-17 | 2024-10-01 | Dell Products L.P. | Techniques for optimized data resynchronization between replication sites
US20240329870A1 (en)* | 2023-03-17 | 2024-10-03 | Dell Products L.P. | Techniques for optimized data resynchronization between replication sites

Also Published As

Publication number | Publication date
US7089385B1 (en) | 2006-08-08

Similar Documents

Publication | Title
US6907507B1 (en) | Tracking in-progress writes through use of multi-column bitmaps
US6938135B1 (en) | Incremental backup of a data volume
US7743227B1 (en) | Volume restoration using an accumulator map
US6880053B2 (en) | Instant refresh of a data volume copy
US7634594B1 (en) | System and method for identifying block-level write operations to be transferred to a secondary site during replication
US7194487B1 (en) | System and method for recording the order of a change caused by restoring a primary volume during ongoing replication of the primary volume
Stonebraker et al. | Distributed RAID: A New Multiple Copy Algorithm
US7143249B2 (en) | Resynchronization of mirrored storage devices
US5497483A (en) | Method and system for track transfer control during concurrent copy operations in a data processing storage subsystem
US5379412A (en) | Method and system for dynamic allocation of buffer storage space during backup copying
EP0566966B1 (en) | Method and system for incremental backup copying of data
US5241670A (en) | Method and system for automated backup copy ordering in a time zero backup copy session
US6460054B1 (en) | System and method for data storage archive bit update after snapshot backup
US20020194529A1 (en) | Resynchronization of mirrored storage devices
US7266654B2 (en) | Storage system, server apparatus, and method for creating a plurality of snapshots
US6912631B1 (en) | Method and apparatus for restoring a corrupted data volume
US8615641B2 (en) | System and method for differential backup
US8214685B2 (en) | Recovering from a backup copy of data in a multi-site storage system
US20060136691A1 (en) | Method to perform parallel data migration in a clustered storage environment
US20120117028A1 (en) | Load balancing backup jobs in a virtualized storage system having a plurality of physical nodes
US7549032B1 (en) | Using data copies for redundancy
US5504857A (en) | Highly available fault tolerant relocation of storage with atomicity
US8140886B2 (en) | Apparatus, system, and method for virtual storage access method volume data set recovery
US7165160B2 (en) | Computing system with memory mirroring and snapshot reliability
US8151069B1 (en) | Multiprotection for snapshots

Legal Events

Date | Code | Title | Description

AS: Assignment

Owner name: VERITAS SOFTWARE CORPORATION, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:KISELEV, OLEG;KEKRE, ANAND A.;COLGROVE, JOHN A.;REEL/FRAME:014103/0689;SIGNING DATES FROM 20030514 TO 20030515

AS: Assignment

Owner name: VERITAS OPERATING CORPORATION, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:VERITAS SOFTWARE CORPORATION;REEL/FRAME:015924/0877

Effective date: 20041013

STCF: Information on status: patent grant

Free format text: PATENTED CASE

CC: Certificate of correction

AS: Assignment

Owner name: SYMANTEC OPERATING CORPORATION, CALIFORNIA

Free format text: CHANGE OF NAME;ASSIGNOR:VERITAS OPERATING CORPORATION;REEL/FRAME:019899/0213

Effective date: 20061028


REMI: Maintenance fee reminder mailed

FPAY: Fee payment

Year of fee payment: 4

SULP: Surcharge for late payment

FPAY: Fee payment

Year of fee payment: 8

AS: Assignment

Owner name: VERITAS US IP HOLDINGS LLC, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:SYMANTEC CORPORATION;REEL/FRAME:037697/0412

Effective date: 20160129

AS: Assignment

Owner name: BANK OF AMERICA, N.A., AS COLLATERAL AGENT, NORTH CAROLINA

Free format text: SECURITY INTEREST;ASSIGNOR:VERITAS US IP HOLDINGS LLC;REEL/FRAME:037891/0001

Effective date: 20160129

Owner name: WILMINGTON TRUST, NATIONAL ASSOCIATION, AS COLLATERAL AGENT, CONNECTICUT

Free format text: SECURITY INTEREST;ASSIGNOR:VERITAS US IP HOLDINGS LLC;REEL/FRAME:037891/0726

Effective date: 20160129


AS: Assignment

Owner name: VERITAS TECHNOLOGIES LLC, CALIFORNIA

Free format text: MERGER AND CHANGE OF NAME;ASSIGNORS:VERITAS US IP HOLDINGS LLC;VERITAS TECHNOLOGIES LLC;REEL/FRAME:038455/0752

Effective date: 20160329

FPAY: Fee payment

Year of fee payment: 12

AS: Assignment

Owner name: WILMINGTON TRUST, NATIONAL ASSOCIATION, AS NOTES COLLATERAL AGENT, DELAWARE

Free format text: SECURITY INTEREST;ASSIGNOR:VERITAS TECHNOLOGIES LLC;REEL/FRAME:054370/0134

Effective date: 20200820

AS: Assignment

Owner name: VERITAS US IP HOLDINGS, LLC, CALIFORNIA

Free format text: TERMINATION AND RELEASE OF SECURITY IN PATENTS AT R/F 037891/0726;ASSIGNOR:WILMINGTON TRUST, NATIONAL ASSOCIATION, AS COLLATERAL AGENT;REEL/FRAME:054535/0814

Effective date: 20201127

AS: Assignment

Owner name: ACQUIOM AGENCY SERVICES LLC, AS ASSIGNEE, COLORADO

Free format text: ASSIGNMENT OF SECURITY INTEREST IN PATENT COLLATERAL;ASSIGNOR:BANK OF AMERICA, N.A., AS ASSIGNOR;REEL/FRAME:069440/0084

Effective date: 20241122

AS: Assignment

Owner name: ARCTERA US LLC, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:VERITAS TECHNOLOGIES LLC;REEL/FRAME:069548/0468

Effective date: 20241206

AS: Assignment

Owner name: WILMINGTON TRUST, NATIONAL ASSOCIATION, AS COLLATERAL AGENT, MINNESOTA

Free format text: PATENT SECURITY AGREEMENT;ASSIGNOR:ARCTERA US LLC;REEL/FRAME:069585/0150

Effective date: 20241209

Owner name: BANK OF AMERICA, N.A., AS COLLATERAL AGENT, NORTH CAROLINA

Free format text: SECURITY INTEREST;ASSIGNOR:ARCTERA US LLC;REEL/FRAME:069563/0243

Effective date: 20241209

AS: Assignment

Owner name: VERITAS TECHNOLOGIES LLC, CALIFORNIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:WILMINGTON TRUST, NATIONAL ASSOCIATION, AS NOTES COLLATERAL AGENT;REEL/FRAME:069634/0584

Effective date: 20241209

AS: Assignment

Owner name: VERITAS TECHNOLOGIES LLC (F/K/A VERITAS US IP HOLDINGS LLC), CALIFORNIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:ACQUIOM AGENCY SERVICES LLC, AS COLLATERAL AGENT;REEL/FRAME:069712/0090

Effective date: 20241209

