US7805401B2 - Method and apparatus for splitting a replicated volume - Google Patents

Method and apparatus for splitting a replicated volume

Info

Publication number
US7805401B2
Authority
US
United States
Prior art keywords
volume
replicated instance
source
dfs
guid
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime, expires
Application number
US11/555,105
Other versions
US20090119344A9 (en)
US20080104132A1 (en)
Inventor
Stephen G. Toner
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Oracle International Corp
Original Assignee
Novell Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US10/413,957 (US7281014B2)
Application filed by Novell Inc
Priority to US11/555,105 (US7805401B2)
Assigned to NOVELL, INC. Assignment of assignors interest (see document for details). Assignors: TONER, STEPHEN G.
Publication of US20080104132A1
Publication of US20090119344A9
Application granted
Publication of US7805401B2
Assigned to CPTN HOLDINGS LLC. Assignment of assignors interest (see document for details). Assignors: NOVELL, INC.
Assigned to ORACLE INTERNATIONAL CORPORATION. Assignment of assignors interest (see document for details). Assignors: CPTN HOLDINGS LLC
Adjusted expiration
Expired - Lifetime (current legal status)

Abstract

At least two replicated instances of a source volume are split while allowing clients to access data moved during the split. Clients are redirected to the first replicated instance of the source volume. The first replicated instance is split by first moving files in a split path from the first replicated instance to the target volume. Then, after the files in the split path have been successfully moved to the target volume, a junction is inserted at the split directory to redirect clients to the target volume. After the first replicated instance is split, a second junction replaces the split path on the replicated instance of the first replicated instance.

Description

RELATED APPLICATION DATA
This application is a continuation-in-part of, commonly assigned, U.S. patent application Ser. No. 10/413,957, titled “METHOD AND APPARATUS FOR MOVING DATA BETWEEN STORAGE DEVICES,” filed Apr. 14, 2003 by the same inventor, and issued on Oct. 9, 2007 as U.S. Pat. No. 7,281,014.
FIELD OF THE INVENTION
This invention relates to moving data between storage devices in a computer system, and more particularly to moving data on a replicated storage device.
BACKGROUND OF THE INVENTION
Today's networked environment enables data storage to span multiple data volumes and multiple computers. A distributed file system (DFS) is one where multiple file systems, each residing on a different storage volume, are connected to one another. The different storage volumes can be included in the same computer or in different computers connected together using a network. The file systems on the different storage volumes could have once been part of a single file system on a single storage volume. For example, when an organization is just starting out, the data storage requirements for that organization might be modest, and the organization is able to store all data on a single volume. After a while, as the organization grows, the original volume reaches its maximum storage capacity. Instead of simply starting a new volume from scratch, the organization may wish to divide the volume, moving a subdirectory tree from the volume to the new volume, while appearing to the client as though only a single volume is in use.
While splitting a volume makes it easy for organization members to access data as they have always done, performing the volume split can be inconvenient for the organization members. As data is being moved to a new location, that data must first be taken off-line and made unavailable to users to prevent inconsistencies in the data.
In addition to using DFS to manage data storage, a system administrator can also use volume replication to replicate one or more of the volumes. Volume replication allows a file system that is on one volume to be copied and made available to clients on one or more other volumes; each volume is typically called a replicated instance of the volume. Volume replication has several advantages. One advantage is that one replicated instance can act as a data backup in the event that another replicated instance of the same volume goes down. Another advantage of volume replication is that data can be moved closer to where the user needs it, thus potentially providing performance improvements in accessing and downloading the data.
Using DFS in conjunction with volume replication introduces new complications to splitting a replicated volume. When splitting a replicated volume, each replicated instance of the volume must be taken off-line before moving the desired subdirectory tree to the new volume. Taking each replicated instance off-line removes some of the advantages that volume replication specifically provides. With each replicated instance off-line, the volume is not available.
Another approach might be to take each volume off-line only as the volume split is being performed at each volume. This approach has the advantage that users can access data on one of the volumes: either the primary volume or the replicated instance of the primary volume. But if a replication method is used where there is a lag time between volume synchronization, then there is a possibility that the volume instances will have inconsistent data after the volume split occurs.
Accordingly, a need exists for a technique to split a replicated volume, while maintaining user access to the files being moved.
SUMMARY OF THE INVENTION
At least two replicated instances of a source volume are split while allowing clients to access data moved during the split. Clients are redirected to the first replicated instance of the source volume. The first replicated instance is split by first moving files in a split path from the first replicated instance to the target volume. Then, after the files in the split path have been successfully moved to the target volume, a junction is inserted at the split directory to redirect clients to the target volume. After the first replicated instance is split, a second junction replaces the split path on the replicated instance of the first replicated instance.
The foregoing and other features, objects, and advantages of the invention will become more readily apparent from the following detailed description, which proceeds with reference to the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 shows a computer system configured to split a replicated volume while allowing clients to access the files that are moved from the volume, according to an embodiment of the invention.
FIG. 2 shows a file system contained on the first replicated instance and the corresponding file system copy on the second replicated instance shown inFIG. 1.
FIG. 3 shows entries of the volumes shown inFIG. 1 in the volume location database (VLDB).
FIG. 4 shows the first replicated instance ofFIG. 1 before the files in the split path are moved to the target volume.
FIG. 5 shows the temporary DFS GUID ofFIG. 4 added to the VLDB.
FIG. 6 shows a junction pointing to the split directory of the first replicated instance inserted at the split directory on the second replicated instance ofFIG. 1.
FIG. 7 shows the target volume and first replicated instance ofFIG. 1 after the contents of the split path are moved from the first replicated instance to the target volume.
FIG. 8 shows the second replicated instance ofFIG. 1 after the subdirectory tree is replaced with a junction to the target volume.
FIGS. 9A-9B show a flowchart of the process of splitting the replicated volume shown inFIG. 1.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
U.S. patent application Ser. No. 10/413,957, titled “METHOD AND APPARATUS FOR MOVING DATA BETWEEN STORAGE DEVICES,” (herein referred to as “the Moving Data application”), filed Apr. 14, 2003 by the same inventor, and hereby incorporated by reference, describes a means for splitting data off one volume and moving it to another storage volume, while allowing clients to access the data on the storage volume during the move. The technique described in the Moving Data application applies when there is a single instance of the source volume. When there are replicated instances of the volume, then changes made to a copy of a file on a replicated instance might not be reflected in the files on the new volume after the volume is split. U.S. patent application Ser. No. 10/283,960, titled “AN APPARATUS FOR POLICY BASED STORAGE OF FILE DATA AND META-DATA CHANGES OVER TIME”, filed Oct. 29, 2002, now pending and incorporated by reference herein, describes a system and method for managing events.
FIG. 1 shows a computer system configured to split a replicated volume while allowing clients to access the files that are moved from the volume, according to an embodiment of the invention. Computer 105, computer 110, and computer 115 connect to one another using network 120. Computers 105, 110, and 115 can be servers or other machines to store and process data. Computers 105, 110, and 115 typically include a processor, memory such as random access memory (RAM), read-only memory (ROM), or other state preserving media, storage devices, and input/output interface ports, not shown in FIG. 1. Note that although FIG. 1 shows three computers, a person skilled in the art will recognize that any number of computers can be used.
FIG. 1 shows two instances of a replicated volume. Any number of replicated instances can be used. Computer 105 includes first replicated instance 125 and target volume 130. Computer 110 includes second replicated instance 135. In an embodiment of the invention, first replicated instance 125 and second replicated instance 135 are replicated instances of the same volume. First replicated instance 125 and second replicated instance 135 include file systems that are accessed by client computers across network 120. The volumes are stored on storage media and can span multiple physical storage devices if needed (for example, a storage area network (SAN)).
Not shown in FIG. 1 are client computers that interact with computers 105, 110, and 115. Client computers can include desktop computer systems, including a computer, monitor, keyboard, and mouse. A person skilled in the art will recognize that client computers can take other forms, such as, among others, dumb terminals, Internet appliances, or handheld computing devices such as personal digital assistants (PDAs).
In an embodiment of the invention, because first replicated instance 125 and second replicated instance 135 contain copies of the same files, client computers can access either one of computer 105 or computer 110. Considerations by the client computer as to which computer to connect to are addressed below with reference to FIG. 2.
Client computers connect to computers 105, 110, and 115 across network 120. Network 120 can be any variety of network including, among others, a local area network (LAN), a wide area network (WAN), a global network (such as the Internet), and a wireless network (for example, using Bluetooth or any of the IEEE 802.11 standards).
In an embodiment of the invention, a volume is split when some files are moved from the volume to a new volume while other files are retained at the original volume. Typically the files in a directory or subdirectory on the original volume are moved to the new volume. A split directory refers to the directory or subdirectory identifying where the volume split occurs. The files and directories nested in the split directory make up a subdirectory tree referred to as a split path. Directories and files that are not in the split path remain on the original volume after the volume split.
During the split operation, client computers can access files on the replicated volume, including files being moved to the new volume. Clients are able to perform all of the normal file system activities, including but not limited to creating, deleting, renaming, and modifying files. Building an apparatus that allows a system administrator to move data while at the same time permitting users to access the same data has inherent challenges. Some files might be open for writing by users and, as a result, possibly incapable of being accessed. Also, because users are able to modify file system data after a file is moved, those changes need to be logged to insure that they are accurately reflected on the destination volume. During the volume split a list of logged files is maintained so that the new volume can be updated with the modified files.
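The patent does not spell out how this change logging is implemented; the following is a minimal sketch of the idea it describes (record paths modified during the bulk copy, then re-copy them in a catch-up pass). The names ChangeLog and copy_split_path, and the use of an in-memory set, are illustrative assumptions rather than details from the patent.

```python
# Illustrative sketch only: record files modified while the split-path copy is
# in progress, then re-copy them so the target volume reflects the changes.
import shutil
import threading
from pathlib import Path

class ChangeLog:
    """Set of paths (under the split path) that clients modified mid-copy."""
    def __init__(self):
        self._lock = threading.Lock()
        self._dirty = set()

    def record(self, path: Path):
        with self._lock:
            self._dirty.add(path)

    def drain(self) -> set:
        with self._lock:
            dirty, self._dirty = self._dirty, set()
            return dirty

def copy_split_path(split_root: Path, target_root: Path, log: ChangeLog):
    # Bulk copy of every file currently in the split path.
    for src in split_root.rglob("*"):
        if src.is_file():
            dst = target_root / src.relative_to(split_root)
            dst.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dst)
    # Catch-up pass: files clients touched while the bulk copy was running.
    for src in log.drain():
        dst = target_root / src.relative_to(split_root)
        dst.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dst)
```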
FIG. 1 shows target volume 130 included in computer 105. Target volume 130 is the destination volume for data moved from the replicated volume as first replicated instance 125 is split. Although target volume 130 is shown as being part of computer 105, a person skilled in the art will recognize that target volume 130 can be included in another computer connected to computer 105 over network 120. In addition, target volume 130 can itself be replicated with any number of instances. If target volume 130 is replicated, the replication level and location of the replicated instances are specified when target volume 130 is created. This makes no difference to the split operation, as target volume 130 represents the instance where the files are moved.
Not shown in FIG. 1 is a replication manager responsible for maintaining consistency between replicated instances of a volume, such as first replicated instance 125 and second replicated instance 135. Also, if target volume 130 is replicated, the replication manager is responsible for keeping the other instances of target volume 130 in sync.
In an embodiment of the invention, computer 105 includes volume manager 140. Volume manager 140 performs the volume split of first replicated instance 125. For example, a system administrator can send a request to volume manager 140 identifying a split path on first replicated instance 125 to be moved to target volume 130. The Moving Data application describes how volume manager 140 can split first replicated instance 125 while allowing clients to access the moved file during the volume split. In addition, computer 110 includes volume manager 175 that can split second replicated instance 135.
Volume manager 140 and volume manager 175 interface with volume location database (VLDB) 145 stored on computer 115. In an embodiment of the invention, VLDB 145 associates volume names with a distributed file system (DFS) globally unique identifier (GUID) and the physical location of the volumes. VLDB 145 is accessible from most of the computers in the network. A client computer can access a particular volume instance by looking up the volume in VLDB 145 to resolve the physical location of the volume. VLDB 145 is described in greater detail below with reference to FIGS. 3 and 5.
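As a rough sketch of the kind of mapping the VLDB maintains, the structure below associates a DFS GUID with one or more (volume name, computer) entries. The VLDB, VolumeLocation, and lookup names are assumptions made for illustration; only the GUID-to-name-and-location association comes from the description, and the sample GUIDs mirror the entries described for FIG. 3.

```python
# Sketch of a VLDB mapping DFS GUIDs to replicated-instance locations.
from collections import defaultdict
from dataclasses import dataclass

@dataclass(frozen=True)
class VolumeLocation:
    volume_name: str
    computer: str          # which computer holds this instance

class VLDB:
    def __init__(self):
        self._by_guid = defaultdict(list)

    def add(self, dfs_guid: str, location: VolumeLocation):
        self._by_guid[dfs_guid].append(location)

    def remove_guid(self, dfs_guid: str):
        self._by_guid.pop(dfs_guid, None)

    def lookup(self, dfs_guid: str) -> list:
        """Return every instance registered under this DFS GUID."""
        return list(self._by_guid.get(dfs_guid, []))

# Entries corresponding to those shown in FIG. 3:
vldb = VLDB()
vldb.add("17C2", VolumeLocation("first replicated instance 125", "computer 105"))
vldb.add("17C2", VolumeLocation("second replicated instance 135", "computer 110"))
vldb.add("334D", VolumeLocation("target volume 130", "computer 105"))
```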
In an embodiment of the invention, clients seeking access to files in the split path of second replicated instance 135 are redirected to first replicated instance 125 as first replicated instance 125 is being split. If there are additional replicated instances of the volume, the split paths of these instances are also redirected to first replicated instance 125. In an embodiment of the invention, a junction identifying first replicated instance 125 is inserted in the split path of second replicated instance 135. The use of a junction is discussed in greater detail below with reference to FIG. 6. In another embodiment, a symbolic link is used to redirect clients.
Volume manager 140 includes DFS GUID creator 150, junction creator 155, and file verifier 160 to insert a junction to redirect client access to files on the split path of second replicated instance 135. Although not shown in FIG. 1, volume manager 175 also includes these elements. DFS GUID creator 150 creates a temporary DFS GUID to assign to first replicated instance 125. In creating a temporary DFS GUID, DFS GUID creator 150 looks for a unique identifier to be assigned to first replicated instance 125. Then, if a client identifies a junction with the temporary DFS GUID, the client can look up the temporary DFS GUID in VLDB 145 and identify first replicated instance 125 as the appropriate volume for redirection. If the temporary DFS GUID is not unique, then the client might be redirected to another volume in error.
After DFS GUID creator 150 assigns a temporary DFS GUID to first replicated instance 125, junction creator 155 inserts a temporary junction at the split directory of second replicated instance 135. A junction acts as a “link” between volumes, connecting two volumes using a DFS GUID in the junction to point from one volume to another volume. When encountering a junction, the client represents the junction as a subdirectory to the end user. In an embodiment of the invention, the inserted junction includes the temporary DFS GUID that is assigned to first replicated instance 125. As the client encounters the junction on second replicated instance 135, the client looks up the temporary DFS GUID in VLDB 145 to identify the name and location of the volume assigned that DFS GUID. Inserting a junction with the temporary DFS GUID at the split directory of second replicated instance 135 in effect takes the split path of second replicated instance 135 off-line. Note that as the junction is in the split path of second replicated instance 135, the benefits of volume replication are temporarily suspended for the files in the split path, with only first replicated instance 125 accessible for those files. Finally, when inserting the junction with the temporary DFS GUID at the split path of a replicated instance other than the instance where the volume split occurs, volume manager 140 notifies the replication manager to not replicate the temporary junction.
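A hedged sketch of this redirection step follows, reusing the VLDB sketch above. It models a junction as a small marker file holding a DFS GUID; the real file system would use a dedicated junction object, and the function name and on-disk layout are assumptions.

```python
# Sketch: assign a temporary DFS GUID to the source instance, register it in
# the VLDB, and drop a junction at the split directory of another replica.
import json
import uuid
from pathlib import Path

def redirect_replica_to_source(vldb, source_location, replica_split_dir: Path) -> str:
    temp_guid = uuid.uuid4().hex.upper()        # temporary, unique DFS GUID
    vldb.add(temp_guid, source_location)        # e.g. the "3E1A" entry of FIG. 5
    junction = {"type": "junction", "dfs_guid": temp_guid}
    # Clients that hit this junction look the GUID up in the VLDB and are
    # sent to the source instance. A real implementation would also tell the
    # replication manager not to replicate this temporary junction.
    (replica_split_dir / ".junction").write_text(json.dumps(junction))
    return temp_guid
```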
In an embodiment of the invention, file verifier 160 verifies that each file copy in the split path of second replicated instance 135 is closed. File verifier 160 is discussed in greater detail below with reference to FIG. 6. Once file verifier 160 verifies that all files in the split path are closed, first replicated instance 125 is temporarily the sole volume available for client access for files in the split path. Volume manager 140 then splits first replicated instance 125. In an embodiment of the invention, subdirectory mover 165 performs the split of first replicated instance 125 while clients can access and modify files on first replicated instance 125.
After volume manager 140 has successfully split first replicated instance 125, junction remover 170 removes the temporary junction from second replicated instance 135. Volume manager 140 then inserts a junction at the split directory of the second replicated instance and deletes the file copies in the split path. This embodiment has as an advantage that volume manager 140 knows when the volume split is successful and can insert the new junction on the replicated instances immediately.
In an embodiment of the invention, the volume split of second replicated instance 135 is performed during the normal process of replication. Using the standard replication process might take more time than using volume manager 140. However, if time is not a big concern, then it makes sense to utilize the replication process that is already in place. Second replicated instance 135 (and other replicated instances with the temporary junction) continues to operate normally, but with an extra level of delay. This step replaces the temporary junctions with junctions that point directly to the target volume.
Finally, although FIG. 1 shows DFS GUID creator 150, file verifier 160, subdirectory mover 165, junction creator 155, and junction remover 170 as being included in volume manager 140, in another embodiment, each of the modules interacts with volume manager 140 while being distinct from the volume manager. In addition, these modules can each reside on a different computer from the computer with volume manager 140 and connect to the volume manager over network 120.
FIG. 2 shows a file system contained on the first replicated instance and the corresponding file system copy on the second replicated instance shown in FIG. 1. First replicated instance 125 includes root directory 205. At the root level are directory 210 “Dir_A”, file 215 “File1”, and directory 220 “Dir_B”. Directory 210 stores file 225 “File2” and directory 230 “Dir_C”. Directory 230, in turn, stores three files: file 235 “File3”, file 240 “File4”, and file 245 “File5”.
Second replicated instance 135 includes a copy of the directory tree on first replicated instance 125. Second replicated instance 135 includes root directory 250. Like root directory 205 on first replicated instance 125, root directory 250 stores three entries: directory copy 255 is a copy of “Dir_A”, file copy 260 is a copy of “File1”, and directory copy 265 is a copy of “Dir_B”. In turn, directory copy 255 stores file copy 270 and directory copy 275. Finally, directory copy 275 stores file copy 280 “File3”, file copy 285 “File4”, and file copy 290 “File5”.
In FIG. 2, the directory tree and directory tree copy are in sync with each other. At other instances in time, a client can be updating data on either first replicated instance 125 or second replicated instance 135. For example, suppose a new file is created in directory copy 265. Immediately upon creation, that file might only exist on second replicated instance 135. However, the replication process ensures that a copy of the new file is also added to corresponding directory 220 on first replicated instance 125. The replication process also handles other file events, such as a move, delete, or modification of a file.
In an embodiment of the invention, second replicated instance 135 can be used to provide backup to first replicated instance 125. In this embodiment, a client might access files in the directory tree on first replicated instance 125 if that volume is available. But if computer 105, storing first replicated instance 125, is shut down or otherwise unavailable, then the client can access the file copies on second replicated instance 135.
In another embodiment of the invention, second replicated instance 135 is used to provide data storage at a particular location. Consider an organization with an office in Utah and an office in Massachusetts, and volumes in computers at the two different locations. The users in Utah might access data on the replicated instance in Utah, while the users in Massachusetts might access data on the geographically closer replicated instance of the volume. In an embodiment of the invention, client computers can be configured to connect to a preferred replication instance, such as one that is geographically close to the client. By enabling users to access data on a volume close to the user, time spent accessing and downloading the data can be improved. After the user has made changes to the data, then the replication process ensures that the data on the one volume is synchronized with the data on the other volume, with little inconvenience to the user.
In yet another embodiment of the invention, the client can select a replicated instance by pinging the different servers with the replicated instances. The server that responds to the ping in the least amount of time is a good candidate for client selection. A person skilled in the art will recognize that there are other ways a client can select a replicated instance of a volume to access.
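One way such a ping-based selection could look is sketched below, timing a TCP connection attempt to each server holding a replicated instance and choosing the fastest responder. The port number and helper name are placeholders, not details from the patent.

```python
# Sketch: pick the replicated instance whose server answers quickest.
import socket
import time

def fastest_replica(hosts, port=524, timeout=1.0):
    """Return the host with the lowest connect time, or None if none answer."""
    best_host, best_rtt = None, float("inf")
    for host in hosts:
        start = time.monotonic()
        try:
            with socket.create_connection((host, port), timeout=timeout):
                rtt = time.monotonic() - start
        except OSError:
            continue                     # unreachable instance, skip it
        if rtt < best_rtt:
            best_host, best_rtt = host, rtt
    return best_host
```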
FIG. 3 shows entries of the volumes shown in FIG. 1 in the volume location database (VLDB). VLDB 145 stores DFS GUIDs along with corresponding volume names and locations. For example, entry 305 shows that first replicated instance 125 on computer 105 of FIG. 1 is assigned a DFS GUID of “17C2”. Entry 310 shows that the same DFS GUID is also assigned to second replicated instance 135 on computer 110. In an embodiment of the invention, when a client requests access to a volume with the DFS GUID of “17C2”, VLDB 145 returns both first replicated instance 125 on computer 105 and second replicated instance 135 on computer 110. The client then selects one of the returned volumes. The client might select the volume that is closest to the client, or the client might select a volume by nature of it being the primary volume as described above. The client can also select a volume arbitrarily or based on other considerations.
In another embodiment of the invention, VLDB 145 returns a single volume location for the client using considerations similar to those considered by a client selecting a volume. In addition, VLDB 145 can also return a volume location based on load considerations using information about how many clients are currently accessing a particular instance of a volume.
Although target volume 130 initially stores no data, target volume 130 can still be assigned a DFS GUID. Entry 315 shows that a DFS GUID is assigned to target volume 130 on computer 105. After the volume split is successful (i.e., all data has been copied to target volume 130), a junction pointing to DFS GUID “334D” at target volume 130 on computer 105 can be inserted on first replicated instance 125. As other volumes are added to the network, these additional volumes can also be assigned DFS GUIDs and stored in VLDB 145. For example, if target volume 130 is replicated, then an entry of the assignment of DFS GUID “334D” to the replicated instance of the target volume would be added to VLDB 145.
Each entry in VLDB 145 provides enough details for the client to access the particular volume of interest to the client. In other situations, more or less location information might be provided. For example, if there is only one volume per computer, then a client might be able to access a volume simply by knowing the computer name. Or each volume could have a unique name, making identification and location simple based on the name.
FIG. 4 shows the first replicated instance of FIG. 1 before the files in the split path are moved to the target volume. In an embodiment of the invention, before splitting first replicated instance 125, clients accessing data in split path 415 on other replicated instances (such as second replicated instance 135) are redirected to the split directory on first replicated instance 125. In an embodiment of the invention, to minimize the inconvenience to clients as well as preserve data integrity, temporary DFS GUID 405 “3E1A” is assigned to first replicated instance 125. Note that DFS GUID 410 “17C2” remains assigned to first replicated instance 125. In another embodiment of the invention, a symbolic link or other method can be used to redirect clients from other replicated instances to first replicated instance 125.
Directories and files in split path 415 are shown with dotted lines. The files in split path 415 are directory 210 “Dir_A”, file 225 “File2”, directory 230 “Dir_C”, file 235 “File3”, file 240 “File4”, and file 245 “File5”. In addition, directory 210 is the split directory, as it is the root directory of split path 415.
FIG. 5 shows the temporary DFS GUID of FIG. 4 added to the VLDB. After volume manager 140 assigns temporary DFS GUID 405 of FIG. 4 to first replicated instance 125, entry 505 is added to VLDB 145. Entry 505 shows that DFS GUID “3E1A” has been assigned to first replicated instance 125 on computer 105. By creating entry 505 with the assignment of temporary DFS GUID 405 to first replicated instance 125 on computer 105, it is possible to temporarily redirect clients attempting to access second replicated instance 135 to first replicated instance 125.
For example, if VLDB 145 receives a request for a volume with a DFS GUID of “17C2”, VLDB 145 identifies two volumes that are assigned to that DFS GUID: first replicated instance 125 and second replicated instance 135. As discussed above with reference to FIG. 4, the client can then access one of these volumes. If the client selects first replicated instance 125 to access, the client accesses the volume as usual. If the client selects second replicated instance 135, then if the client accesses Dir_A, the client encounters the inserted junction, which redirects the client to first replicated instance 125. Note that client accesses of files on second replicated instance 135 that are not in the Dir_A split path are handled without being redirected to first replicated instance 125.
FIG. 6 shows a junction pointing to the split directory of the first replicated instance inserted at the split directory on the second replicated instance of FIG. 1. In an embodiment of the invention, when a client accesses split directory “Dir_A” on second replicated instance 135, the client encounters junction 605. Junction 605 directs the client to Dir_A on the replicated instance that is assigned to the DFS GUID “3E1A”. Because the DFS GUID “3E1A” is assigned to first replicated instance 125, clients access this volume instance.
After junction 605 is inserted at split directory 255, file verifier 160 verifies that each file in split path 610 is closed. If all files are closed when junction 605 is inserted on second replicated instance 135, then file verifier 160 can report this immediately. Recall that junction 605 serves to redirect clients to Dir_A on first replicated instance 125, thus copies of files that are closed when junction 605 is inserted remain closed until junction 605 is removed.
However, if any copies of files in split path 610 are open when junction 605 is inserted in the volume, file verifier 160 waits until the file copy is closed and then notifies volume manager 140 once all file copies are closed. For example, suppose file copy 270 “File2” and file copy 285 “File4” are open when junction 605 is added to the volume. Users could be simply accessing the file copies or making changes to the file copies. Once the user is finished accessing file copy 270, then file verifier 160 notices that the file copy is now closed. If the user tries to access the file copy again, junction 605 redirects the user to file 225 on first replicated instance 125 rather than allowing the user to access file copy 270 as done earlier.
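The file verifier's waiting behavior could be sketched as a simple polling loop like the one below. The open_handles callable stands in for whatever facility the file system provides for enumerating open files; it is an assumption, not an API named in the patent.

```python
# Sketch: block until no file under the split path is still open, then return
# so the volume manager can begin splitting the source instance.
import time
from pathlib import Path

def wait_until_split_path_closed(split_path: Path, open_handles, poll_seconds=5.0):
    while True:
        # open_handles() is assumed to yield the paths of currently open files.
        still_open = [p for p in open_handles() if split_path in p.parents]
        if not still_open:
            return
        time.sleep(poll_seconds)
```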
Once each file in split path 610 is closed, file verifier 160 notifies volume manager 140 that first replicated instance 125 can now be split. In an embodiment of the invention, first replicated instance 125 is split while permitting users to access the files on first replicated instance 125. The volume split can be performed as described in the Moving Data application.

FIG. 7 shows the target volume and first replicated instance of FIG. 1 after the files in the split path are moved from the first replicated instance to the target volume. Target volume 130 now includes root directory 705 and the files in the split path: file 715 “File2”, directory 720 “Dir_C”, file 725 “File3”, file 730 “File4”, and file 735 “File5”.
First replicated instance 125 no longer includes corresponding versions of the files from the split path. Instead, root directory 205 includes junction 740 named “Dir_A” (the split directory that was previously stored in root directory 205 of first replicated instance 125). In an embodiment of the invention, junction 740 appears to a client as if it is directory 210 “Dir_A” that had been stored in root directory 205. Junction 740 includes the DFS GUID “334D” identifying the location of the moved files. When a client sees junction 740 on first replicated instance 125, the client can look up the DFS GUID identified in the junction to determine that target volume 130 is assigned the appropriate DFS GUID.
A volume split is complete when all files in the split path are moved from first replicated instance 125 to target volume 130 and any changes occurring afterwards are reflected in the files on the target volume. In an embodiment of the invention, after volume manager 140 successfully performs the volume split, temporary DFS GUID 405 is unassigned from first replicated instance 125 (as indicated by the dashed line). Temporary DFS GUID 405 can then be removed from the VLDB, and the VLDB returns to containing the entries shown in FIG. 3.
FIG. 8 shows the second replicated instance of FIG. 1 after the split directory is replaced with a junction to the target volume. Just as prior to the volume split, second replicated instance 135 includes root directory 250 storing file copy 260 “File1” and directory copy 265 “Dir_B”. In addition, second replicated instance 135 also includes junction 805 (named “Dir_A”) redirecting clients to target volume 130, and the file copies from the split path have been removed from second replicated instance 135. To users, junction 805 has the appearance of being Dir_A.
In an embodiment of the invention, volume manager 140 replaces the split directory with junction 805 after the split operation is successful. In another embodiment of the invention, the standard replication process synchronizes second replicated instance 135 with first replicated instance 125. For example, if it is important to have the volume split reflected in second replicated instance 135 as soon as possible (for maximum availability and to avoid the extra overhead of continuing to go through temporary junction 605), then volume manager 140 can create junction 805 immediately after the volume split of first replicated instance 125 is complete. If it is acceptable for a period of time to pass before the propagation, then the split can be replicated using standard replication techniques.
FIGS. 9A-9B show a flowchart of the process of splitting the replicated instances of the volume shown in FIG. 1. In this discussion, both source volume and replicated instance refer to replicated instances of the same volume. The source volume only differs from the other replicated instances in that the source volume is the particular replicated instance where the volume split occurs.
At step 905, the volume manager assigns a temporary DFS GUID to the source volume. At step 910, the volume manager stores the temporary DFS GUID in the VLDB along with the location of the source volume. At step 915, the volume manager inserts a junction at the split directory on the replicated instance. As previously discussed with reference to FIG. 6, the junction is used to direct client requests for files in the split path in the replicated instance to the split path of the source volume, in effect taking the split path of the replicated instance off-line. In other words, while the volume split is in progress, the benefits of using replicated volumes are somewhat suspended, and client requests for files in the split path go to the source volume. However, client requests for files that are not in the split path stay at the replicated instance, maximally preserving the benefits of volume replication. But, by directing client requests for files in the split path to the single volume, the volume is able to be split while allowing clients to access the data on the volume. This is a benefit to users with a preference towards data access.
After the volume manager inserts the junction in the replicated instance, at step 920 the volume manager verifies that each file in the split path on the replicated instance is closed. Note that when the junction is inserted in the replicated instance of the source volume, it is possible that a client is in the process of accessing a file on the split path.
At decision block 925, if there is another replicated instance, then the process returns to steps 915 and 920. Once all replicated instances are temporarily redirected to the source volume (as indicated by step 915), and each file in the split path of the replicated instances is closed (as indicated by step 920), then the source volume can be split. Note that although FIG. 9 shows steps 915 and 920 occurring for a single volume instance at a time, in another embodiment of the invention, steps 915 and 920 are performed in parallel for each volume instance.
At step 930, the volume manager copies the files in the split path on the source volume to the target volume while allowing clients to access the files. After the files in the split path are successfully moved from the source volume to the target volume, at step 935 the volume manager replaces the split directory with a junction to the target volume. In an embodiment of the invention, the junction includes the DFS GUID of the target volume. As a client computer requests a file in the split path, the client encounters the junction including the DFS GUID. The client then looks up the DFS GUID in the VLDB, and identifies the location of the target volume. Then the client connects to the target volume.
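Client-side resolution of such a junction might look like the sketch below: a path walk that, on hitting a junction marker, looks up the junction's DFS GUID in the VLDB and continues on the volume it names. The marker-file representation and the connect_to_volume callable are assumptions carried over from the earlier sketches, not the patent's mechanism.

```python
# Sketch: resolve a path, following any junction to the volume its GUID names.
import json
from pathlib import Path

def resolve_path(volume_root: Path, relative, vldb, connect_to_volume) -> Path:
    current = volume_root
    for part in Path(relative).parts:
        marker = current / part / ".junction"
        if marker.exists():
            guid = json.loads(marker.read_text())["dfs_guid"]
            locations = vldb.lookup(guid)                # e.g. "334D" -> target volume
            current = connect_to_volume(locations[0])    # root of that volume
        else:
            current = current / part
    return current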
At step 940 (FIG. 9B), the volume manager deletes the moved subdirectory from the source volume. The deletion can be a background task that can be performed any time after the junction to the target volume is inserted on the source volume. In an embodiment of the invention, step 940 can also be performed in parallel with step 945. At step 945, the volume manager replaces the temporary junction to the source volume on the replicated instance with a junction to the target volume, and clients access the files on the target volume.
At step 950, the files in the split path on the replicated instance are deleted. Note that although step 950 is shown as occurring after step 945, in an embodiment of the invention, step 950 can occur any time after step 920. At decision block 955, if there are additional replicated instances of the volume, then the process returns to steps 945 and 950. In an embodiment of the invention, steps 945 and 950 can be performed at the same time for each replicated instance of the source volume.
In one embodiment of the invention, steps 945 and 950 are handled by the volume manager, which can insert a junction to the target volume and remove the copies of the moved files from the replicated instance(s) as soon as the volume split is completed on the source volume. This embodiment has as an advantage that the volume manager knows when the volume split is successful, and can propagate the split immediately.
In another embodiment of the invention, propagation of the volume split can be achieved by using the normal replication process. In this embodiment, steps 945 and 950 are eliminated, as the replication process handles the replacement of the temporary junction and the deletion of files. This embodiment does not require any further action by the volume manager, although using the normal replication process might mean that the propagation occurs on a replication schedule, and the split is not necessarily replicated immediately.
Finally, at step 960, the volume manager removes the temporary DFS GUID from the VLDB. This step is performed after all other steps have completed successfully.
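Putting the numbered steps together, the overall flow of FIGS. 9A-9B could be orchestrated roughly as follows. This is pseudocode-style Python over assumed volume objects; the method names (insert_junction, wait_split_path_closed, copy_split_path_to, delete_split_path) are hypothetical interfaces, and only the ordering of the steps comes from the figures.

```python
# Sketch of the end-to-end split of a replicated volume (steps 905-960).
import uuid

def split_replicated_volume(vldb, source, replicas, target, split_dir):
    # Steps 905-910: temporary DFS GUID assigned to the source, stored in the VLDB.
    temp_guid = uuid.uuid4().hex.upper()
    vldb.add(temp_guid, source.location)
    # Steps 915-925: temporary junction on each replica, then wait for files to close.
    for replica in replicas:
        replica.insert_junction(split_dir, temp_guid)
        replica.wait_split_path_closed(split_dir)
    # Step 930: move the split path to the target while clients keep access.
    source.copy_split_path_to(target, split_dir)
    # Step 935: a junction to the target volume replaces the split directory.
    source.insert_junction(split_dir, target.dfs_guid)
    # Step 940: moved subdirectory deleted from the source (may run in background).
    source.delete_split_path(split_dir)
    # Steps 945-955: each replica points at the target and drops its local copies.
    for replica in replicas:
        replica.insert_junction(split_dir, target.dfs_guid)
        replica.delete_split_path(split_dir)
    # Step 960: the temporary DFS GUID is retired from the VLDB.
    vldb.remove_guid(temp_guid)
```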
The following discussion is intended to provide a brief, general description of a suitable machine in which certain aspects of the invention may be implemented. Typically, the machine includes a system bus to which is attached processors, memory, e.g., random access memory (RAM), read-only memory (ROM), or other state preserving medium, storage devices, a video interface, and input/output interface ports. The machine may be controlled, at least in part, by input from conventional input devices, such as keyboards, mice, etc., as well as by directives received from another machine, interaction with a virtual reality (VR) environment, biometric feedback, or other input signal. As used herein, the term “machine” is intended to broadly encompass a single machine, or a system of communicatively coupled machines or devices operating together. Exemplary machines include computing devices such as personal computers, workstations, servers, portable computers, handheld devices, telephones, tablets, etc., as well as transportation devices, such as private or public transportation, e.g., automobiles, trains, cabs, etc.
The machine may include embedded controllers, such as programmable or non-programmable logic devices or arrays, Application Specific Integrated Circuits, embedded computers, smart cards, and the like. The machine may utilize one or more connections to one or more remote machines, such as through a network interface, modem, or other communicative coupling. Machines may be interconnected by way of a physical and/or logical network, such as an intranet, the Internet, local area networks, wide area networks, etc. One skilled in the art will appreciate that network communication may utilize various wired and/or wireless short range or long range carriers and protocols, including radio frequency (RF), satellite, microwave, Institute of Electrical and Electronics Engineers (IEEE) 802.11, Bluetooth, optical, infrared, cable, laser, etc.
The invention may be described by reference to or in conjunction with associated data including functions, procedures, data structures, application programs, etc. which when accessed by a machine results in the machine performing tasks or defining abstract data types or low-level hardware contexts. Associated data may be stored in, for example, the volatile and/or non-volatile memory, e.g., RAM, ROM, etc., or in other storage devices and their associated storage media, including hard-drives, floppy-disks, optical storage, tapes, flash memory, memory sticks, digital video disks, biological storage, etc. Associated data may be delivered over transmission environments, including the physical and/or logical network, in the form of packets, serial data, parallel data, propagated signals, etc., and may be used in a compressed or encrypted format. Associated data may be used in a distributed environment, and stored locally and/or remotely for machine access.
Having described and illustrated the principles of the invention with reference to illustrated embodiments, it will be recognized that the illustrated embodiments may be modified in arrangement and detail without departing from such principles. And although the foregoing discussion has focused on particular embodiments and examples, other configurations are contemplated. In particular, even though expressions such as “according to an embodiment of the invention” or the like are used herein, these phrases are meant to generally reference embodiment possibilities, and are not intended to limit the invention to particular embodiment configurations. As used herein, these terms may reference the same or different embodiments that are combinable into other embodiments.
Consequently, in view of the wide variety of permutations to the embodiments described herein, this detailed description and accompanying material is intended to be illustrative only, and should not be taken as limiting the scope of the invention. What is claimed as the invention, therefore, is all such modifications as may come within the scope and spirit of the following claims and equivalents thereto.

Claims (23)

1. A system to move a subdirectory tree, comprising:
a computer;
a first replicated instance of a source volume;
a directory tree on the first replicated instance of the source volume, the directory tree including a split path, the split path including a split directory;
a second replicated instance of the source volume, the second replicated instance including a copy of the directory tree, the copy of the directory tree including a copy of the split path, the copy of the split path including a split directory;
a target volume;
a volume location database to store corresponding volume names and locations;
a volume manager to redirect a client from the second replicated instance of the source volume to the first replicated instance of the source volume using the volume location database;
a file verifier to verify that each file in the copy of the split path on the second replicated instance of the source volume is closed and to notify the volume manager when each file in the copy of the split path on the second replicated instance of the source volume is closed; and
a subdirectory mover to move each file in the split path on the first replicated instance of the source volume to the target volume responsive to verification that each file in the copy of the split path on the second replicated instance of the source volume is closed while allowing the client to access each file in the split path on the first replicated instance of the source volume, wherein the volume manager is operative to insert a junction in the second replicated instance of the source volume pointing to the target volume after the subdirectory mover moves each file in the split path on the first replicated instance of the source volume to the target volume and to delete each file in the split path on the first replicated instance of the source volume.
8. A computer-implemented method to move a subdirectory tree on a first replicated instance of a source volume to a target volume, comprising:
assigning to the first replicated instance of the source volume a first distributed file system globally unique identifier (DFS GUID);
inserting at a second replicated instance of the source volume a first junction pointing to the first DFS GUID;
verifying each file in a split path on the second replicated instance of the source volume is closed including verifying that each file in the split path on the second replicated instance of the source volume is not being currently accessed by a user, wherein the split path includes a split directory;
notifying a volume manager that each file in the split path on the second replicated instance of the source volume is closed;
copying each file in a corresponding split path on the first replicated instance of the source volume to the target volume responsive to the notifying that each file in the split path on the second replicated instance of the source volume is closed;
assigning to the target volume a second DFS GUID;
inserting at the split directory on the first replicated instance of the source volume a second junction pointing to the second DFS GUID;
deleting each file in the split path on the first replicated instance of the source volume;
removing from the second replicated instance of the source volume the first junction pointing to the first DFS GUID assigned to the first replicated instance of the source volume; and
updating the second replicated instance of the source volume.
13. A computer apparatus to move a subdirectory tree from a replicated storage volume to a target storage volume, comprising:
a volume locator database (VLDB) to store a first entry including a first assignment of a first distributed file system globally unique identifier (DFS GUID) to a first replicated instance of a source storage volume, a second entry including a second assignment of the first DFS GUID to a second replicated instance of the source storage volume, and a third entry including a third assignment of a second DFS GUID to the target storage volume;
a DFS GUID creator to create a fourth entry in the VLDB including a fourth assignment of a temporary DFS GUID to the first replicated instance of the source storage volume;
a volume manager including a junction creator to insert a first junction at the second replicated instance of the source storage volume pointing to the temporary DFS GUID assigned to the first replicated instance of the source storage volume; and
a file verifier to verify that each file in a split path on the second replicated instance of the source storage volume is closed and to notify the volume manager when each file in the split path on the second replicated instance of the source storage volume is closed.
19. An article, comprising a storage medium, said storage medium having stored thereon instructions, that, when executed by a machine, result in:
assigning to a first replicated instance of a source volume a first distributed file system globally unique identifier (DFS GUID);
inserting at a split directory of a second replicated instance of the source volume a first junction pointing to the first DFS GUID;
verifying each file in a split path on the second replicated instance of the source volume is closed including verifying each file in the split path on the second replicated instance of the source volume is not being currently accessed by a user;
notifying a volume manager that each file in the split path on the second replicated instance of the source volume is closed;
copying each file in the split path on the first replicated instance of the source volume to a target volume responsive to the notifying that each file in the split path on the second replicated instance of the source volume is closed;
assigning to the target volume a second DFS GUID;
inserting at the split directory on the first replicated instance of the source volume a second junction pointing to the second DFS GUID;
deleting each file in the split path on the first replicated instance of the source volume;
removing from the second replicated instance of the source volume the first junction pointing to the first DFS GUID assigned to the first replicated instance of the source volume; and
updating the second replicated instance of the source volume.
US11/555,105 | 2003-04-14 | 2006-10-31 | Method and apparatus for splitting a replicated volume | Expired - Lifetime | US7805401B2 (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
US11/555,105 | US7805401B2 (en) | 2003-04-14 | 2006-10-31 | Method and apparatus for splitting a replicated volume

Applications Claiming Priority (2)

Application Number | Priority Date | Filing Date | Title
US10/413,957 | US7281014B2 (en) | 2003-04-14 | 2003-04-14 | Method and apparatus for moving data between storage devices
US11/555,105 | US7805401B2 (en) | 2003-04-14 | 2006-10-31 | Method and apparatus for splitting a replicated volume

Related Parent Applications (1)

Application Number | Title | Priority Date | Filing Date
US10/413,957 | Continuation-In-Part | US7281014B2 (en) | 2003-04-14 | 2003-04-14 | Method and apparatus for moving data between storage devices

Publications (3)

Publication Number | Publication Date
US20080104132A1 (en) | 2008-05-01
US20090119344A9 (en) | 2009-05-07
US7805401B2 (en) | 2010-09-28

Family

ID=39331629

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
US11/555,105 | Expired - Lifetime | US7805401B2 (en) | Method and apparatus for splitting a replicated volume

Country Status (1)

Country | Link
US (1) | US7805401B2 (en)

Citations (46)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US4467421A (en) | 1979-10-18 | 1984-08-21 | Storage Technology Corporation | Virtual storage system and method
US4601012A (en) | 1983-03-11 | 1986-07-15 | International Business Machines Corporation | Zone partitioning in volume recovery system
US4853843A (en) | 1987-12-18 | 1989-08-01 | Tektronix, Inc. | System for merging virtual partitions of a distributed database
US5060185A (en) | 1988-03-25 | 1991-10-22 | Ncr Corporation | File backup system
US5276867A (en) | 1989-12-19 | 1994-01-04 | Epoch Systems, Inc. | Digital data storage system with improved data migration
US5367698A (en) | 1991-10-31 | 1994-11-22 | Epoch Systems, Inc. | Network file migration system
US5423018A (en) | 1992-11-16 | 1995-06-06 | International Business Machines Corporation | Queue time reduction in a data storage hierarchy using volume mount rate
US5537585A (en) | 1994-02-25 | 1996-07-16 | Avail Systems Corporation | Data storage management for network interconnected processors
US5555371A (en) | 1992-12-17 | 1996-09-10 | International Business Machines Corporation | Data backup copying with delayed directory updating and reduced numbers of DASD accesses at a back up site using a log structured array data storage
US5671350A (en) | 1993-09-30 | 1997-09-23 | Sybase, Inc. | Data backup system with methods for stripe affinity backup to multiple archive devices
US5812748A (en) | 1993-06-23 | 1998-09-22 | Vinca Corporation | Method for improving recovery performance from hardware and software errors in a fault-tolerant computer system
US5832274A (en) | 1996-10-09 | 1998-11-03 | Novell, Inc. | Method and system for migrating files from a first environment to a second environment
US5832487A (en)* | 1994-12-15 | 1998-11-03 | Novell, Inc. | Replicated object identification in a partitioned hierarchy
US5875479A (en) | 1997-01-07 | 1999-02-23 | International Business Machines Corporation | Method and means for making a dual volume level copy in a DASD storage subsystem subject to updating during the copy interval
EP0921466A1 (en) | 1995-03-23 | 1999-06-09 | Cheyenne Advanced Technology Limited | Computer backup system operable with open files
US5956718A (en)* | 1994-12-15 | 1999-09-21 | Novell, Inc. | Method and apparatus for moving subtrees in a distributed network directory
US5960194A (en)* | 1995-09-11 | 1999-09-28 | International Business Machines Corporation | Method for generating a multi-tiered index for partitioned data
US5991771A (en)* | 1995-07-20 | 1999-11-23 | Novell, Inc. | Transaction synchronization in a disconnectable computer and network
US6061770A (en) | 1997-11-04 | 2000-05-09 | Adaptec, Inc. | System and method for real-time data backup using snapshot copying with selective compaction of backup data
US6101585A (en) | 1997-11-04 | 2000-08-08 | Adaptec, Inc. | Mechanism for incremental backup of on-line files
US6105062A (en)* | 1998-02-26 | 2000-08-15 | Novell, Inc. | Method and system for pruning and grafting trees in a directory service
US20020049718A1 (en) | 1993-06-03 | 2002-04-25 | Kleiman Steven R. | File system image transfer
US6408298B1 (en)* | 1999-12-15 | 2002-06-18 | Microsoft Corporation | Methods and systems for copying and moving across virtual namespaces
US6457011B1 (en)* | 1999-07-23 | 2002-09-24 | Microsoft Corporation | Method of updating a shared database in a computer network
US20030028737A1 (en) | 1999-09-30 | 2003-02-06 | Fujitsu Limited | Copying method between logical disks, disk-storage system and its storage medium
US6647393B1 (en)* | 1996-11-22 | 2003-11-11 | Mangosoft Corporation | Dynamic directory service
US6678700B1 (en)* | 2000-04-27 | 2004-01-13 | General Atomics | System of and method for transparent management of data objects in containers across distributed heterogenous resources
US20040205088A1 (en)* | 2003-04-14 | 2004-10-14 | Novell, Inc. | Method and apparatus for moving data between storage devices
US6898609B2 (en)* | 2002-05-10 | 2005-05-24 | Douglas W. Kerwin | Database scattering system
US6925541B2 (en)* | 2002-06-12 | 2005-08-02 | Hitachi, Ltd. | Method and apparatus for managing replication volumes
US6931410B2 (en)* | 2002-01-11 | 2005-08-16 | International Business Machines Corporation | Method, apparatus, and program for separate representations of file system locations from referring file systems
US6934723B2 (en)* | 1999-12-23 | 2005-08-23 | International Business Machines Corporation | Method for file system replication with broadcasting and XDSM
US6944621B1 (en) | 1999-04-21 | 2005-09-13 | Interactual Technologies, Inc. | System, method and article of manufacture for updating content stored on a portable storage medium
US6957221B1 (en)* | 2002-09-05 | 2005-10-18 | Unisys Corporation | Method for capturing a physically consistent mirrored snapshot of an online database from a remote database backup system
US7032003B1 (en)* | 2001-08-13 | 2006-04-18 | Union Gold Holdings, Ltd. | Hybrid replication scheme with data and actions for wireless devices
US7054910B1 (en)* | 2001-12-20 | 2006-05-30 | Emc Corporation | Data replication facility for distributed computing environments
US20060136443A1 (en)* | 2004-12-16 | 2006-06-22 | International Business Machines Corporation | Method and apparatus for initializing data propagation execution for large database replication
US7080102B2 (en)* | 2002-03-25 | 2006-07-18 | Emc Corporation | Method and system for migrating data while maintaining hard links
US7191298B2 (en)* | 2002-08-02 | 2007-03-13 | International Business Machines Corporation | Flexible system and method for mirroring data
US20070192551A1 (en)* | 2006-02-14 | 2007-08-16 | Junichi Hara | Method for mirroring data between clustered NAS systems
US7290017B1 (en)* | 2001-09-20 | 2007-10-30 | Emc Corporation | System and method for management of data replication
US7310644B2 (en)* | 2001-06-06 | 2007-12-18 | Microsoft Corporation | Locating potentially identical objects across multiple computers
US7349913B2 (en)* | 2003-08-21 | 2008-03-25 | Microsoft Corporation | Storage platform for organizing, searching, and sharing data
US7370025B1 (en)* | 2002-12-17 | 2008-05-06 | Symantec Operating Corporation | System and method for providing access to replicated data
US7389393B1 (en)* | 2004-10-21 | 2008-06-17 | Symantec Operating Corporation | System and method for write forwarding in a storage environment employing distributed virtualization
US7475199B1 (en)* | 2000-10-19 | 2009-01-06 | Emc Corporation | Scalable network file system

Patent Citations (46)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US4467421A (en) | 1979-10-18 | 1984-08-21 | Storage Technology Corporation | Virtual storage system and method
US4601012A (en) | 1983-03-11 | 1986-07-15 | International Business Machines Corporation | Zone partitioning in volume recovery system
US4853843A (en) | 1987-12-18 | 1989-08-01 | Tektronix, Inc. | System for merging virtual partitions of a distributed database
US5060185A (en) | 1988-03-25 | 1991-10-22 | Ncr Corporation | File backup system
US5276867A (en) | 1989-12-19 | 1994-01-04 | Epoch Systems, Inc. | Digital data storage system with improved data migration
US5367698A (en) | 1991-10-31 | 1994-11-22 | Epoch Systems, Inc. | Network file migration system
US5423018A (en) | 1992-11-16 | 1995-06-06 | International Business Machines Corporation | Queue time reduction in a data storage hierarchy using volume mount rate
US5555371A (en) | 1992-12-17 | 1996-09-10 | International Business Machines Corporation | Data backup copying with delayed directory updating and reduced numbers of DASD accesses at a back up site using a log structured array data storage
US20020049718A1 (en) | 1993-06-03 | 2002-04-25 | Kleiman Steven R. | File system image transfer
US5812748A (en) | 1993-06-23 | 1998-09-22 | Vinca Corporation | Method for improving recovery performance from hardware and software errors in a fault-tolerant computer system
US5671350A (en) | 1993-09-30 | 1997-09-23 | Sybase, Inc. | Data backup system with methods for stripe affinity backup to multiple archive devices
US5537585A (en) | 1994-02-25 | 1996-07-16 | Avail Systems Corporation | Data storage management for network interconnected processors
US5832487A (en) * | 1994-12-15 | 1998-11-03 | Novell, Inc. | Replicated object identification in a partitioned hierarchy
US5956718A (en) * | 1994-12-15 | 1999-09-21 | Novell, Inc. | Method and apparatus for moving subtrees in a distributed network directory
EP0921466A1 (en) | 1995-03-23 | 1999-06-09 | Cheyenne Advanced Technology Limited | Computer backup system operable with open files
US5991771A (en) * | 1995-07-20 | 1999-11-23 | Novell, Inc. | Transaction synchronization in a disconnectable computer and network
US5960194A (en) * | 1995-09-11 | 1999-09-28 | International Business Machines Corporation | Method for generating a multi-tiered index for partitioned data
US5832274A (en) | 1996-10-09 | 1998-11-03 | Novell, Inc. | Method and system for migrating files from a first environment to a second environment
US6647393B1 (en) * | 1996-11-22 | 2003-11-11 | Mangosoft Corporation | Dynamic directory service
US5875479A (en) | 1997-01-07 | 1999-02-23 | International Business Machines Corporation | Method and means for making a dual volume level copy in a DASD storage subsystem subject to updating during the copy interval
US6061770A (en) | 1997-11-04 | 2000-05-09 | Adaptec, Inc. | System and method for real-time data backup using snapshot copying with selective compaction of backup data
US6101585A (en) | 1997-11-04 | 2000-08-08 | Adaptec, Inc. | Mechanism for incremental backup of on-line files
US6105062A (en) * | 1998-02-26 | 2000-08-15 | Novell, Inc. | Method and system for pruning and grafting trees in a directory service
US6944621B1 (en) | 1999-04-21 | 2005-09-13 | Interactual Technologies, Inc. | System, method and article of manufacture for updating content stored on a portable storage medium
US6457011B1 (en) * | 1999-07-23 | 2002-09-24 | Microsoft Corporation | Method of updating a shared database in a computer network
US20030028737A1 (en) | 1999-09-30 | 2003-02-06 | Fujitsu Limited | Copying method between logical disks, disk-storage system and its storage medium
US6408298B1 (en) * | 1999-12-15 | 2002-06-18 | Microsoft Corporation | Methods and systems for copying and moving across virtual namespaces
US6934723B2 (en) * | 1999-12-23 | 2005-08-23 | International Business Machines Corporation | Method for file system replication with broadcasting and XDSM
US6678700B1 (en) * | 2000-04-27 | 2004-01-13 | General Atomics | System of and method for transparent management of data objects in containers across distributed heterogenous resources
US7475199B1 (en) * | 2000-10-19 | 2009-01-06 | Emc Corporation | Scalable network file system
US7310644B2 (en) * | 2001-06-06 | 2007-12-18 | Microsoft Corporation | Locating potentially identical objects across multiple computers
US7032003B1 (en) * | 2001-08-13 | 2006-04-18 | Union Gold Holdings, Ltd. | Hybrid replication scheme with data and actions for wireless devices
US7290017B1 (en) * | 2001-09-20 | 2007-10-30 | Emc Corporation | System and method for management of data replication
US7054910B1 (en) * | 2001-12-20 | 2006-05-30 | Emc Corporation | Data replication facility for distributed computing environments
US6931410B2 (en) * | 2002-01-11 | 2005-08-16 | International Business Machines Corporation | Method, apparatus, and program for separate representations of file system locations from referring file systems
US7080102B2 (en) * | 2002-03-25 | 2006-07-18 | Emc Corporation | Method and system for migrating data while maintaining hard links
US6898609B2 (en) * | 2002-05-10 | 2005-05-24 | Douglas W. Kerwin | Database scattering system
US6925541B2 (en) * | 2002-06-12 | 2005-08-02 | Hitachi, Ltd. | Method and apparatus for managing replication volumes
US7191298B2 (en) * | 2002-08-02 | 2007-03-13 | International Business Machines Corporation | Flexible system and method for mirroring data
US6957221B1 (en) * | 2002-09-05 | 2005-10-18 | Unisys Corporation | Method for capturing a physically consistent mirrored snapshot of an online database from a remote database backup system
US7370025B1 (en) * | 2002-12-17 | 2008-05-06 | Symantec Operating Corporation | System and method for providing access to replicated data
US20040205088A1 (en) * | 2003-04-14 | 2004-10-14 | Novell, Inc. | Method and apparatus for moving data between storage devices
US7349913B2 (en) * | 2003-08-21 | 2008-03-25 | Microsoft Corporation | Storage platform for organizing, searching, and sharing data
US7389393B1 (en) * | 2004-10-21 | 2008-06-17 | Symantec Operating Corporation | System and method for write forwarding in a storage environment employing distributed virtualization
US20060136443A1 (en) * | 2004-12-16 | 2006-06-22 | International Business Machines Corporation | Method and apparatus for initializing data propagation execution for large database replication
US20070192551A1 (en) * | 2006-02-14 | 2007-08-16 | Junichi Hara | Method for mirroring data between clustered NAS systems

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Campbell, Richard, "Managing AFS, The Andrew File System," Prentice Hall PTR, Upper Saddle River, New Jersey 07458, http://www.phptr.com, 1998, pp. 103-106.
Cordrey, et al., "Moving Large Filesystems On-Line, Including Exiting HSM Filesystems," 1999 LISA XIII, Seattle, WA, Nov. 7-12, 1999.
Zayas, Edward R. "AFS-3 Programmer's Reference: Architectural Overview", Version 1.0 of 2, Sep. 1991, FS-00-D160, p. 14.

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US20080270697A1 (en) * | 2007-04-27 | 2008-10-30 | Hitachi, Ltd. | Storage system and information transfer method for the same
US8046392B2 (en) * | 2007-04-27 | 2011-10-25 | Hitachi, Ltd. | Storage system and information transfer method for the same
US20110029587A1 (en) * | 2008-03-31 | 2011-02-03 | Fujii John M | Updating Retrieval Codes In Response To File Transfers
US20130132327A1 (en) * | 2011-11-23 | 2013-05-23 | Tata Consultancy Services Limited | Self configuring knowledge base representation
US9104966B2 (en) * | 2011-11-23 | 2015-08-11 | Tata Consultancy Services Limited | Self configuring knowledge base representation
US20140244606A1 (en) * | 2013-01-18 | 2014-08-28 | Tencent Technology (Shenzhen) Company Limited | Method, apparatus and system for storing, reading the directory index
US9939272B1 (en) * | 2017-01-06 | 2018-04-10 | TCL Research America Inc. | Method and system for building personalized knowledge base of semantic image segmentation via a selective random field approach

Also Published As

Publication number | Publication date
US20090119344A9 (en) | 2009-05-07
US20080104132A1 (en) | 2008-05-01

Similar Documents

Publication | Title
US7805401B2 (en) | Method and apparatus for splitting a replicated volume
US8046555B2 (en) | Method of mirroring data between clustered NAS systems
US8572136B2 (en) | Method and system for synchronizing a virtual file system at a computing device with a storage device
US10209893B2 (en) | Massively scalable object storage for storing object replicas
US8700573B2 (en) | File storage service system, file management device, file management method, ID denotative NAS server and file reading method
US6311213B2 (en) | System and method for server-to-server data storage in a network environment
US6922761B2 (en) | Method and system for migrating data
US8510267B2 (en) | Synchronization of structured information repositories
US8176008B2 (en) | Apparatus and method for replicating data in file system
EP1325409B1 (en) | A shared file system having a token-ring style protocol for managing meta-data
CN101272313B (en) | Intermediate device for achieving virtualization of file level, file server system and relay method
US20030236850A1 (en) | Storage system for content distribution
US9122397B2 (en) | Exposing storage resources with differing capabilities
US20070143286A1 (en) | File management method in file system and metadata server therefor
EP1480130B1 (en) | Method and apparatus for moving data between storage devices
US8332351B2 (en) | Method and system for preserving files with multiple links during shadow migration
US7080102B2 (en) | Method and system for migrating data while maintaining hard links
US9727588B1 (en) | Applying XAM processes
US8667034B1 (en) | System and method for preserving symbolic links by a storage virtualization system
US20060129616A1 (en) | System and method for synchronizing computer files between a local computer and a remote server
US6952699B2 (en) | Method and system for migrating data while maintaining access to data with use of the same pathname
KR20220052811A (en) | Apparatus and method for storage management for sharing large-scale data

Legal Events

Date | Code | Title | Description

AS | Assignment
Owner name: NOVELL, INC., UTAH
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:TONER, STEPHEN G.;REEL/FRAME:018462/0931
Effective date: 20061030

STCF | Information on status: patent grant
Free format text: PATENTED CASE

AS | Assignment
Owner name: ORACLE INTERNATIONAL CORPORATION, CALIFORNIA
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:CPTN HOLDINGS LLC;REEL/FRAME:027147/0396
Effective date: 20110909

Owner name: CPTN HOLDINGS LLC, WASHINGTON
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:NOVELL, INC.;REEL/FRAME:027147/0151
Effective date: 20110427

FPAY | Fee payment
Year of fee payment: 4

MAFP | Maintenance fee payment
Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552)
Year of fee payment: 8

MAFP | Maintenance fee payment
Free format text: PAYMENT OF MAINTENANCE FEE, 12TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1553); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY
Year of fee payment: 12

