The present application is a continuation of application Ser. No. 10/766,022, filed Jan. 29, 2004, the contents of which are incorporated herein by reference.
BACKGROUND OF THE INVENTION This invention relates to the remote copy of data between two storage systems that are situated at a geographic distance from, and coupled to, each other. When the data of one storage system is updated, the updated contents are transferred, or remotely copied, to the other storage system so that both systems have the same data. More specifically, this invention relates to a technique for effecting the copying of data by a remote copy function in a file system.
Methods for effecting remote copy of data between storage systems are known (see, for example, U.S. Pat. No. 6,442,551 and Japanese Unexamined Patent Publication No. 2003-76592). According to these methods, when the data of a disk drive at a certain location (a local site) is updated, the updated contents are transferred to a disk drive at another location (a remote site) so that the two disk drives have the same data.
According to the method disclosed in U.S. Pat. No. 6,442,551 and Japanese Unexamined Patent Publication No. 2003-76592, the storage system at a remote site is used as a standby system; i.e., when the local site becomes inaccessible, the storage system at the remote site is used as a file system.
SUMMARY OF THE INVENTION The data stored in a storage system at a remote site is inaccessible unless fail-over (the handing over of duties from the local site to the remote site) takes place due to trouble at the local site, or data transfer between the local and remote sites is stopped (execution of a split or cancellation of pairing). U.S. Pat. No. 6,442,551 discloses a system wherein two or more disk drives, serving as a mirror, store the same data and are accessible only after the mirror is canceled. According to the system disclosed in Japanese Unexamined Patent Publication No. 2003-76592, pair volumes are established between storage devices with the function of remote-copy, and one upper layer device possesses the pair volumes exclusively and rejects update requests from another upper layer device. Thus, the pair volumes are recognized as one volume by the storage systems.
The reason why a “split” is necessary, as described in U.S. Pat. No. 6,442,551, is that, if the disk drive is mounted at the remote site while the data transfer between the local site and the remote site continues, the mounted disk drive becomes inaccessible because of the problems indicated below.
The first problem is as follows. When the user data of the local disk is transferred to the remote disk, the local file system caches metadata (file-management information, to be discussed later in more detail). While the file system is in the process of journaling, this metadata has not yet been written into the storage device at the local site; therefore, under these circumstances, the contents of the update at the local site are not fully reflected at the remote site.
The second problem is as follows. The file system at the remote site has its own cache memory. When the contents of the disk drive at the remote site are updated, the contents of the cache memory at the remote site are not updated accordingly. If the cache memory of the file system at the remote site still holds pre-update data, the file system uses that pre-update data, with the result that the pre-update file data is referred to instead of the latest file data.
In light of the foregoing problems, a storage system is provided in accordance with the present invention wherein, when the data of a file system at a local site is updated, the updated contents are sent to a file system at a remote site in such a way that the latest file data can be referred to at the remote site.
This storage system comprises (i) a disk device, (ii) a file server, and (iii) interfaces for sending and receiving data to and from the disk devices of other storage systems through communication links. The disk device includes at least one disk drive to store data, a disk-control unit to control the writing and reading of data into and from the disk drive or drives, and a disk cache for transmitting and receiving data to and from the disk drive or drives. The file server includes a CPU for performing various kinds of processing, a main memory to store programs and data for the CPU, and a network interface to be connected to clients through a network. The main memory includes a file system-processing unit and a file-system cache. The file system-processing unit carries out various kinds of processing of the file system, which manages the areas of the disk drive or drives, so that the files are correlated with the data locations in the disk drive or drives. The file-system cache is a buffer to be used by the file system.
The disk-control unit at a remote site receives the updated contents and historical information about management of a file in the disk device at a local site through a communication link and stores the updated contents and the historical information in the disk device at the remote site. The disk-control unit at the remote site refers to the history of the file-management information in the disk device at the remote site and updates the information in the file-system cache at the remote site in accordance with the update of the file at the local site.
When a client issues a read request at the remote site, the disk-control unit at the remote site refers to the file-management information in the file-system cache at the remote site and makes it possible for the updated contents of the file to be transferred to the client.
BRIEF DESCRIPTION OF THE DRAWINGS FIG. 1 is a block diagram of the storage system in accordance with a preferred embodiment of the present invention;
FIG. 2 is a block diagram which shows a plurality of storage systems of the type shown in FIG. 1, which are coupled together for effecting remote copy of data;
FIG. 3 is a diagram which illustrates the process of data transfer between two storage systems of the type shown in FIG. 1, one situated at a local site and the other at a remote site, and processing of a reference to file data at the remote site;
FIG. 4 is a diagram which illustrates an example of the configuration of the file system-processing unit of the file server of the storage system of FIG. 1;
FIG. 5 is a flowchart of the processing of a client's write request as performed by the file system-processing unit of the storage system of FIG. 1 at a local site;
FIG. 6 is a flowchart of the processing of remote copy of data by the disk-control unit of the storage system, when data is written into the disk device of the storage system of FIG. 1 at a local site;
FIG. 7 is a flowchart of the processing by the file system-processing unit of the storage system of FIG. 1 at the corresponding remote site when a file is updated at a local site;
FIG. 8 is a diagram which illustrates an example of the configuration of data for remote copy to be transferred to the remote site; and
FIG. 9 is a diagram which illustrates information to be stored in the journal-log areas in the disk drives of the disk device of the storage system of FIG. 1.
DESCRIPTION OF THE PREFERRED EMBODIMENTS Referring to the drawings, a preferred embodiment of the storage system of the present invention will be described in detail. However, this invention is not limited to the embodiments described below.
In FIG. 1, the numeral 1 indicates a storage system, which is connected to a network, and comprises (i) a file server 2 which mainly manages files, (ii) a disk, or storage, device 3 which processes the file server's requests for input and output of data and stores the file data, (iii) a remote-link initiator (RI) 4 which serves as an interface to mainly send data to another storage system 1, and (iv) a remote-link target (RT) 5 which serves as an interface to receive data from another storage system 1.
Although the file server 2 is included in the storage system 1 in FIG. 1, the former may be placed outside the latter and connected to the latter through an interface, such as a fiber optic channel.
The file server 2 is a computer comprising a network interface (NI) 12 for effecting connection to the network, a CPU 11 to carry out various kinds of processing, and a main memory 13 for storing programs and data for use by the CPU 11. The main memory 13 stores an OS 16 for use by the CPU 11 and comprises a file system-processing unit (FS-processing unit) 17 to carry out various kinds of processing of the file system and a file-system cache (FS cache) 18, or a buffer to be used by the file system. The FS cache 18 temporarily stores data read from the disk device 3 and data inputted by a client 6 through the network. In other words, the FS cache 18 stores the contents of a file (user data), as well as metadata about the file, which constitutes data for file management (for example, the file name, file size, data-storage location, and dates and times of update of the file), a journal log which contains a history of the update of the metadata (time-series historical information about metadata), and so on.
The file system described above is designed to allow access to data as a file by managing the disks. There are two types of access: write and read. In the case of writing, the file system determines which area of which disk the data should be written into and writes the data in that area. If the remaining space of the area allocated to the file is too small, another area is allocated to the file, and data is written into the file in that area. In the case of reading, the file system finds which area of which disk the contents of the file are stored in and reads the data from that area. Thus, allowing access to data as a file requires maintaining a correspondence between the contents of files and their locations on the disks.
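The mapping just described can be illustrated with a minimal sketch. All names and the fixed block size below are illustrative assumptions, not the patent's structures; real file systems use far more elaborate allocation schemes:

```python
# Minimal sketch of a file system mapping file contents to disk areas.
# Names and the block size are illustrative assumptions only.

BLOCK_SIZE = 4


class ToyFileSystem:
    def __init__(self, num_blocks):
        self.disk = [b""] * num_blocks          # simulated disk blocks
        self.free = list(range(num_blocks))     # free-block list
        self.metadata = {}                      # file name -> list of block numbers

    def write(self, name, data):
        """Allocate areas (blocks) for the file and record them in the metadata."""
        blocks = []
        for i in range(0, len(data), BLOCK_SIZE):
            blk = self.free.pop(0)              # allocate another area when needed
            self.disk[blk] = data[i:i + BLOCK_SIZE]
            blocks.append(blk)
        self.metadata[name] = blocks

    def read(self, name):
        """Look up which areas hold the file's contents and read them back."""
        return b"".join(self.disk[blk] for blk in self.metadata[name])


fs = ToyFileSystem(num_blocks=8)
fs.write("a.txt", b"hello world")
print(fs.read("a.txt"))
```

The `metadata` dictionary plays the role of the file-management information discussed in the text: without it, the blocks on disk cannot be reassembled into a file.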
The disk device 3 comprises (i) disk drives 23 which include magnetic media and which store data, such as the contents of files, (ii) a disk-control unit 21 which controls the disk drives 23, and (iii) a disk cache 22 which is controlled by the disk-control unit 21 and is used for transmitting and receiving data to and from the disk drives 23. A plurality of physical disk drives, such as a disk array of the RAID (Redundant Arrays of Inexpensive Disks) type, may be used instead of a single physical disk drive.
The disk cache 22 comprises a nonvolatile memory with a battery so that the data stored in it will not be lost even if the power supply is disturbed. According to the input and output of data from the file server 2, data-storing areas (cache entries) are allocated in the disk cache 22, and the data received from the file server 2, as well as the data read from the disk drives 23, are temporarily stored in such areas. In addition, the disk cache 22 carries out the preparation of data for remote copy according to the writing from the file server 2 and the temporary storage of data for remote copy received from another storage system 1 through the remote-link target (RT) 5.
With the above configuration, access to a certain file in the disk device 3 is accomplished by reading the file's metadata, which is file-management information and includes the data-storing location, from the disk device 3 into the disk cache 22 and referring to the metadata.
In the integrated system of FIG. 2, a storage system 1 receives a request for processing from a client 6 that is connected through the network 7 to a network interface (NI) 12. The remote-link initiator (RI) 4 of the storage system 1 is connected to the remote-link target (RT) 5 of another storage system 1 located at a geographic distance through a communication link 8, such as a dedicated line, by way of a fiber channel. As shown in FIG. 2, a storage system 1 may be provided with a plurality of remote-link initiators (RI) 4, and it may be coupled to a plurality of other storage systems 1; or, each of the storage systems 1 may be provided with a remote-link initiator (RI) 4 and a remote-link target (RT) 5, and the storage systems 1 may be connected in series. Each disk drive 23 (hereinafter "initiator disk drive 23") of the disk device 3 of a storage system 1 with a remote-link initiator (RI) 4 is connected to a disk drive 23 (hereinafter "target disk drive 23") in the disk device 3 of another storage system 1 with a remote-link target (RT) 5, both systems being mutually connected so as to constitute a pair. When data is entered into an initiator disk drive 23, the same data is transferred to its counterpart, a target disk drive 23, so that the two disk drives 23 in a pair have the same data.
Remote copy may be carried out by a synchronous method or an asynchronous method. According to the synchronous method, the entry of update data into adisk drive23 at a local site and the transfer of the same data to adisk drive23 at a remote site take place simultaneously. The update processing at the local site is finished when the transfer of the update data to the remote site is completed. According to the asynchronous method, the update processing at a local site is finished without waiting for the transfer of the update data to a remote site to be completed. In either case, update data is transferred to the remote site and the remote site is updated in the order of update at the local site.
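The difference between the two methods lies only in when the local update is reported as finished. This can be sketched as follows (a simplified model with hypothetical names; the dictionary stands in for a disk drive and the function call for the communication link):

```python
import queue

# Simplified model of synchronous vs. asynchronous remote copy.
# Returning from the write function means the local update is "finished".

remote_disk = {}


def transfer_to_remote(addr, data):
    remote_disk[addr] = data  # stands in for sending over the communication link


def write_sync(local_disk, addr, data):
    """Synchronous: the update finishes only after the remote transfer completes."""
    local_disk[addr] = data
    transfer_to_remote(addr, data)       # wait for the remote copy
    return "done"                        # completion reported after the transfer


pending = queue.Queue()                  # updates queued in local-update order


def write_async(local_disk, addr, data):
    """Asynchronous: the update finishes without waiting for the transfer."""
    local_disk[addr] = data
    pending.put((addr, data))            # transfer happens later, in order
    return "done"                        # completion reported immediately


def drain_pending():
    """Transfer queued updates to the remote site in the order of update."""
    while not pending.empty():
        addr, data = pending.get()
        transfer_to_remote(addr, data)


local = {}
write_sync(local, 0, b"A")               # the remote site already has block 0 here
write_async(local, 1, b"B")              # the remote site does not have block 1 yet
drain_pending()                          # now it does
```

In either variant the FIFO queue preserves the update order, matching the requirement that the remote site is updated in the order of update at the local site.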
Referring to FIG. 3, an outline of data transfer between a local site and a remote site and reference to the latest file data at the remote site will now be described. In FIG. 3, two storage systems A and B, which are located at a geographic distance from each other, are connected through a remote-link initiator RI and a remote-link target RT. The data flow will be described on the assumption that a client 37, who is connected to the storage system A through a network, writes data into the storage system A, and then another client 38, who is connected to the storage system B through a network, reads data from the storage system B.
The client 37 at the local site issues a write request to the file server of the storage system A, and update data is transferred from the client 37 to the storage system A (Step S1). Then, the FS-processing unit in the storage system A updates the metadata 40, the user data 41, and the journal log 42 in the FS cache 33 (Step S2) at the local site.
The updated user data 43 and the updated journal log 44 of the FS cache 33 are synchronously written into the disk cache 35 in the storage device (Step S3). Then, the remote-copy unit prepares data for remote copy and transfers the data to the storage system B.
The data transferred from the storage system A is reflected in the disk cache 36 of the storage system B, and the user data 45 and the journal log 46 in the disk cache 36 of the storage system B are updated so that their contents are the same as those of the user data 43 and the journal log 44 of the storage system A (Step S4). When the journal log 46 in the disk cache 36 is updated, a metadata-update monitor detects the update (refer to the following explanation with reference to FIG. 4), and a metadata-updating unit reads the journal log 46 into the FS cache 34 (Step S5). The metadata-updating unit updates the metadata 47 in the FS cache 34 by using the journal log 49 thus read out (Step S6). When the metadata is updated, an FS-cache purger discards the user data 48 in the FS cache 34 corresponding to the pre-update metadata.
When a client 38 at the remote site issues a read request to the storage system B, the user data 45 is read from the disk device based on the updated metadata and stored into the FS cache 34 (Step S7). Then, the user data 48 is transferred to the client 38 as a response to the read request from the client 38 (Step S8). Thus, the client 38 at the remote site can refer to the contents of the file written by the client 37 at the local site.
Again, referring to FIG. 3, the outline of a data transfer between a local site and a remote site and a reference to the latest file data at the remote site will be described. Access to a file in the disk device of a storage system is made by reading metadata, or data for file management, into the FS cache, referring to the metadata thus read out, finding the location of the data of the file, and making access to the file. If (i) a client makes access to the storage system at the remote site after user data and a journal log, or a history of file-management information, have been transferred from a storage system at a local site to a storage system at a remote site, and (ii) the metadata for old user data still remains in the FS cache of the storage system at the remote site, the FS (File System)-processing unit of the storage system at the remote site will refer to the old metadata and will fail to make access to the new user data that has been transferred from the local site (because the old metadata includes the storage location of the old user data, access to the new user data cannot be accomplished by referring to the old metadata).
To solve the above-described problem, new metadata is stored in the FS cache of the storage system at the remote site by using the journal log or the history of file-management information which, together with the user data, was sent from the storage system at the local site. If the old user data still remains in the FS cache at the remote site, the old user data will be read from the FS cache in response to a client's read request; therefore, the old user data in the FS cache at the remote site must be discarded. Thus, when a client at the remote site issues a read request, reference is made to new metadata in the FS cache, whereby access is made to the file of new user data.
Now the functions and tasks of each unit of each storage system during data transfer from the local site to the remote site will be described. FIG. 8 shows an example of the structure of remote-copy data to be prepared at the local site and transferred to the remote site. The sequential number entry 81 is the serial number of the update at the local site. The data of the storage system at the remote site is updated in the order of the sequential numbers to assure that the update order at the remote site is the same as the update order at the local site. The data-storage location entry 82 contains information to identify the target disk drive at the remote site and information about the data-storage location in the target disk drive. The data 83 represents the contents of the update at the local site, which are to be stored in the data-storage location 82 at the remote site.
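The three-field record of FIG. 8 might be modeled as below. Field names and the (drive, offset) location format are hypothetical; the patent does not specify an on-wire layout. Ordering the records by sequential number before applying them reproduces the local update order at the remote site:

```python
import heapq
from dataclasses import dataclass, field


@dataclass(order=True)
class RemoteCopyData:
    sequential_number: int                           # serial number of the update (entry 81)
    storage_location: tuple = field(compare=False)   # (target drive, offset) (entry 82)
    data: bytes = field(compare=False)               # updated contents (entry 83)


# Two records, with the later update listed first; a min-heap keyed on the
# sequential number replays them in the order of update at the local site.
arrivals = [
    RemoteCopyData(2, ("drive0", 8), b"second"),
    RemoteCopyData(1, ("drive0", 0), b"first"),
]
heapq.heapify(arrivals)

remote_drive = {}
while arrivals:
    rec = heapq.heappop(arrivals)
    remote_drive[rec.storage_location] = rec.data
```

Only `sequential_number` participates in ordering (`compare=False` excludes the other fields), so the heap pops records strictly by update serial number.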
FIG. 5 shows the flow of the processing carried out by the FS (File System)-processing unit 17 of a storage system 1 in response to a client's write request. The FS-processing unit 17 receives a write request from a client 6 who is connected to a storage system 1 through a network 7 (Step 101). In Step 102, it is checked to determine whether there is metadata of the file to be processed in the FS cache 18. If not, the process goes to Step 103 to read the metadata from the disk device 3 into the FS cache 18.
In order for the FS-processing unit 17 to process a file, the necessary data (user data and metadata) have to be in the FS cache 18. If not, the FS-processing unit 17 reads the necessary data from the disk device 3 into the FS cache 18, as described above. The data thus read into the FS cache 18 is not discarded after the intended processing is finished, but is kept in the cache 18. Thus, if necessary, any of the data in the FS cache 18 can be used again without reading the same from the disk device 3 into the cache 18. In this way, the efficiency of processing is raised.
After reading the necessary metadata from the disk device 3 into the FS cache 18 in Step 103, the FS-processing unit 17 updates the metadata in the FS cache 18 in Step 104. At the same time, the FS-processing unit 17 prepares a journal log corresponding to the contents of the update and writes the journal log into the disk device 3 (Step 105).
A journal log consists of log information (information about the update history of metadata) to be stored in a journal-log area 90 (see FIG. 9) of a disk drive 23. The contents of the update of metadata by the FS-processing unit 17 are recorded as log information in the order of update. The recording of a new journal log is started at the position indicated by an end pointer 92, and the position indicated by the end pointer 92 is moved to a position next to the recorded location. A start pointer 91 indicates the start position of a journal log, including metadata whose update is not yet completed in the disk device 3. The FS-processing unit 17 writes the metadata of the FS cache 18 into the disk device 3 as the need arises and moves the position of the start pointer 91 ahead. In other words, once the metadata of the FS cache 18 has been timely written into a disk drive, the position of the start pointer can be moved ahead. After reaching the end of the log-data area 93 in the journal-log area 90, the positions indicated by the start and end pointers 91 and 92 are moved back to the head. With this wraparound movement, they always indicate positions within the log-data area 93.
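The wraparound log-data area with its two pointers can be sketched as a ring buffer. The names are hypothetical, and one list slot stands in for one on-disk log record; the sketch ignores the case of the area filling up:

```python
class JournalLogArea:
    """Wraparound log-data area with a start pointer and an end pointer.

    Entries between start and end describe metadata updates that have not
    yet been written to the disk device.
    """

    def __init__(self, size):
        self.slots = [None] * size
        self.start = 0   # oldest entry whose metadata is not yet on disk
        self.end = 0     # where the next journal log will be recorded

    def append(self, record):
        """Record a new journal log at the end pointer, then advance it."""
        self.slots[self.end] = record
        self.end = (self.end + 1) % len(self.slots)   # wraparound movement

    def release_one(self):
        """The oldest entry's metadata reached disk; advance the start pointer."""
        self.start = (self.start + 1) % len(self.slots)

    def pending(self):
        """Entries whose corresponding metadata is still only in the FS cache."""
        out, i = [], self.start
        while i != self.end:
            out.append(self.slots[i])
            i = (i + 1) % len(self.slots)
        return out


log = JournalLogArea(size=4)
log.append("update inode 7")
log.append("update inode 9")
log.release_one()            # inode 7's metadata was written to disk
print(log.pending())
```

The modulo arithmetic in `append` and `release_one` is the wraparound movement described above: both pointers always indicate positions within the log-data area.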
The region of the log-data area 93 between the positions indicated by the start and end pointers 91 and 92 holds the journal log corresponding to metadata which has not yet been stored in the disk device 3. In other words, once metadata reflecting the contents of an update is stored into a disk drive, it is unnecessary for the start and end pointers to delimit the journal log corresponding to that metadata.
By writing the journal log into the disk device 3, it becomes unnecessary for the FS-processing unit 17 to write the updated contents of metadata into the disk device 3 before finishing the processing for the client 6. This is because the data can be restored based on the journal log if the data in the FS cache 18 is discarded due to trouble.
If trouble, such as a power failure, occurs, the updated contents of metadata which are in the FS cache 18 but have not yet been written into the disk device 3 are lost. After restoration of the power supply, the metadata read from the disk device 3 may be found to be out of date. Therefore, the FS-processing unit 17 reads the journal log from the disk device 3 and updates the contents of the metadata by using the contents of the journal log in the area defined by the start and end pointers 91 and 92. Thus, the metadata in the FS cache 18 is restored to the latest pre-trouble state.
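The recovery step amounts to replaying the pending journal entries over the stale on-disk metadata. The record format below is a hypothetical simplification (each entry names a file and its new metadata):

```python
def recover_metadata(on_disk_metadata, journal_entries):
    """Rebuild the latest metadata after a power failure.

    on_disk_metadata: possibly stale metadata read from the disk device.
    journal_entries:  entries between the start and end pointers, in the
                      order of update; each is (file_name, new_metadata).
    Returns the metadata restored to the latest pre-trouble state.
    """
    restored = dict(on_disk_metadata)
    for file_name, new_metadata in journal_entries:
        restored[file_name] = new_metadata   # replay each update in order
    return restored


# Stale on-disk state: the size update for "a.txt" never reached the disk,
# and "b.txt" was created only in the (lost) FS cache.
on_disk = {"a.txt": {"size": 10}}
journal = [("a.txt", {"size": 22}), ("b.txt", {"size": 5})]
restored = recover_metadata(on_disk, journal)
print(restored)
```

Because entries are replayed in the order of update, later journal entries for the same file correctly overwrite earlier ones.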
After writing the journal log into the disk device 3 in Step 105 of FIG. 5, the FS-processing unit 17 allocates an area in the FS cache 18 as the need arises and reads the user data from the disk device 3 into the FS cache 18. Then, the FS-processing unit 17 updates the user data received from the client 6 in the FS cache 18 (Step 106), writes the updated user data into the disk device 3 (Step 107), and informs the client 6 of the completion of the update processing (Step 108).
As described above, in response to a client's write request, the FS-processing unit 17 updates the metadata, prepares a journal log, and updates the user data in the FS cache 18. The journal log thus prepared and the user data thus updated are written into the disk device 3 before the client is informed of the completion of the update processing. This is called "synchronous writing." On the other hand, the updated metadata in the FS cache 18 may be written into the disk device 3, if necessary, but independently of the processing of the client's write request ("asynchronous writing").
The flowchart of FIG. 5 represents a case in which the user data is written into the disk device 3 (Step 107 in FIG. 5) synchronously with the client's write request. However, in the case of some file systems, the user data in the FS cache 18 is updated in response to a client's write request, and the updated user data is written into the disk device 3 only when the FS-processing unit 17 receives a commit request from the client 6. In such a case, the updated user data is written into the disk device 3 asynchronously with the client's write request and synchronously with the client's commit request.
Now, the process of remote copy by the disk-control unit 21 will be described. FIG. 6 is a flowchart of the process of remote copy by the disk-control unit 21. The disk-control unit 21 receives a write request from the FS-processing unit 17 in Step 111 and writes the data into the disk cache 22 in Step 112. When the data has been written, the remote-copy unit 26 of the disk-control unit 21 prepares data for remote copy in the disk cache 22 in Step 113 and transfers the data to another storage system 1 at a remote site through the remote-link initiator (RI) 4 and the remote-link target (RT) 5 in Step 114. The remote-copy unit 26 receives an acknowledgement from the storage system at the remote site in Step 115 and informs the FS-processing unit 17 of the completion of the processing of the write request in Step 116.
The storage system at the remote site receives the remote-copy data through its remote-link target (RT) 5 and reflects in itself the update data included in the remote-copy data. When the file server 2 of the storage system 1 at the remote site receives a read request (a client, who is coupled to the storage system 1 at the remote site, issues a read request through the file server 2), the updated data is sent to the file server 2. The reflection of update data at the storage system at the remote site is carried out in the disk cache 22. The disk-control unit 21 calculates a storage location from the data-storage location 82 in the remote-copy data, which is not received through the file server 2, but is received through the remote-link target (RT) 5. A cache entry for the storage location is allocated in the disk cache 22, and the new data is written there. In this way, the contents of the remote-copy data are reflected one after another in the disk device 3 of the storage system at the remote site, so that the user data in the storage system at the remote site is the same as the user data in the storage system at the local site.
As described above, the user data and the metadata received through the remote-link target (RT) 5 and written into the disk device 3 are not passed through the file server 2; therefore, the data of the FS cache 18 of the file server of the storage system at the remote site has to be updated so that the client at the remote site can refer to the updated user data. The file servers 2 of the storage systems at the local and remote sites have respective FS caches 18, which hold respective data. In the case of conventional storage systems, therefore, the FS-processing unit 17 at the remote site will refer to the old data before the update, thereby failing to process the read request of the client 6 correctly.
To solve the above-described problem, the FS-processing unit 17 of the storage system 1 according to the present invention comprises a metadata-update monitor 51, a metadata-updating unit 52, and an FS-cache purger 53, as shown in FIG. 4.
The metadata-update monitor 51 detects an update of files in the disk device 3 at the remote site. The detection of an update can be made by, for example, monitoring the writing of data into the journal-log area in the disk device 3. As shown in FIG. 9, the journal log uses a certain wraparound log-data area 93; accordingly, there is an end pointer 92 which indicates where to write the journal log next. The update of a file, or the update of metadata, can be detected by reading the end pointer 92 regularly and detecting a change in its value.
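Detection by polling the end pointer can be sketched as follows. The names are hypothetical, and the callback stands in for reading the pointer from the on-disk journal-log area:

```python
class MetadataUpdateMonitor:
    """Detects metadata updates by regularly reading the journal end pointer."""

    def __init__(self, read_end_pointer):
        self.read_end_pointer = read_end_pointer   # reads the pointer from disk
        self.last_seen = read_end_pointer()

    def poll(self):
        """Return True when the end pointer moved since the last poll."""
        current = self.read_end_pointer()
        updated = current != self.last_seen
        self.last_seen = current
        return updated


# Simulated on-disk end pointer.
state = {"end": 0}
monitor = MetadataUpdateMonitor(lambda: state["end"])

first = monitor.poll()   # no journal write yet
state["end"] = 3         # a journal log was written via remote copy
second = monitor.poll()  # change in the pointer value is detected
```

Because the end pointer wraps around within the log-data area, comparing the raw value still detects movement; a full implementation would also have to handle the pointer returning to a previously seen position between two polls.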
When the metadata-update monitor 51 detects the update of a file, or the update of metadata in the disk device 3, the metadata-updating unit 52 updates the metadata of the file in the FS cache 18 in accordance with the update in the disk device 3. As shown by the flow of processing in FIG. 5, the update of metadata in the disk device 3 is not carried out synchronously with the write request of the client 6. Therefore, if the metadata in the disk device 3 were read at the remote site, the old metadata before the update would be read out. Accordingly, the metadata-updating unit 52 updates the metadata by using a journal log. The contents of the update of metadata at the local site are recorded in the journal log; therefore, it is possible to update the contents of the metadata at the remote site by using such a journal log.
The FS-cache purger 53 discards the user data in the FS cache 18. A file corresponding to the metadata updated by the metadata-updating unit 52 is a file to which data was written at the local site, and the user data of that file in the FS cache 18 may be of the value before the update. The FS-cache purger 53 discards the pre-update data in the FS cache 18, which makes it possible, upon a request for reference by the client 6 at the remote site, to read the updated user data from the disk device 3 into the FS cache 18 and refer to the new user data.
FIG. 7 shows the flow of processing executed, when a file is updated, by the above three components (the metadata-update monitor, the metadata-updating unit, and the FS-cache purger) in the FS-processing unit 17 at the remote site to reflect the contents of the FS cache 18 correctly. First, in Step 121, the metadata-update monitor 51 monitors the update of metadata. When an update of the metadata is detected, the process advances from Step 122 to Step 123. In Step 123, in order for the updated contents of the metadata to be reflected in the FS cache 18, the metadata-updating unit 52 reads the updated journal log. Then, in Step 124, the metadata-updating unit 52 updates the contents of the metadata according to the contents stored in the journal log. Further, in Step 125, the FS-cache purger 53 identifies the user-data area of the updated file from the updated metadata. In Step 126, when a cache entry corresponding to that area exists in the FS cache 18, the cache entry is discarded.
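Steps 123 through 126 can be combined into one sketch of the remote-site update path. All names are hypothetical, and each journal entry is simplified to a triple of file name, new metadata, and the user-data area it affects:

```python
def reflect_remote_update(fs_cache_metadata, fs_cache_user_data, journal_entries):
    """Apply Steps 123-126: update cached metadata and purge stale user data.

    fs_cache_metadata:  file name -> metadata dict held in the remote FS cache.
    fs_cache_user_data: user-data area -> cached contents in the remote FS cache.
    journal_entries:    (file_name, new_metadata, user_data_area) records read
                        from the updated journal log (Step 123).
    """
    for file_name, new_metadata, user_data_area in journal_entries:
        # Step 124: update the metadata in the FS cache from the journal log.
        fs_cache_metadata[file_name] = new_metadata
        # Steps 125-126: discard the cache entry for the updated file's area,
        # so the next read fetches the new user data from the disk device.
        fs_cache_user_data.pop(user_data_area, None)


metadata = {"a.txt": {"size": 10}}
user_data = {("drive0", 0): b"old contents"}
journal = [("a.txt", {"size": 12}, ("drive0", 0))]
reflect_remote_update(metadata, user_data, journal)
```

After the call, the cached metadata reflects the local update and the stale user-data entry is gone, so a read request at the remote site falls through to the disk device and picks up the remote-copied data.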
The metadata updated in Step 124 has to be managed as metadata which has been altered in the FS cache 18 at the remote site and has to be held in the FS cache 18. This is because the metadata has not been updated in the disk device 3 at the remote site. If the metadata in the FS cache 18 were invalidated, the old data before the update might be read from the disk device 3 and used. Further, in order to keep its data matching that of the local site, the disk device 3 at the remote site is sometimes write-protected. In such a case, the contents of the metadata updated in Step 124 cannot be written into the disk device 3 by the FS-processing unit 17 at the remote site. Therefore, the metadata is held in the FS cache 18 until the metadata is updated in the disk device 3 at the local site and is stored in the disk device 3 at the remote site.
It is possible to detect the update of the metadata in the disk device 3 by using the start pointer 91 of the journal-log area 90. While the journal data on which the update of the metadata is based is stored in the area between the positions designated by the start pointer 91 and the end pointer 92, the metadata may not yet have been stored in the disk device 3. When the position indicated by the start pointer 91 is renewed and the journal data which caused the update of the metadata falls outside the region defined by the start pointer 91 and the end pointer 92, the metadata was written into the disk device 3 at the local site before the renewal of the position indicated by the start pointer 91, and the FS cache 18 can release the metadata.
Even if the cache entry is discarded in Steps 125 and 126, when the client 6 at the remote site requests a reference before the update of the user data at the remote site, there is a possibility that the user data before the update is read into the FS cache 18 again. In order to prevent the data before the update from being read out, it is necessary to start Steps 125 and 126 only after confirming that the update of the user data has been completed. The journal log is used to confirm the completion of the update of the user data. In this case, the FS-processing unit 17 has to write log data into the journal log indicating the completion of the update of the user data.
Further, in the case of a file system which uses a commit request, Steps 125 and 126 executed by the FS-cache purger 53 can be carried out using a journal log corresponding to the commit processing.
Also, in Step 126 of FIG. 7, the cache entry in the FS cache 18 is discarded. However, the user data remote-copied from the local site is stored in the disk cache at the remote site. When the user data of a file corresponding to the updated metadata exists in the FS cache, instead of discarding such user data, the FS-cache purger may read the user data of the file from the disk cache and store it in the FS cache.
The example of the file system processed by the FS-processing unit 17 described so far is a journaling file system using journal logs. However, the system processed by the FS-processing unit 17 is not limited to a journaling file system. In such a case, the metadata-update monitor 51 in the storage system 1 at the remote site detects an update of the metadata by monitoring the update of data in the disk drive. Several methods for detecting an update of the metadata are conceivable: for example, a method in which the remote-copy unit 26 in the disk-control unit notifies the FS-processing unit 17 by interruption, etc., and a method in which the remote-copy unit 26 writes, into another disk drive 23 in the disk device 3, the information that the update took place and the storage location of the updated data, and the FS-processing unit 17 reads them regularly to detect the update of the metadata.
In this case, the metadata-updating unit 52 only has to discard the updated metadata in the FS cache 18. In a case where the file system processed by the FS-processing unit 17 is one not using journals, the FS-processing unit 17 writes metadata into the disk device 3 synchronously with the request for writing from the client 6. Accordingly, it becomes possible to refer to the metadata after the update by discarding the data in the FS cache 18 and reading such data from the disk device 3 as needed. Further, the FS-cache purger 53 only has to discard the user data, in the FS cache 18, corresponding to the metadata discarded by the metadata-updating unit 52.
As described above, in the storage system according to the invention, the file system at the remote site comprises the update monitor which monitors file updates or metadata updates, the updating unit which updates the metadata, and the purger which discards the data in the FS cache corresponding to a file where an update took place, thereby enabling the updated contents to be reflected in the file system at the remote site in real time in accordance with the update at the local site and making it possible to refer to the latest file data at the remote site.
Therefore, with regard to the storage system where remote copy is carried out, in accordance with the update at the local site, the contents of the update are reflected in real time in the file system at the remote site and the latest file data can be referred to at the remote site.