CROSS REFERENCE TO RELATED APPLICATION This is a continuation of U.S. application Ser. No. 10/892,187, filed Jul. 16, 2004. The entirety of the contents and subject matter of all of the above is incorporated herein by reference.
CLAIM OF PRIORITY The present application claims priority from Japanese application P2004-144687 filed on May 14, 2004, the content of which is hereby incorporated by reference into this application.
BACKGROUND The present invention relates to a storage system which stores files used by computers. More specifically, the invention relates to a storage system capable of conserving resources while providing a WORM function.
When electronic data is held in a memory device such as a storage, a problem arises in that, as the retention period of the electronic data becomes longer and the amount of data increases, the required storage capacity also increases, raising costs. Therefore, more important data is recorded on a high-speed storage with a large transfer bandwidth, and less important data is recorded on a relatively low-speed storage. A high-speed storage with a large transfer bandwidth is expensive, whereas a relatively low-speed storage is inexpensive. Further, based on the date when the data is saved (i.e., the archive date), fresh data (data for which not much time has elapsed since being archived) is recorded on the high-speed storage, and old data (data for which a given duration of time has elapsed since being archived) is recorded on the low-speed storage. This type of data management method is called Data Lifecycle Management (DLCM).
One known file archive mechanism designed with long-term file retention in mind under this data lifecycle management is the Write Once Read Many (WORM) archive, which prohibits modification and erasure of files whose archive period has not yet expired. In order to apply WORM to a memory device, the storage device or its controller has to have a WORM function.
An example of a WORM function for Network Attached Storage (NAS) is a technology proposed to make any erasure, modification, or the like of a file stored under a specific directory impossible by setting the mode bit of the file to “Read Only” through a NAS server (see, for example, the following internet article:
- Network Appliance, “NetApp NearStore”, [online], <URL:http://www.netapp.com/products/nearstore/>).
SUMMARY According to the article cited above, the WORM function is obtained by setting “writable” or “unwritable” for each file at the file system level. The WORM function given in this manner to a storage device does not work when a SAN client or the like directly accesses a volume in the storage device, which allows data alteration. The cited technique also consumes a large part of the NAS server's resources, since every volume of the storage device has to be recognized by the NAS server.
It is therefore an object of the present invention to provide a storage system capable of conserving resources while providing a WORM function at the volume level of a storage device.
In a storage system according to the present invention, a controller obtains from the memory module information on one of the logical devices that is the target of a write request made by a computer and, when that logical device is set to unwritable, informs the computer of the fact. An interface references meta-information held in the disk drive to obtain a list of files that have not been accessed for a predetermined period, secures a logical device capable of storing the files on the obtained list, causes the controller to store those files in the logical device, and sets the logical device that stores the files to unwritable by updating the information stored in the memory module.
The present invention enables a storage system to obtain a WORM function at the logical device level and to conserve the resources of an interface (NAS head) and a RAID device.
BRIEF DESCRIPTION OF THE DRAWINGS FIG. 1 is a block diagram showing a configuration of a storage system according to a first embodiment of the present invention.
FIG. 2 is an explanatory diagram showing an example of a configuration of an MNT Table 2032 according to the first embodiment of the present invention.
FIG. 3 is an explanatory diagram showing an example of a configuration of a MAP Table 2042 according to the first embodiment of the present invention.
FIG. 4 is an explanatory diagram showing an example of a configuration of an LU Table 1401 according to the first embodiment of the present invention.
FIG. 5 is an explanatory diagram showing an example of a configuration of an LDEV Table 1402 according to the first embodiment of the present invention.
FIG. 6 is an explanatory diagram showing an example of a configuration of Meta info according to the first embodiment of the present invention.
FIG. 7 is a flow chart showing processing which is executed, upon an OPEN request made in a file read mode, in a NAS head 200 according to the first embodiment of the present invention.
FIG. 8 is a flow chart showing processing which is executed, upon an OPEN request made in a file write mode, in the NAS head 200 according to the first embodiment of the present invention.
FIG. 9 is a flow chart showing LU allocating processing which is executed in the NAS head 200 according to the first embodiment of the present invention.
FIG. 10 is a flow chart showing WORM file creating processing which is executed in the NAS head 200 according to the first embodiment of the present invention.
FIG. 11 is a block diagram showing a configuration of a storage system according to a modified example of the first embodiment of the present invention.
FIG. 12 is an explanatory diagram showing an example of a configuration of an LDEV Table 1402 according to a second embodiment of the present invention.
FIG. 13 is an explanatory diagram showing an example of a configuration of a MAP Table 2042 according to the second embodiment of the present invention.
FIG. 14 is a flow chart showing WORM file creating processing which is executed in a NAS head 200 according to the second embodiment of the present invention.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS Storage systems according to embodiments of the present invention will be described below with reference to the accompanying drawings.
FIG. 1 is a block diagram showing a configuration of a storage system according to a first embodiment of the present invention.
A RAID device 100 is connected to a NAS (Network Attached Storage) head 200 and to a SAN client 350 through a SAN (Storage Area Network) 150. The NAS head 200 and each NAS client 300 are connected to one another through a LAN (Local Area Network) 250.
In the RAID device 100, a controller 110, a shared memory 140, and a disk drive 130 are connected to one another through an internal network 120.
The controller 110 receives file data sent from the NAS head 200 or the SAN client 350 through the SAN 150, and sends file data of the RAID device 100 to the NAS head 200 or the SAN client 350 through the SAN 150. The controller 110 also receives a control command sent from the NAS head 200 or the SAN client 350 through the SAN 150, and carries out processing based on the received control command. The controller 110 is equipped with a control device (CPU) 1101 and a memory 1102. The memory 1102 stores an LDEV guard program 1112 and an LDEV changer program 1122. These programs are, as will be described later, executed by the CPU based on a control command received by the controller 110, thereby starting given processing. The memory 1102 comprises a memory module such as a semiconductor memory, a magnetic disk, or the like.
The disk drive 130 is composed of logical devices (LDEVs) obtained by logically dividing plural hard disks which take the RAID configuration. The disk drive 130 is connected to the internal network 120 through a disk adaptor 131. Each LDEV has an LDEV number (LDEV num) unique throughout the RAID device 100, and is registered in an LDEV Table 1402.
The disk drive 130 is divided into a meta file system (Meta FS) 1301, a data file system (Data FS) 1302, and a temporary file system (Temp FS) 1303 in keeping with the file systems provided by the NAS head 200. The Meta FS 1301 holds meta-information of every file that is stored in the disk drive 130. The Data FS 1302 holds file data. The Temp FS 1303 holds file data similarly to the Data FS 1302, but is set so that it can be mounted to and unmounted from a file system of the NAS head.
The shared memory 140 is a memory device for storing configuration information and the like of the disk drive 130. The shared memory 140 holds the one LDEV Table 1402, which covers the entire RAID device 100, and an LU Table 1401, which is provided for each controller 110. The LDEV Table 1402 holds information for managing the LDEVs of the disk drive 130 (management information). The management information includes LDEV numbers, which are unique to the respective LDEVs throughout the RAID device 100, the types and sizes of the LDEVs, and the like. The LU Table 1401 holds LU (Logical Unit) numbers (LU nums), each of which is associated with an LDEV number. The LU numbers are identifiers used by the NAS head 200 to read/write a block through the controller 110. The shared memory 140 comprises a memory module such as a semiconductor memory, a magnetic disk, or the like.
The NAS head 200 functions as an interface that provides the NAS client 300, through a file system, with the data stored in the RAID device 100.
The NAS head 200 is equipped with a control device (CPU) 2001 and a memory 2002. The memory 2002 holds a WORM file creating processing program 2012, an LU allocating processing program 2022, a mount Table (MNT Table) 2032, a MAP Table 2042, and a WORM buffer (WORM buf) 2052. The memory 2002 comprises a memory module such as a semiconductor memory, a magnetic disk, or the like.
The WORM file creating processing program 2012 and the LU allocating processing program 2022 are executed, as will be described later, by the CPU 2001 upon a request from the NAS client 300 (a file read/write request or the like), thereby starting given processing.
The MNT Table 2032 holds the association between the LU numbers mounted on the NAS head 200 as a file system and their mount points. The MAP Table 2042 holds the association between every LU number that is managed by the NAS head 200 and the LDEV number allocated to the LU number in question.
The WORM buf 2052 temporarily holds data created as an image in the UDF (Universal Disk Format) format when a file is stored as a WORM file in an LDEV upon a request from the NAS client 300. It is therefore unnecessary to set an area of the memory 2002 aside for the WORM buf 2052 all the time. The WORM buf 2052 may be set in the disk drive 130 when the UDF image has a large data size.
In the RAID device 100, a command device is defined for each controller 110. The NAS head 200 or the SAN client 350 issues a given command (for example, to secure an LDEV, to change which LU number is associated with which LDEV number, or to change the type of an LDEV) by sending the command to a defined command device.
The Temp FS 1303 of the disk drive 130 is not always mounted to a file system provided by the NAS head 200. The Temp FS 1303 holds WORM files which might not be accessed for a long period (several tens of years, for instance), while the Meta FS 1301 and the Data FS 1302 hold files that are frequently accessed. Keeping every LDEV mounted to the file system is not practical since it is a waste of the resources of the NAS head 200 and of the RAID device 100.
LU mount/unmount processing takes time and is a heavy load on the system, which makes it desirable to keep mounted an LU that has a frequently accessed file. On the other hand, in the Temp FS 1303, which stores WORM files that might not be accessed for a long period, an LU is mounted only when a WORM file stored therein is to be accessed and is kept unmounted for the rest of the time. In this way, it is possible to take full advantage of the resources of the NAS head 200 and the RAID device 100.
Of the LUs in the Temp FS 1303 of FIG. 1, crosshatched ones represent LUs that are mounted, while outlined ones represent LUs that are unmounted.
The LUs may be mounted/unmounted dynamically as the need arises. The LU mount/unmount processing will be discussed later.
FIG. 2 is an explanatory diagram of the MNT Table 2032 held in the memory 2002 of the NAS head 200.
The NAS head 200 is an interface which provides a file system to the NAS client 300. The MNT Table 2032 is used for management of directory information and directory attributes (write/read and the like) in order to provide a file system.
The MNT Table 2032 is composed of the name of a device mounted (dev_name), the mount point (mnt_point), the file system type (fs_Type), the mode (mode), the last access time (A_time), and others.
A “dev_name” is expressed by attaching an LU number to “/dev/lu”. For “fs_Type”, the type of a file system is stored. There are three file system types: meta, data, and temp; “meta” indicates a meta file system which stores a directory tree and file attributes such as the location where a file is stored, “data” indicates a file system which stores files that can be updated, and “temp” indicates a file system in which an LU storing files that cannot be updated is mounted temporarily.
The “mode” represents an attribute of the mount point in question. The symbol “r” denotes readable, “w” denotes writable, and “rw” denotes readable/writable.
For the “last access time”, the time at which the mounted LU is last accessed (read or write, for example) is recorded.
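By way of illustration only, one entry of the MNT Table 2032 could be modeled as in the following minimal Python sketch. The class and field names (MntEntry, dev_name, and so on) are hypothetical and merely mirror the fields listed above; the example entries are invented.

    from dataclasses import dataclass

    @dataclass
    class MntEntry:
        dev_name: str   # "/dev/lu" followed by the LU number, e.g. "/dev/lu2"
        mnt_point: str  # mount point of the file system, e.g. "/temp0"
        fs_type: str    # "meta", "data", or "temp"
        mode: str       # "r" (readable), "w" (writable), or "rw" (readable/writable)
        a_time: float   # last access time of the mounted LU (epoch seconds)

    # Invented example contents of the MNT Table held in the memory 2002 of the NAS head 200
    mnt_table = [
        MntEntry("/dev/lu0", "/meta",  "meta", "rw", 1_089_000_000.0),
        MntEntry("/dev/lu1", "/data",  "data", "rw", 1_089_000_100.0),
        MntEntry("/dev/lu2", "/temp0", "temp", "r",  1_088_000_000.0),
    ]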
FIG. 3 is an explanatory diagram of the MAP Table 2042 held in the memory 2002 of the NAS head 200.
The MAP Table 2042 is used to manage the association between every LU and LDEV that are managed by the NAS head 200.
The MAP Table 2042 is composed of an LDEV number (LDEV num), the type of an LDEV (Type), an LU number (LU num) to which the LDEV is allocated, and others.
Examples of the LDEV Type include “cmd”, “regular”, and “WORM”. The symbol “cmd” denotes a command device used as the target of a command issued from a client. Commands directed to the RAID device 100 to secure an LDEV, associate an LDEV with an LU (mapping), change the LDEV type, and the like are sent from the NAS head 200 to an LDEV of the “cmd” Type. An LDEV denoted as “regular” is one that allows a client to read and write a file. The Type “WORM” indicates that a file in an LDEV can be read but not written.
When the LU num is “−1” (in the example of FIG. 3, the LU num associated with an LDEV num 500 is “−1”), it means that the LDEV has no LU allocated thereto.
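Similarly, the MAP Table 2042 and its convention that an LU num of −1 marks an unallocated LDEV could be sketched as follows; MapEntry and lookup_lu are hypothetical names used only for this illustration.

    from dataclasses import dataclass

    @dataclass
    class MapEntry:
        ldev_num: int   # LDEV number, unique throughout the RAID device 100
        ldev_type: str  # "cmd", "regular", or "WORM"
        lu_num: int     # LU number allocated to the LDEV, or -1 if none is allocated

    map_table = [
        MapEntry(0,   "cmd",     0),   # command device used as the target of commands
        MapEntry(10,  "regular", 1),   # LDEV that allows a client to read and write files
        MapEntry(500, "WORM",   -1),   # WORM LDEV with no LU currently allocated
    ]

    def lookup_lu(ldev_num: int) -> int:
        """Return the LU number allocated to the given LDEV, or -1 if none."""
        for entry in map_table:
            if entry.ldev_num == ldev_num:
                return entry.lu_num
        raise KeyError(f"unknown LDEV {ldev_num}")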
FIG. 4 is an explanatory diagram showing an example of the LU Table 1401 in the shared memory 140 of the RAID device 100.
The LU Table 1401 is made for each controller 110 provided in the RAID device 100, and is stored in the shared memory 140. In the configuration example of FIG. 1, there are two controllers 110, and accordingly the shared memory 140 stores two LU Tables 1401, each of which is for one of the two controllers. Each LU Table 1401 is composed of an LU number (LU num), the number of the LDEV (LDEV num) allocated to the LU that is denoted by the LU number, and others.
Upon receiving a block read/write request from the NAS head 200, the controller 110 obtains the LU number contained in the request. Then the controller 110 looks up its own LU Table 1401 for the LDEV number that is associated with the obtained LU number. Thus the block read/write processing is performed on the LDEV that is specified by the obtained LDEV number.
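A minimal sketch of this controller-side lookup, assuming the LU Table 1401 is represented as a simple dictionary and the actual block I/O is stubbed out, might look like this (all names are hypothetical):

    from typing import Optional

    # Hypothetical per-controller LU Table 1401: LU number -> LDEV number.
    lu_table = {0: 0, 1: 10, 2: 20}

    def read_blocks(ldev_num: int, offset: int) -> bytes:
        # Stub: a real controller 110 would read the blocks from the disk drive 130.
        return b""

    def write_blocks(ldev_num: int, offset: int, data: bytes) -> int:
        # Stub: a real controller 110 would write the blocks to the disk drive 130.
        return len(data)

    def handle_block_request(lu_num: int, offset: int, data: Optional[bytes] = None):
        """Resolve the LU number in a block read/write request to an LDEV number,
        then perform the block I/O on the LDEV specified by that number."""
        ldev_num = lu_table.get(lu_num)
        if ldev_num is None:
            raise ValueError(f"LU {lu_num} is not defined for this controller")
        if data is None:
            return read_blocks(ldev_num, offset)
        return write_blocks(ldev_num, offset, data)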
FIG. 5 is an explanatory diagram showing an example of the LDEV Table 1402 in the shared memory 140 of the RAID device 100.
There is only one LDEV Table 1402 throughout the entire RAID device 100, and it is held in the shared memory 140. The contents of the LDEV Table 1402 define the configurations of the LDEVs in the disk drive 130.
The LDEV Table 1402 is composed of an LDEV number (LDEV num) unique to an LDEV throughout the RAID device 100, the type of the LDEV (type), the size of the LDEV (size), and others. There are three LDEV types, as has been described referring to FIG. 3. The size means the maximum storage capacity of the LDEV in question.
FIG. 6 shows an example of meta-information (meta-data) stored in the meta file system (Meta FS) 1301 of the disk drive 130.
Meta-information is information on the files kept in the disk drive 130 and is stored for each of the files in the Meta FS 1301.
Each piece of meta-information is composed of the name of a file (name), the size (size) of the file, the number of the LDEV that stores the file (LDEV num), the path name (LDEV path) of the file within the LDEV, the time at which the file is last accessed (last access), the type of the file (type), and others.
As has been described, file type examples include a file type which allows read and write (regular) and a file type which allows read but not write (WORM).
Referencing such meta-information yields the name of a file, the size of the file, the number of the LDEV where the file is stored, the name of the directory path storing the file, the time the file is last accessed, and the type of the file.
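As a further illustration, a single piece of meta-information could be modeled as follows; the field names are hypothetical stand-ins for those listed above, and the example values are invented.

    from dataclasses import dataclass

    @dataclass
    class MetaInfo:
        name: str          # file name
        size: int          # file size in bytes
        ldev_num: int      # number of the LDEV that stores the file
        ldev_path: str     # path of the file within the LDEV
        last_access: float # time the file was last accessed (epoch seconds)
        file_type: str     # "regular" (read/write) or "WORM" (read only)

    example = MetaInfo("report.txt", 4096, 10, "/dir1/report.txt",
                       1_089_000_000.0, "regular")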
Described next is the operation of the storage system of this embodiment.
FIG. 7 is a flow chart showing processing executed by the NAS head 200 when a file is opened in the read mode upon a request from the NAS client 300.
The NAS client 300 sends to the NAS head 200 a request to open a file in the read mode. Receiving the request from the NAS client 300, the NAS head 200 obtains the file name contained in the request. The NAS head 200 then references the Meta FS 1301 of the disk drive 130 in the RAID device 100 to obtain the meta-information (FIG. 6) associated with the obtained file name (Step S1001).
Next, the objective LDEV number is pulled out of the obtained meta-information. The MAP Table 2042 is then looked up for the LU number that is associated with the obtained LDEV number (Step S1002).
Referring to the LU number retrieved, the NAS head 200 judges whether an LU is allocated or not (Step S1003). Specifically, when the retrieved LU number is “−1”, it is judged that no LU is allocated to the LDEV specified by the obtained LDEV number, and the process proceeds to processing for allocating an LU to the LDEV denoted by that LDEV number (LU allocating processing) (Step S1004). Once an LU is allocated to the LDEV through the LU allocating processing, which will be described later with reference to FIG. 9, the process proceeds to Step S1005.
On the other hand, when it is judged in Step S1003 that an LU has already been allocated, the process proceeds to Step S1005 skipping the LU allocating processing of Step S1004.
In Step S1005, the MNT Table2032 is referenced to judge whether the LU that is specified by the LU number is mounted or not. When the LU is judged to be not mounted, the LU is mounted (Step S1006) and then the process proceeds to Step S1007.
In the case where the LU is judged to be already mounted, on the other hand, the process proceeds to Step S1007 skipping Step S1006.
In Step S1007, the MNT Table2032 is referenced to pull out the mount point associated with the obtained number of the LU that has been mounted.
Then a relative path contained in the obtained meta-information is used as a relative path from the obtained mount point to perform the read OPEN processing on the requested file. The result of the read OPEN processing is sent to the NAS client 300 (Step S1008).
This series of processing enables the NAS client 300 to perform the file open processing in the read mode through a file system of the NAS head 200.
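The read-mode OPEN flow of FIG. 7 can be summarized by the following illustrative sketch; the in-memory dictionaries and helper functions (lu_allocate, mount_lu, and so on) are hypothetical stand-ins for the Meta FS 1301, the MAP Table 2042, the MNT Table 2032, and the processing steps described above.

    from dataclasses import dataclass

    @dataclass
    class Meta:
        ldev_num: int
        ldev_path: str  # path of the file within the LDEV

    # Hypothetical in-memory stand-ins for the Meta FS 1301, MAP Table 2042,
    # and MNT Table 2032 (mount point per mounted LU).
    meta_fs = {"old_report.txt": Meta(500, "/dir1/old_report.txt")}
    map_table = {500: -1}   # LDEV num -> LU num (-1: no LU allocated)
    mount_points = {}       # LU num -> mount point

    def lu_allocate(ldev_num):   # stand-in for the LU allocating processing (FIG. 9)
        map_table[ldev_num] = 2
        return 2

    def mount_lu(lu_num):        # stand-in for mounting the LU on the NAS head 200
        mount_points[lu_num] = f"/temp{lu_num}"

    def open_for_read(file_name):
        meta = meta_fs[file_name]                # S1001: obtain meta-information
        lu_num = map_table[meta.ldev_num]        # S1002: LDEV num -> LU num
        if lu_num == -1:                         # S1003: is an LU allocated?
            lu_num = lu_allocate(meta.ldev_num)  # S1004: allocate an LU
        if lu_num not in mount_points:           # S1005: is the LU mounted?
            mount_lu(lu_num)                     # S1006: mount the LU
        mnt_point = mount_points[lu_num]         # S1007: obtain the mount point
        return mnt_point + meta.ldev_path        # S1008: path used for the read OPEN

    print(open_for_read("old_report.txt"))       # -> /temp2/dir1/old_report.txt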
FIG. 8 is a flow chart showing processing executed by the NAS head 200 when a file is opened in the write mode upon a request from the NAS client 300.
The NAS client 300 sends to the NAS head 200 a request to open a file in the write mode. Receiving the file OPEN request from the NAS client 300, the NAS head 200 obtains the file name contained in the request. The NAS head 200 then references the Meta FS 1301 of the disk drive 130 in the RAID device 100 to obtain the meta-information (see FIG. 6) on the obtained file name (Step S2001).
Next, the Type of the file is pulled out of the obtained meta-information to check whether the Type is “WORM” or not (Step S2002). When the file Type is found to be “WORM”, the file cannot be opened in the write mode, and an error is sent to the NAS client 300 that has made the request to end the processing (Step S2007).
In the case where the file Type is found to be other than “WORM”, the objective LDEV number is picked up from the obtained meta-information. The MAP Table 2042 is then looked up for the LU number that is associated with the obtained LDEV number (Step S2003).
Referring to the LU number retrieved, the NAS head 200 judges whether an LU is allocated or not (Step S2004). When the retrieved LU number is “−1”, it is judged that no LU is allocated to the LDEV specified by the obtained LDEV number. In this case, an error is sent to the NAS client 300 that has made the request and the processing is ended (Step S2007). In the system of this embodiment, as described above, a file OPEN request in the write mode for an LDEV to which no LU is allocated is unacceptable, since any LDEV that has no LU allocated thereto constitutes the Temp FS 1303, which stores WORM files.
On the other hand, when it is judged in Step S2004 that an LU has already been allocated, the MNT Table 2032 is referenced to pull out the mount point associated with the obtained number of the LU that has been mounted (Step S2005).
Then a relative path contained in the obtained meta-information is used as a relative path from the obtained mount point to perform the write OPEN processing on the requested file. The result of the write OPEN processing is sent to the NAS client 300 (Step S2006).
This series of processing enables the NAS client 300 to open a file in the write mode through a file system of the NAS head 200.
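The write-mode OPEN flow of FIG. 8 differs mainly in its two early error exits; the following hedged sketch covers only those checks, with hypothetical stand-ins for the meta-information and the MAP Table 2042.

    from dataclasses import dataclass

    @dataclass
    class Meta:
        file_type: str  # "regular" or "WORM"
        ldev_num: int

    map_table = {10: 1, 500: -1}   # LDEV num -> LU num (-1: no LU allocated)

    class OpenError(Exception):
        pass

    def check_write_open(meta: Meta) -> int:
        """Return the LU number to use, or raise an error reported to the NAS client 300."""
        if meta.file_type == "WORM":          # S2002: WORM files cannot be opened for write
            raise OpenError("file is WORM")   # S2007: report an error
        lu_num = map_table[meta.ldev_num]     # S2003: MAP Table lookup
        if lu_num == -1:                      # S2004: Temp FS LDEVs have no LU allocated
            raise OpenError("no LU allocated")
        return lu_num                         # proceed to the write OPEN (S2005/S2006)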
It should be noted that, in the case of an LU WRITE request made by the SAN client 350 to the RAID device 100 without the intermediary of the NAS head 200, the WORM function is obtained through control by the controller 110. Specifically, when a write request is made by the SAN client 350, the LDEV guard program 1112 of the controller 110 looks up the LDEV Table for the Type of the LDEV associated with the LU in question. When the retrieved Type is “regular”, the controller grants the write request on the LDEV. On the other hand, when the retrieved Type is “WORM”, the write request on the LDEV is denied and an error is sent to the SAN client 350.
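The check performed by the LDEV guard program 1112 on such a direct SAN write could look roughly like the following sketch; the table representations and the function name ldev_guard_write are assumptions made for illustration.

    # LU Table 1401 (LU num -> LDEV num) and LDEV Table 1402 (LDEV num -> Type)
    lu_table = {1: 10, 3: 500}
    ldev_table = {10: "regular", 500: "WORM"}

    def ldev_guard_write(lu_num: int, data: bytes) -> bool:
        """Grant or deny a write request issued directly by the SAN client 350."""
        ldev_num = lu_table[lu_num]
        if ldev_table[ldev_num] == "WORM":
            return False   # deny the write; an error is returned to the SAN client 350
        # Type "regular": the write is granted (the actual block write is omitted here).
        return True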
FIG. 9 is a flow chart showing the LU allocating processing (Step S1004 in FIG. 7) executed by the NAS head 200.
First, the MAP Table 2042 is looked up for an LU that is not allocated to an LDEV (Step S3001). The NAS head 200 then judges whether or not the search for an unallocated LU has succeeded, in other words, whether there is an unallocated LU or not (Step S3002). When it is judged that there is an unallocated LU, the process proceeds to Step S3005.
When it is judged that there is no LU left unallocated, the MNT Table 2032 is looked up, by checking the A_Time, for the LU that has not been accessed for the longest time out of the LUs whose Type is “Temp” and whose mode is “r” (read) (Step S3003). The thus retrieved LU is unmounted (Step S3004).
Then an LU allocating command is issued to the RAID device 100 in order to allocate the retrieved LU to an LDEV (Step S3005).
Specifically, the NAS head 200 issues, to the LU number of “cmd” (for control) defined in the MAP Table 2042, an LU allocating command having as parameters the LDEV num of the LDEV to which the LU is to be allocated and the LU num. The LU allocating command issued is received by the controller 110 in the RAID device 100 which exchanges data with the NAS head 200. The LDEV changer program 1122 of the controller 110 that has received the command changes, based on the parameters of the LU allocating command, the association between LUs and LDEVs in the LU Table 1401 held in the shared memory 140 which is associated with the controller 110 that has received the command.
The NAS head 200 then sets the number of the allocated LU in the entry of the corresponding LDEV of the MAP Table 2042 which is held in the memory 2002.
An LU is allocated to an LDEV that has not been assigned an LU through this series of processing.
When there is no LU unallocated, one that has not been accessed longest out of the LUs mounted to the file system is unmounted to be allocated to the LDEV. In Step S3003, an LU whose Type is “Temp” and whose mode is “r”, namely, an LU set as WORM is unmounted.
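Purely as an illustration, the LU allocating processing of FIG. 9 could be sketched as follows. The victim-selection rule (oldest A_Time among mounted LUs of the "temp" type with mode "r") follows the description above; the command-device interaction is reduced to a stub, and the release of the victim LU's previous mapping is an assumption of this sketch.

    from dataclasses import dataclass

    @dataclass
    class MntEntry:       # reduced form of an MNT Table 2032 entry used in this sketch
        lu_num: int
        fs_type: str      # "meta", "data", or "temp"
        mode: str         # "r", "w", or "rw"
        a_time: float     # last access time

    mnt_table = [MntEntry(2, "temp", "r", 100.0), MntEntry(3, "temp", "r", 50.0)]
    map_table = {10: 1, 20: 2, 30: 3, 500: -1}   # LDEV num -> LU num
    all_lus = {1, 2, 3}

    def issue_lu_allocating_command(ldev_num: int, lu_num: int):
        # Stand-in for the command sent to the "cmd" device; the LDEV changer
        # program 1122 would update the LU Table 1401 in the shared memory 140.
        pass

    def lu_allocate(ldev_num: int) -> int:
        used = {lu for lu in map_table.values() if lu != -1}
        free = all_lus - used                            # S3001: look for an unallocated LU
        if free:                                         # S3002: unallocated LU found?
            lu_num = free.pop()
        else:
            victim = min((e for e in mnt_table if e.fs_type == "temp" and e.mode == "r"),
                         key=lambda e: e.a_time)         # S3003: least recently accessed LU
            mnt_table.remove(victim)                     # S3004: unmount it
            lu_num = victim.lu_num
            for k, v in map_table.items():               # (assumed) release the old mapping
                if v == lu_num:
                    map_table[k] = -1
        issue_lu_allocating_command(ldev_num, lu_num)    # S3005: command to the RAID device 100
        map_table[ldev_num] = lu_num                     # update the MAP Table 2042
        return lu_num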
WORM file creating processing is discussed next.
In this embodiment, file data is usually stored in the Data FS of the disk drive 130. Of the files stored in the Data FS, the NAS head 200 searches for those that have not been accessed for a given period (several months to several years, for example) and moves the file data of such files as WORM files to the Temp FS.
FIG. 10 is a flow chart showing the WORM file creating processing executed by the WORM file creating processing program 2012 held in the memory 2002 of the NAS head 200.
The WORM file creating processing program 2012 searches the Meta FS 1301 in the disk drive 130 of the RAID device 100 for files that have not been accessed for a given period (Step S4001). Each piece of Meta info (see FIG. 6) held in the Meta FS 1301 keeps the date and time of the last access to the corresponding file stored in the disk drive 130. The NAS head 200 obtains the last access date and time and, when a file has gone unaccessed for over the given period, obtains information on the file in question and adds it to a list.
Then whether a file that has not been accessed for a given period is found in Step S4001 or not is checked (Step S4002). When the check finds no file that has not been accessed for the given period, the WORM file creating processing is ended since there is no file to be turned into a WORM file.
On the other hand, when a file that has not been accessed for the given period is found as a result of the check, the whole data of the file is first read out of the Data FS 1302 to create a UDF (Universal Disk Format) image from the read file. The created UDF image is stored in the WORM buf 2052 of the memory 2002 (Step S4003).
Then an LDEV is secured in order to store the created UDF image (Step S4004). Specifically, an LDEV securing command which specifies the size required to store the UDF image is sent to the RAID device 100. The controller 110 of the RAID device 100 secures an LDEV whose Type is “regular” and which meets the specified size, based on the received command.
Once the LDEV is secured, the WORM file creating processing program 2012 registers information on the secured LDEV in the MAP Table 2042 held in the memory 2002.
Next, the LU allocating processing (see FIG. 9) is performed on the secured LDEV (Step S4005). The UDF image stored in the WORM buf 2052 is written in the LU that is allocated to the secured LDEV through the LU allocating processing (Step S4006).
Then the Type of the LDEV in which the UDF image is written is set to “WORM” (Step S4007). In this processing, a command for setting the LDEV to WORM is sent to the RAID device 100. The controller 110 of the RAID device 100 receives the command and sets the Type of the LDEV in the LDEV Table 1402 held in the shared memory 140 to “WORM”. Meanwhile, the NAS head 200 sets the Type of the corresponding LDEV in the MAP Table 2042 held in the memory 2002 to “WORM”.
Lastly, the Meta info of every file that has been turned into a WORM file is updated, and the data of the files is deleted from the Data FS 1302 (Step S4008). Specifically, the file data on the Data FS 1302 is deleted with the use of the LDEV path in the Meta info of each file that has been turned into a WORM file. Thereafter, the number of the LDEV in which the UDF image is written is set as the LDEV num in the Meta info, the path name in the UDF is set as the LDEV path in the Meta info, and “WORM” is set as the Type in the Meta info.
Through the above processing, a file that has not been accessed for a given period can be turned into a WORM file.
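The overall flow of FIG. 10 could be illustrated by the following sketch, in which the UDF handling and the commands sent to the RAID device 100 are reduced to stubs; the one-year period and all helper names are assumptions made for this illustration.

    import time

    META_FS = {   # hypothetical meta-information: file name -> the fields used here
        "old.log": {"last_access": 0.0, "ldev_path": "/data/old.log",
                    "ldev_num": 10, "type": "regular"},
    }
    PERIOD = 365 * 24 * 3600   # assumed "given period" of no access (one year)

    # --- stubs standing in for the NAS head / RAID device interactions ---
    def make_udf_image(names): return b"UDF" + ",".join(names).encode()
    def secure_ldev(size): return 500          # S4004: secure a "regular" LDEV of this size
    def lu_allocate(ldev_num): return 2        # S4005: LU allocating processing (FIG. 9)
    def write_image(lu_num, image): pass       # S4006: write the UDF image to the LU
    def set_ldev_type_worm(ldev_num): pass     # S4007: set the LDEV Type to "WORM"

    def create_worm_files():
        now = time.time()
        targets = [n for n, m in META_FS.items()          # S4001: files not accessed
                   if now - m["last_access"] > PERIOD]     #         for the given period
        if not targets:                                    # S4002: nothing to convert
            return
        udf_image = make_udf_image(targets)                # S4003: build the image (WORM buf 2052)
        ldev_num = secure_ldev(len(udf_image))             # S4004
        lu_num = lu_allocate(ldev_num)                     # S4005
        write_image(lu_num, udf_image)                     # S4006
        set_ldev_type_worm(ldev_num)                       # S4007
        for name in targets:                               # S4008: update the Meta info
            # (deleting the old data from the Data FS 1302 is omitted in this sketch)
            META_FS[name].update(ldev_num=ldev_num, ldev_path="/" + name, type="WORM")

    create_worm_files()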
Although in the processing example shown in FIG. 10 the NAS head 200 reads a file to be turned into a WORM file, creates a UDF image from the file, stores the image in the buffer, and writes the image in the RAID device 100, the RAID device 100 may instead move the file following instructions of the NAS head 200. In this case, the controller 110 needs to have the function of a file system. Upon receiving an instruction from the NAS head 200, the controller 110 converts a file specified in the instruction into a UDF image, which is then stored in the Temp FS 1303. In this way, processing on the part of the NAS head 200 is reduced, as is data communication between the NAS head 200 and the RAID device 100.
In the processing shown in FIG. 10, files that have not been accessed for a given period are uniformly searched for in order to turn them into WORM files. Alternatively, the time period for turning a file into a WORM file may vary from one file to another. In this case, information on how long a file has to wait before being turned into a WORM file is contained in the Meta info of the file. A time period of one year, for instance, is set counting from the day the file is stored as a data file in the RAID device 100. This information is pulled out of the Meta info of the file in Step S4001 of FIG. 10 and, if it is found that the set time period (one year) has expired, the file is turned into a WORM file. The subsequent processing is as described above.
In the thus structured first embodiment of the present invention, the Type is set in the LDEV Table 1402 for each LDEV that constitutes the disk drive 130 of the RAID device 100. This makes it possible to, for instance, permit read alone in an LDEV that is set to “WORM”, thereby realizing the WORM function on the LDEV level of the RAID device 100. In addition, since the Temp FS 1303, which holds WORM files, is mounted to and unmounted from a file system by the NAS head 200 as the need arises, it is possible to save the resources of the NAS head 200, which provides the file system, and of the RAID device 100.
The storage system shown in FIG. 1 may be modified as shown in FIG. 11, in which the controller 110 is given the function of a NAS head 210 and a controller 115 constructed as one module is provided in the RAID device 100. This configuration makes the RAID device 100, instead of the NAS head, the one that provides the NAS client 300 with a file system, and at the same time makes it possible to use WORM also to limit access from the SAN client 350 via the SAN 150.
A storage system according to a second embodiment of the present invention is described next.
The first embodiment shows a method in which data stored in an LDEV that has not been accessed for a given period is automatically turned into WORM data by the NAS head 200. The first embodiment is contrasted by the second embodiment, where the NAS client 300 gives an instruction to turn a file into a WORM file for each file. The NAS head 200 turns the relevant file into a WORM file with the instruction from the NAS client 300 as the trigger. Components in the second embodiment that are identical to those in the first embodiment are denoted by the same reference symbols, and descriptions of such components are omitted here.
In the second embodiment, conducting the WORM file creating processing for each file upon instruction from the NAS client 300 is made possible by setting an LDEV for storing a WORM file such that a file can additionally be written in the LDEV. Specifically, a field indicating the head block of a writable area is added to the LDEV Table 1402, so that an area that has been turned into WORM and a writable area can be managed for each LDEV. In the WORM file creating processing, a UDF image in an LDEV a part of which has already been turned into WORM is read into the WORM buf 2052 and, after a UDF image to which the file to be turned into a WORM file is added has been created, the added portion alone is written in the LDEV.
FIG. 12 is an explanatory diagram showing the LDEV Table 1402 in the second embodiment.
The LDEV Table 1402 of this embodiment has a “wbegin” field in addition to the contents of the LDEV Table 1402 shown in FIG. 5. The wbegin field indicates the writable area of an LDEV whose Type is “WORM”. In other words, the area preceding the block number indicated by wbegin is an unwritable area, while the area following the block number of wbegin is an area in which data can be written only once.
In the first embodiment described above, files turned into WORM files are written at once in an LDEV, and then the Type of the LDEV is set to “WORM” in the LDEV Table 1402. Once the LDEV Type is set to “WORM”, writing in the LDEV is prohibited and no additional data can be written in the LDEV.
In the second embodiment, on the other hand, the Type in the LDEV Table 1402 is set to “WORM” and the wbegin field (FIG. 12) is set to “0” when an LDEV is secured. When a WORM file (data in the UDF format) is written in this LDEV thereafter, the write process is concluded by setting as wbegin the value “the last block of the written data + 1”. With this processing, only the blocks that precede the block number of wbegin are made unwritable.
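A minimal sketch of the append-only rule implied by wbegin (the block-level check and all names are assumptions of this illustration):

    ldev_table = {500: {"type": "WORM", "size": 1000, "wbegin": 120}}

    def can_write(ldev_num: int, block: int) -> bool:
        """A block of a WORM LDEV may be written only at or after wbegin."""
        entry = ldev_table[ldev_num]
        if entry["type"] != "WORM":
            return True
        return block >= entry["wbegin"]

    def finish_write(ldev_num: int, last_block: int):
        """After a UDF image is written, wbegin is advanced to the last written block + 1."""
        ldev_table[ldev_num]["wbegin"] = last_block + 1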
FIG. 13 shows the MAP Table 2042 in the second embodiment. The MAP Table 2042 of this embodiment has “size” and “wbegin” fields in addition to the contents of the MAP Table 2042 shown in FIG. 3. The “size” field indicates the size of an LDEV. The “wbegin” field indicates the first block of the writable area of the LDEV, as has been described referring to FIG. 12.
FIG. 14 is a flow chart showing the WORM file creating processing in the second embodiment, which is conducted by the NAS head 200 by executing the WORM file creating processing program 2012 held in the memory 2002 of the NAS head 200.
A user specifies a file and sends, via the NAS client 300, an instruction to turn the file into a WORM file. The NAS head 200 receives the instruction and carries out the processing of FIG. 14.
Upon receiving the WORM file creating instruction from the NAS client 300, the NAS head 200 first looks up the Meta FS in the disk drive 130 of the RAID device 100 for the meta-information of the file in question (Step S5001).
Then it is judged whether or not there is an LDEV that can store the specified file (Step S5002). Specifically, the MAP Table 2042 is referenced to judge whether or not there is an LDEV having a capacity large enough to store the file among the LDEVs whose Type is “WORM”. The capacity available in an LDEV for storing a file is obtained by subtracting the value of the wbegin field in the MAP Table 2042 from the value of the size field in the MAP Table 2042.
When storing a file in an LDEV whose Type is “WORM”, it is necessary to store meta-information of the file as well as the file data. Accordingly, the chosen LDEV has to have a capacity large enough to store the file data and the meta-information both.
In the case where an LDEV capable of storing the specified file is found, the LU allocating processing (see FIG. 9) for allocating an LU is performed on the LDEV (Step S5007). Next, directory information of this LDEV is read into the WORM buf 2052 of the memory 2002 (Step S5008).
When it is judged in Step S5002 that there is no LDEV capable of storing the file, an LDEV whose Type is “WORM” is secured (Step S5003). Specifically, an LDEV securing command which specifies the capacity required to store the specified file is sent to the RAID device 100. The controller 110 of the RAID device 100 secures an LDEV whose Type is “WORM” based on the received command.
Then the number of the secured LDEV (LDEV num) is registered in the MAP Table 2042 (Step S5004). The LU allocating processing (see FIG. 9) for allocating an LU is performed on the secured LDEV. After the LU is allocated to the LDEV, the WORM buf 2052 of the memory 2002 is initialized (Step S5006). Specifically, an empty UDF image whose capacity is large enough to store the specified file is created. The process then proceeds to Step S5009.
In Step S5009, the user data of the specified file is read out of the Data FS in the disk drive 130 of the RAID device 100. The read file is added as a UDF image to the WORM buf 2052 to create the UDF image that is to be written in the LDEV.
When the UDF image is written in the LDEV, meta-information such as directory information is written in the LDEV along with the user data (Step S5010). The write start block of the LDEV can be determined from the value indicated by the wbegin of the LDEV in the MAP Table 2042. The controller 110 is instructed to write in the LDEV by specifying the number of the LU that is allocated to the LDEV.
Next, the wbegin value in the MAP Table 2042 is increased by the size of the data written in the LDEV (Step S5011). At this point, the NAS head 200 instructs the controller 110 of the RAID device 100 to update the wbegin value of the LDEV Table 1402 as well.
Through the above processing, a file can be turned into a WORM file upon an instruction from the NAS client 300.
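The per-file flow of FIG. 14, including the free-capacity check (size minus wbegin) described above, could be sketched as follows; all helper names are hypothetical, and the UDF handling and RAID-device commands are again reduced to stubs.

    MAP_TABLE = {   # hypothetical MAP Table 2042 entries of the second embodiment
        500: {"type": "WORM", "lu_num": -1, "size": 1000, "wbegin": 900},
    }

    # --- stubs standing in for interactions with the RAID device 100 ---
    def secure_worm_ldev(size): return 501                   # S5003: secure a new "WORM" LDEV
    def lu_allocate(ldev_num): return 3                      # LU allocating processing (FIG. 9)
    def empty_udf_image(size): return b""                    # S5006: initialize the WORM buf 2052
    def read_directory_info(lu_num): return b"DIR"           # S5008: read existing directory info
    def read_user_data(name): return b"DATA"                 # S5009: read the file from the Data FS
    def write_from(lu_num, begin, image): return len(image)  # S5010: write starting at wbegin

    def create_worm_file(file_name: str, needed_size: int):
        # S5001: look up the meta-information of the specified file (omitted in this sketch).
        candidate = None
        for num, entry in MAP_TABLE.items():   # S5002: free capacity = size - wbegin
            if entry["type"] == "WORM" and entry["size"] - entry["wbegin"] >= needed_size:
                candidate = num
                break
        if candidate is None:
            candidate = secure_worm_ldev(needed_size)                  # S5003
            MAP_TABLE[candidate] = {"type": "WORM", "lu_num": -1,
                                    "size": needed_size, "wbegin": 0}   # S5004
            lu_num = lu_allocate(candidate)                             # allocate an LU (FIG. 9)
            image = empty_udf_image(needed_size)                        # S5006
        else:
            lu_num = lu_allocate(candidate)                             # S5007
            image = read_directory_info(lu_num)                         # S5008
        image += read_user_data(file_name)                              # S5009
        written = write_from(lu_num, MAP_TABLE[candidate]["wbegin"], image)  # S5010
        MAP_TABLE[candidate]["wbegin"] += written   # S5011: advance wbegin (LDEV Table 1402 too)

    create_worm_file("old_report.txt", 50)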
The thus structured second embodiment of the present invention is, in addition to having the effects of the first embodiment, capable of turning any arbitrary file into a WORM file as the need arises, since a WORM file can be created upon an instruction from the NAS client 300. Furthermore, data of a WORM file is additionally written in an LDEV, and therefore LDEVs can be used efficiently, which greatly helps conserve the resources of the RAID device 100.
A period in which no file is allowed to be updated may be set for an LDEV that stores a file turned into a WORM file. For instance, a no-file-update period is set for each LDEV in the LDEV Table 1402. File data stored in the corresponding LDEV cannot be updated or deleted until the no-file-update period is over (this makes the file a WORM file). After the no-file-update period passes, it is allowed, for example, to delete the file in order to free up disk capacity. A user specifies a no-file-update period via the NAS client 300. The NAS head 200 instructs the controller 110 of the RAID device 100 to set the specified no-file-update period in the LDEV Table 1402 before the WORM file creating processing is started.
The above first and second embodiments take as an example a case in which the NAS client 300 gives instructions on the unit of conversion to WORM, how long it takes to start conversion to WORM, and the timing of conversion to WORM. Instead of the NAS client 300, a storage management server may give those instructions. This facilitates storage management, since the storage management server can also manage WORM files in a centralized manner.
While the present invention has been described in detail and pictorially in the accompanying drawings, the present invention is not limited to such detail but covers various obvious modifications and equivalent arrangements, which fall within the purview of the appended claims.