RELATED APPLICATIONS
This application is a continuation of and claims the priority benefit of U.S. application Ser. No. 12/901,276 filed Oct. 8, 2010, which is a continuation of and claims the priority benefit of U.S. application Ser. No. 10/837,120 filed Apr. 30, 2004 (U.S. Pat. No. 7,814,131), which claims the benefit of Provisional U.S. Patent Application No. 60/541,512 filed on Feb. 2, 2004.
BACKGROUND
Features generally relate to the field of computers and, more particularly, to aliasing of exported paths in a storage system.
Modern computer networks can include various types of storage servers. Storage servers can be used for many different purposes, such as to provide multiple users with access to shared data or to back up mission-critical data. A file server is a type of storage server which operates on behalf of one or more clients to store and manage shared files in a set of mass storage devices, such as magnetic or optical storage based disks or tapes. The mass storage devices are typically organized into one or more volumes of Redundant Array of Independent (or Inexpensive) Disks (RAID). A single physical file server may implement multiple independent file systems, sometimes referred to as “virtual filers”.
One configuration in which a file server can be used is a network attached storage (NAS) configuration. In a NAS configuration, a file server can be implemented in the form of an appliance, called a filer, that attaches to a network, such as a local area network (LAN) or a corporate intranet. An example of such an appliance is any of the Filer products made by Network Appliance, Inc. in Sunnyvale, Calif.
A storage server can also be employed in a storage area network (SAN). A SAN is a highly efficient network of interconnected, shared storage devices. In a SAN, the storage server (which may be an appliance) provides a remote host with block-level access to stored data, whereas in a NAS configuration, the storage server provides clients with file-level access to stored data. Some storage servers, such as certain Filers from Network Appliance, Inc., are capable of operating in either a NAS mode or a SAN mode, or even both modes at the same time. Such dual-use devices are sometimes referred to as “unified storage” devices. A storage server such as this may use any of various protocols to store and provide data, such as Network File System (NFS), Common Internet File System (CIFS), Internet SCSI (iSCSI), and/or Fibre Channel Protocol (FCP).
A storage server may use any of various protocols to communicate with its clients, such as Network File System (NFS) and/or Common Internet File System (CIFS). The use of CIFS “shares” allows an advertised resource (e.g., one or more files or a portion thereof) to be moved to a new location on a file server. The client does not have to be informed that the resource has moved, and no state has to change on the client. Hence, if a virtual filer is moved from one location to a new one, it is not necessary to visit each and every client to make changes to the imported resources.
NFS exports, however, as used by the commonly deployed versions of the NFS protocol, NFS versions 2 and 3, do not allow for the moving of an advertised resource to a new location on a file server. Clients must be informed that the resource has been moved, and state must be changed on the client. Consequently, when a virtual filer is moved to a new location, it is necessary to visit each and every client to make changes to the imported resources.
SUMMARY
A request is received, from a client by a storage server, to access a resource stored in the storage server based on a filehandle for the resource. A determination is made of whether an entry of a plurality of entries in an exports table has a filehandle that matches the filehandle for the resource. The entry includes a physical path of the resource that is different from an advertised path of the resource, the filehandle in the entry having been retrieved using the physical path. In response to determining that the filehandle in the entry matches the filehandle for the resource, a determination is made of whether a pathname in the entry matches a pathname for the resource. In response to determining that the pathname in the entry matches the pathname for the resource, a determination is made of whether the client has permission to access the resource. In response to determining that the client has permission to access the resource, the request to access the resource is executed.
A request is received from a client to access a resource based on a filehandle for the resource. A determination is made of whether an entry of a plurality of entries in an exports table has a filehandle that matches the filehandle for the resource. The entry includes a physical path of the resource that is different from an advertised path of the resource, the filehandle in the entry having been retrieved using the physical path. In response to a determination that the filehandle in the entry matches the filehandle for the resource, a determination is made of whether the client has permission to access the resource. In response to a determination that the client has permission to access the resource, the request to access the resource is executed.
BRIEF DESCRIPTION OF THE DRAWINGS
The present features may be better understood, and numerous objects, features, and advantages made apparent to those skilled in the art, by referencing the accompanying drawings.
FIG. 1 illustrates a network environment that includes storage systems;
FIG. 2 is a block diagram of a storage system;
FIG. 3 is a block diagram of the operating system kernel of a storage system;
FIG. 4 schematically illustrates a portion of a file system;
FIG. 5 illustrates a portion of an exports table;
FIG. 6 shows a manner of using a pathname or filehandle to access entries in an exports table;
FIG. 7 is a flow diagram showing a process of building an exports table;
FIG. 8 is a flow diagram showing a process of processing a mount request; and
FIG. 9 is a flow diagram showing a process of processing an NFS request.
DETAILED DESCRIPTION
A method and apparatus for aliasing exported paths in a storage system are described. The description that follows includes example systems, methods, techniques, instruction sequences and computer program products that embody techniques of the features described herein. However, it is understood that the described features may be practiced without these specific details. In other instances, well-known instruction instances, protocols, structures and techniques have not been shown in detail in order not to obfuscate the description.
References to “NFS” in this description may be assumed to be referring to NFS version 2 or 3, unless otherwise specified. With NFS, a file server makes available (i.e., “exports”) certain stored resources (e.g., files) to certain clients. A particular client can request to “mount” an exported resource (i.e., request to be granted the ability to access an exported resource) in NFS by sending a “mount request” to the file server. If the client has permission to access the resource, the file server allows the client to “mount” the resource by returning the filehandle of the resource to the client in response to the mount request. As used in this description, a filehandle is a compact key, usually with a fixed length and representable as an integer, which a server returns to a client to describe a pathname.
In accordance with some features, and as described in greater detail below, an NFS-compliant storage system such as a filer can export a stored resource to clients by advertising to the clients a different pathname than the actual pathname of the resource. What is meant by “advertising” is that the pathname of a resource available for mounting is made known to clients. This enables the storage system to redirect resource locations. As a result, an administrator can make changes to the client over time and remove the indirection at a later date.
According to some features, this indirection is created by providing a new option that can be used in an export rule for purposes of exporting NFS resources to clients; the option is referred to herein as an “-actual” directive. The “-actual” directive enables an alias to be provided for an exported pathname. Hence, the basic format of an export rule is as follows:
advertised_path -actual=physical_path, [attributes]
For example, the export rule
/engineering -actual=/vol/vol1/engineering, rw=.eng.netapp.com, anon=0
states that the physical storage path of /engineering is /vol/vol1/engineering (the “rw” attribute indicates the host(s) (client(s)) with read/write privileges for the pathname). The clients would reference /engineering, but the file server would translate this to /vol/vol1/engineering.
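For illustration, an export rule of this form might be parsed as in the following Python sketch. The ExportRule class, its field names, and the parsing details are hypothetical and not taken from any actual filer implementation; a rule without an -actual option simply keeps the advertised path as its physical path.

from dataclasses import dataclass, field

@dataclass
class ExportRule:
    advertised_path: str
    physical_path: str            # equals advertised_path when no -actual option is given
    attributes: dict = field(default_factory=dict)

def parse_export_rule(line: str) -> ExportRule:
    # Split "advertised_path -actual=physical_path, [attributes]" into its pieces.
    advertised, _, rest = line.strip().partition(" ")
    physical, attrs = advertised, {}
    for option in (o.strip() for o in rest.split(",") if o.strip()):
        key, _, value = option.partition("=")
        if key == "-actual":
            physical = value      # clients see advertised_path; storage uses the physical path
        else:
            attrs[key.lstrip("-")] = value
    return ExportRule(advertised, physical, attrs)

rule = parse_export_rule("/engineering -actual=/vol/vol1/engineering, rw=.eng.netapp.com, anon=0")
# rule.advertised_path == "/engineering"; rule.physical_path == "/vol/vol1/engineering"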
The “-actual” directive allows migration of storage both on a filer and a virtual filer without reconfiguring all clients which have pathnames coded into automounter and /etc/[v]fstab maps. (An automounter is a client-based scheme which automatically scans the list of advertised exports from a host and mounts them, without the user having to manually specify to the client which exports to load. The reference to /etc/[v]fstab is a Sun Microsystems reference to the location where all static mount points to be loaded at boot time are stored in recent versions of Sun's Solaris operating system.) One benefit is that any potential downtime is reduced, because administrators do not have to coordinate changes on multiple clients.
If a virtual filer is migrated from one physical filer to another, the underlying backing storage may change, i.e., it might be stored in filerA:/vol/vol17/vfiler13 and moved to filerB:/vol/vol3/vfiler13a on another filer. It might be that there is no corresponding volume or the path is already in use. Once “-actual” path directives have been added, the advertised paths can stay the same even after the virtual filer has been moved.
Note that a filer, as described herein, differs from traditional server architectures, such as servers running the UNIX operating system, when they are used to perform file-serving. Whereas a traditional server associates file systems with regions of a local file system name hierarchy for the purpose of making files accessible by user programs running locally on the server, a filer has no such user programs, and thus, has no need to maintain data structures to provide such local access. Within such a traditional server architecture, this redirection problem may be solved by the creation of a symbolic link from the physical storage path to the advertised path.
The use of the “-actual” directive has two main advantages over that approach:
1) The advertised path does not have to exist anywhere in the physical storage of the filer's file system. That is, with this approach the advertised path does not consume an inode.
2) Each time a redirection is encountered via the “-actual” directive, it is not necessary to load an inode file and perform the redirection. All processing can be handled in the mountd thread layer and bypass potential disk accesses.
FIG. 1 illustrates an example of a network environment in accordance with some features. The system of FIG. 1 includes a number of storage servers 2, each coupled locally to a set of mass storage devices 4, and through an interconnect 3 to a set of clients 1. A path aliasing technique in accordance with some features can be implemented in each of the storage servers 2. Each storage server 2 may be, for example, an NFS-enabled filer, as is henceforth assumed in this description. Each filer 2 receives various read and write requests from the clients 1 and accesses data stored in the mass storage devices 4 to service those requests.
Each of the clients 1 may be, for example, a conventional personal computer (PC), workstation, or the like. The mass storage devices 4 may be, for example, conventional magnetic tapes or disks, optical disks such as CD-ROM or DVD based storage, magneto-optical (MO) storage, or any other type of non-volatile storage devices suitable for storing large quantities of data, or a combination thereof. The mass storage devices 4 may be organized into one or more volumes of Redundant Array of Independent Disks (RAID). The interconnect 3 may be essentially any type of computer network, such as a local area network (LAN), a wide area network (WAN), metropolitan area network (MAN) or the Internet, and may implement the Internet Protocol (IP).
FIG. 2 is a block diagram showing the architecture of a filer 2, according to some features. Certain standard and well-known components which are not germane to some features may not be shown. The filer 2 includes one or more processors 21 and memory 22 coupled to a bus system 23. The bus system 23 is an abstraction that represents any one or more separate physical buses and/or point-to-point connections, connected by appropriate bridges, adapters and/or controllers. The bus system 23, therefore, may include, for example, a system bus, a Peripheral Component Interconnect (PCI) bus, a HyperTransport or industry standard architecture (ISA) bus, a small computer system interface (SCSI) bus, a universal serial bus (USB), or an Institute of Electrical and Electronics Engineers (IEEE) standard 1394 bus (sometimes referred to as “Firewire”).
The processors 21 are the central processing units (CPUs) of the filer 2 and, thus, control the overall operation of the filer 2. According to some features, the processors 21 accomplish this by executing software stored in memory 22. A processor 21 may be, or may include, one or more programmable general-purpose or special-purpose microprocessors, digital signal processors (DSPs), programmable controllers, application specific integrated circuits (ASICs), programmable logic devices (PLDs), or the like, or a combination of such devices.
The memory 22 represents any form of random access memory (RAM), read-only memory (ROM), flash memory, or the like, or a combination of such devices. Memory 22 stores, among other things, the operating system 24 of the filer 2, in which the techniques introduced herein can be implemented.
Also connected to the processors 21 through the bus system 23 are one or more internal mass storage devices 25, a storage adapter 26 and a network adapter 27. Internal mass storage devices 25 may be or include any conventional medium for storing large volumes of data in a non-volatile manner, such as one or more disks. The storage adapter 26 allows the filer 2 to access the external mass storage devices 4 and may be, for example, a Fibre Channel adapter or a SCSI adapter. The network adapter 27 provides the filer 2 with the ability to communicate with remote devices such as the clients 1 over a network and may be, for example, an Ethernet adapter.
Memory 22 and mass storage devices 25 store software instructions and/or data, which may include instructions and/or data used to implement the techniques introduced herein. For example, these instructions and/or data may be implemented as part of the operating system 24 of the filer 2.
As shown in FIG. 3, the operating system 24 of the filer 2 has a kernel 30 that includes several modules, or layers. These layers include a file system 31, which executes read and write operations on the mass storage devices 4 in response to client requests, maintains directories of stored data, etc. The kernel 30 further includes a mount daemon (“mountd”) module 36, which implements the mount daemon processes for NFS. The mountd module 36 may also implement the resource exportation algorithms and path aliasing algorithms described further below.
“Under” the file system 31 (logically), the kernel 30 also includes a network layer 32 and an associated media access layer 33, to allow the filer 2 to communicate over a network (e.g., with clients 1). The network access layer 32 may implement any of various protocols, such as NFS, CIFS and/or HTTP. The media access layer 33 includes one or more drivers which implement one or more protocols to communicate over the network, such as Ethernet. Also logically under the file system 31, the kernel 30 includes a storage access layer 34 and an associated driver layer 35, to allow the filer 2 to communicate with external mass storage devices 4. The storage access layer 34 implements a disk storage protocol such as RAID, while the driver layer 35 implements a lower-level storage device access protocol, such as Fibre Channel Protocol (FCP) or SCSI. The details of the above-mentioned layers of the kernel 30 are not necessary for an understanding of some features and, hence, need not be described herein.
There are three main processes of a filer 2 which should now be considered with respect to some features:
1) building an exports table
2) processing a mount request from a client
3) processing an NFS request from a client
The list of exports (exported resources) advertised by a filer 2 is typically stored in an online exports table. An exports table includes permissions for the various clients with respect to the export points (exported resources) provided by a filer 2. In addition, as described further below, an exports table includes a mapping of the filehandle to the advertised pathname and the -actual pathname (if different from the advertised pathname) for each export point. An exports table is used by the filer 2 to process mount requests from clients 1, i.e., to retrieve the filehandle corresponding to the advertised pathname specified in each mount request. The filehandles are subsequently included in NFS (read/write) requests submitted by clients 1 to the filer 2.
The “-actual” directive is used when constructing an exports table, i.e., when an export rule is loaded into the exports table. At that time, the filehandle of the physical storage path is mapped to the filehandle that will be returned to the client in response to a mount request. The filehandle which will be returned to the client is stored in the exports table in association with the advertised pathname and the -actual (physical) pathname. If a virtual filer is then migrated or the underlying path to the physical storage is changed, the filehandle cannot change.
There are two keys to an export point when using NFS: the pathname and the filehandle. The pathname is used to process mount requests, and the filehandle is used to process NFS (read/write) requests. The basic process is for a client 1 to supply a pathname in the mount request and for the filer 2 then to return the filehandle for use in NFS requests. The filehandle is unique over the lifetime of the export point, but the pathname can be changed. If the pathname is changed, then all clients which do not have the export mounted will have to have their automounter maps and static /etc/[v]fstab maps updated to reflect the new value.
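A minimal sketch of how an exports-table entry and its two keys might be organized follows. The class and field names are hypothetical; the structure is only meant to mirror the description above, with the advertised pathname servicing mount requests and the filehandle (obtained from the physical path) servicing NFS requests.

from dataclasses import dataclass
from typing import Optional

@dataclass
class ExportEntry:
    advertised_path: str          # pathname clients mount
    actual_path: Optional[str]    # physical storage path, or None if not aliased
    filehandle: bytes             # retrieved via the physical path, returned to clients
    permissions: dict             # e.g. {"host1": "rw"}

class ExportsTable:
    def __init__(self):
        self.by_path = {}         # advertised pathname -> entry (mount requests)
        self.by_filehandle = {}   # filehandle -> entry (NFS requests)

    def add(self, entry: ExportEntry):
        self.by_path[entry.advertised_path] = entry
        self.by_filehandle[entry.filehandle] = entry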
As noted above, pathname indirection is created in accordance with some features by providing a new option that can be used when exporting NFS resources to clients, referred to herein as the “-actual” directive. The “-actual” directive maps a virtual export point to a physical export point. Hence, in the export rule
/vol/vol9/vf1 -actual=/vol/vol3/vf1, rw=host2, anon=0
the advertised storage path /vol/vol9/vf1 is bound to the physical storage path /vol/vol3/vf1. Note that if the path /vol/vol9/vf1 exists, it is not accessible from outside the filer 2.
When an NFS client 1 mounts a resource from a filer 2, a mount request is sent to the mountd module 36 on the filer 2. The request provides a pathname that is desired to be shared by the client 1. This pathname can be determined either by a previous request to enumerate the advertised exports (via a showmount command) or by running the export command on the filer 2 to manually determine the set of exports. If the export rule does not contain an “-actual” directive, then the advertised pathname is the physical storage path. For example, in the export rule
/vol/vol8/vf2 -rw=host1, anon=0
/vol/vol8/vf2 is both the advertised path and the physical storage path. However, if the “-actual” directive is used, then the pathname specified by the -actual directive is the physical storage path. Thus, in response to a mount request, if the pathname provided in the request corresponds to an export which has an “-actual” directive, then the filehandle for the physical storage path determined by the pathname supplied to the “-actual” directive is returned to the client.
The exports table is indexed by two keys: the filehandle associated with the advertised path and the advertised path. When implementing the “-actual” directive, care should be taken to ensure that the filehandle corresponds to the physical storage path and not the advertised path. When filling the exports table (either from an exports file, typically /etc/exports, or via user-initiated non-persistent exports), the filer 2 determines that the “-actual” directive is in use and retrieves the filehandle associated with that path.
This mapping is needed because NFS requests strictly utilize the filehandle and not the path. When the filehandle from an NFS request is used to index to the corresponding export entry, the operation must yield the filehandle associated with the physical storage. Thus, the mapping is done for NFS requests as the export is loaded into memory, avoiding any translations when processing a request.
The mapping of the second index occurs when mount requests are sent to the filer 2. In that case, the requesting client 1 only knows the advertised path and probably has no knowledge that the underlying physical storage has changed. Once the advertised path has been used to access the correct export entry, the filer 2 returns the filehandle associated with the “-actual” directive as the filehandle of the mount point.
Refer now to FIG. 4, which shows a simple example of a portion of a file system in a filer 2. Assume for this example that there are four export points in the file system, having the following four advertised paths, respectively:
/vol/vol0
/vol/vol0/home
/vol/vol1
/foo
The export rules for these export points may be initially obtained from an export file (e.g., /etc/exports), which also specifies permissions of the various clients with respect to these export points. For example, the export file may specify the following information (and other information not shown):
/vol/vol0 -rw=host1
/vol/vol0/home -rw=host1
/vol/vol1 -rw=host2
/foo -actual=/vol/vol2, -rw=host1,host2
Note that the advertised pathname /foo in this example actually corresponds to the physical pathname /vol/vol2, as specified by the “-actual” option in the last export rule.
FIG. 5 shows a simplified example of an exports table corresponding to the above-described export file (with non-germane information not shown). The exports table 50 includes an entry for each export point, wherein each entry includes (among other information): the advertised pathname of the export point, the -actual pathname of the export point (if any), the filehandle of the export point, and any client permissions applicable to the export point.
An exports table is used to look up filehandles and to determine client permissions in response to both mount requests and NFS requests, as further illustrated in FIG. 6 according to some features. In response to a mount request from a client, the pathname in the mount request is input to a pathname hash unit 61, which applies a hashing function to the pathname to generate a short key. The short key is an index, in a corresponding hash table 62, of a pointer that refers to one or more entries in an exports table 66. The hashing function and short key are used to reduce the amount of storage needed in the filer 2, since multiple pathnames can be associated with the same short key. Any ambiguity thereby introduced is easily removed by subsequently comparing the pathname in the request with the pathnames of any entries in the exports table 66 that correspond to the short key.
Similarly, in response to an NFS (read/write) request from a client 1, the filehandle in the NFS request is input to a filehandle hash unit 63, which applies a hashing function to the filehandle to generate a short key. The short key in this instance is an index, in a corresponding hash table 64, of a pointer that refers to one or more entries in the exports table 66.
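The two short-key indexes can be sketched as hash buckets over the entries, with any ambiguity resolved by a full comparison, as described above. The hash function and bucket count below are placeholders, not the filer's actual scheme, and the entries are assumed to have the hypothetical fields from the earlier ExportEntry sketch.

from collections import defaultdict

class HashedExportsIndex:
    def __init__(self, buckets: int = 256):
        self.buckets = buckets
        self.path_index = defaultdict(list)        # short key -> candidate entries (cf. hash table 62)
        self.filehandle_index = defaultdict(list)  # short key -> candidate entries (cf. hash table 64)

    def _short_key(self, value: bytes) -> int:
        return hash(value) % self.buckets          # many values may map to the same short key

    def add(self, entry):
        self.path_index[self._short_key(entry.advertised_path.encode())].append(entry)
        self.filehandle_index[self._short_key(entry.filehandle)].append(entry)

    def lookup_by_path(self, pathname: str):
        # Ambiguity from the short key is removed by comparing full pathnames.
        for entry in self.path_index[self._short_key(pathname.encode())]:
            if entry.advertised_path == pathname:
                return entry
        return None

    def lookup_by_filehandle(self, filehandle: bytes):
        for entry in self.filehandle_index[self._short_key(filehandle)]:
            if entry.filehandle == filehandle:
                return entry
        return None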
Refer now to FIG. 7, which shows the process of building an exports table in a filer 2, according to some features. It is assumed, to facilitate description, that the export rules are initially obtained from an export file (e.g., /etc/exports). Initially, the process reads the first rule in the export file at 701. Next, the process determines at 702 whether the rule includes an -actual directive. If the rule does not include an -actual directive, the process branches to 710.
At 710, the process determines whether the pathname of an entry in the exports table matches an -actual pathname loaded into the exports table from a prior export rule. It is necessary to ensure that the advertised path and filehandles are unique. If they are not unique, access permissions cannot be deterministically checked on every request. Hence, if there is such a match at 710, then the process generates an error message at 711, and the process then ends. If there is no such match, then the process uses the pathname to make a call to the file system at 712 to look up the filehandle corresponding to the pathname. The process then proceeds to 707, as described below.
Referring again to 702, if the rule includes an “-actual” directive, then the process determines at 703 whether the pathname referenced by the -actual directive exists. As a byproduct, this check disallows nested loops of the form:
/vol/volA -actual=/vol/volB
/vol/volB -actual=/vol/volC
/vol/volC -actual=/vol/volA
when /vol/volA is not a real storage path. If the pathname referenced by the -actual directive does not exist, then the process generates an error message at 711, and the process then ends.
If the pathname does exist, then the process determines at 704 whether the -actual pathname corresponds to the advertised pathname of any other resource. Consider the following:
/vol/volX -rw
/mapped -actual=/vol/volX,ro
The advertised path and filehandle are unique in the exports table, so that access permissions can be deterministically checked on every request. If the -actual path corresponds to that of an export already loaded, the duplication rules enforced by the operating system are used, which may mean overwriting the export already loaded. Referring back to FIG. 7, therefore, if the -actual pathname is determined at 704 to correspond to the advertised pathname of another resource, then the process generates an error message at 711, and the process then ends.
If the -actual pathname does not correspond to the advertised pathname of any other resource at 704, then the process determines at 705 whether any loops would be formed if the -actual directive is used. Consider the following example:
/vol/volY -actual=/vol/volZ,rw
/vol/volZ -actual=/vol/volY,ro
The first export rule should be loaded successfully, because there is no loop. When the second export rule is loaded, the operation should fail because it sets up a loop.
With just one layer of indirection and the rule that the “-actual” directive must point to real storage, a loop such as above is only confusing for administrators and users; it does not pose any problems in code. As shown earlier, the loop
/vol/volA -actual=/vol/volB
/vol/volB -actual=/vol/volC
/vol/volC -actual=/vol/volA
is also problematic only for human understanding, provided that the “-actual” directive is restricted to real storage and not all exports are considered when determining the real storage for /vol/volC. In other words, if all export rules are first loaded and then an attempt is made to determine storage paths based on all available mappings, an infinite loop could result.
To avoid this problem, only one level of indirection is evaluated, and all target paths of the “-actual” directive are forced to have corresponding physical storage.
Referring again to FIG. 7, therefore, if any loops would be formed by using the -actual directive (705), then the process generates an error message at 711, and the process then ends. If no loops would be formed, then at 706 the process makes a call to the file system 31 to look up the filehandle corresponding to the -actual pathname.
At 707, after the filehandle is received from the file system, the process stores the filehandle in the entry in the exports table for the particular export point, in association with the advertised pathname and the -actual pathname (if any). Following 707, if there are additional rules in the export file remaining to be processed (709), then the process accesses the next rule in the export file at 708, and the entire process repeats, until all of the export rules have been processed.
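The flow of FIG. 7 can be condensed into the following Python sketch, reusing the hypothetical ExportRule from the earlier parsing example. The lookup_filehandle and path_exists callables stand in for calls into the file system 31; they and the ExportError type are placeholders.

class ExportError(Exception):
    pass

def build_exports_table(rules, lookup_filehandle, path_exists):
    table = {}                # advertised pathname -> (filehandle, rule)
    actual_targets = set()    # physical paths already used as -actual targets
    for rule in rules:                                              # 701, 708, 709
        has_actual = rule.physical_path != rule.advertised_path     # 702
        if has_actual:
            if not path_exists(rule.physical_path):                 # 703 -> 711
                raise ExportError("-actual path does not exist")
            if rule.physical_path in table:                         # 704 -> 711
                raise ExportError("-actual path duplicates an advertised path")
            if rule.advertised_path in actual_targets:              # 705 -> 711 (would form a loop)
                raise ExportError("export rules would form a loop")
            filehandle = lookup_filehandle(rule.physical_path)      # 706
            actual_targets.add(rule.physical_path)
        else:
            if rule.advertised_path in actual_targets:              # 710 -> 711
                raise ExportError("path already used as an -actual target")
            filehandle = lookup_filehandle(rule.advertised_path)    # 712
        table[rule.advertised_path] = (filehandle, rule)            # 707
    return table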
The second major process performed by a filer 2 is the process of responding to a mount request from a client. Refer now to FIG. 8, which shows an example of such a process according to some features. Initially, a mount request is received from a client at 801. The process then hashes the pathname in the mount request at 802 to produce a short key. The process then determines at 803 whether any entry in the exports table matches the short key. If there is no matching entry, then the process returns an error message to the client at 815, and the process then ends.
If there is a matching entry, then at 804 the process determines which of the export points that correspond to the generated short key (if any) has the longest pathname prefix matching the pathname in the mount request. Some additional explanation may be useful here. Assume the export points in a file system are:
/vol/volX
/vol/volY
/vol/volZ
Assume further that a client requests to mount /vol/volY/dir1, which is not an export point. In this example, /vol/volY is the export point that has the longest matching pathname prefix. (Of course, in some cases an exported pathname may exactly match the pathname in the mount request in its entirety; in that event, the entire pathname of the export is considered to be the matching “prefix” for purposes of 804.) Accordingly, in cases where the pathname in the mount request only partially matches the pathname of any export point (i.e., a matching prefix), the access permissions for that request will be determined from the permissions of the export point that has the longest matching pathname prefix, if any.
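A small Python sketch of the longest-matching-prefix selection follows; matching on whole path components is assumed so that, for example, /vol/volXY would not be treated as a prefix of /vol/volX/dir1.

def longest_matching_export(requested: str, export_paths):
    # Return the exported pathname that is the longest prefix of the requested
    # pathname, or None if no export point matches.
    best = None
    for export in export_paths:
        if requested == export or requested.startswith(export.rstrip("/") + "/"):
            if best is None or len(export) > len(best):
                best = export
    return best

longest_matching_export("/vol/volY/dir1", ["/vol/volX", "/vol/volY", "/vol/volZ"])
# -> "/vol/volY"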
Referring again to FIG. 8, therefore, if no pathname in the exports table has a pathname prefix that matches the pathname in the request (804), then the process returns an error message to the client at 815, and the process then ends. If there is a matching prefix in the exports table, then the process continues from 806.
At 806, the process looks up the permissions for the requesting client for the export point with the longest matching prefix. The process then determines at 807 whether the client has permission to access the export point. If the client does not have permission to access the export point, then the process returns an error message to the client at 815, and the process ends. If the client does have permission, then the process next determines at 808 whether the pathname in the request exactly matches the pathname of the corresponding export point in the exports table. If there is an exact match, then at 809 the process returns the filehandle of the export point (as specified in the exports table) to the requesting client, and the process ends.
If, however, the pathname in the request does not exactly match the pathname of the export point in the exports table, then the process continues with 810, as described below. First, though, some additional explanation may be useful. Assume, for example, the following set of exports and permissions:
/vol/volX -actual=/vol/volY/flow, rw=h1:h2
/vol/volX/dir1 -actual=/vol/volY/flow/dir1, rw=h3:h4
Assume further that a host, h1, requests to mount the following paths:
Case  Requested path        Returned filehandle
1     /vol/volX             67
2     /vol/volX/dir1        79
3     /vol/volX/dir2        73
4     /vol/volX/dir1/sub1   1033
The first two cases above result in exact matches in 808, and the corresponding filehandles are returned to the client at 809. In case 3, however, there is a match on the prefix “/vol/volX”, but if the file system is asked to return a filehandle for the entire pathname, i.e., “/vol/volX/dir2”, the result would be either the wrong filehandle in the case of an existing /vol/volX/dir2 or an “ENOENT” (“no entry”) error if the path does not exist. Therefore, the longest matching prefix is replaced with the -actual path, yielding “/vol/volY/flow/dir2”. The resulting filehandle that is returned is, therefore, 73. The same procedure is used in case 4 (after adding host h1 to the permissions list), yielding a path of “/vol/volY/flow/dir1/sub1” and a filehandle of 1033.
Referring again to FIG. 8, therefore, this sub-process is carried out as follows. After determining that there is no exact match with the pathname in the exports table (808), the process determines at 810 whether the export point has an -actual path specified in the exports table. If the export point does not have an -actual path in the exports table, then the process next assigns a pathname variable, tpath, the pathname in the mount request at 814. The process then requests the filehandle for tpath from the file system at 812, and returns that filehandle to the requesting client at 813, after which the process ends.
If, however, the export point does have an -actual path (810) specified in the exports table, then at 811 the process assigns tpath a value as follows:
tpath = actual path + (requested path − longest matching prefix)
The process then requests the filehandle for tpath from the file system at 812, and returns that filehandle to the requesting client at 813, after which the process ends.
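The path substitution of 810-814 can be sketched as follows, using the hypothetical ExportEntry fields introduced earlier; the function returns the pathname (tpath) to hand to the file system, or None when the stored filehandle can be returned directly.

from typing import Optional

def resolve_mount_path(requested: str, export) -> Optional[str]:
    # export carries advertised_path and actual_path as in the earlier sketch.
    if requested == export.advertised_path:
        return None                                      # exact match (808): return stored filehandle (809)
    if export.actual_path is None:
        return requested                                 # 814: tpath is the requested pathname
    suffix = requested[len(export.advertised_path):]     # requested path minus the longest matching prefix
    return export.actual_path + suffix                   # 811: tpath = actual path + remainder

# With /vol/volX exported as -actual=/vol/volY/flow, a request for /vol/volX/dir2
# yields tpath /vol/volY/flow/dir2, matching case 3 in the table above.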
The third major process performed by a filer 2 is the process of responding to an NFS request (e.g., a read or write request) from a client. Refer now to FIG. 9, which shows an example of such a process according to some features. Initially, at 901 the filer 2 receives an NFS request from a client 1. The request includes a pathname and the filehandle of the resource which the client 1 is attempting to read or write. At 902 the process hashes the filehandle in the request to produce a short key. If there is at least one entry in the exports table which matches the filehandle short key (903), the process continues from 904. Otherwise, the process returns an error message to the client at 908, and the process then ends.
If there is at least one matching entry, then the process next determines at 904 whether any of the matching entries in the exports table has a pathname that matches the pathname in the NFS request. If there is no matching entry, the process returns an error message to the client at 908, and the process ends. If there is a matching entry, then the process looks up the permissions for the matching entry in the exports table at 905. If the requesting client has permission to access the resource, then the process causes the filer to execute the requested operation (read or write) at 907, and the process then ends. Otherwise, the process returns an error message to the client at 908, and the process then ends.
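The checks of FIG. 9 can be sketched as below, assuming the hypothetical HashedExportsIndex from the earlier sketch; has_permission and execute are placeholders for the filer's permission check and read/write execution.

NFS_ERROR = "error"

def handle_nfs_request(request, index, has_permission, execute):
    # 902-903: hash the filehandle and gather candidate entries under the short key.
    candidates = index.filehandle_index[index._short_key(request.filehandle)]
    # 904: a candidate must match both the filehandle and the pathname in the request.
    entry = next((e for e in candidates
                  if e.filehandle == request.filehandle
                  and e.advertised_path == request.pathname), None)
    if entry is None:
        return NFS_ERROR                           # 908
    if not has_permission(request.client, entry):  # 905, 906
        return NFS_ERROR                           # 908
    return execute(request)                        # 907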
While the features are described with reference to various configurations and exploitations, it will be understood that these descriptions are illustrative and that the scope of the features is not limited to them. In general, techniques for aliasing exported paths in a storage system as described herein may be implemented with facilities consistent with any hardware system or hardware systems. Many variations, modifications, additions, and improvements are possible.
Plural instances may be provided for components, operations or structures described herein as a single instance. Finally, boundaries between various components, operations and data stores are somewhat arbitrary, and particular operations are illustrated in the context of specific illustrative configurations. Other allocations of functionality are envisioned and may fall within the features described herein. In general, structures and functionality presented as separate components in the example configurations may be implemented as a combined structure or component. Similarly, structures and functionality presented as a single component may be implemented as separate components. These and other variations, modifications, additions, and improvements may fall within the features described herein.