Method for organizing a client directory cache in a distributed file system

Technical field
The present invention relates to directory entry management in distributed file systems, and in particular to a method for organizing a client directory cache in a distributed file system.
Background technology
With the rapid development of computer technology, various applications place ever-growing demands on storage, network applications being typical among them. The storage demands of network applications fall roughly into two kinds. One is dominated by large files, as in audio and video network applications; such applications are characterized by a small number of files, but the size of a single file is normally at the GB or even TB level. The other is dominated by small files, as in online shopping malls and portal websites; here a single file is small but the number of files is huge, with tens of millions of files commonly stored under a single directory, and such files are usually written once and thereafter mostly read.
To meet their storage demands, most network applications introduce a distributed file system, representative examples being NFS, Lustre, and GPFS. Such distributed file systems are characterized by fairly good performance for operations on large files, but when an enormous number of small files sits under a single directory, the efficiency of their directory-entry handling is hard to keep satisfactory. Therefore many network companies, such as Taobao, NetEase, and Tencent, have each designed small-file storage architectures suited to their own needs.
Among the parallel file systems currently seen whose storage is optimized for small files, the overwhelming majority adopt a single-metadata-server architecture, in which a client usually goes to the metadata server to read only when it performs a metadata access. In this way, network delay has a very large influence on the client's response speed; moreover, if the data the client needs to access is not in the metadata server's memory, a disk access is also required, so a large portion of the application's access time is wasted on I/O, which harms the real-time behavior of the application.
Summary of the invention
The present invention aims to disclose a method for organizing a client directory-entry cache in a distributed file system, a method that can effectively solve the problem of low access efficiency for massive numbers of small files under a single directory in network applications.
A method for organizing a client directory cache in a distributed file system:
Directory subsets are divided as required; the directory entries in a single directory are hashed and stored into the directory subsets; each directory subset is distributed onto a metadata server; and the directory-entry cache structure on the client is organized according to the directory subsets.
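The hash-based partition of a directory's entries over metadata servers can be sketched as follows. This is a minimal sketch: the MD5-based placement rule and the server count of 4 are illustrative assumptions, not details fixed by the invention.

```python
import hashlib

def subset_for_entry(name: str, num_servers: int) -> int:
    """Map a directory-entry name to the directory subset (and hence the
    metadata server) that stores it, via a stable hash of the name.
    The MD5-based rule here is an illustrative choice, not the patent's."""
    digest = hashlib.md5(name.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % num_servers

# Spread the entries of one large directory over 4 hypothetical servers.
placement = {n: subset_for_entry(n, 4) for n in ["a.jpg", "b.jpg", "c.pdf"]}
```

Because the rule is a pure function of the name, the client can later compute, without asking any server, which metadata server holds a given entry.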
Preferably, when an application needs to traverse said directory, the client first queries whether the entries exist in the local cache; if they do, they are returned directly; if not, they are read from the metadata servers, and after the read completes the client stores them in the local cache and then returns them to the application.
Preferably, said reading is performed in a parallel manner.
Preferably, said client may prefetch the files under the directory after reading the directory for the first time.
Preferably, said prefetch strategy is: for all directory entries under the directory, read the corresponding index nodes from the metadata servers.
Preferably, said prefetch command may be issued by the application; upon receiving a prefetch request, the client reads back the index nodes of that batch of files from the metadata servers and then goes to the data servers to prefetch the data.
In the present invention, the distributed file system adopts a multiple-metadata-server architecture; that is, the contents of a single directory are distributed over a plurality of metadata servers. The multiple-metadata architecture is chosen mainly to spread the pressure of metadata access and to improve concurrency. Aimed at the characteristic that network applications write little and read much, the present invention keeps the contents of directory entries and the corresponding index nodes in the client cache, avoiding the need for repeated communication with the servers when the client reads repeatedly. Meanwhile, when a directory is accessed for the first time, its directory entries distributed over the different metadata servers are read ahead in parallel; at the same time, according to a default prefetch strategy or a prefetch strategy issued by the application, file index nodes and file contents are read in advance. In this way, when the application needs to access a file named by a directory entry, the metadata and data of that file may already have been read ahead into the client's local cache, thereby greatly accelerating the execution of the application.
Embodiment
The invention is elaborated below in conjunction with an embodiment:
(1) In the present invention, the directory entries in a single directory are first hashed according to their names and divided into several subsets, and each subset is distributed onto a metadata server.
(2) The directory-entry cache structure on the client is organized according to the directory-entry subsets; that is, the directory entries distributed on each metadata server are managed separately and kept independent of one another.
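One minimal way to keep the per-server sub-caches independent, as step (2) describes, is a list of separate maps, one per metadata server. This is a sketch under assumptions: `_subset_index` is a hypothetical stand-in for whatever placement rule the metadata servers actually use, and the real cache structure is not specified by the invention.

```python
import hashlib

def _subset_index(name: str, num_servers: int) -> int:
    # Hypothetical placement rule, assumed identical to the servers' rule.
    return int.from_bytes(hashlib.md5(name.encode()).digest()[:4], "big") % num_servers

class DirectoryEntryCache:
    """Client cache mirroring the server-side partitioning: one independent
    sub-cache per metadata server, so each subset can be filled, refreshed,
    or discarded without touching the others."""

    def __init__(self, num_servers: int):
        self.num_servers = num_servers
        self.subsets = [dict() for _ in range(num_servers)]  # one map per server

    def put(self, name: str, inode) -> None:
        self.subsets[_subset_index(name, self.num_servers)][name] = inode

    def get(self, name: str):
        return self.subsets[_subset_index(name, self.num_servers)].get(name)

cache = DirectoryEntryCache(num_servers=4)
cache.put("a.txt", {"ino": 1})
```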
(3) When an application needs to traverse a certain directory, the client first queries whether the entries exist in the local cache; if they do, they are returned directly to the user. If they are not cached, they must be read from the metadata servers. Because all the directory entries of a single directory are stored, by subset, on different metadata servers, the invention reads them in a parallel manner, which accelerates directory-entry reading. After the directory entries have been read, the client first stores them in the local cache and then returns them to the application.
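The cache-first, parallel-read-on-miss flow of step (3) can be sketched as below. The `read_subset` RPC and `FakeMetadataServer` are hypothetical stand-ins for the real metadata-server interface, and keying the cache by directory path is a simplification of the per-subset structure described above.

```python
import concurrent.futures

def read_directory(dirname, cache, servers):
    """Return the entries of `dirname`: answer from the local cache when
    possible, otherwise read every subset in parallel from the metadata
    servers, fill the cache, then return (hypothetical API)."""
    cached = cache.get(dirname)
    if cached is not None:
        return cached
    with concurrent.futures.ThreadPoolExecutor(len(servers)) as pool:
        parts = pool.map(lambda s: s.read_subset(dirname), servers)
    entries = {}
    for part in parts:           # merge the per-server subsets
        entries.update(part)
    cache[dirname] = entries     # fill the local cache before returning
    return entries

# Minimal stand-in for a metadata server holding one directory subset.
class FakeMetadataServer:
    def __init__(self, subset_entries):
        self.subset_entries = subset_entries
    def read_subset(self, dirname):
        return dict(self.subset_entries)

cache = {}
servers = [FakeMetadataServer({"a.txt": 1}), FakeMetadataServer({"b.txt": 2})]
entries = read_directory("/data", cache, servers)
```

A second call for the same directory is then served entirely from the local cache, with no server round trip.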
(4) The ultimate purpose of an application accessing a directory is usually to access the files under it, so after the directory-entry traversal, requests to access each file under the directory are issued in turn to the file system client. To make full use of the time the application spends on its own processing, in the present invention the file system client may prefetch the files under a directory after reading the directory for the first time. The default prefetch policy is, for all directory entries under the directory, to read the corresponding index node information from the metadata servers. The application may also, according to its own characteristics, issue a prefetch strategy to the client, for example when it needs to prefetch a certain batch of files; after receiving the prefetch-strategy request, the client reads back the index nodes of that batch of files from the metadata servers and then goes to the data servers to prefetch the data. In this way, when the application needs to access a concrete file, the data it needs may already have entered the client's local cache through prefetching, thereby significantly reducing the application's response time.
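The two-step prefetch of step (4), index nodes from the metadata servers followed by data from the data servers, can be sketched as follows. `fetch_inode` and `fetch_data` are hypothetical stand-ins for the RPCs the invention leaves unspecified.

```python
def prefetch_batch(filenames, fetch_inode, fetch_data, cache):
    """For each file in the batch: pull its index node from the metadata
    server, then its data from the data server, into the client cache."""
    for name in filenames:
        inode = fetch_inode(name)   # step 1: index node from a metadata server
        data = fetch_data(inode)    # step 2: file data from a data server
        cache[name] = (inode, data)

# Demo with stand-in RPCs in place of real server calls.
cache = {}
prefetch_batch(
    ["a.txt", "b.txt"],
    fetch_inode=lambda name: {"name": name, "blocks": [0]},
    fetch_data=lambda inode: b"payload-" + inode["name"].encode(),
    cache=cache,
)
```

After the batch completes, a later open/read of any of these files can be satisfied from the local cache rather than triggering two server round trips on the critical path.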