Google File System

From Wikipedia, the free encyclopedia
Distributed file system
Google File System
  • Developer: Google
  • Operating system: Linux kernel
  • Type: Distributed file system
  • License: Proprietary

Google File System (GFS or GoogleFS, not to be confused with the GFS Linux file system) is a proprietary distributed file system developed by Google to provide efficient, reliable access to data using large clusters of commodity hardware. The Google File System was replaced by Colossus in 2010.[1]

Design

Google File System is designed for system-to-system interaction rather than user-to-system interaction; the chunk servers replicate the data automatically.

GFS is tailored to Google's core data storage and usage needs (primarily the search engine), which generate enormous amounts of data that must be retained. Google File System grew out of an earlier Google effort, "BigFiles", developed by Larry Page and Sergey Brin in the early days of Google, while the company was still based at Stanford. Files are divided into fixed-size chunks of 64 megabytes, similar to clusters or sectors in regular file systems; chunks are only extremely rarely overwritten or shrunk, as files are usually appended to or read. GFS is also designed and optimized to run on Google's computing clusters, dense nodes built from cheap "commodity" computers, which means precautions must be taken against the high failure rate of individual nodes and the consequent data loss. Other design decisions favor high data throughput, even when it comes at the cost of latency.
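
The fixed 64 MB chunk size means that locating the chunk for a given byte offset is simple integer arithmetic. A minimal sketch (the function names are illustrative, not part of any GFS API):

```python
CHUNK_SIZE = 64 * 1024 * 1024  # the fixed 64 MB chunk size described above

def chunk_index(byte_offset: int) -> int:
    """Map a byte offset within a file to the index of the chunk holding it."""
    return byte_offset // CHUNK_SIZE

def num_chunks(file_size: int) -> int:
    """Number of chunks needed to store a file of the given size (at least one)."""
    return max(1, -(-file_size // CHUNK_SIZE))  # ceiling division

# A 200 MB file spans four 64 MB chunks; byte 100,000,000 falls in chunk 1.
```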

A GFS cluster consists of multiple nodes, divided into two types: one Master node and multiple chunkservers. Each file is divided into fixed-size chunks, which the chunkservers store. Each chunk is assigned a globally unique 64-bit label by the Master node at the time of creation, and logical mappings of files to their constituent chunks are maintained. Each chunk is replicated several times throughout the network; by default it is replicated three times, but this is configurable.[2] Files in high demand may have a higher replication factor, while files for which the application client uses strict storage optimizations may be replicated fewer than three times, in order to cope with quick garbage-cleaning policies.[2]
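
The Master's bookkeeping described above amounts to two mappings: file paths to ordered chunk handles, and chunk handles to replica locations. A toy sketch under those assumptions (class and attribute names are invented for illustration):

```python
import itertools

class Master:
    """Toy model of the Master's chunk bookkeeping (names are illustrative)."""

    def __init__(self, default_replication: int = 3):
        self._handles = itertools.count(1)  # stand-in for globally unique 64-bit labels
        self.file_chunks = {}               # path -> ordered list of chunk handles
        self.chunk_locations = {}           # handle -> set of chunkserver ids
        self.default_replication = default_replication

    def create_chunk(self, path: str) -> int:
        """Allocate a fresh handle and append it to the file's chunk list."""
        handle = next(self._handles)
        self.file_chunks.setdefault(path, []).append(handle)
        self.chunk_locations[handle] = set()  # replicas are registered later
        return handle
```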

The Master server does not usually store the actual chunks, but rather all the metadata associated with them: the tables mapping the 64-bit labels to chunk locations and to the files they make up (the file-to-chunk mapping), the locations of the copies of each chunk, which processes are reading or writing a particular chunk, and which are taking a "snapshot" of a chunk in order to replicate it (usually at the instigation of the Master server when, due to node failures, the number of copies of a chunk has fallen below the set number). All this metadata is kept current by the Master server periodically receiving updates from each chunkserver ("heartbeat messages").
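
One consequence of the heartbeat scheme is that the Master can scan its replica table for chunks that have fallen below the target copy count. A sketch of that check, assuming a plain mapping from chunk handle to the set of chunkservers last reported holding it:

```python
def under_replicated(chunk_locations: dict, target: int = 3) -> list:
    """Return chunk handles whose live replica count is below the target,
    as the Master would infer from heartbeat reports (illustrative only)."""
    return [h for h, servers in chunk_locations.items() if len(servers) < target]

# Chunk 2 has only one surviving replica and would be queued for re-replication.
locations = {1: {"cs-a", "cs-b", "cs-c"}, 2: {"cs-a"}}
```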

Permissions for modifications are handled by a system of time-limited, expiring "leases": the Master server grants a process permission to modify a chunk for a finite period of time, during which no other process will be granted that permission by the Master. The modifying chunkserver, which is always the primary chunk holder, then propagates the changes to the chunkservers holding the backup copies. The changes are not saved until all chunkservers acknowledge them, thus guaranteeing the completion and atomicity of the operation.
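
The lease rule, one unexpired primary per chunk at a time, can be sketched as follows (a minimal model; the class name, 60-second duration, and refusal behavior are assumptions for illustration):

```python
import time

class LeaseTable:
    """Minimal sketch of time-limited chunk leases (illustrative names)."""

    def __init__(self, duration: float = 60.0):
        self.duration = duration
        self.leases = {}  # chunk handle -> (primary chunkserver, expiry time)

    def grant(self, handle, primary, now=None):
        """Grant a lease unless an unexpired one is outstanding."""
        now = time.monotonic() if now is None else now
        holder = self.leases.get(handle)
        if holder is not None and holder[1] > now:
            return None  # refuse: another primary still holds the lease
        self.leases[handle] = (primary, now + self.duration)
        return primary
```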

Programs access chunks by first querying the Master server for the locations of the desired chunks; if the chunks are not being operated on (i.e., no outstanding leases exist), the Master replies with the locations, and the program then contacts the relevant chunkserver and receives the data from it directly (similar to Kazaa and its supernodes).
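
The two-step read path, metadata from the Master, data from a chunkserver, can be sketched with plain dictionaries standing in for the two services (all names here are illustrative, not a real client API):

```python
def read_chunk(master: dict, chunkservers: dict, path: str, chunk_idx: int) -> bytes:
    """Sketch of the read path: ask the master for replica locations,
    then fetch the chunk data from one chunkserver directly."""
    handle, locations = master[(path, chunk_idx)]  # step 1: metadata lookup
    server = sorted(locations)[0]                  # pick any live replica
    return chunkservers[server][handle]            # step 2: data bypasses the master

# Toy state: one chunk "h1" replicated on two chunkservers.
master = {("/logs/a", 0): ("h1", {"cs-a", "cs-b"})}
chunkservers = {"cs-a": {"h1": b"payload"}, "cs-b": {"h1": b"payload"}}
```

Because the Master serves only small metadata replies, read bandwidth scales with the number of chunkservers rather than being bottlenecked on the single Master.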

Unlike most other file systems, GFS is not implemented in the kernel of an operating system, but is instead provided as a userspace library.[3]

Interface


The Google File System does not provide a POSIX interface.[4] Files are organized hierarchically in directories and identified by pathnames. The usual file operations (create, delete, open, close, read, write) are supported. GFS also supports a record-append operation, which allows multiple clients to append data to the same file concurrently while guaranteeing the atomicity of each appended record.
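
The distinguishing property of record append is that the file system, not the client, chooses the offset, so concurrent appenders never overwrite each other. A toy in-memory model of that semantic (not the real GFS client library):

```python
import threading

class RecordFile:
    """Toy model of record append: each append lands atomically at an
    offset the file system chooses (a sketch, not a real GFS API)."""

    def __init__(self):
        self._lock = threading.Lock()
        self.records = []

    def record_append(self, data: bytes) -> int:
        """Append one record atomically; return the offset it was placed at."""
        with self._lock:  # per-record atomicity, even with concurrent callers
            offset = sum(len(r) for r in self.records)
            self.records.append(data)
            return offset
```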

Performance


Judging from benchmarking results,[2] when used with a relatively small number of servers (15), the file system achieves read performance comparable to that of a single disk (80–100 MB/s), but has reduced write performance (30 MB/s) and is relatively slow (5 MB/s) at appending data to existing files. The authors present no results on random seek time. As the Master node is not directly involved in data reading (the data are passed from the chunkserver directly to the reading client), the read rate increases significantly with the number of chunkservers, reaching 583 MB/s for 342 nodes. Aggregating many servers also allows large total capacity, although it is somewhat reduced by storing each chunk in three independent locations (to provide redundancy).
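
The capacity cost of redundancy is simple division: with the default three-way replication, usable capacity is one third of raw capacity. A one-line sketch of that arithmetic (the node count and per-node size below are hypothetical, not figures from the paper):

```python
def effective_capacity(raw_tb: int, replication: int = 3) -> int:
    """Usable capacity (TB) once every chunk is stored `replication` times."""
    return raw_tb // replication

# Hypothetically, 342 nodes with 1 TB of disk each (342 TB raw) would leave
# about 114 TB usable at the default three-way replication.
```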

References

  1. ^ Ma, Eric (2012-11-29). "Colossus: Successor to the Google File System (GFS)". SysTutorials. Archived from the original on 2019-04-12. Retrieved 2016-05-10.
  2. ^ a b c Ghemawat, Gobioff & Leung 2003.
  3. ^ Kyriazis, Dimosthenis (2013). Data Intensive Storage Services for Cloud Environments. IGI Global. p. 13. ISBN 9781466639355.
  4. ^ McKusick, Marshall Kirk; Quinlan, Sean (August 2009). "GFS: Evolution on Fast-forward". ACM Queue. 7 (7): 10–20. doi:10.1145/1594204.1594206.

Bibliography

  • Ghemawat, Sanjay; Gobioff, Howard; Leung, Shun-Tak (2003). "The Google File System". Proceedings of the 19th ACM Symposium on Operating Systems Principles (SOSP '03). pp. 29–43. doi:10.1145/945445.945450.

External links

  • "GFS: Evolution on Fast-forward", ACM Queue.
  • "Google File System Eval, Part I", StorageMojo.