| Google Bigtable | |
|---|---|
| Developer | Google |
| Initial release | February 2005 |
| Written in | |
| Platform | Google Cloud Platform |
| Type | Cloud storage |
| License | Proprietary |
| Website | cloud.google.com/bigtable |
Bigtable is a fully managed wide-column and key-value NoSQL database service for large analytical and operational workloads, offered as part of the Google Cloud portfolio.
Bigtable development began in 2004.[1] It is now used by a number of Google applications, such as Google Analytics,[2] web indexing,[3] MapReduce (which is often used for generating and modifying data stored in Bigtable),[4] Google Maps,[5] Google Books search, "My Search History", Google Earth, Blogger.com, Google Code hosting, YouTube,[6] and Gmail.[7] Google's reasons for developing its own database include scalability and better control of performance characteristics.[8]
Apache HBase and Cassandra are among the best-known open-source projects modeled after Bigtable. Bigtable offers HBase- and Cassandra-compatible APIs.
On May 6, 2015, a public version of Bigtable was made available as part of Google Cloud under the name Cloud Bigtable.[2]
As of April 2024, Bigtable manages over 10 exabytes of data and serves more than 7 billion requests per second.[9] Since its launch, Google has announced a number of updates to Bigtable, including SQL support, incremental materialized views, global secondary indexes, and automated scalability.[10]
Bigtable is one of the prototypical examples of a wide-column store. It maps two arbitrary string values (row key and column key) and a timestamp (hence a three-dimensional mapping) to an associated arbitrary byte array. It is not a relational database and is better described as a sparse, distributed, multi-dimensional sorted map.[3]: 1 It is built on Colossus (the successor to Google File System), Chubby Lock Service, SSTable (log-structured storage, as in LevelDB) and a few other Google technologies. Bigtable is designed to scale into the petabyte range across "hundreds or thousands of machines, and to make it easy to add more machines [to] the system and automatically start taking advantage of those resources without any reconfiguration".[11] For example, Google's copy of the web can be stored in a bigtable where the row key is a domain-reversed URL and the columns describe various properties of the web page, with one particular column holding the page itself. That page column can hold several timestamped versions of the page, each recording a copy as it existed when it was fetched. Each cell of a bigtable can have zero or more timestamped versions of the data. Besides recording when data was written, the timestamp also enables versioning and garbage collection of expired data.
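This data model can be illustrated with a short, self-contained sketch. The class below is a toy illustration only (all names are invented for this example, and a dictionary stands in for the real storage layer); it models a bigtable as a sparse sorted map from (row key, column key, timestamp) to a byte string, with timestamp-based versioning and version-count garbage collection:

```python
import time


class ToyBigtable:
    """Toy model of Bigtable's data model: a sparse, sorted map from
    (row key, column key, timestamp) to an uninterpreted byte array."""

    def __init__(self):
        # {(row_key, column_key): [(timestamp, value), ...]}, newest first.
        self.cells = {}

    def set_cell(self, row_key: str, column_key: str, value: bytes,
                 timestamp: float | None = None) -> None:
        versions = self.cells.setdefault((row_key, column_key), [])
        ts = time.time() if timestamp is None else timestamp
        versions.append((ts, value))
        versions.sort(reverse=True)  # keep the newest version first

    def read_cell(self, row_key: str, column_key: str,
                  at: float | None = None) -> bytes | None:
        """Return the newest version, or the newest at or before `at`."""
        for ts, value in self.cells.get((row_key, column_key), []):
            if at is None or ts <= at:
                return value
        return None  # a cell may legitimately hold zero versions

    def garbage_collect(self, max_versions: int) -> None:
        """Expire old data by keeping at most `max_versions` per cell."""
        for versions in self.cells.values():
            del versions[max_versions:]


# Row keys sort lexicographically, so domain-reversed URLs cluster
# pages of the same site together, as in the web-crawl example above.
table = ToyBigtable()
table.set_cell("com.example.www/index.html", "contents:", b"<html>v1</html>")
table.set_cell("com.example.www/index.html", "contents:", b"<html>v2</html>")
print(table.read_cell("com.example.www/index.html", "contents:"))  # newest copy
```

In the real system these cells are persisted in immutable SSTables rather than an in-memory dictionary, but the lookup semantics are the same: lexicographic ordering of row keys, and newest-first timestamped versions within each cell.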
Tables are split into multiple tablets – segments of the table split at certain row keys so that each tablet is a few hundred megabytes or a few gigabytes in size. A bigtable is somewhat like a MapReduce worker pool in that thousands to hundreds of thousands of tablet shards may be served by hundreds to thousands of Bigtable servers. When table sizes threaten to grow beyond a specified limit, the tablets may be compressed using the BMDiff algorithm[12][13] and the Zippy compression algorithm,[14] publicly known and open-sourced as Snappy,[15] which is a less space-optimal variation of LZ77 but more efficient in terms of computing time. The locations of tablets in the GFS are recorded as database entries in multiple special tablets, called "META1" tablets. META1 tablets are found by querying the single "META0" tablet, which typically resides on a server of its own, since clients frequently ask it for the location of the META1 tablet that in turn knows where the actual data is located. Like GFS's master server, the META0 server is not generally a bottleneck, since the processor time and bandwidth needed to discover and transmit META1 locations are minimal, and clients aggressively cache locations to minimize queries.
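The two-level location hierarchy described above can be sketched as follows. This is a simplified illustration under assumed names (MetaTablet, Client, and the server labels are all invented for this example); it shows why META0 rarely becomes a bottleneck: clients cache tablet locations and fall back to the metadata hierarchy only on a cache miss:

```python
import bisect


class MetaTablet:
    """A metadata tablet: maps the end row key of each range to a location."""

    def __init__(self, entries: dict[str, str]):
        # entries: {last_row_key_of_range: location}
        self.keys = sorted(entries)
        self.locations = entries

    def lookup(self, row_key: str) -> str:
        # First range whose end key is >= the requested row key.
        i = bisect.bisect_left(self.keys, row_key)
        return self.locations[self.keys[i]]


class Client:
    """Resolves row key -> tablet server via META0 then META1, with caching."""

    def __init__(self, meta0: MetaTablet, meta1_tablets: dict[str, MetaTablet]):
        self.meta0 = meta0                  # the single META0 tablet
        self.meta1_tablets = meta1_tablets  # META1 tablets, by location
        self.cache: dict[str, str] = {}     # aggressively cached lookups

    def locate(self, row_key: str) -> str:
        if row_key in self.cache:           # cache hit: no metadata traffic
            return self.cache[row_key]
        meta1_loc = self.meta0.lookup(row_key)                       # hop 1: META0
        tablet_loc = self.meta1_tablets[meta1_loc].lookup(row_key)   # hop 2: META1
        self.cache[row_key] = tablet_loc
        return tablet_loc


# META0 splits the key space across two META1 tablets; each META1 tablet
# maps its share of row keys onto data tablet servers.
meta0 = MetaTablet({"m": "meta1-a", "\xff": "meta1-b"})
meta1 = {
    "meta1-a": MetaTablet({"g": "tablet-server-1", "m": "tablet-server-2"}),
    "meta1-b": MetaTablet({"t": "tablet-server-3", "\xff": "tablet-server-4"}),
}
client = Client(meta0, meta1)
print(client.locate("com.example.www/index.html"))  # -> tablet-server-1
print(client.locate("com.example.www/index.html"))  # served from the cache
```

A real client caches key ranges rather than individual keys, but the effect is the same: repeated reads bypass the metadata tablets entirely, which keeps the load on META0 minimal.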
Notes from a 2005 talk by Google's Jeff Dean give a snapshot of the early deployment: "First an overview. Bigtable has been in development since early 2004 and has been in active use for about eight months (about February 2005). There are currently around 100 cells for services such as Print, Search History, Maps, and Orkut."[1]
One account of YouTube's adoption of Bigtable after its acquisition by Google noted: "Their new solution for thumbnails is to use Google's Bigtable, which provides high performance for a large number of rows, fault tolerance, caching, etc. This is a nice (and rare?) example of actual synergy in an acquisition."[6]