In computer science, tabulation hashing is a method for constructing universal families of hash functions by combining table lookup with exclusive or operations. It was first studied in the form of Zobrist hashing for computer games; later work by Carter and Wegman extended this method to arbitrary fixed-length keys. Generalizations of tabulation hashing have also been developed that can handle variable-length keys such as text strings.
Despite its simplicity, tabulation hashing has strong theoretical properties that distinguish it from some other hash functions. In particular, it is 3-independent: every 3-tuple of keys is equally likely to be mapped to any 3-tuple of hash values. However, it is not 4-independent. More sophisticated but slower variants of tabulation hashing extend the method to higher degrees of independence.
Because of its high degree of independence, tabulation hashing is usable with hashing methods that require a high-quality hash function, including hopscotch hashing, cuckoo hashing, and the MinHash technique for estimating the size of set intersections.
The basic idea is as follows:
First, divide the key to be hashed into smaller "blocks" of a chosen length. Then, create a set of lookup tables, one for each block, and fill them with random values. Finally, use the tables to compute a hash value for each block, and combine all of these hashes into a final hash value using the bitwise exclusive or operation.[1]
More formally:
Let p be the number of bits in a key to be hashed, and q be the number of bits desired in an output hash function. Choose a block size r ≤ p; the choice of block size controls the tradeoff between time and memory usage, so it should be made so that the tables are not too large, e.g., so that the tables fit into the computer's cache memory.[2] Smaller blocks use less memory but slow down the hash function. Compute t = ⌈p/r⌉, the number of r-bit blocks needed to represent a key.
Create a two-dimensional 2^r × t array, T, and fill it with random q-bit numbers. Now T can be used to compute the hash value h(x) of any given key x. To do so, partition x into r-bit values, where x0 consists of the lowest r bits of x, x1 consists of the next r bits, etc. For example, if r = 8, then xi is just the ith byte of x. Then, use these r-bit values and their positions as indices into T, and combine the results using the exclusive or operation:[1]

h(x) = T[x0, 0] ⊕ T[x1, 1] ⊕ ⋯ ⊕ T[xt−1, t − 1]
Note that it is not valid to use the same table (e.g., T[0]) for each xi, since then the hash function would not be able to distinguish between strings with the same xi values but permuted differently.
Code for a typical example with r = t = 8 and q = p = 64 is given below.
// Secret table of random numbers
uint64_t T[8][256];
for (int i = 0; i < 8; i++)
    for (int j = 0; j < 256; j++)
        T[i][j] = getRandomUInt64();

// Simple tabulation hash function
uint64_t hash(uint64_t x) {
    uint64_t res = 0;
    for (int i = 0; i < 8; i++)
        res ^= T[i][(uint8_t)(x >> 8 * i)];  // cast to uint8_t, not char:
                                             // a signed char could yield a negative index
    return res;
}
The first instance of tabulation hashing is Zobrist hashing, a method for hashing positions in abstract board games such as chess, named after Albert Lindsey Zobrist, who published it in 1970.[3] In this method, a random bitstring is generated for each game feature such as a combination of a chess piece and a square of the chessboard. Then, to hash any game position, the bitstrings for the features of that position are combined by a bitwise exclusive or. The resulting hash value can then be used as an index into a transposition table. Because each move typically changes only a small number of game features, the Zobrist value of the position after a move can be updated quickly from the value of the position before the move, without needing to loop over all of the features of the position.[4]
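The incremental update described above can be sketched as follows. This is a minimal illustration, not code from any particular engine: the piece encoding, the xorshift generator used to fill the table, and the board representation (an array mapping each square to a piece index, or −1 for empty) are all assumptions made for the example.

```c
#include <stdint.h>

#define PIECES 12   /* e.g., 6 piece types x 2 colors (illustrative encoding) */
#define SQUARES 64

uint64_t Z[PIECES][SQUARES];   /* one random bitstring per (piece, square) feature */

/* Fill the Zobrist table with pseudo-random values (xorshift64 as a stand-in RNG). */
void zobrist_init(uint64_t seed) {
    for (int p = 0; p < PIECES; p++)
        for (int s = 0; s < SQUARES; s++) {
            seed ^= seed << 13; seed ^= seed >> 7; seed ^= seed << 17;
            Z[p][s] = seed;
        }
}

/* Hash a whole position: xor together the bitstrings of all present features. */
uint64_t zobrist_full(const int board[SQUARES]) {   /* board[s] = piece index or -1 */
    uint64_t h = 0;
    for (int s = 0; s < SQUARES; s++)
        if (board[s] >= 0)
            h ^= Z[board[s]][s];
    return h;
}

/* Incremental update: moving `piece` from `from` to `to` needs only two xors,
   because xor is its own inverse: the first xor removes the old feature,
   the second adds the new one. */
uint64_t zobrist_move(uint64_t h, int piece, int from, int to) {
    return h ^ Z[piece][from] ^ Z[piece][to];
}
```

Updating through `zobrist_move` gives the same value as rehashing the whole position from scratch, which is the point of the technique: a move touches two table entries instead of all features on the board.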
Tabulation hashing in greater generality, for arbitrary binary values, was later rediscovered by Carter & Wegman (1979) and studied in more detail by Pătraşcu & Thorup (2012).
Carter & Wegman (1979) define a randomized scheme for generating hash functions to be universal if, for any two keys, the probability that they collide (that is, they are mapped to the same value as each other) is 1/m, where m is the number of values that the keys can take on. They defined a stronger property in the subsequent paper Wegman & Carter (1981): a randomized scheme for generating hash functions is k-independent if, for every k-tuple of keys, and each possible k-tuple of values, the probability that those keys are mapped to those values is 1/m^k. 2-independent hashing schemes are automatically universal, and any universal hashing scheme can be converted into a 2-independent scheme by storing a random number x as part of the initialization phase of the algorithm and adding x to each hash value. Thus, universality is essentially the same as 2-independence. However, k-independence for larger values of k is a stronger property, held by fewer hashing algorithms.
As Pătraşcu & Thorup (2012) observe, tabulation hashing is 3-independent but not 4-independent. For any single key x, T[x0, 0] is equally likely to take on any hash value, and the exclusive or of T[x0, 0] with the remaining table values does not change this property. For any two keys x and y, x is equally likely to be mapped to any hash value as before, and there is at least one position i where xi ≠ yi; the table value T[yi, i] is used in the calculation of h(y) but not in the calculation of h(x), so even after the value of h(x) has been determined, h(y) is equally likely to be any valid hash value. Similarly, for any three keys x, y, and z, at least one of the three keys has a position i where its value zi differs from the other two, so that even after the values of h(x) and h(y) are determined, h(z) is equally likely to be any valid hash value.[5]
However, this reasoning breaks down for four keys because there are sets of keys w, x, y, and z where none of the four has a byte value that it does not share with at least one of the other keys. For instance, if the keys have two bytes each, and w, x, y, and z are the four keys that have either zero or one as their byte values, then each byte value in each position is shared by exactly two of the four keys. For these four keys, the hash values computed by tabulation hashing will always satisfy the equation h(w) ⊕ h(x) ⊕ h(y) ⊕ h(z) = 0, whereas for a 4-independent hashing scheme the same equation would only be satisfied with probability 1/m. Therefore, tabulation hashing is not 4-independent.[5]
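This cancellation can be checked directly. The sketch below uses 2-byte keys and 32-bit hash values; the xorshift generator used to fill the tables is a stand-in for a real random source. No matter what values end up in the tables, each table entry appears in exactly two of the four hashes and cancels under xor.

```c
#include <stdint.h>

uint32_t T[2][256];   /* one table per byte position, 32-bit hash values */

static uint64_t rng_state = 88172645463325252ULL;
static uint32_t rnd(void) {   /* xorshift64, stand-in for a real RNG */
    rng_state ^= rng_state << 13;
    rng_state ^= rng_state >> 7;
    rng_state ^= rng_state << 17;
    return (uint32_t)rng_state;
}

void init_tables(void) {
    for (int i = 0; i < 2; i++)
        for (int j = 0; j < 256; j++)
            T[i][j] = rnd();
}

/* Simple tabulation on a 2-byte key. */
uint32_t hash16(uint16_t x) {
    return T[0][x & 0xFF] ^ T[1][x >> 8];
}
```

For the four keys 0x0000, 0x0001, 0x0100, and 0x0101, every byte value in every position occurs in exactly two of the keys, so hash16(0x0000) ^ hash16(0x0001) ^ hash16(0x0100) ^ hash16(0x0101) is always 0, however the tables were filled, which is exactly the dependence the text describes.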
Because tabulation hashing is a universal hashing scheme, it can be used in any hashing-based algorithm in which universality is sufficient. For instance, in hash chaining, the expected time per operation is proportional to the sum of collision probabilities, which is the same for any universal scheme as it would be for truly random hash functions, and is constant whenever the load factor of the hash table is constant. Therefore, tabulation hashing can be used to compute hash functions for hash chaining with a theoretical guarantee of constant expected time per operation.[6]
However, universal hashing is not strong enough to guarantee the performance of some other hashing algorithms. For instance, for linear probing, 5-independent hash functions are strong enough to guarantee constant time operation, but there are 4-independent hash functions that fail.[7] Nevertheless, despite only being 3-independent, tabulation hashing provides the same constant-time guarantee for linear probing.[8]
Cuckoo hashing, another technique for implementing hash tables, guarantees constant time per lookup (regardless of the hash function). Insertions into a cuckoo hash table may fail, causing the entire table to be rebuilt, but such failures are sufficiently unlikely that the expected time per insertion (using either a truly random hash function or a hash function with logarithmic independence) is constant. With tabulation hashing, on the other hand, the best bound known on the failure probability is higher, high enough that insertions cannot be guaranteed to take constant expected time. Nevertheless, tabulation hashing is adequate to ensure the linear-expected-time construction of a cuckoo hash table for a static set of keys that does not change as the table is used.[8]
Although tabulation hashing as described above ("simple tabulation hashing") is only 3-independent, variations of this method can be used to obtain hash functions with much higher degrees of independence. Siegel (2004) uses the same idea of using exclusive or operations to combine random values from a table, with a more complicated algorithm based on expander graphs for transforming the key bits into table indices, to define hashing schemes that are k-independent for any constant or even logarithmic value of k. However, the number of table lookups needed to compute each hash value using Siegel's variation of tabulation hashing, while constant, is still too large to be practical, and the use of expanders in Siegel's technique also makes it not fully constructive. Thorup (2013) provides a scheme based on tabulation hashing that reaches high degrees of independence more quickly, in a more constructive way. He observes that using one round of simple tabulation hashing to expand the input keys to six times their original length, and then a second round of simple tabulation hashing on the expanded keys, results in a hashing scheme whose independence number is exponential in the parameter r, the number of bits per block in the partition of the keys into blocks.
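The two-round structure of Thorup's double tabulation can be sketched as follows. This is a minimal illustration only: the key size (32 bits), block size (r = 8), output size, and the xorshift filler are all arbitrary choices for the example, and the tables would need to be far larger in a setting where the independence guarantees matter.

```c
#include <stdint.h>

/* Round 1: simple tabulation whose output is 6 times the key length.
   Each of the 4 input bytes indexes a table of 192-bit values
   (stored as six 32-bit words), so a 32-bit key expands to 24 bytes. */
uint32_t T1[4][256][6];
/* Round 2: plain simple tabulation over the 24 bytes of the expanded key. */
uint32_t T2[24][256];

static uint64_t s = 0x9E3779B97F4A7C15ULL;
static uint32_t rnd(void) { s ^= s << 13; s ^= s >> 7; s ^= s << 17; return (uint32_t)s; }

void dt_init(void) {
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 256; j++)
            for (int k = 0; k < 6; k++) T1[i][j][k] = rnd();
    for (int i = 0; i < 24; i++)
        for (int j = 0; j < 256; j++) T2[i][j] = rnd();
}

uint32_t double_tab_hash(uint32_t x) {
    uint32_t expanded[6] = {0};
    /* Round 1: expand the key to six words by simple tabulation. */
    for (int i = 0; i < 4; i++)
        for (int k = 0; k < 6; k++)
            expanded[k] ^= T1[i][(x >> 8 * i) & 0xFF][k];
    /* Round 2: simple tabulation over each byte of the expanded key. */
    uint32_t h = 0;
    for (int i = 0; i < 24; i++)
        h ^= T2[i][(expanded[i / 4] >> 8 * (i % 4)) & 0xFF];
    return h;
}
```

Each round on its own is just simple tabulation; the higher independence comes from hashing the expanded key rather than the original one.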
Simple tabulation is limited to keys of a fixed length, because a different table of random values needs to be initialized for each position of a block in the keys. Lemire (2012) studies variations of tabulation hashing suitable for variable-length keys such as character strings. The general type of hashing scheme studied by Lemire uses a single table T indexed by the value of a block, regardless of its position within the key. However, the values from this table may be combined by a more complicated function than bitwise exclusive or. Lemire shows that no scheme of this type can be 3-independent. Nevertheless, he shows that it is still possible to achieve 2-independence. In particular, a tabulation scheme that interprets the values T[xi] (where xi is, as before, the ith block of the input) as the coefficients of a polynomial over a finite field and then takes the remainder of the resulting polynomial modulo another polynomial, gives a 2-independent hash function.
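A scheme in this spirit can be sketched as follows. This is an illustrative adaptation, not Lemire's exact construction: a single byte-indexed table is used for every position, the table value for each byte is folded in as a coefficient block of a polynomial over GF(2), and the running value is reduced modulo x^64 + x^4 + x^3 + x + 1 (a commonly used degree-64 polynomial; its irreducibility, the xorshift filler, and all sizes here are assumptions of the example).

```c
#include <stdint.h>
#include <stddef.h>

uint64_t T[256];   /* one table, shared by all block positions */

static uint64_t st = 0x123456789ABCDEFULL;
static uint64_t rnd64(void) { st ^= st << 13; st ^= st >> 7; st ^= st << 17; return st; }

void poly_init(void) {
    for (int j = 0; j < 256; j++) T[j] = rnd64();
}

/* Multiply the running polynomial by x^8, reducing modulo
   x^64 + x^4 + x^3 + x + 1 (low terms encoded as the constant 0x1B),
   one bit at a time. */
static uint64_t shift_reduce(uint64_t h) {
    for (int b = 0; b < 8; b++)
        h = (h << 1) ^ ((h >> 63) ? 0x1BULL : 0);
    return h;
}

/* Variable-length hash: because each block's table value is multiplied by a
   different power of x before reduction, one shared table still distinguishes
   the same blocks in different orders. */
uint64_t poly_hash(const unsigned char *key, size_t len) {
    uint64_t h = 0;
    for (size_t i = 0; i < len; i++)
        h = shift_reduce(h) ^ T[key[i]];
    return h;
}
```

The position-dependence that simple tabulation gets from separate tables is recovered here from the polynomial structure: swapping two unequal blocks changes which powers of x their table values are attached to, so the hash changes even though both blocks index the same table.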
Mixed tabulation hashing (and the less general twisted tabulation) were introduced by Dahlgaard and Thorup[9] as a way to strengthen the properties of tabulation hashing while keeping nearly the same performance. Mixed tabulation can be seen as xor'ing a double tabulation hash function, in the sense of Thorup (2013), with a simple tabulation hash function. This turns out to have many nice properties even when the parameters are chosen to make mixed tabulation much faster than double tabulation.[10]
The idea is to pick a number d and hash to q + dr bits rather than just q. This gives d new "derived characters", which are hashed by a second hash function, and the two values are xor'ed together. Formally, we have h1 : [2^p] → [2^q] × [2^dr] and h2 : [2^dr] → [2^q], both simple tabulation functions. If h1(x) = (v1, v2), then the mixed tabulation hash is defined to be h(x) = v1 ⊕ h2(v2).
The following example shows the algorithm with q = p = 64, r = 8, and d = 2:
const int D = 2;        // number of derived characters
uint128_t T1[8][256];   // first round: q + D*r = 80 bits of output, stored in 128
uint64_t T2[D][256];    // second round: hashes the derived characters

// Fill tables with random values
for (int j = 0; j < 256; j++) {
    for (int i = 0; i < 8; i++)
        T1[i][j] = getRandomUInt128();
    for (int i = 0; i < D; i++)
        T2[i][j] = getRandomUInt64();
}

// Compute mixed tabulation hash of x with D derived characters
uint64_t hash(uint64_t x) {
    uint128_t v1v2 = 0;
    for (int i = 0; i < 8; i++)
        v1v2 ^= T1[i][(uint8_t)(x >> 8 * i)];
    uint64_t v2 = v1v2 >> 64;      // derived characters, from the high bits
    uint64_t h = (uint64_t)v1v2;   // v1, the q-bit value, from the low bits
    for (int i = 0; i < D; i++)
        h ^= T2[i][(uint8_t)(v2 >> 8 * i)];
    return h;
}
Mixed tabulation was shown in 2016[11] to have strong concentration with regard to k-partitions, which are useful in algorithms for counting distinct elements, such as the classical method by Flajolet and Martin.