Abstract

I offer you a new hash function for hash table lookup that is faster and more thorough than the one you are using now. I also give you a way to verify that it is more thorough.

The Hash

Over the past two years I've built a general hash function for hash table lookup. Most of the two dozen old hashes I've replaced have had owners who wouldn't accept a new hash unless it was a plug-in replacement for their old hash, and was demonstrably better than the old hash.

These old hashes defined my requirements:

Without further ado, here's the fastest hash I've been able to design that meets all the requirements. The comments describe how to use it.

typedef  unsigned long  int  ub4;   /* unsigned 4-byte quantities */
typedef  unsigned       char ub1;   /* unsigned 1-byte quantities */

#define hashsize(n) ((ub4)1<<(n))
#define hashmask(n) (hashsize(n)-1)

/*
--------------------------------------------------------------------
mix -- mix 3 32-bit values reversibly.
For every delta with one or two bits set, and the deltas of all three
  high bits or all three low bits, whether the original value of a,b,c
  is almost all zero or is uniformly distributed,
* If mix() is run forward or backward, at least 32 bits in a,b,c
  have at least 1/4 probability of changing.
* If mix() is run forward, every bit of c will change between 1/3 and
  2/3 of the time.  (Well, 22/100 and 78/100 for some 2-bit deltas.)
mix() was built out of 36 single-cycle latency instructions in a
  structure that could support 2x parallelism, like so:
      a -= b;
      a -= c; x = (c>>13);
      b -= c; a ^= x;
      b -= a; x = (a<<8);
      c -= a; b ^= x;
      c -= b; x = (b>>13);
      ...
  Unfortunately, superscalar Pentiums and Sparcs can't take advantage
  of that parallelism.  They've also turned some of those single-cycle
  latency instructions into multi-cycle latency instructions.  Still,
  this is the fastest good hash I could find.  There were about 2^^68
  to choose from.  I only looked at a billion or so.
--------------------------------------------------------------------
*/
#define mix(a,b,c) \
{ \
  a -= b; a -= c; a ^= (c>>13); \
  b -= c; b -= a; b ^= (a<<8); \
  c -= a; c -= b; c ^= (b>>13); \
  a -= b; a -= c; a ^= (c>>12); \
  b -= c; b -= a; b ^= (a<<16); \
  c -= a; c -= b; c ^= (b>>5); \
  a -= b; a -= c; a ^= (c>>3); \
  b -= c; b -= a; b ^= (a<<10); \
  c -= a; c -= b; c ^= (b>>15); \
}

/*
--------------------------------------------------------------------
hash() -- hash a variable-length key into a 32-bit value
  k       : the key (the unaligned variable-length array of bytes)
  len     : the length of the key, counting by bytes
  initval : can be any 4-byte value
Returns a 32-bit value.  Every bit of the key affects every bit of
the return value.  Every 1-bit and 2-bit delta achieves avalanche.
About 6*len+35 instructions.

The best hash table sizes are powers of 2.  There is no need to do
mod a prime (mod is sooo slow!).  If you need less than 32 bits,
use a bitmask.  For example, if you need only 10 bits, do
  h = (h & hashmask(10));
In which case, the hash table should have hashsize(10) elements.

If you are hashing n strings (ub1 **)k, do it like this:
  for (i=0, h=0; i<n; ++i) h = hash( k[i], len[i], h);

By Bob Jenkins, 1996.  bob_jenkins@burtleburtle.net.  You may use this
code any way you wish, private, educational, or commercial.  It's free.

See http://burtleburtle.net/bob/hash/evahash.html
Use for hash table lookup, or anything where one collision in 2^^32 is
acceptable.  Do NOT use for cryptographic purposes.
--------------------------------------------------------------------
*/
ub4 hash( k, length, initval)
register ub1 *k;        /* the key */
register ub4  length;   /* the length of the key */
register ub4  initval;  /* the previous hash, or an arbitrary value */
{
   register ub4 a,b,c,len;

   /* Set up the internal state */
   len = length;
   a = b = 0x9e3779b9;  /* the golden ratio; an arbitrary value */
   c = initval;         /* the previous hash value */

   /*---------------------------------------- handle most of the key */
   while (len >= 12)
   {
      a += (k[0] +((ub4)k[1]<<8) +((ub4)k[2]<<16) +((ub4)k[3]<<24));
      b += (k[4] +((ub4)k[5]<<8) +((ub4)k[6]<<16) +((ub4)k[7]<<24));
      c += (k[8] +((ub4)k[9]<<8) +((ub4)k[10]<<16)+((ub4)k[11]<<24));
      mix(a,b,c);
      k += 12; len -= 12;
   }

   /*------------------------------------- handle the last 11 bytes */
   c += length;
   switch(len)              /* all the case statements fall through */
   {
   case 11: c+=((ub4)k[10]<<24);
   case 10: c+=((ub4)k[9]<<16);
   case 9 : c+=((ub4)k[8]<<8);
      /* the first byte of c is reserved for the length */
   case 8 : b+=((ub4)k[7]<<24);
   case 7 : b+=((ub4)k[6]<<16);
   case 6 : b+=((ub4)k[5]<<8);
   case 5 : b+=k[4];
   case 4 : a+=((ub4)k[3]<<24);
   case 3 : a+=((ub4)k[2]<<16);
   case 2 : a+=((ub4)k[1]<<8);
   case 1 : a+=k[0];
     /* case 0: nothing left to add */
   }
   mix(a,b,c);

   /*-------------------------------------------- report the result */
   return c;
}
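For concreteness, here is a minimal driver (not part of the original code; the key string and the 10-bit table size are arbitrary examples) showing the usage described in the comments. Compile it together with the code above.

#include <stdio.h>
#include <string.h>

int main(void)
{
  char *key = "some key";
  ub4   h   = hash((ub1 *)key, (ub4)strlen(key), (ub4)0);  /* initval of 0 */
  ub4   bucket = h & hashmask(10);   /* for a table with hashsize(10) buckets */
  printf("hash %.8lx falls in bucket %lu of %lu\n", h, bucket, hashsize(10));
  return 0;
}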

Most hashes can be modeled like this:

  initialize(internal state)
  for (each text block)
  {
    combine(internal state, text block);
    mix(internal state);
  }
  return postprocess(internal state);

In the new hash, mix() takes 3n of the 6n+35 instructions needed to hash n bytes. Blocks of text are combined with the internal state (a,b,c) by addition. This combining step is the rest of the hash function, consuming the remaining 3n instructions. The only postprocessing is to choose c out of (a,b,c) to be the result.

Three tricks promote speed:

  1. Mixing is done on three 4-byte registers rather than on a 1-byte quantity.
  2. Combining is done on 12-byte blocks, reducing the loop overhead.
  3. The final switch statement combines a variable-length block with the registers a,b,c without a loop.

The golden ratio really is an arbitrary value. Its purpose is to avoid mapping all zeros to all zeros.
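As a quick check (my snippet, not the article's), the constant 0x9e3779b9 is just the golden ratio scaled to 32 bits:

#include <stdio.h>
#include <math.h>

int main(void)
{
  double phi = (1.0 + sqrt(5.0)) / 2.0;                  /* the golden ratio */
  printf("%lx\n", (unsigned long)(4294967296.0 / phi));  /* prints 9e3779b9 */
  return 0;
}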

The Hash Must Do a Good Job

The most interesting requirement was that the hash must be better than its competition. What does it mean for a hash to be good for hash table lookup?

A good hash function distributes hash values uniformly. If you don't know the keys before choosing the function, the best you can do is map an equal number of possible keys to each hash value. If keys were distributed uniformly, an excellent hash would be to choose the first few bytes of the key and use that as the hash value. Unfortunately, real keys aren't uniformly distributed. Choosing the first few bytes works quite poorly in practice.

The real requirement then is that a good hash function should distribute hash values uniformly for the keys that users actually use.

How do we test that? Let's look at some typical user data. (Since I work at Oracle, I'll use Oracle's standard example: the EMP table.) Is this data uniformly distributed?

The EMP table

EMPNO  ENAME   JOB        MGR   HIREDATE   SAL   COMM  DEPTNO
7369   SMITH   CLERK      7902  17-DEC-80   800         20
7499   ALLEN   SALESMAN   7698  20-FEB-81  1600   300   30
7521   WARD    SALESMAN   7698  22-FEB-81  1250   500   30
7566   JONES   MANAGER    7839  02-APR-81  2975         20
7654   MARTIN  SALESMAN   7898  28-SEP-81  1250  1400   30
7698   BLAKE   MANAGER    7539  01-MAY-81  2850         30
7782   CLARK   MANAGER    7566  09-JUN-81  2450         10
7788   SCOTT   ANALYST    7698  19-APR-87  3000         20
7839   KING    PRESIDENT        17-NOV-81  5000         10
7844   TURNER  SALESMAN   7698  08-SEP-81  1500         30
7876   ADAMS   CLERK      7788  23-MAY-87  1100     0   20
7900   JAMES   CLERK      7698  03-DEC-81   950         30
7902   FORD    ANALYST    7566  03-DEC-81  3000         20
7934   MILLER  CLERK      7782  23-JAN-82  1300         10

Consider each horizontal row to be a key. Some patterns appear.

  1. Keys often differ in only a few bits. For example, all the keys are ASCII, so the high bit of every byte is zero.
  2. Keys often consist of substrings arranged in different orders. For example, the MGR of some keys is the EMPNO of others.
  3. Length matters. The only difference between zero and no value at all may be the length of the value. Also, "aa aaa" and "aaa aa" should hash to different values.
  4. Some keys are mostly zero, with only a few bits set. (That pattern doesn't appear in this example, but it's a common pattern.)

Some patterns are easy to handle. If the length is included in the data being hashed, then lengths are not a problem. If the hash does not treat text blocks commutatively, then substrings are not a problem. Strings that are mostly zeros can be tested by listing all strings with only one bit set and checking if that set of strings produces too many collisions.
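For instance, here is a sketch of that mostly-zero test applied to the hash() given earlier (my test driver, not the article's; the 8-byte key length is an arbitrary choice). It hashes every 8-byte key with exactly one bit set and counts colliding 32-bit results. Compile it together with the hash code above.

#include <stdio.h>
#include <string.h>

int main(void)
{
  ub4 results[64];
  int i, j, collisions = 0;
  for (i = 0; i < 64; ++i)
  {
    ub1 key[8];
    memset(key, 0, sizeof(key));
    key[i/8] = (ub1)(1 << (i%8));          /* exactly one bit set */
    results[i] = hash(key, (ub4)8, (ub4)0);
  }
  for (i = 0; i < 64; ++i)
    for (j = i+1; j < 64; ++j)
      if (results[i] == results[j]) ++collisions;
  printf("%d collisions among the 64 one-bit keys\n", collisions);
  return 0;
}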

The remaining pattern is that keys often differ in only a few bits. If a hash allows small sets of input bits to cancel each other out, and the user keys differ in only those bits, then all keys will map to the same handful of hash values.

A common weakness

Usually, when a small set of input bits cancel each other out, it is because those input bits affect only a smaller set of bits in the internal state.

Consider this hash function:

  for (hash=0, i=0; i<len; ++i)
    hash = ((hash<<5)^(hash>>27))^key[i];
  return (hash % prime);
This function maps the strings "EXXXXXB" and "AXXXXXC" to the same value. These keys differ in bit 3 of the first byte and bit 1 of the seventh byte. After the seventh byte is combined, any further postprocessing will do no good because the internal states are already the same.
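To see the cancellation concretely, here is a small demo (my code, not the article's; it drops the final mod-prime step since the internal states already match, and it assumes a 32-bit unsigned int):

#include <stdio.h>

typedef unsigned int u32;   /* assumed to be 32 bits wide */

/* the weak hash from the example above, without the mod-prime step */
static u32 weak(const char *key, int len)
{
  u32 hash;
  int i;
  for (hash = 0, i = 0; i < len; ++i)
    hash = ((hash << 5) ^ (hash >> 27)) ^ (u32)(unsigned char)key[i];
  return hash;
}

int main(void)
{
  printf("%08x\n", weak("EXXXXXB", 7));   /* both lines print the same value */
  printf("%08x\n", weak("AXXXXXC", 7));
  return 0;
}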

Any time n input bits can only affect m output bits, and n > m, then the 2^n keys that differ in those input bits can only produce 2^m distinct hash values. The same is true if n input bits can only affect m bits of the internal state -- later mixing may make the 2^m results look uniformly distributed, but there will still be only 2^m results.

The function above has many sets of 2 bits that affect only 1 bit of the internal state. If there are n input bits, there are (n choose 2) = (n*n/2 - n/2) pairs of input bits, only a few of which match weaknesses in the function above. It is a common pattern for keys to differ in only a few bits. If those bits match one of a hash's weaknesses, which is a rare but not negligible event, the hash will do extremely badly. In most cases, though, it will do just fine. (This allows a function to slip through sanity checks, like hashing an English dictionary uniformly, while still frequently bombing on user data.)

In hashes built of repeated combine-mix steps, this is what usually causes this weakness:

  1. A small number of bits y of one input block are combined, affecting only y bits of the internal state. So far so good.
  2. The mixing step causes those y bits of the internal state to affect only z bits of the internal state.
  3. The next combining step overwrites those bits with z more input bits, cancelling out the first y input bits.
When z is smaller than the number of bits in the output, then y+z input bits have affected only z bits of the internal state, causing 2^(y+z) possible keys to produce at most 2^z hash values.

The same thing can happen in reverse:

  1. Uncombine this block, causing y block bits to unaffect y bits of the internal state.
  2. Unmix the internal state, leaving x bits unaffected by the y bits from this block.
  3. Unmixing the previous block unaffects those x bits, cancelling out this block's y bits.
If x is less than the number of bits in the output, then the 2^(x+y) keys differing in only those x+y input bits can produce at most 2^x hash values.

(If the mixing function is not a permutation of the internal state, it is not reversible. Instead, it loses information about the earlier blocks every time it is applied, so keys differing only in the first few input blocks are more likely to collide. The mixing function ought to be a permutation.)

It is easy to test whether this weakness exists: if the mixing step causes any bit of the internal state to affect fewer bits of the internal state than there are output bits, the weakness exists. This test should be run on the reverse of the mixing function as well. It can also be run with all sets of 2 internal state bits, or all sets of 3.
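Here is a rough sketch of the forward half of that test applied to the mix() above (my harness, not the author's: the trial count, the rand()-based sampling, and the bit counting are arbitrary choices). Compile it together with the ub4 typedef and mix() macro given earlier, and assume ub4 really is 32 bits wide.

#include <stdio.h>
#include <stdlib.h>

static ub4 rand32(void) { return ((ub4)rand() << 16) ^ (ub4)rand(); }

int main(void)
{
  int bit, trial, i, count;
  for (bit = 0; bit < 96; ++bit)          /* each bit of the 96-bit state */
  {
    ub4 affected[3] = {0, 0, 0};          /* state bits that ever changed */
    for (trial = 0; trial < 10000; ++trial)
    {
      ub4 x[3], y[3];
      for (i = 0; i < 3; ++i) x[i] = y[i] = rand32();
      y[bit/32] ^= (ub4)1 << (bit%32);    /* flip one internal state bit */
      mix(x[0], x[1], x[2]);
      mix(y[0], y[1], y[2]);
      for (i = 0; i < 3; ++i) affected[i] |= x[i] ^ y[i];
    }
    for (count = 0, i = 0; i < 3; ++i)
      for (; affected[i]; affected[i] >>= 1) count += affected[i] & 1;
    printf("state bit %2d can affect %d of the 96 state bits\n", bit, count);
  }
  return 0;
}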

Another way this weakness can happen is if any bit in the final input block does not affect every bit of the output. (The user might choose to use only the unaffected output bit; then that's 1 input bit that affects 0 output bits.)

A Survey of Hash Functions

We now have a new hash function and some theory for evaluating hash functions. Let's see how various hash functions stack up.

Additive Hash

ub4 additive(char *key, ub4 len, ub4 prime)
{
  ub4 hash, i;
  for (hash=len, i=0; i<len; ++i)
    hash += key[i];
  return (hash % prime);
}
This takes 5n+3 instructions. There is no mixing step. The combining step handles one byte at a time. Input bytes commute. The table length must be prime, and can't be much bigger than one byte because the value of variable hash is never much bigger than one byte.

Rotating Hash

ub4 rotating(char *key, ub4 len, ub4 prime)
{
  ub4 hash, i;
  for (hash=len, i=0; i<len; ++i)
    hash = (hash<<4)^(hash>>28)^key[i];
  return (hash % prime);
}
This takes 8n+3 instructions. This is the same as the additive hash, except it has a mixing step (a circular shift by 4) and the combining step is exclusive-or instead of addition. The table size is a prime, but the prime can be any size.

Pearson's Hash

char pearson(char *key, ub4 len, char tab[256])
{
  char hash;
  ub4  i;
  for (hash=len, i=0; i<len; ++i)
    hash=tab[hash^key[i]];
  return (hash);
}
This preinitializes tab[] to an arbitrary permutation of 0..255. It takes 6n+2 instructions, but produces only a 1-byte result. Larger results can be made by running it several times with different initial hash values.
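For example, a 4-byte result could be built like this (a sketch, not the article's code; pearson_init and pearson32 are hypothetical helpers, and the variant simply seeds the state with a different initial value on each pass):

/* a variant of pearson() that takes an explicit initial value */
char pearson_init(char init, char *key, ub4 len, char tab[256])
{
  char hash;
  ub4  i;
  for (hash=init, i=0; i<len; ++i)   /* len could also be folded into init */
    hash = tab[(unsigned char)(hash ^ key[i])];
  return (hash);
}

/* run it four times with different initial values for a 4-byte result */
ub4 pearson32(char *key, ub4 len, char tab[256])
{
  return  (ub4)(unsigned char)pearson_init(0, key, len, tab)
       | ((ub4)(unsigned char)pearson_init(1, key, len, tab) << 8)
       | ((ub4)(unsigned char)pearson_init(2, key, len, tab) << 16)
       | ((ub4)(unsigned char)pearson_init(3, key, len, tab) << 24);
}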

CRC Hashing

Universal Hashing

ub4 universal(char *key, ub4 len, ub4 mask, ub4 tab[MAXBITS])
{
  ub4 hash, i;
  for (hash=len, i=0; i<(len<<3); i+=8)
  {
    register char k = key[i>>3];
    if (k&0x01) hash ^= tab[i+0];
    if (k&0x02) hash ^= tab[i+1];
    if (k&0x04) hash ^= tab[i+2];
    if (k&0x08) hash ^= tab[i+3];
    if (k&0x10) hash ^= tab[i+4];
    if (k&0x20) hash ^= tab[i+5];
    if (k&0x40) hash ^= tab[i+6];
    if (k&0x80) hash ^= tab[i+7];
  }
  return (hash & mask);
}
This takes 52n+3 instructions. The size of tab[] is the maximum number of input bits. Values in tab[] are chosen at random. Universal hashing can be implemented faster by a Zobrist hash with carefully chosen table values.

Zobrist Hashing

ub4 zobrist( char *key, ub4 len, ub4 mask, ub4 tab[MAXBYTES][256])
{
  ub4 hash, i;
  for (hash=len, i=0; i<len; ++i)
    hash ^= tab[i][key[i]];
  return (hash & mask);
}
This takes 10n+3 instructions. The size of tab[][256] is the maximum number of input bytes. Values of tab[][256] are chosen at random. This can implement universal hashing, but is more general than universal hashing.
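To make that concrete, here is one way (my construction, not code from the article; it assumes MAXBYTES*8 <= MAXBITS) to fill a Zobrist table from a universal-hashing table so that zobrist() computes exactly the same values as universal():

/* ztab[i][c] becomes the xor of the universal table entries for the bits
   set in byte value c, so the per-byte xor in zobrist() equals the
   per-bit xors in universal() */
void universal_to_zobrist(ub4 utab[], ub4 ztab[][256], ub4 maxbytes)
{
  ub4 i, c, b;
  for (i = 0; i < maxbytes; ++i)
    for (c = 0; c < 256; ++c)
    {
      ztab[i][c] = 0;
      for (b = 0; b < 8; ++b)
        if (c & ((ub4)1 << b))
          ztab[i][c] ^= utab[(i << 3) + b];
    }
}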

Zobrist hashes are especially favored for chess, checkers, othello, and other situations where you have the hash for one state and you want to compute the hash for a closely related state. You xor to the old hash the table values that you're removing from the state, then xor the table values that you're adding. For chess, for example, that's 2 xors to get the hash for the next position given the hash of the current position.
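For instance (an illustrative sketch, not from the article; the table shape, piece encoding, and function name are my assumptions):

/* ztab[piece][square] holds random ub4 values, one per (piece, square) pair */
ub4 move_piece(ub4 oldhash, ub4 ztab[12][64], int piece, int from, int to)
{
  oldhash ^= ztab[piece][from];   /* xor out the piece on its old square */
  oldhash ^= ztab[piece][to];     /* xor in the piece on its new square */
  return oldhash;                 /* two xors total */
}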

My Hash

This takes 6n+35 instructions.

MD4

This takes 9.5n+230 instructions. MD4 is a hash designed for cryptography by Ron Rivest. It takes 420 instructions to hash a block of 64 aligned bytes. I combined that with my hash's method of putting unaligned bytes into registers, adding 3n instructions. MD4 is overkill for hash table lookup.

The table below compares all these hash functions.

NAME
is the name of the hash.
SIZE-1000
is the smallest reasonable hash table size greater than 1000.
SPEED
is the speed of the hash, measured in instructions required to produce a hash value for a table with SIZE-1000 buckets. It is assumed the machine has a rotate instruction. These aren't very accurate measures ... I should really just do timings on a Pentium 4 or such.
FUNNEL-15
is the largest set of input bits affecting the smallest set of internal state bits when mapping 15-byte keys into a 1-byte result.
FUNNEL-100
is the largest set of input bits affecting the smallest set of internal state bits when mapping 100-byte keys into a 32-bit result.
COLLIDE-32
is the number of collisions found when a dictionary of 38,470 English words was hashed into a 32-bit result. (The expected number of collisions is 0.2: with 38,470 words there are about 7.4*10^8 pairs, and each pair collides with probability 2^-32.)
COLLIDE-1000
is a chi2 measure of how well the hash did at mapping the 38,470-word dictionary into the SIZE-1000 table. (A chi2 measure greater than +3 is significantly worse than a random mapping; less than -3 is significantly better than a random mapping; in between is just random fluctuations.) A sketch of one way to compute such a figure follows the table.
Comparison of several hash functions

NAME           SIZE-1000  SPEED      INLINE  FUNNEL-15   FUNNEL-100   COLLIDE-32  COLLIDE-1000
Additive       1009       5n+3       n+2     15 into 2   100 into 2   37006       +806.02
Rotating       1009       6n+3       2n+2    4 into 1    25 into 1    24          +1.24
One-at-a-Time  1024       9n+9       5n+8    none        none         0           -0.05
Bernstein      1024       7n+3       3n+2    3 into 2    3 into 2     4           +1.69
FNV            1024       ?          ?       ?           ?            ?           ?
Pearson        1024       12n+5      4n+3    none        none         0           +1.65
CRC            1024       9n+3       5n+2    2 into 1    11 into 10   0           +0.07
Generalized    1024       9n+3       5n+2    none        none         0           -1.83
Universal      1024       52n+3      48n+2   4 into 3    50 into 28   0           +0.20
Zobrist        1024       10n+3      6n+2    none        none         1           -0.03
Paul Hsieh's   1024       5n+17      N/A     3 into 2    3 into 2     1           +1.12
My Hash        1024       6n+35      N/A     none        none         0           +0.33
lookup3.c      1024       5n+20      N/A     none        none         0           -0.08
MD4            1024       9.5n+230   N/A     none        none         1           +0.73
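The article does not give the exact formula behind COLLIDE-1000, so the following is only a plausible sketch (the function name and normalization are my assumptions): compute the chi-squared statistic over the bucket counts and express it as a normal deviate, so that a random mapping gives values in roughly the -3 to +3 range described above.

#include <math.h>

/* plausible sketch only: chi-squared over the bucket counts, normalized
   so that a random mapping lands roughly between -3 and +3 */
double chi2_deviate(const unsigned *buckets, unsigned nbuckets, unsigned nkeys)
{
  double expected = (double)nkeys / nbuckets;
  double chi2 = 0.0, df = (double)nbuckets - 1.0;
  unsigned i;
  for (i = 0; i < nbuckets; ++i)
  {
    double diff = (double)buckets[i] - expected;
    chi2 += diff * diff / expected;
  }
  /* chi-squared with df degrees of freedom has mean df and variance 2*df */
  return (chi2 - df) / sqrt(2.0 * df);
}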

Conclusion

A common weakness in hash functions is for a small set of input bits to cancel each other out. There is an efficient test to detect most such weaknesses, and many functions pass this test. I gave code for the fastest such function I could find. Hash functions without this weakness work equally well on all classes of keys.

Testimonials:

