The Hamming weight of a string is the number of symbols that are different from the zero-symbol of the alphabet used. It is thus equivalent to the Hamming distance from the all-zero string of the same length. For the most typical case, a given set of bits, this is the number of bits set to 1, or the digit sum of the binary representation of a given number and the ℓ₁ norm of a bit vector. In this binary case, it is also called the population count,[1] popcount, sideways sum,[2] or bit summation.[3]
| String | Hamming weight |
|---|---|
| 11101 | 4 |
| 11101000 | 4 |
| 00000000 | 0 |
| 678012340567 | 10 |

The Hamming weight is named after the American mathematician Richard Hamming, although he did not originate the notion.[5] The Hamming weight of binary numbers was already used in 1899 by James W. L. Glaisher to give a formula for the number of odd binomial coefficients in a single row of Pascal's triangle.[6] Irving S. Reed introduced a concept, equivalent to Hamming weight in the binary case, in 1954.[7]
Hamming weight is used in several disciplines including information theory, coding theory, and cryptography. Examples of applications of the Hamming weight include:
The population count of a bit string is often needed in cryptography and other applications. The Hamming distance of two words A and B can be calculated as the Hamming weight of A xor B.[1]
The problem of how to implement it efficiently has been widely studied. A single operation for the calculation, or parallel operations on bit vectors are available on some processors. For processors lacking those features, the best solutions known are based on adding counts in a tree pattern. For example, to count the number of 1 bits in the 16-bit binary number a = 0110 1100 1011 1010, these operations can be done:
| Expression | Binary | Decimal | Comment |
|---|---|---|---|
| a | 01 10 11 00 10 11 10 10 | 27834 | The original number |
| b0 = (a >> 0) & 01 01 01 01 01 01 01 01 | 01 00 01 00 00 01 00 00 | 1, 0, 1, 0, 0, 1, 0, 0 | Every other bit from a |
| b1 = (a >> 1) & 01 01 01 01 01 01 01 01 | 00 01 01 00 01 01 01 01 | 0, 1, 1, 0, 1, 1, 1, 1 | The remaining bits from a |
| c = b0 + b1 | 01 01 10 00 01 10 01 01 | 1, 1, 2, 0, 1, 2, 1, 1 | Count of 1s in each 2-bit slice of a |
| d0 = (c >> 0) & 0011 0011 0011 0011 | 0001 0000 0010 0001 | 1, 0, 2, 1 | Every other count from c |
| d2 = (c >> 2) & 0011 0011 0011 0011 | 0001 0010 0001 0001 | 1, 2, 1, 1 | The remaining counts from c |
| e = d0 + d2 | 0010 0010 0011 0010 | 2, 2, 3, 2 | Count of 1s in each 4-bit slice of a |
| f0 = (e >> 0) & 00001111 00001111 | 00000010 00000010 | 2, 2 | Every other count from e |
| f4 = (e >> 4) & 00001111 00001111 | 00000010 00000011 | 2, 3 | The remaining counts from e |
| g = f0 + f4 | 00000100 00000101 | 4, 5 | Count of 1s in each 8-bit slice of a |
| h0 = (g >> 0) & 0000000011111111 | 0000000000000101 | 5 | Every other count from g |
| h8 = (g >> 8) & 0000000011111111 | 0000000000000100 | 4 | The remaining counts from g |
| i = h0 + h8 | 0000000000001001 | 9 | Count of 1s in entire 16-bit word |
Here, the operations are as in the C programming language, so X >> Y means to shift X right by Y bits, X & Y means the bitwise AND of X and Y, and + is ordinary addition. The best algorithms known for this problem are based on the concept illustrated above and are given here:[1]
```c
//types and constants used in the functions below
//uint64_t is an unsigned 64-bit integer variable type (defined in C99 version of C language)
const uint64_t m1  = 0x5555555555555555; //binary: 0101...
const uint64_t m2  = 0x3333333333333333; //binary: 00110011..
const uint64_t m4  = 0x0f0f0f0f0f0f0f0f; //binary: 4 zeros, 4 ones ...
const uint64_t m8  = 0x00ff00ff00ff00ff; //binary: 8 zeros, 8 ones ...
const uint64_t m16 = 0x0000ffff0000ffff; //binary: 16 zeros, 16 ones ...
const uint64_t m32 = 0x00000000ffffffff; //binary: 32 zeros, 32 ones
const uint64_t h01 = 0x0101010101010101; //the sum of 256 to the power of 0,1,2,3...

//This is a naive implementation, shown for comparison,
//and to help in understanding the better functions.
//This algorithm uses 24 arithmetic operations (shift, add, and).
int popcount64a(uint64_t x)
{
    x = (x & m1 ) + ((x >>  1) & m1 ); //put count of each 2 bits into those 2 bits
    x = (x & m2 ) + ((x >>  2) & m2 ); //put count of each 4 bits into those 4 bits
    x = (x & m4 ) + ((x >>  4) & m4 ); //put count of each 8 bits into those 8 bits
    x = (x & m8 ) + ((x >>  8) & m8 ); //put count of each 16 bits into those 16 bits
    x = (x & m16) + ((x >> 16) & m16); //put count of each 32 bits into those 32 bits
    x = (x & m32) + ((x >> 32) & m32); //put count of each 64 bits into those 64 bits
    return x;
}

//This uses fewer arithmetic operations than any other known
//implementation on machines with slow multiplication.
//This algorithm uses 17 arithmetic operations.
int popcount64b(uint64_t x)
{
    x -= (x >> 1) & m1;             //put count of each 2 bits into those 2 bits
    x = (x & m2) + ((x >> 2) & m2); //put count of each 4 bits into those 4 bits
    x = (x + (x >> 4)) & m4;        //put count of each 8 bits into those 8 bits
    x += x >>  8;                   //put count of each 16 bits into their lowest 8 bits
    x += x >> 16;                   //put count of each 32 bits into their lowest 8 bits
    x += x >> 32;                   //put count of each 64 bits into their lowest 8 bits
    return x & 0x7f;
}

//This uses fewer arithmetic operations than any other known
//implementation on machines with fast multiplication.
//This algorithm uses 12 arithmetic operations, one of which is a multiply.
int popcount64c(uint64_t x)
{
    x -= (x >> 1) & m1;             //put count of each 2 bits into those 2 bits
    x = (x & m2) + ((x >> 2) & m2); //put count of each 4 bits into those 4 bits
    x = (x + (x >> 4)) & m4;        //put count of each 8 bits into those 8 bits
    return (x * h01) >> 56;         //returns left 8 bits of x + (x<<8) + (x<<16) + (x<<24) + ...
}
```
The above implementations have the best worst-case behavior of any known algorithm. However, when a value is expected to have few nonzero bits, it may instead be more efficient to use algorithms that count these bits one at a time. As Wegner described in 1960,[14] the bitwise AND of x with x − 1 differs from x only in zeroing out the least significant nonzero bit: subtracting 1 changes the rightmost string of 0s to 1s, and changes the rightmost 1 to a 0. If x originally had n bits that were 1, then after only n iterations of this operation, x will be reduced to zero. The following implementation is based on this principle.
```c
//This is better when most bits in x are 0
//This algorithm works the same for all data sizes.
//This algorithm uses 3 arithmetic operations and 1 comparison/branch per "1" bit in x.
int popcount64d(uint64_t x)
{
    int count;
    for (count = 0; x; count++)
        x &= x - 1;
    return count;
}
```
Of interest here is the close relationship between popcount, find first set (FFS), and count leading zeros (CLZ).
If greater memory usage is allowed, we can calculate the Hamming weight faster than the above methods. With unlimited memory, we could simply create a large lookup table of the Hamming weight of every 64-bit integer. If we can store a lookup table of the Hamming weight of every 16-bit integer, we can do the following to compute the Hamming weight of every 32-bit integer.
```c
static uint8_t wordbits[65536] = { /* bitcounts of integers 0 through 65535, inclusive */ };

//This algorithm uses 3 arithmetic operations and 2 memory reads.
int popcount32e(uint32_t x)
{
    return wordbits[x & 0xFFFF] + wordbits[x >> 16];
}
```
```c
//Optionally, the wordbits[] table could be filled using this function
void popcount32e_init(void)
{
    uint32_t i;
    uint16_t x;
    int count;
    for (i = 0; i <= 0xFFFF; i++) {
        x = i;
        for (count = 0; x; count++) //borrowed from popcount64d() above
            x &= x - 1;
        wordbits[i] = count;
    }
}
```
A recursive algorithm is given by Donovan & Kernighan:[15]
```c
/* The weight of i can differ from the weight of i / 2
   only in the least significant bit of i */
void popcount32e_init(void)
{
    unsigned i;
    for (i = 1; i < sizeof wordbits / sizeof *wordbits; ++i)
        wordbits[i] = wordbits[i >> 1] + (i & 1);
}
```
Muła et al.[16] have shown that a vectorized version of popcount64b can run faster than dedicated instructions (e.g., popcnt on x64 processors).
The Harley–Seal algorithm[17] is one of the fastest that also only needs integer operations.[18]
In error-correcting coding, the minimum Hamming weight, commonly referred to as the minimum weight wmin of a code, is the weight of the lowest-weight non-zero code word. The weight w of a code word is the number of 1s in the word. For example, the word 11001010 has a weight of 4.
In a linear block code the minimum weight is also the minimum Hamming distance (dmin) and defines the error correction capability of the code. If wmin = n, then dmin = n, and the code can correct up to ⌊(dmin − 1)/2⌋ errors.[19]
Some C compilers provide intrinsic functions that provide bit counting facilities. For example, GCC (since version 3.4 in April 2004) includes a builtin function __builtin_popcount that will use a processor instruction if available or an efficient library implementation otherwise.[20] LLVM-GCC has included this function since version 1.5 in June 2005.[21]
In the C++ Standard Library, the bit-array data structure bitset has a count() method that counts the number of bits that are set. In C++20, a new header <bit> was added, containing functions std::popcount and std::has_single_bit, taking arguments of unsigned integer types.
In Java, the growable bit-array data structure BitSet has a BitSet.cardinality() method that counts the number of bits that are set. In addition, there are Integer.bitCount(int) and Long.bitCount(long) functions to count bits in primitive 32-bit and 64-bit integers, respectively. The arbitrary-precision integer class BigInteger also has a BigInteger.bitCount() method that counts bits.
In Python, the int type has a bit_count() method to count the number of bits set. This functionality was introduced in Python 3.10, released in October 2021.[22]
In Common Lisp, the function logcount, given a non-negative integer, returns the number of 1 bits. (For negative integers it returns the number of 0 bits in two's-complement notation.) In either case the integer can be a BIGNUM.
Starting in GHC 7.4, the Haskell base package has a popCount function available on all types that are instances of the Bits class (available from the Data.Bits module).[23]
MySQL's version of the SQL language provides BIT_COUNT() as a standard function.[24]
Fortran 2008 has the standard, intrinsic, elemental function popcnt, returning the number of nonzero bits within an integer (or integer array).[25]
Some programmable scientific pocket calculators feature special commands to calculate the number of set bits, e.g. #B on the HP-16C.[3]
Free Pascal implements popcnt since version 3.0.[26]
Several processor architectures provide population count in hardware:

- The SPARC architecture defines a POPC instruction,[12][1] but most implementations do not implement it, requiring it to be emulated by the operating system.[27]
- Donald Knuth's model computer MMIX has had a SADD instruction since 1999: SADD a,b,c counts all bits that are 1 in b and 0 in c and writes the result to a.
- Compaq's Alpha 21264A included a population count instruction as part of the count extensions (CIX).
- Analog Devices' Blackfin processors feature an ONES instruction to perform a 32-bit population count.[28]
- AMD introduced a POPCNT instruction as part of the SSE4a extensions in 2007.
- Intel introduced a POPCNT instruction with the SSE4.2 instruction set extension, first available in a Nehalem-based Core i7 processor, released in November 2008.
- The ARM architecture introduced the VCNT instruction as part of the Advanced SIMD (NEON) extensions.
- The RISC-V architecture introduced the CPOP instruction as part of the Bit Manipulation (B) extension.[29]