The Jaccard index is a statistic used for gauging the similarity and diversity of sample sets. It is defined, in general, as the ratio of two sizes (areas or volumes): the size of the intersection divided by the size of the union. It is also called intersection over union (IoU).
It was developed by Grove Karl Gilbert in 1884 as his ratio of verification (v)[1] and is now often called the critical success index in meteorology.[2] It was later developed independently by Paul Jaccard, who originally gave it the French name coefficient de communauté (coefficient of community),[3][4] and was formulated again, independently, by Taffee Tadashi Tanimoto.[5] Thus, it is also called the Tanimoto index or Tanimoto coefficient in some fields.
The Jaccard index measures similarity between finite, non-empty sample sets and is defined as the size of the intersection divided by the size of the union of the sample sets:

$J(A,B) = \frac{|A \cap B|}{|A \cup B|} = \frac{|A \cap B|}{|A| + |B| - |A \cap B|}.$
Note that by design, $0 \le J(A,B) \le 1$. If the sets $A$ and $B$ have no elements in common, their intersection is empty, so $|A \cap B| = 0$ and therefore $J(A,B) = 0$. The other extreme is that the two sets are equal; in that case $|A \cap B| = |A \cup B|$, so $J(A,B) = 1$. The Jaccard index is widely used in computer science, ecology, genomics, and other sciences where binary or binarized data are used. Both the exact solution and approximation methods are available for hypothesis testing with the Jaccard index.[6]
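As a concrete illustration, here is a minimal Python sketch of the set definition (the helper name `jaccard_index` is illustrative, not from any particular library):

```python
def jaccard_index(a: set, b: set) -> float:
    """Jaccard index |A ∩ B| / |A ∪ B| for finite sample sets."""
    if not a and not b:
        raise ValueError("undefined for two empty sets")
    return len(a & b) / len(a | b)

# Disjoint sets give 0; identical sets give 1.
print(jaccard_index({1, 2, 3}, {4, 5}))     # 0.0
print(jaccard_index({1, 2, 3}, {1, 2, 3}))  # 1.0
print(jaccard_index({1, 2, 3}, {2, 3, 4}))  # 2/4 = 0.5
```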
Jaccard similarity also applies to bags, i.e., multisets. This has a similar formula,[7] but the symbols used represent bag intersection and bag sum (not union). The maximum value is 1/2.
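A sketch of the bag form, using Python's `collections.Counter` to represent multisets; note that Counter's `&` and `+` operators implement exactly bag intersection and bag sum:

```python
from collections import Counter

def jaccard_bag(a: Counter, b: Counter) -> float:
    """Bag (multiset) Jaccard: bag intersection over bag *sum* (not union)."""
    intersection = sum((a & b).values())  # & takes elementwise minima
    bag_sum = sum((a + b).values())       # + adds multiplicities
    return intersection / bag_sum

# Two identical bags reach the maximum value of 1/2.
print(jaccard_bag(Counter("aab"), Counter("aab")))  # 0.5
print(jaccard_bag(Counter("aab"), Counter("abc")))  # 2/6 ≈ 0.333
```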
The Jaccard distance, which measures dissimilarity between sample sets, is complementary to the Jaccard index and is obtained by subtracting the Jaccard index from 1 or, equivalently, by dividing the difference of the sizes of the union and the intersection of two sets by the size of the union:

$d_J(A,B) = 1 - J(A,B) = \frac{|A \cup B| - |A \cap B|}{|A \cup B|}.$
An alternative interpretation of the Jaccard distance is as the ratio of the size of the symmetric difference $A \triangle B = (A \cup B) - (A \cap B)$ to the union. Jaccard distance is commonly used to calculate an n × n matrix for clustering and multidimensional scaling of n sample sets.
This distance is a metric on the collection of all finite sets.[8][9][10]
There is also a version of the Jaccard distance for measures, including probability measures. If $\mu$ is a measure on a measurable space $X$, then we define the Jaccard index by

$J_\mu(A,B) = \frac{\mu(A \cap B)}{\mu(A \cup B)},$

and the Jaccard distance by

$d_\mu(A,B) = 1 - J_\mu(A,B) = \frac{\mu(A \triangle B)}{\mu(A \cup B)}.$

Care must be taken if $\mu(A \cup B) = 0$ or $\infty$, since these formulas are not well defined in these cases.
The MinHash min-wise independent permutations locality-sensitive hashing scheme may be used to efficiently compute an accurate estimate of the Jaccard similarity index of pairs of sets, where each set is represented by a constant-sized signature derived from the minimum values of a hash function.
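The following simplified sketch conveys the idea. It uses Python's built-in `hash` salted per position as a stand-in for truly min-wise independent permutations, so it is illustrative rather than production-grade:

```python
import random

def minhash_signature(s: set, num_hashes: int = 128, seed: int = 0) -> list:
    """Constant-size signature: the minimum hash of the set under each of
    num_hashes salted hash functions (a stand-in for min-wise independent
    permutations)."""
    rng = random.Random(seed)
    salts = [rng.getrandbits(64) for _ in range(num_hashes)]
    return [min(hash((salt, x)) for x in s) for salt in salts]

def estimate_jaccard(sig_a: list, sig_b: list) -> float:
    """The fraction of positions where the signatures agree estimates J(A, B)."""
    return sum(x == y for x, y in zip(sig_a, sig_b)) / len(sig_a)

a, b = set(range(0, 80)), set(range(40, 120))  # true J = 40/120 = 1/3
print(estimate_jaccard(minhash_signature(a), minhash_signature(b)))  # ≈ 0.33
```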
Given two objects, A and B, each with n binary attributes, the Jaccard index is a useful measure of the overlap that A and B share in their attributes. Each attribute of A and B can be either 0 or 1. The total number of each combination of attributes for both A and B is specified as follows:
|       | B = 0    | B = 1    |
|---|---|---|
| A = 0 | $M_{00}$ | $M_{01}$ |
| A = 1 | $M_{10}$ | $M_{11}$ |

where, for example, $M_{01}$ is the number of attributes that have value 0 in A and value 1 in B.
Each attribute must fall into exactly one of these four categories, meaning that

$M_{11} + M_{01} + M_{10} + M_{00} = n.$
The Jaccard similarity index, $J$, is given as

$J = \frac{M_{11}}{M_{01} + M_{10} + M_{11}}.$
The Jaccard distance, $d_J$, is given as

$d_J = \frac{M_{01} + M_{10}}{M_{01} + M_{10} + M_{11}} = 1 - J.$
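As a brief illustration, here is a minimal Python sketch of this attribute-count formulation (the function name `jaccard_from_attributes` is ours, for illustration only):

```python
def jaccard_from_attributes(a: list, b: list) -> float:
    """J = M11 / (M01 + M10 + M11) over two binary attribute vectors;
    M00 (mutual absences) never enters the formula."""
    m11 = sum(x == 1 and y == 1 for x, y in zip(a, b))
    m10 = sum(x == 1 and y == 0 for x, y in zip(a, b))
    m01 = sum(x == 0 and y == 1 for x, y in zip(a, b))
    return m11 / (m01 + m10 + m11)

print(jaccard_from_attributes([1, 1, 0, 0], [1, 0, 1, 0]))  # 1/3 ≈ 0.333
```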
Statistical inference can be made based on the Jaccard similarity index, and consequently on related metrics.[6] Given two sample sets A and B with n attributes, a statistical test can be conducted to see whether an overlap is statistically significant. The exact solution is available, although computation can be costly as n increases.[6] Estimation methods are available either by approximating a multinomial distribution or by bootstrapping.[6]
When used for binary attributes, the Jaccard index is very similar to the simple matching coefficient (SMC). The main difference is that the SMC has the term $M_{00}$ in its numerator and denominator, whereas the Jaccard index does not. Thus, the SMC counts both mutual presences (when an attribute is present in both sets) and mutual absences (when an attribute is absent in both sets) as matches and compares them to the total number of attributes in the universe, whereas the Jaccard index only counts mutual presence as a match and compares it to the number of attributes that have been chosen by at least one of the two sets.
In market basket analysis, for example, the baskets of two consumers whom we wish to compare might contain only a small fraction of all the available products in the store, so the SMC will usually return very high similarity values even when the baskets bear very little resemblance, making the Jaccard index a more appropriate measure of similarity in that context. For example, consider a supermarket with 1000 products and two customers. The basket of the first customer contains salt and pepper, and the basket of the second contains salt and sugar. In this scenario, the similarity between the two baskets as measured by the Jaccard index would be 1/3, but the similarity becomes 0.998 using the SMC.
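The supermarket example can be checked directly; this short sketch (with illustrative helper names) reproduces the two figures:

```python
def smc(a: list, b: list) -> float:
    """Simple matching coefficient: (M11 + M00) / n."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def jaccard(a: list, b: list) -> float:
    """Jaccard index on binary vectors: M11 / (M11 + M10 + M01)."""
    m11 = sum(x and y for x, y in zip(a, b))
    m_any = sum(x or y for x, y in zip(a, b))
    return m11 / m_any

# 1000 products; baskets {salt, pepper} vs {salt, sugar} as binary vectors.
n = 1000
basket1 = [1, 1, 0] + [0] * (n - 3)  # salt, pepper
basket2 = [1, 0, 1] + [0] * (n - 3)  # salt, sugar
print(f"Jaccard: {jaccard(basket1, basket2):.3f}")  # 0.333
print(f"SMC:     {smc(basket1, basket2):.3f}")      # 0.998
```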
In other contexts, where 0 and 1 carry equivalent information (symmetry), the SMC is a better measure of similarity. For example, vectors of demographic variables stored in dummy variables, such as gender, would be better compared with the SMC than with the Jaccard index, since the impact of gender on similarity should be equal regardless of whether male is defined as 0 and female as 1 or the other way around. However, with symmetric dummy variables, one can replicate the behaviour of the SMC by splitting each dummy into two binary attributes (in this case, male and female), thus transforming them into asymmetric attributes and allowing the use of the Jaccard index without introducing any bias. The SMC remains, however, more computationally efficient in the case of symmetric dummy variables, since it does not require adding extra dimensions.
If $\mathbf{x} = (x_1, x_2, \ldots, x_n)$ and $\mathbf{y} = (y_1, y_2, \ldots, y_n)$ are two vectors with all $x_i, y_i \ge 0$ real, then their Jaccard similarity index (also known then as Ruzicka similarity[citation needed]) is defined as

$J_\mathcal{W}(\mathbf{x}, \mathbf{y}) = \frac{\sum_i \min(x_i, y_i)}{\sum_i \max(x_i, y_i)},$

and the Jaccard distance (also known then as Soergel distance) as

$d_{J\mathcal{W}}(\mathbf{x}, \mathbf{y}) = 1 - J_\mathcal{W}(\mathbf{x}, \mathbf{y}).$
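A minimal sketch of the weighted (Ruzicka) similarity and Soergel distance, assuming plain Python lists of non-negative numbers:

```python
def weighted_jaccard(x: list, y: list) -> float:
    """Ruzicka similarity: sum of elementwise minima over sum of maxima,
    for vectors with non-negative real components."""
    return (sum(min(a, b) for a, b in zip(x, y))
            / sum(max(a, b) for a, b in zip(x, y)))

def soergel_distance(x: list, y: list) -> float:
    return 1.0 - weighted_jaccard(x, y)

# On 0/1 vectors this reduces to the ordinary Jaccard index.
print(weighted_jaccard([1, 1, 0], [1, 0, 1]))    # 1/3 ≈ 0.333
print(weighted_jaccard([0.5, 2.0], [1.0, 1.0]))  # (0.5+1.0)/(1.0+2.0) = 0.5
```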
With even more generality, if $f$ and $g$ are two non-negative measurable functions on a measurable space $X$ with measure $\mu$, then we can define

$J(f,g) = \frac{\int \min(f,g)\,d\mu}{\int \max(f,g)\,d\mu},$

where $\min$ and $\max$ are pointwise operators. Then the Jaccard distance is

$d_J(f,g) = 1 - J(f,g).$

Then, for example, for two measurable sets $A, B \subseteq X$, we have $J_\mu(A,B) = J(\chi_A, \chi_B)$, where $\chi_A$ and $\chi_B$ are the characteristic functions of the corresponding sets.
The weighted Jaccard similarity described above generalizes the Jaccard index to positive vectors, where a set corresponds to a binary vector given by the indicator function, i.e. $x_i = \mathbf{1}_{i \in X}$. However, it does not generalize the Jaccard index to probability distributions, where a set corresponds to a uniform probability distribution, i.e. $x_i = \mathbf{1}_{i \in X}/|X|$. It is always less if the sets differ in size: if $|X| \le |Y|$, with $x_i = \mathbf{1}_{i \in X}/|X|$ and $y_i = \mathbf{1}_{i \in Y}/|Y|$, then

$J_\mathcal{W}(\mathbf{x}, \mathbf{y}) = \frac{|X \cap Y|}{2|Y| - |X \cap Y|} \le \frac{|X \cap Y|}{|X \cup Y|} = J(X, Y),$

with equality only when $|X| = |Y|$.

Instead, a generalization that is continuous between probability distributions and their corresponding support sets is

$J_\mathcal{P}(x,y) = \sum_{i:\, x_i > 0,\, y_i > 0} \frac{1}{\sum_j \max\!\left(\frac{x_j}{x_i}, \frac{y_j}{y_i}\right)},$

which is called the "Probability" Jaccard.[11] It has the following bounds against the weighted Jaccard on probability vectors:

$J_\mathcal{W}(x,y) \le J_\mathcal{P}(x,y) \le \frac{2\,J_\mathcal{W}(x,y)}{1 + J_\mathcal{W}(x,y)}.$

Here the upper bound is the (weighted) Sørensen–Dice coefficient. The corresponding distance, $D_\mathcal{P} = 1 - J_\mathcal{P}$, is a metric over probability distributions, and a pseudo-metric over non-negative vectors.
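A direct, unoptimized sketch of $J_\mathcal{P}$ for discrete distributions given as probability vectors (illustrative only):

```python
def probability_jaccard(x: list, y: list) -> float:
    """Probability Jaccard J_P over two discrete distributions given as
    equal-length probability vectors (each summing to 1)."""
    total = 0.0
    for xi, yi in zip(x, y):
        if xi > 0 and yi > 0:  # sum only over the shared support
            total += 1.0 / sum(max(xj / xi, yj / yi) for xj, yj in zip(x, y))
    return total

# Identical distributions give 1; partially overlapping supports give less.
print(probability_jaccard([0.5, 0.5, 0.0], [0.5, 0.5, 0.0]))  # 1.0
print(probability_jaccard([0.5, 0.5, 0.0], [0.0, 0.5, 0.5]))  # 1/3 ≈ 0.333
```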
The Probability Jaccard index has a geometric interpretation as the area of an intersection of simplices. Every point on a unit $k$-simplex corresponds to a probability distribution on $k+1$ elements, because the unit $k$-simplex is the set of points in $k+1$ dimensions that sum to 1. To derive the Probability Jaccard index geometrically, represent a probability distribution as the unit simplex divided into sub-simplices according to the mass of each item. If you overlay two distributions represented in this way on top of each other, and intersect the simplices corresponding to each item, the area that remains is equal to the Probability Jaccard index of the distributions.

Consider the problem of constructing random variables such that they collide with each other as much as possible. That is, if $X \sim x$ and $Y \sim y$, we would like to construct $X$ and $Y$ so as to maximize $\Pr[X = Y]$. If we look at just two distributions $x, y$ in isolation, the highest collision probability we can achieve is $1 - \mathrm{TV}(x, y)$, where $\mathrm{TV}$ is the total variation distance. However, suppose we are concerned not just with maximizing that particular pair, but with maximizing the collision probability of any arbitrary pair. One could construct an infinite number of random variables, one for each distribution $x$, and seek to maximize $\Pr[X = Y]$ for all pairs. In a fairly strong sense described below, the Probability Jaccard index is an optimal way to align these random variables.
For any sampling method $G$ and discrete distributions $x$ and $y$: if $\Pr[G(x) = G(y)] > J_\mathcal{P}(x, y)$, then for some $z$ with $J_\mathcal{P}(x, z) > J_\mathcal{P}(x, y)$ and $J_\mathcal{P}(y, z) > J_\mathcal{P}(x, y)$, either $\Pr[G(x) = G(z)] < J_\mathcal{P}(x, z)$ or $\Pr[G(y) = G(z)] < J_\mathcal{P}(y, z)$.[11]
That is, no sampling method can achieve more collisions than $J_\mathcal{P}$ on one pair without achieving fewer collisions than $J_\mathcal{P}$ on another pair, where the reduced pair is more similar under $J_\mathcal{P}$ than the increased pair. This theorem is true for the Jaccard index of sets (if interpreted as uniform distributions) and for the Probability Jaccard, but not for the weighted Jaccard. (The theorem uses the word "sampling method" to describe a joint distribution over all distributions on a space, because it derives from the use of weighted minhashing algorithms that achieve this as their collision probability.)
This theorem has a visual proof on three-element distributions using the simplex representation.
Various forms of functions described as Tanimoto similarity and Tanimoto distance occur in the literature and on the Internet. Most of these are synonyms for Jaccard similarity and Jaccard distance, but some are mathematically different. Many sources[12] cite an IBM Technical Report[5] as the seminal reference.
In "A Computer Program for Classifying Plants", published in October 1960,[13] a method of classification based on a similarity ratio, and a derived distance function, is given. It seems that this is the most authoritative source for the meaning of the terms "Tanimoto similarity" and "Tanimoto distance". The similarity ratio is equivalent to Jaccard similarity, but the distance function is not the same as Jaccard distance.
In that paper, a "similarity ratio" is given over bitmaps, where each bit of a fixed-size array represents the presence or absence of a characteristic in the plant being modelled. The definition of the ratio is the number of common bits, divided by the number of bits set (i.e. nonzero) in either sample.
Presented in mathematical terms, if samples $X$ and $Y$ are bitmaps, $X_i$ is the $i$th bit of $X$, and $\land$, $\lor$ are bitwise and, or operators respectively, then the similarity ratio is

$T_s(X,Y) = \frac{\sum_i (X_i \land Y_i)}{\sum_i (X_i \lor Y_i)}.$
If each sample is modelled instead as a set of attributes, this value is equal to the Jaccard index of the two sets. Jaccard is not cited in the paper, and it seems likely that the authors were not aware of it.[citation needed]
Tanimoto goes on to define a "distance" based on this ratio, defined for bitmaps with non-zero similarity:

$T_d(X,Y) = -\log_2\!\big(T_s(X,Y)\big).$
This coefficient is, deliberately, not a distance metric. It is chosen to allow the possibility of two specimens, which are quite different from each other, to both be similar to a third. It is easy to construct an example which disproves the property of triangle inequality; one such example is verified in the sketch below.
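In this counterexample (set-based, with illustrative names), A and C are both similar to B, yet $T_d(A, C)$ exceeds $T_d(A, B) + T_d(B, C)$:

```python
from math import log2

def tanimoto_similarity(a: set, b: set) -> float:
    return len(a & b) / len(a | b)

def tanimoto_distance(a: set, b: set) -> float:
    """Tanimoto's -log2 similarity 'distance'; defined only for pairs
    with non-zero similarity."""
    return -log2(tanimoto_similarity(a, b))

# A and C are quite different from each other but both similar to B.
A, B, C = {1, 2, 10}, {1, 2, 3, 4, 10}, {3, 4, 10}
print(tanimoto_distance(A, B) + tanimoto_distance(B, C))  # ≈ 1.47
print(tanimoto_distance(A, C))                            # ≈ 2.32 > 1.47
```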
Tanimoto distance is often referred to, erroneously, as a synonym for Jaccard distance $1 - T_s$. This function is a proper distance metric. "Tanimoto distance" is often stated as being a proper distance metric, probably because of its confusion with Jaccard distance.[clarification needed][citation needed]
If Jaccard or Tanimoto similarity is expressed over a bit vector, then it can be written as

$f(A,B) = \frac{A \cdot B}{\|A\|^2 + \|B\|^2 - A \cdot B},$

where the same calculation is expressed in terms of vector scalar product and magnitude. This representation relies on the fact that, for a bit vector (where the value of each dimension is either 0 or 1),

$A \cdot B = \sum_i A_i B_i = \sum_i (A_i \land B_i)$

and

$\|A\|^2 = \sum_i A_i^2 = \sum_i A_i.$
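A short sketch of this vector formulation using NumPy dot products on 0/1 vectors:

```python
import numpy as np

def tanimoto_bits(a: np.ndarray, b: np.ndarray) -> float:
    """Jaccard/Tanimoto similarity of two 0/1 vectors via dot products:
    A·B / (|A|^2 + |B|^2 - A·B)."""
    ab = a @ b
    return ab / (a @ a + b @ b - ab)

a = np.array([1, 1, 0, 1])
b = np.array([0, 1, 1, 1])
print(tanimoto_bits(a, b))  # 2 / (3 + 3 - 2) = 0.5
```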
This is a potentially confusing representation, because the function $f$ as expressed over vectors is more general, unless its domain is explicitly restricted. Properties of $T_s$ do not necessarily extend to $f$. In particular, the difference function $1 - f$ does not preserve the triangle inequality, and is therefore not a proper distance metric, whereas $1 - T_s$ is.
There is a real danger that the combination of "Tanimoto distance" being defined using this formula, along with the statement "Tanimoto distance is a proper distance metric", will lead to the false conclusion that the function $1 - f$ is in fact a distance metric over vectors or multisets in general, whereas its use in similarity search or clustering algorithms may then fail to produce correct results.
Lipkus[9] uses a definition of Tanimoto similarity which is equivalent to $f$, and refers to Tanimoto distance as the function $1 - f$. It is, however, made clear within the paper that the context is restricted by the use of a (positive) weighting vector $W$ such that, for any vector $A$ being considered, $A_i \in \{0, W_i\}$. Under these circumstances, the function is a proper distance metric, and so a set of vectors governed by such a weighting vector forms a metric space under this function.
In confusion matrices employed for binary classification, the Jaccard index can be framed in the following formula:

$J = \frac{TP}{TP + FP + FN},$

where TP is the number of true positives, FP the number of false positives, and FN the number of false negatives.[14]
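For illustration, the formula translates directly into code (the helper name is ours):

```python
def jaccard_from_confusion(tp: int, fp: int, fn: int) -> float:
    """Jaccard index of predicted vs. actual positives: TP / (TP + FP + FN).
    True negatives do not enter the formula."""
    return tp / (tp + fp + fn)

print(jaccard_from_confusion(tp=40, fp=10, fn=10))  # 40/60 ≈ 0.667
```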