UPGMA

From Wikipedia, the free encyclopedia
Agglomerative hierarchical clustering method

UPGMA (unweighted pair group method with arithmetic mean) is a simple agglomerative (bottom-up) hierarchical clustering method. It also has a weighted variant, WPGMA, and they are generally attributed to Sokal and Michener.[1]

Note that the term unweighted indicates that all distances contribute equally to each average that is computed; it does not refer to the arithmetic by which the average is obtained. Thus the simple averaging in WPGMA produces a weighted result, and the proportional averaging in UPGMA produces an unweighted result (see the working example).[2]

Algorithm


The UPGMA algorithm constructs a rooted tree (dendrogram) that reflects the structure present in a pairwise similarity matrix (or a dissimilarity matrix). At each step, the nearest two clusters are combined into a higher-level cluster. The distance between any two clusters $\mathcal{A}$ and $\mathcal{B}$, each of size (i.e., cardinality) $|\mathcal{A}|$ and $|\mathcal{B}|$, is taken to be the average of all distances $d(x,y)$ between pairs of objects $x$ in $\mathcal{A}$ and $y$ in $\mathcal{B}$, that is, the mean distance between elements of each cluster:

$$\frac{1}{|\mathcal{A}| \cdot |\mathcal{B}|} \sum_{x \in \mathcal{A}} \sum_{y \in \mathcal{B}} d(x,y)$$

In other words, at each clustering step, the updated distance between the joined clusters $\mathcal{A} \cup \mathcal{B}$ and a new cluster $X$ is given by the proportional averaging of the $d_{\mathcal{A},X}$ and $d_{\mathcal{B},X}$ distances:

$$d_{(\mathcal{A} \cup \mathcal{B}),X} = \frac{|\mathcal{A}| \cdot d_{\mathcal{A},X} + |\mathcal{B}| \cdot d_{\mathcal{B},X}}{|\mathcal{A}| + |\mathcal{B}|}$$
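
This update rule is easy to express as a small function. A minimal Python sketch (the function name and argument names are chosen here for illustration, not taken from any particular library):

    def upgma_update(d_ax, d_bx, size_a, size_b):
        """Distance from the merged cluster A ∪ B to another cluster X,
        obtained by proportional (size-weighted) averaging of d(A,X) and d(B,X)."""
        return (size_a * d_ax + size_b * d_bx) / (size_a + size_b)

    # Example (first matrix update of the worked example below):
    # upgma_update(21, 30, 1, 1) -> 25.5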

The UPGMA algorithm produces rooted dendrograms and requires a constant-rate assumption; that is, it assumes an ultrametric tree in which the distances from the root to every branch tip are equal. When the tips are molecular data (i.e., DNA, RNA and protein) sampled at the same time, the ultrametricity assumption becomes equivalent to assuming a molecular clock.
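
The complete procedure can be sketched in a few dozen lines. The following Python sketch (names and data structures are chosen here for illustration; it is not a reference implementation) repeatedly merges the closest pair of clusters and applies the proportional-averaging update; each merge is recorded together with its node height, i.e., half the inter-cluster distance, so that all tips of the resulting dendrogram are equidistant from the root:

    from itertools import combinations

    def upgma(labels, d0):
        """Naive O(n^3) UPGMA sketch.

        labels : list of leaf names, e.g. ['a', 'b', 'c', 'd', 'e'].
        d0     : dict mapping frozenset({x, y}) of two leaf names to their distance.
        Returns a list of merges (left cluster, right cluster, node height), where
        clusters are tuples of leaf names and the node height is half the
        inter-cluster distance at the time of the merge.
        """
        size = {(x,): 1 for x in labels}                  # cluster -> number of leaves
        dist = {frozenset(((x,), (y,))): d0[frozenset((x, y))]
                for x, y in combinations(labels, 2)}
        merges = []
        while len(size) > 1:
            pair = min(dist, key=dist.get)                # closest pair of clusters
            a, b = tuple(pair)
            merges.append((a, b, dist[pair] / 2))
            ab = a + b                                    # the merged cluster
            size[ab] = size[a] + size[b]
            for x in list(size):                          # proportional-averaging update
                if x in (a, b, ab):
                    continue
                d_ax = dist.pop(frozenset((a, x)))
                d_bx = dist.pop(frozenset((b, x)))
                dist[frozenset((ab, x))] = (size[a] * d_ax + size[b] * d_bx) / size[ab]
            del dist[pair], size[a], size[b]
        return merges

    # Distance matrix of the worked example below:
    pairs = {('a', 'b'): 17, ('a', 'c'): 21, ('a', 'd'): 31, ('a', 'e'): 23,
             ('b', 'c'): 30, ('b', 'd'): 34, ('b', 'e'): 21,
             ('c', 'd'): 28, ('c', 'e'): 39, ('d', 'e'): 43}
    merges = upgma(['a', 'b', 'c', 'd', 'e'], {frozenset(p): v for p, v in pairs.items()})
    # merges -> (a,b) at height 8.5, ((a,b),e) at 11, (c,d) at 14, root at 16.5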

Working example


This working example is based on a JC69 genetic distance matrix computed from the 5S ribosomal RNA sequence alignment of five bacteria: Bacillus subtilis ($a$), Bacillus stearothermophilus ($b$), Lactobacillus viridescens ($c$), Acholeplasma modicum ($d$), and Micrococcus luteus ($e$).[3][4]

First step

  • First clustering

Let us assume that we have five elements $(a,b,c,d,e)$ and the following matrix $D_1$ of pairwise distances between them:

        a     b     c     d     e
  a     0    17    21    31    23
  b    17     0    30    34    21
  c    21    30     0    28    39
  d    31    34    28     0    43
  e    23    21    39    43     0

In this example, $D_1(a,b) = 17$ is the smallest value of $D_1$, so we join elements $a$ and $b$.

  • First branch length estimation

Let $u$ denote the node to which $a$ and $b$ are now connected. Setting $\delta(a,u) = \delta(b,u) = D_1(a,b)/2$ ensures that elements $a$ and $b$ are equidistant from $u$. This corresponds to the expectation of the ultrametricity hypothesis. The branches joining $a$ and $b$ to $u$ then have lengths $\delta(a,u) = \delta(b,u) = 17/2 = 8.5$ (see the final dendrogram).

  • First distance matrix update

We then proceed to update the initial distance matrix $D_1$ into a new distance matrix $D_2$ (see below), reduced in size by one row and one column because of the clustering of $a$ with $b$. The new values in $D_2$ correspond to the new distances, calculated by averaging the distances between each element of the first cluster $(a,b)$ and each of the remaining elements:

$$D_2((a,b),c) = (D_1(a,c) \times 1 + D_1(b,c) \times 1) / (1+1) = (21+30)/2 = 25.5$$

$$D_2((a,b),d) = (D_1(a,d) + D_1(b,d)) / 2 = (31+34)/2 = 32.5$$

$$D_2((a,b),e) = (D_1(a,e) + D_1(b,e)) / 2 = (23+21)/2 = 22$$

The remaining values of $D_2$ are not affected by the matrix update, as they correspond to distances between elements not involved in the first cluster.
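
These updates can be checked with a few lines of plain Python (values copied from $D_1$ above):

    D1 = {('a', 'b'): 17, ('a', 'c'): 21, ('a', 'd'): 31, ('a', 'e'): 23,
          ('b', 'c'): 30, ('b', 'd'): 34, ('b', 'e'): 21,
          ('c', 'd'): 28, ('c', 'e'): 39, ('d', 'e'): 43}

    # (a) and (b) each contain a single element, so the update is a plain average.
    D2_ab_c = (D1[('a', 'c')] + D1[('b', 'c')]) / 2   # (21 + 30) / 2 = 25.5
    D2_ab_d = (D1[('a', 'd')] + D1[('b', 'd')]) / 2   # (31 + 34) / 2 = 32.5
    D2_ab_e = (D1[('a', 'e')] + D1[('b', 'e')]) / 2   # (23 + 21) / 2 = 22.0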

Second step

  • Second clustering

We now reiterate the three previous steps, starting from the new distance matrix $D_2$:

         (a,b)      c      d      e
  (a,b)      0   25.5   32.5     22
  c       25.5      0     28     39
  d       32.5     28      0     43
  e         22     39     43      0

Here, $D_2((a,b),e) = 22$ is the smallest value of $D_2$, so we join cluster $(a,b)$ and element $e$.

  • Second branch length estimation

Let $v$ denote the node to which $(a,b)$ and $e$ are now connected. Because of the ultrametricity constraint, the branches joining $a$ or $b$ to $v$, and $e$ to $v$, are equal and have the following length: $\delta(a,v) = \delta(b,v) = \delta(e,v) = 22/2 = 11$.

We deduce the missing branch length: $\delta(u,v) = \delta(e,v) - \delta(a,u) = \delta(e,v) - \delta(b,u) = 11 - 8.5 = 2.5$ (see the final dendrogram).

  • Second distance matrix update

We then proceed to update $D_2$ into a new distance matrix $D_3$ (see below), reduced in size by one row and one column because of the clustering of $(a,b)$ with $e$. The new values in $D_3$ are calculated by proportional averaging:

$$D_3(((a,b),e),c) = (D_2((a,b),c) \times 2 + D_2(e,c) \times 1) / (2+1) = (25.5 \times 2 + 39 \times 1)/3 = 30$$

Thanks to this proportional average, the calculation of this new distance accounts for the larger size of the $(a,b)$ cluster (two elements) with respect to $e$ (one element). Similarly:

$$D_3(((a,b),e),d) = (D_2((a,b),d) \times 2 + D_2(e,d) \times 1) / (2+1) = (32.5 \times 2 + 43 \times 1)/3 = 36$$

Proportional averaging therefore gives equal weight to the initial distances of matrix $D_1$. This is the reason why the method is unweighted, not with respect to the mathematical procedure but with respect to the initial distances.
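
The contrast with WPGMA can be seen on this very entry: WPGMA would average the two previous cluster distances without size weighting, which weights the initial distances unequally. A small Python comparison (values taken from the matrices above):

    # UPGMA: proportional (size-weighted) average; every initial distance counts once.
    d_upgma = (2 * 25.5 + 1 * 39) / (2 + 1)   # = 30.0, i.e. (21 + 30 + 39) / 3
    # WPGMA: simple average of the two previous cluster distances; the initial
    # distances to a and b each count half as much as the distance to e.
    d_wpgma = (25.5 + 39) / 2                 # = 32.25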

Third step

  • Third clustering

We again reiterate the three previous steps, starting from the updated distance matrix $D_3$.

            ((a,b),e)      c      d
  ((a,b),e)         0     30     36
  c                30      0     28
  d                36     28      0

Here, $D_3(c,d) = 28$ is the smallest value of $D_3$, so we join elements $c$ and $d$.

  • Third branch length estimation

Let $w$ denote the node to which $c$ and $d$ are now connected. The branches joining $c$ and $d$ to $w$ then have lengths $\delta(c,w) = \delta(d,w) = 28/2 = 14$ (see the final dendrogram).

  • Third distance matrix update

There is a single entry to update, keeping in mind that the two elements $c$ and $d$ each have a contribution of $1$ in the average computation:

$$D_4((c,d),((a,b),e)) = (D_3(c,((a,b),e)) \times 1 + D_3(d,((a,b),e)) \times 1) / (1+1) = (30 \times 1 + 36 \times 1)/2 = 33$$

Final step


The final $D_4$ matrix is:

            ((a,b),e)   (c,d)
  ((a,b),e)         0      33
  (c,d)            33       0

So we join clusters $((a,b),e)$ and $(c,d)$.

Let $r$ denote the (root) node to which $((a,b),e)$ and $(c,d)$ are now connected. The branches joining $((a,b),e)$ and $(c,d)$ to $r$ then have lengths:

$$\delta(((a,b),e),r) = \delta((c,d),r) = 33/2 = 16.5$$

We deduce the two remaining branch lengths:

$$\delta(v,r) = \delta(((a,b),e),r) - \delta(e,v) = 16.5 - 11 = 5.5$$

$$\delta(w,r) = \delta((c,d),r) - \delta(c,w) = 16.5 - 14 = 2.5$$

The UPGMA dendrogram


The dendrogram is now complete.[5] It is ultrametric because all tips ($a$ to $e$) are equidistant from $r$:

$$\delta(a,r) = \delta(b,r) = \delta(e,r) = \delta(c,r) = \delta(d,r) = 16.5$$

The dendrogram is therefore rooted by $r$, its deepest node.
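
The same tree can be recovered with a standard library implementation, since average linkage in SciPy is UPGMA. A short sketch (assuming SciPy and NumPy are available; note that SciPy reports the full inter-cluster merge distances, which are twice the node heights $\delta$ used above):

    import numpy as np
    from scipy.cluster.hierarchy import linkage, dendrogram
    from scipy.spatial.distance import squareform

    labels = ['a', 'b', 'c', 'd', 'e']
    D = np.array([[ 0, 17, 21, 31, 23],
                  [17,  0, 30, 34, 21],
                  [21, 30,  0, 28, 39],
                  [31, 34, 28,  0, 43],
                  [23, 21, 39, 43,  0]], dtype=float)

    Z = linkage(squareform(D), method='average')   # 'average' linkage is UPGMA
    print(Z[:, 2])       # merge distances: [17. 22. 28. 33.]
    print(Z[:, 2] / 2)   # node heights: 8.5, 11, 14, 16.5, as in the dendrogram above
    # dendrogram(Z, labels=labels)   # draws the tree (requires matplotlib)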

Comparison with other linkages


Alternative linkage schemes include single linkage clustering, complete linkage clustering, and WPGMA average linkage clustering. Implementing a different linkage is simply a matter of using a different formula to calculate inter-cluster distances during the distance matrix update steps of the above algorithm. Complete linkage clustering avoids a drawback of the alternative single linkage clustering method: the so-called chaining phenomenon, where clusters formed via single linkage clustering may be forced together due to single elements being close to each other, even though many of the elements in each cluster may be very distant from each other. Complete linkage tends to find compact clusters of approximately equal diameters.[6]
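
With a library implementation, changing the linkage is a one-argument change. A sketch using SciPy on the distance matrix of the worked example (the method names are SciPy's; 'weighted' corresponds to WPGMA and 'average' to UPGMA):

    import numpy as np
    from scipy.cluster.hierarchy import linkage
    from scipy.spatial.distance import squareform

    D = squareform(np.array([[ 0, 17, 21, 31, 23],
                             [17,  0, 30, 34, 21],
                             [21, 30,  0, 28, 39],
                             [31, 34, 28,  0, 43],
                             [23, 21, 39, 43,  0]], dtype=float))

    # Only the inter-cluster distance formula differs between these methods.
    for method in ('single', 'complete', 'weighted', 'average'):
        print(method, linkage(D, method=method)[:, 2])   # merge distances per method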

[Figure: comparison of dendrograms obtained from the same distance matrix under different clustering methods: single-linkage clustering, complete-linkage clustering, WPGMA average linkage clustering, and UPGMA average linkage clustering.]

Uses

  • In ecology, it is one of the most popular methods for the classification of sampling units (such as vegetation plots) on the basis of their pairwise similarities in relevant descriptor variables (such as species composition).[7] For example, it has been used to understand the trophic interaction between marine bacteria and protists.[8]
  • In bioinformatics, UPGMA is used for the creation of phenetic trees (phenograms). UPGMA was initially designed for use in protein electrophoresis studies, but is currently most often used to produce guide trees for more sophisticated algorithms. It is, for example, used in sequence alignment procedures, as it proposes an order in which the sequences will be aligned. Indeed, the guide tree aims at grouping the most similar sequences regardless of their evolutionary rate or phylogenetic affinities, and that is exactly the goal of UPGMA.[9]
  • In phylogenetics, UPGMA assumes a constant rate of evolution (molecular clock hypothesis) and that all sequences were sampled at the same time, and is not a well-regarded method for inferring relationships unless this assumption has been tested and justified for the data set being used. Notice that even under a 'strict clock', sequences sampled at different times should not lead to an ultrametric tree.

Time complexity


A trivial implementation of the algorithm to construct the UPGMA tree has $O(n^3)$ time complexity, and using a heap for each cluster to keep its distances from other clusters reduces its time to $O(n^2 \log n)$. Fionn Murtagh presented an $O(n^2)$ time and space algorithm.[10]

References

  1. Sokal RR, Michener CD (1958). "A statistical method for evaluating systematic relationships". University of Kansas Science Bulletin. 38: 1409–1438.
  2. Garcia S, Puigbò P. "DendroUPGMA: A dendrogram construction utility" (PDF). p. 4.
  3. Erdmann VA, Wolters J (1986). "Collection of published 5S, 5.8S and 4.5S ribosomal RNA sequences". Nucleic Acids Research. 14 Suppl (Suppl): r1–59. doi:10.1093/nar/14.suppl.r1. PMC 341310. PMID 2422630.
  4. Olsen GJ (1988). "Phylogenetic analysis using ribosomal RNA". Ribosomes. Methods in Enzymology. Vol. 164. pp. 793–812. doi:10.1016/s0076-6879(88)64084-5. ISBN 978-0-12-182065-7. PMID 3241556.
  5. Swofford DL, Olsen GJ, Waddell PJ, Hillis DM (1996). "Phylogenetic inference". In Hillis DM, Moritz C, Mable BK (eds.). Molecular Systematics, 2nd edition. Sunderland, MA: Sinauer. pp. 407–514. ISBN 9780878932825.
  6. Everitt BS, Landau S, Leese M (2001). Cluster Analysis. 4th edition. London: Arnold. pp. 62–64.
  7. Legendre P, Legendre L (1998). Numerical Ecology. Developments in Environmental Modelling. Vol. 20 (Second English ed.). Amsterdam: Elsevier.
  8. Vázquez-Domínguez E, Casamayor EO, Català P, Lebaron P (April 2005). "Different marine heterotrophic nanoflagellates affect differentially the composition of enriched bacterial communities". Microbial Ecology. 49 (3): 474–85. Bibcode:2005MicEc..49..474V. doi:10.1007/s00248-004-0035-5. JSTOR 25153200. PMID 16003474. S2CID 22300174.
  9. Wheeler TJ, Kececioglu JD (July 2007). "Multiple alignment by aligning alignments". Bioinformatics. 23 (13): i559–68. doi:10.1093/bioinformatics/btm226. PMID 17646343.
  10. Murtagh F (1984). "Complexities of Hierarchic Clustering Algorithms: the state of the art". Computational Statistics Quarterly. 1: 101–113.
