Canopy clustering algorithm

From Wikipedia, the free encyclopedia

The canopy clustering algorithm is an unsupervised pre-clustering algorithm introduced by Andrew McCallum, Kamal Nigam and Lyle Ungar in 2000.[1] It is often used as a preprocessing step for the K-means algorithm or the hierarchical clustering algorithm. It is intended to speed up clustering operations on large data sets, where using another algorithm directly may be impractical due to the size of the data set.

Description


The algorithm proceeds as follows, using two thresholds T₁ (the loose distance) and T₂ (the tight distance), where T₁ > T₂.[1][2]

  1. Begin with the set of data points to be clustered.
  2. Remove a point from the set, beginning a new 'canopy' containing this point.
  3. For each point left in the set, assign it to the new canopy if its distance to the first point of the canopy is less than the loose distance T₁.
  4. If the distance of the point is additionally less than the tight distance T₂, remove it from the original set.
  5. Repeat from step 2 until there are no more data points in the set to cluster.
  6. These relatively cheaply clustered canopies can be sub-clustered using a more expensive but accurate algorithm.

Note that individual data points may be part of several canopies. As an additional speed-up, an approximate and fast distance metric can be used for step 3, while a more accurate and slower distance metric can be used for step 4.
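The steps above can be sketched in Python as follows. This is a minimal illustration, not the authors' implementation; the function and parameter names are invented for the example, and a single distance metric is used for both thresholds rather than the fast/slow pair described above.

```python
import math

def canopy_clustering(points, t1, t2, distance):
    """Group points into possibly overlapping canopies.

    t1: loose distance threshold, t2: tight distance threshold, t1 > t2.
    """
    assert t1 > t2, "the loose threshold must exceed the tight one"
    remaining = list(points)
    canopies = []
    while remaining:
        # Step 2: remove a point and start a new canopy with it.
        center = remaining.pop(0)
        canopy = [center]
        kept = []
        for p in remaining:
            d = distance(center, p)
            # Step 3: within the loose distance -> joins the canopy.
            if d < t1:
                canopy.append(p)
            # Step 4: within the tight distance -> also removed from the set,
            # so it cannot seed or join later canopies.
            if d >= t2:
                kept.append(p)
        remaining = kept
        canopies.append(canopy)
    return canopies

# Example with 2-D points and Euclidean distance.
pts = [(0, 0), (0.5, 0), (0.4, 0.3), (5, 5), (5.2, 4.9), (10, 10)]
canopies = canopy_clustering(pts, t1=2.0, t2=1.0, distance=math.dist)
```

On this toy data the three groups of nearby points end up in three canopies; a point lying between T₂ and T₁ of a canopy center would remain in the set and appear in more than one canopy.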

Applicability


Since the algorithm uses distance functions and requires the specification of distance thresholds, its applicability to high-dimensional data is limited by the curse of dimensionality. Only when a cheap, approximate (low-dimensional) distance function is available will the produced canopies preserve the clusters produced by K-means.

Its benefits include:

  • The number of instances of training data that must be compared at each step is reduced.
  • There is some evidence that the resulting clusters are improved.[3]

References

  1. ^ a b McCallum, A.; Nigam, K.; Ungar, L.H. (2000). "Efficient Clustering of High Dimensional Data Sets with Application to Reference Matching". Proceedings of the Sixth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 169–178. doi:10.1145/347090.347123
  2. ^ "The Canopies Algorithm". courses.cs.washington.edu. Retrieved 2014-09-06.
  3. ^ Mahout description of Canopy-Clustering. Retrieved 2022-07-02.
Retrieved from "https://en.wikipedia.org/w/index.php?title=Canopy_clustering_algorithm&oldid=1244361142"