Source: https://en.wikipedia.org/wiki/Cluster_analysis – (most) links to Wikipedia
Cluster analysis or clustering is the task of grouping a set of objects in such a way that objects in the same group (called a cluster) are more similar (in some sense or another) to each other than to those in other groups (clusters). It is a main task of exploratory data mining, and a common technique for statistical data analysis, used in many fields, including machine learning, pattern recognition, image analysis, information retrieval, and bioinformatics.
Cluster analysis itself is not one specific algorithm, but the general task to be solved. It can be achieved by various algorithms that differ significantly in their notion of what constitutes a cluster and how to efficiently find them. Popular notions of clusters include groups with small distances among the cluster members, dense areas of the data space, intervals or particular statistical distributions. Clustering can therefore be formulated as a multiobjective optimization problem. The appropriate clustering algorithm and parameter settings (including values such as the distance function to use, a density threshold or the number of expected clusters) depend on the individual data set and intended use of the results. Cluster analysis as such is not an automatic task, but an iterative process of knowledge discovery or interactive multiobjective optimization that involves trial and failure. It will often be necessary to modify data preprocessing and model parameters until the result achieves the desired properties.
Besides the term clustering, there are a number of terms with similar meanings, including automatic classification, numerical taxonomy and typological analysis. The subtle differences are often in the usage of the results: while in data mining, the resulting groups are the matter of interest, in automatic classification the resulting discriminative power is of interest. This often leads to misunderstandings between researchers coming from the fields of data mining and machine learning, since they use the same terms and often the same algorithms, but have different goals.
Cluster analysis originated in anthropology with Driver and Kroeber in 1932, was introduced to psychology by Zubin in 1938 and Robert Tryon in 1939,^{[1]}^{[2]} and was famously used by Cattell beginning in 1943^{[3]} for trait theory classification in personality psychology.
Clusters and clusterings
According to Vladimir Estivill-Castro, the notion of a “cluster” cannot be precisely defined, which is one of the reasons why there are so many clustering algorithms.^{[4]} There is a common denominator: a group of data objects. However, different researchers employ different cluster models, and for each of these cluster models again different algorithms can be given. The notion of a cluster, as found by different algorithms, varies significantly in its properties. Understanding these “cluster models” is key to understanding the differences between the various algorithms. Typical cluster models include:
 Connectivity models: for example hierarchical clustering builds models based on distance connectivity.
 Centroid models: for example, the k-means algorithm represents each cluster by a single mean vector.
 Distribution models: clusters are modeled using statistical distributions, such as the multivariate normal distributions used by the expectation-maximization algorithm.
 Density models: for example, DBSCAN and OPTICS define clusters as connected dense regions in the data space.
 Subspace models: in biclustering (also known as co-clustering or two-mode clustering), clusters are modeled with both cluster members and relevant attributes.
 Group models: some algorithms do not provide a refined model for their results and just provide the grouping information.
 Graph-based models: a clique, i.e., a subset of nodes in a graph such that every two nodes in the subset are connected by an edge, can be considered as a prototypical form of cluster. Relaxations of the complete connectivity requirement (a fraction of the edges can be missing) are known as quasi-cliques.
A “clustering” is essentially a set of such clusters, usually containing all objects in the data set. Additionally, it may specify the relationship of the clusters to each other, for example a hierarchy of clusters embedded in each other. Clusterings can be roughly distinguished as:
 hard clustering: each object belongs to a cluster or not
 soft clustering (also: fuzzy clustering): each object belongs to each cluster to a certain degree (e.g. a likelihood of belonging to the cluster)
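To make the hard/soft distinction concrete, here is a minimal sketch (assuming NumPy is available; the label values and membership degrees are invented for illustration) of how the two kinds of clusterings are typically represented:

```python
import numpy as np

# Hard clustering: each object carries exactly one cluster label.
hard_labels = np.array([0, 0, 1, 2, 1])

# Soft (fuzzy) clustering: each object has a degree of membership in every
# cluster; each row sums to 1.
soft_memberships = np.array([
    [0.90, 0.08, 0.02],
    [0.85, 0.10, 0.05],
    [0.10, 0.80, 0.10],
    [0.20, 0.15, 0.65],
    [0.05, 0.90, 0.05],
])

# A soft clustering can be "hardened" by assigning each object to its
# most likely cluster.
hardened = soft_memberships.argmax(axis=1)   # -> array([0, 0, 1, 2, 1])
```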
There are also finer distinctions possible, for example:
 strict partitioning clustering: here each object belongs to exactly one cluster
 strict partitioning clustering with outliers: objects can also belong to no cluster, and are considered outliers.
 overlapping clustering (also: alternative clustering, multi-view clustering): while usually a hard clustering, objects may belong to more than one cluster.
 hierarchical clustering: objects that belong to a child cluster also belong to the parent cluster
 subspace clustering: while an overlapping clustering, within a uniquely defined subspace, clusters are not expected to overlap.
Clustering algorithms
Clustering algorithms can be categorized based on their cluster model, as listed above. The following overview will only list the most prominent examples of clustering algorithms, as there are possibly over 100 published clustering algorithms. Not all provide models for their clusters and thus cannot easily be categorized.
There is no objectively “correct” clustering algorithm, but, as has been noted, “clustering is in the eye of the beholder.”^{[4]} The most appropriate clustering algorithm for a particular problem often needs to be chosen experimentally, unless there is a mathematical reason to prefer one cluster model over another. An algorithm that is designed for one kind of model has no chance on a data set that contains a radically different kind of model.^{[4]} For example, k-means cannot find non-convex clusters.^{[4]}
Connectivity-based clustering (hierarchical clustering)
Connectivity-based clustering, also known as hierarchical clustering, is based on the core idea of objects being more related to nearby objects than to objects farther away. These algorithms connect “objects” to form “clusters” based on their distance. A cluster can be described largely by the maximum distance needed to connect parts of the cluster. At different distances, different clusters will form, which can be represented using a dendrogram; this is where the common name “hierarchical clustering” comes from: these algorithms do not provide a single partitioning of the data set, but instead provide an extensive hierarchy of clusters that merge with each other at certain distances. In a dendrogram, the y-axis marks the distance at which the clusters merge, while the objects are placed along the x-axis such that the clusters don’t mix.
Connectivity-based clustering is a whole family of methods that differ by the way distances are computed. Apart from the usual choice of distance functions, the user also needs to decide on the linkage criterion to use (since a cluster consists of multiple objects, there are multiple candidates to compute the distance to). Popular choices are known as single-linkage clustering (the minimum of object distances), complete-linkage clustering (the maximum of object distances) or UPGMA (“Unweighted Pair Group Method with Arithmetic Mean”, also known as average-linkage clustering). Furthermore, hierarchical clustering can be agglomerative (starting with single elements and aggregating them into clusters) or divisive (starting with the complete data set and dividing it into partitions).
These methods will not produce a unique partitioning of the data set, but a hierarchy from which the user still needs to choose appropriate clusters. They are not very robust towards outliers, which will either show up as additional clusters or even cause other clusters to merge (known as the “chaining phenomenon”, in particular with single-linkage clustering). In the general case, the complexity is O(n³), which makes these methods too slow for large data sets. For some special cases, optimal efficient methods (of complexity O(n²)) are known: SLINK^{[5]} for single-linkage and CLINK^{[6]} for complete-linkage clustering. In the data mining community these methods are recognized as a theoretical foundation of cluster analysis, but often considered obsolete. They did however provide inspiration for many later methods such as density-based clustering.
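As a rough sketch of how such a hierarchy is typically computed and then cut into flat clusters (SciPy and the two-blob toy data are assumptions of this example, not part of the methods described above):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Toy data: two well-separated groups of 2-d points.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.3, size=(20, 2)),
               rng.normal(3.0, 0.3, size=(20, 2))])

# Build the full merge hierarchy; method='single' and 'complete' correspond
# to single-linkage and complete-linkage, 'average' to UPGMA.
Z = linkage(X, method='single')

# There is no single "correct" partition: the user cuts the dendrogram,
# e.g. at distance 1.0, to obtain flat cluster labels.
labels = fcluster(Z, t=1.0, criterion='distance')
```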
 Linkage clustering examples
Centroid-based clustering
In centroid-based clustering, clusters are represented by a central vector, which may not necessarily be a member of the data set. When the number of clusters is fixed to k, k-means clustering gives a formal definition as an optimization problem: find the k cluster centers and assign the objects to the nearest cluster center, such that the squared distances from the cluster centers are minimized.
The optimization problem itself is known to be NP-hard, and thus the common approach is to search only for approximate solutions. A particularly well-known approximate method is Lloyd’s algorithm,^{[7]} often actually referred to as the “k-means algorithm”. It does however only find a local optimum, and is commonly run multiple times with different random initializations. Variations of k-means often include such optimizations as choosing the best of multiple runs, but also restricting the centroids to members of the data set (k-medoids), choosing medians (k-medians clustering), choosing the initial centers less randomly (k-means++) or allowing a fuzzy cluster assignment (fuzzy c-means).
Most k-means-type algorithms require the number of clusters, k, to be specified in advance, which is considered to be one of the biggest drawbacks of these algorithms. Furthermore, the algorithms prefer clusters of approximately similar size, as they will always assign an object to the nearest centroid. This often leads to incorrectly cut borders between clusters (which is not surprising, as the algorithm optimizes cluster centers, not cluster borders).
k-means has a number of interesting theoretical properties. First, it partitions the data space into a structure known as a Voronoi diagram. Second, it is conceptually close to nearest neighbor classification, and as such is popular in machine learning. Third, it can be seen as a variation of model-based classification, and Lloyd’s algorithm as a variation of the expectation-maximization algorithm for this model, discussed below.
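A minimal k-means sketch follows, using scikit-learn (an assumed dependency; the synthetic data and all parameter values are illustrative only). It reflects the common practice described above: k-means++ initialization and multiple random restarts, keeping the run with the lowest sum of squared distances:

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Synthetic data with three blob-shaped (convex) clusters.
X, _ = make_blobs(n_samples=300, centers=3, random_state=0)

# n_init=10: run Lloyd's algorithm from 10 random initializations and keep
# the best local optimum; init='k-means++' chooses initial centers less randomly.
km = KMeans(n_clusters=3, init='k-means++', n_init=10, random_state=0)
labels = km.fit_predict(X)
centers = km.cluster_centers_   # one mean vector per cluster
inertia = km.inertia_           # the sum of squared distances being minimized
```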
 k-means clustering examples
Distribution-based clustering
The clustering model most closely related to statistics is based on distribution models. Clusters can then easily be defined as objects belonging most likely to the same distribution. A nice property of this approach is that this closely resembles the way artificial data sets are generated: by sampling random objects from a distribution.
While the theoretical foundation of these methods is excellent, they suffer from one key problem known as overfitting, unless constraints are put on the model complexity. A more complex model will usually be able to explain the data better, which makes choosing the appropriate model complexity inherently difficult.
One prominent method is known as Gaussian mixture models (using the expectation-maximization algorithm). Here, the data set is usually modelled with a fixed (to avoid overfitting) number of Gaussian distributions that are initialized randomly and whose parameters are iteratively optimized to fit better to the data set. This will converge to a local optimum, so multiple runs may produce different results. In order to obtain a hard clustering, objects are often then assigned to the Gaussian distribution they most likely belong to; for soft clusterings, this is not necessary.
Distribution-based clustering is a semantically strong method, as it not only provides clusters, but also produces complex models for the clusters that can capture correlation and dependence of attributes. However, using these algorithms puts an extra burden on the user: to choose appropriate data models to optimize, and for many real data sets, there may be no mathematical model available that the algorithm is able to optimize (e.g. assuming Gaussian distributions is a rather strong assumption on the data).
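A hedged sketch of Gaussian mixture clustering with EM, here via scikit-learn’s GaussianMixture (the library, the blob data and the parameter values are assumptions of the example). It shows both the soft assignment (membership probabilities) and the hardened assignment discussed above:

```python
from sklearn.datasets import make_blobs
from sklearn.mixture import GaussianMixture

X, _ = make_blobs(n_samples=300, centers=3, random_state=0)

# Fix the number of Gaussians in advance and run EM from several random
# initializations, since each run only converges to a local optimum.
gmm = GaussianMixture(n_components=3, n_init=5, random_state=0).fit(X)

soft = gmm.predict_proba(X)   # soft clustering: probability of each Gaussian
hard = gmm.predict(X)         # hard clustering: most likely Gaussian per object
```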
 Expectation-maximization (EM) clustering examples
Density-based clustering
In density-based clustering,^{[8]} clusters are defined as areas of higher density than the remainder of the data set. Objects in the sparse areas – which are required to separate clusters – are usually considered to be noise and border points.
The most popular^{[9]} density-based clustering method is DBSCAN.^{[10]} In contrast to many newer methods, it features a well-defined cluster model called “density-reachability”. Similar to linkage-based clustering, it is based on connecting points within certain distance thresholds. However, it only connects points that satisfy a density criterion, in the original variant defined as a minimum number of other objects within this radius. A cluster consists of all density-connected objects (which can form a cluster of an arbitrary shape, in contrast to many other methods) plus all objects that are within these objects’ range. Another interesting property of DBSCAN is that its complexity is fairly low – it requires a linear number of range queries on the database – and that it will discover essentially the same results (it is deterministic for core and noise points, but not for border points) in each run, therefore there is no need to run it multiple times. OPTICS^{[11]} is a generalization of DBSCAN that removes the need to choose an appropriate value for the range parameter ε, and produces a hierarchical result related to that of linkage clustering. DeLiClu,^{[12]} Density-Link-Clustering, combines ideas from single-linkage clustering and OPTICS, eliminating the ε parameter entirely and offering performance improvements over OPTICS by using an R-tree index.
The key drawback of DBSCAN and OPTICS is that they expect some kind of density drop to detect cluster borders. Moreover, they cannot detect intrinsic cluster structures which are prevalent in the majority of real-life data. A variation of DBSCAN, EnDBSCAN,^{[13]} efficiently detects such kinds of structures. On data sets with, for example, overlapping Gaussian distributions – a common use case in artificial data – the cluster borders produced by these algorithms will often look arbitrary, because the cluster density decreases continuously. On a data set consisting of mixtures of Gaussians, these algorithms are nearly always outperformed by methods such as EM clustering that are able to precisely model this kind of data.
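A small density-based clustering sketch using scikit-learn’s DBSCAN and OPTICS implementations (the library, the two-moons data and the parameter values are assumptions made for illustration). It shows how DBSCAN can recover non-convex shapes that centroid-based methods miss, while labelling sparse points as noise:

```python
from sklearn.cluster import DBSCAN, OPTICS
from sklearn.datasets import make_moons

# Two interleaved half-circles: non-convex clusters.
X, _ = make_moons(n_samples=300, noise=0.05, random_state=0)

# eps is the range parameter, min_samples the density threshold;
# points that satisfy neither end up with the noise label -1.
db_labels = DBSCAN(eps=0.2, min_samples=5).fit_predict(X)

# OPTICS removes the need to fix eps and yields an ordering/hierarchy instead.
optics_labels = OPTICS(min_samples=5).fit_predict(X)
```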
 Density-based clustering examples
 – Density-based clustering with DBSCAN.
 – DBSCAN assumes clusters of similar density, and may have problems separating nearby clusters.
 – OPTICS is a DBSCAN variant that handles different densities much better.
Recent developments
In recent years, considerable effort has been put into improving the performance of existing algorithms.^{[14]}^{[15]} Among them are CLARANS (Ng and Han, 1994)^{[16]} and BIRCH (Zhang et al., 1996).^{[17]} With the recent need to process larger and larger data sets (also known as big data), the willingness to trade semantic meaning of the generated clusters for performance has been increasing. This led to the development of pre-clustering methods such as canopy clustering, which can process huge data sets efficiently, but the resulting “clusters” are merely a rough pre-partitioning of the data set, whose partitions are then analyzed with existing slower methods such as k-means clustering. Various other approaches to clustering have been tried, such as seed-based clustering.^{[18]}
For high-dimensional data, many of the existing methods fail due to the curse of dimensionality, which renders particular distance functions problematic in high-dimensional spaces. This led to new clustering algorithms for high-dimensional data that focus on subspace clustering (where only some attributes are used, and cluster models include the relevant attributes for the cluster) and correlation clustering, which also looks for arbitrarily rotated (“correlated”) subspace clusters that can be modeled by giving a correlation of their attributes. Examples of such clustering algorithms are CLIQUE^{[19]} and SUBCLU.^{[20]}
Ideas from density-based clustering methods (in particular the DBSCAN/OPTICS family of algorithms) have been adapted to subspace clustering (HiSC,^{[21]} hierarchical subspace clustering, and DiSH^{[22]}) and to correlation clustering (HiCO,^{[23]} hierarchical correlation clustering; 4C,^{[24]} using “correlation connectivity”; and ERiC,^{[25]} exploring hierarchical density-based correlation clusters).
Several different clustering systems based on mutual information have been proposed. One is Marina Meilă’s variation of information metric;^{[26]} another provides hierarchical clustering.^{[27]} Using genetic algorithms, a wide range of different fit functions can be optimized, including mutual information.^{[28]} Message passing algorithms, a recent development in computer science and statistical physics, have also led to the creation of new types of clustering algorithms.^{[29]}
Evaluation of clustering results
Evaluation of clustering results is sometimes referred to as cluster validation.
There have been several suggestions for a measure of similarity between two clusterings. Such a measure can be used to compare how well different data clustering algorithms perform on a set of data. These measures are usually tied to the type of criterion being considered in assessing the quality of a clustering method.
Internal evaluation
When a clustering result is evaluated based on the data that was clustered itself, this is called internal evaluation. These methods usually assign the best score to the algorithm that produces clusters with high similarity within a cluster and low similarity between clusters. One drawback of using internal criteria in cluster evaluation is that high scores on an internal measure do not necessarily result in effective information retrieval applications.^{[30]} Additionally, this evaluation is biased towards algorithms that use the same cluster model. For example, k-means clustering naturally optimizes object distances, and a distance-based internal criterion will likely overrate the resulting clustering.
Therefore, the internal evaluation measures are best suited to get some insight into situations where one algorithm performs better than another, but this should not imply that one algorithm produces more valid results than another.^{[4]} Validity as measured by such an index depends on the claim that this kind of structure exists in the data set. An algorithm designed for some kind of model has no chance if the data set contains a radically different set of models, or if the evaluation measures a radically different criterion.^{[4]} For example, k-means clustering can only find convex clusters, and many evaluation indexes assume convex clusters. On a data set with non-convex clusters, neither the use of k-means nor of an evaluation criterion that assumes convexity is sound.
The following methods can be used to assess the quality of clustering algorithms based on an internal criterion (a computational sketch of the two indices follows the list):
 The Davies–Bouldin index can be calculated by the following formula:
 DB = \frac{1}{n} \sum_{i=1}^{n} \max_{j \neq i}\left( \frac{\sigma_i + \sigma_j}{d(c_i, c_j)} \right)
 where n is the number of clusters, c_i is the centroid of cluster i, σ_i is the average distance of all elements in cluster i to centroid c_i, and d(c_i, c_j) is the distance between centroids c_i and c_j. Since algorithms that produce clusters with low intra-cluster distances (high intra-cluster similarity) and high inter-cluster distances (low inter-cluster similarity) will have a low Davies–Bouldin index, the clustering algorithm that produces a collection of clusters with the smallest Davies–Bouldin index is considered the best algorithm based on this criterion.
 Dunn index (J. C. Dunn 1974)
 The Dunn index aims to identify dense and well-separated clusters. It is defined as the ratio between the minimal inter-cluster distance and the maximal intra-cluster distance. For each cluster partition, the Dunn index can be calculated by the following formula:^{[31]}
 D = \frac{\min_{1 \le i < j \le n} d(i,j)}{\max_{1 \le k \le n} d'(k)}
 where d(i,j) represents the distance between clusters i and j, and d'(k) measures the intra-cluster distance of cluster k. The inter-cluster distance d(i,j) between two clusters may be any number of distance measures, such as the distance between the centroids of the clusters. Similarly, the intra-cluster distance d'(k) may be measured in a variety of ways, such as the maximal distance between any pair of elements in cluster k. Since internal criteria seek clusters with high intra-cluster similarity and low inter-cluster similarity, algorithms that produce clusters with a high Dunn index are more desirable.
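The following is a small NumPy sketch of how both indices can be computed directly from the definitions above (NumPy itself, and the particular choices of inter- and intra-cluster distance in the Dunn index, are assumptions of this example):

```python
import numpy as np

def davies_bouldin(X, labels):
    """Davies-Bouldin index: lower is better. Assumes at least two clusters."""
    clusters = np.unique(labels)
    centroids = np.array([X[labels == c].mean(axis=0) for c in clusters])
    # sigma[i]: average distance of the members of cluster i to its centroid
    sigma = np.array([np.linalg.norm(X[labels == c] - centroids[i], axis=1).mean()
                      for i, c in enumerate(clusters)])
    n = len(clusters)
    total = 0.0
    for i in range(n):
        total += max((sigma[i] + sigma[j]) / np.linalg.norm(centroids[i] - centroids[j])
                     for j in range(n) if j != i)
    return total / n

def dunn(X, labels):
    """Dunn index: higher is better. Here the inter-cluster distance is the
    minimum point-to-point distance and the intra-cluster distance is the
    cluster diameter; other choices are possible, as noted above."""
    groups = [X[labels == c] for c in np.unique(labels)]
    dist = lambda A, B: np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    min_inter = min(dist(A, B).min()
                    for i, A in enumerate(groups) for B in groups[i + 1:])
    max_intra = max(dist(A, A).max() for A in groups)
    return min_inter / max_intra
```

Both functions take a feature matrix X and a label vector, e.g. as produced by the k-means or DBSCAN sketches earlier in this article.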
External evaluation
In external evaluation, clustering results are evaluated based on data that was not used for clustering, such as known class labels and external benchmarks. Such benchmarks consist of a set of pre-classified items, and these sets are often created by human experts. Thus, the benchmark sets can be thought of as a gold standard for evaluation. These types of evaluation methods measure how close the clustering is to the predetermined benchmark classes. However, it has recently been discussed whether this is adequate for real data, or only for synthetic data sets with a factual ground truth, since classes can contain internal structure, the attributes present may not allow separation of clusters, or the classes may contain anomalies.^{[32]} Additionally, from a knowledge discovery point of view, the reproduction of known knowledge may not necessarily be the intended result.^{[32]}
 Some of the measures of quality of a cluster algorithm using an external criterion include the following (a pair-counting sketch of these measures follows the list):
 Rand measure (William M. Rand 1971)^{[33]}
 The Rand index computes how similar the clusters (returned by the clustering algorithm) are to the benchmark classifications. One can also view the Rand index as a measure of the percentage of correct decisions made by the algorithm. It can be computed using the following formula:
 RI = \frac{TP + TN}{TP + FP + FN + TN}
 where TP is the number of true positives, TN is the number of true negatives, FP is the number of false positives, and FN is the number of false negatives, all counted over pairs of objects. One issue with the Rand index is that false positives and false negatives are equally weighted. This may be an undesirable characteristic for some clustering applications. The F-measure addresses this concern.
 The F-measure can be used to balance the contribution of false negatives by weighting recall through a parameter β ≥ 0. Let precision and recall be defined as follows:
 P = \frac{TP}{TP + FP}, \qquad R = \frac{TP}{TP + FN}
 where P is the precision rate and R is the recall rate. We can calculate the F-measure by using the following formula:^{[30]}
 F_\beta = \frac{(\beta^2 + 1) \cdot P \cdot R}{\beta^2 \cdot P + R}
 Notice that when β = 0, F_0 = P. In other words, recall has no impact on the F-measure when β = 0, and increasing β allocates an increasing amount of weight to recall in the final F-measure.
 Pair-counting F-measure is the F-measure applied to the set of object pairs, where objects are paired with each other when they are part of the same cluster. This measure is able to compare clusterings with different numbers of clusters.
 Jaccard index
 The Jaccard index is used to quantify the similarity between two datasets. The Jaccard index takes on a value between 0 and 1. An index of 1 means that the two datasets are identical, and an index of 0 indicates that the datasets have no common elements. The Jaccard index is defined by the following formula:
 J(A, B) = \frac{|A \cap B|}{|A \cup B|}
 This is simply the number of unique elements common to both sets divided by the total number of unique elements in both sets.
 Fowlkes–Mallows index (E. B. Fowlkes & C. L. Mallows 1983)^{[34]}
 The Fowlkes–Mallows index computes the similarity between the clusters returned by the clustering algorithm and the benchmark classifications. The higher the value of the Fowlkes–Mallows index, the more similar the clusters and the benchmark classifications are. It can be computed using the following formula:
 FM = \sqrt{\frac{TP}{TP + FP} \cdot \frac{TP}{TP + FN}}
 where TP is the number of true positives, FP is the number of false positives, and FN is the number of false negatives. The FM index is the geometric mean of the precision and recall P and R, while the F-measure is their harmonic mean.^{[35]} Moreover, precision and recall are also known as Wallace’s indices B^I and B^{II}.^{[36]}
 A confusion matrix can be used to quickly visualize the results of a classification (or clustering) algorithm. It shows how different a cluster is from the gold standard cluster.
 The mutual information is an information-theoretic measure of how much information is shared between a clustering and a ground-truth classification; it can detect a non-linear similarity between two clusterings. The adjusted mutual information is the corrected-for-chance variant of this, which has a reduced bias for varying cluster numbers.
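As a sketch of how the pair-counting measures above can all be derived from one pass over object pairs (plain Python; the quadratic loop and the final, commented-out scikit-learn call are illustrative assumptions, not part of the definitions):

```python
from itertools import combinations
from math import sqrt

def pair_counts(truth, pred):
    """Count object pairs: TP = same class and same cluster, TN = different and
    different, FP = different class but same cluster, FN = same class but
    different cluster. Quadratic in the number of objects."""
    tp = tn = fp = fn = 0
    for i, j in combinations(range(len(truth)), 2):
        same_class = truth[i] == truth[j]
        same_cluster = pred[i] == pred[j]
        if same_class and same_cluster:
            tp += 1
        elif not same_class and not same_cluster:
            tn += 1
        elif same_cluster:
            fp += 1
        else:
            fn += 1
    return tp, tn, fp, fn

def rand_index(truth, pred):
    tp, tn, fp, fn = pair_counts(truth, pred)
    return (tp + tn) / (tp + tn + fp + fn)

def f_beta(truth, pred, beta=1.0):
    tp, tn, fp, fn = pair_counts(truth, pred)
    p, r = tp / (tp + fp), tp / (tp + fn)   # pair-counting precision and recall
    return (beta ** 2 + 1) * p * r / (beta ** 2 * p + r)

def fowlkes_mallows(truth, pred):
    tp, tn, fp, fn = pair_counts(truth, pred)
    return sqrt(tp / (tp + fp)) * sqrt(tp / (tp + fn))   # geometric mean of precision and recall

def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)   # set form of the Jaccard index

# For the information-theoretic measures, scikit-learn (if available) provides
# mutual_info_score and adjusted_mutual_info_score:
# from sklearn.metrics import adjusted_mutual_info_score
# ami = adjusted_mutual_info_score(truth, pred)
```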
Applications
 Business and marketing

 Market research
 Cluster analysis is widely used in market research when working with multivariate data from surveys and test panels. Market researchers use cluster analysis to partition the general population of consumers into market segments and to better understand the relationships between different groups of consumers/potential customers, and for use in market segmentation, product positioning, new product development and selecting test markets.
 Grouping of shopping items
 Clustering can be used to group all the shopping items available on the web into a set of unique products. For example, all the items on eBay can be grouped into unique products. (eBay doesn’t have the concept of a SKU).

 Social network analysis
 In the study of social networks, clustering may be used to recognize communities within large groups of people.
 Search result grouping
 In the process of intelligent grouping of the files and websites, clustering may be used to create a more relevant set of search results compared to normal search engines like Google. There are currently a number of web-based clustering tools such as Clusty.
 Slippy map optimization
 Flickr’s map of photos and other map sites use clustering to reduce the number of markers on a map. This both speeds up the map and reduces the amount of visual clutter.

 Image segmentation
 Clustering can be used to divide a digital image into distinct regions for border detection or object recognition.
 Recommender systems
 Recommender systems are designed to recommend new items based on a user’s tastes. They sometimes use clustering algorithms to predict a user’s preferences based on the preferences of other users in the user’s cluster.
 Markov chain Monte Carlo methods
 Clustering is often utilized to locate and characterize extrema in the target distribution.
 Social science

 Crime analysis
 Cluster analysis can be used to identify areas where there are greater incidences of particular types of crime. By identifying these distinct areas or “hot spots” where a similar crime has happened over a period of time, it is possible to manage law enforcement resources more effectively.
 Educational data mining
 Cluster analysis is used, for example, to identify groups of schools or students with similar properties.
 Typologies
 From poll data, projects such as those undertaken by the Pew Research Center use cluster analysis to discern typologies of opinions, habits, and demographics that may be useful in politics and marketing.
References
 1. Bailey, Ken (1994). “Numerical Taxonomy and Cluster Analysis”. Typologies and Taxonomies. p. 34. ISBN 9780803952591.
 2. Tryon, Robert C. (1939). Cluster Analysis: Correlation Profile and Orthometric (factor) Analysis for the Isolation of Unities in Mind and Personality. Edwards Brothers.
 3. Cattell, R. B. (1943). “The description of personality: Basic traits resolved into clusters”. Journal of Abnormal and Social Psychology, 38, 476–506.
 4. Estivill-Castro, Vladimir (20 June 2002). “Why so many clustering algorithms — A Position Paper”. ACM SIGKDD Explorations Newsletter 4 (1): 65–75. doi:10.1145/568574.568575.
 5. R. Sibson (1973). “SLINK: an optimally efficient algorithm for the single-link cluster method”. The Computer Journal (British Computer Society) 16 (1): 30–34. doi:10.1093/comjnl/16.1.30.
 6. D. Defays (1977). “An efficient algorithm for a complete link method”. The Computer Journal (British Computer Society) 20 (4): 364–366. doi:10.1093/comjnl/20.4.364.
 7. Lloyd, S. (1982). “Least squares quantization in PCM”. IEEE Transactions on Information Theory 28 (2): 129–137. doi:10.1109/TIT.1982.1056489.
 8. Hans-Peter Kriegel, Peer Kröger, Jörg Sander, Arthur Zimek (2011). “Density-based Clustering”. WIREs Data Mining and Knowledge Discovery 1 (3): 231–240. doi:10.1002/widm.30.
 9. Microsoft Academic Search: most cited data mining articles: DBSCAN is on rank 24, when accessed on 4/18/2010.
 10. Martin Ester, Hans-Peter Kriegel, Jörg Sander, Xiaowei Xu (1996). “A density-based algorithm for discovering clusters in large spatial databases with noise”. In Evangelos Simoudis, Jiawei Han, Usama M. Fayyad. Proceedings of the Second International Conference on Knowledge Discovery and Data Mining (KDD-96). AAAI Press. pp. 226–231. ISBN 1577350049.
 11. Mihael Ankerst, Markus M. Breunig, Hans-Peter Kriegel, Jörg Sander (1999). “OPTICS: Ordering Points To Identify the Clustering Structure”. ACM SIGMOD international conference on Management of data. ACM Press. pp. 49–60.
 12. Achtert, E.; Böhm, C.; Kröger, P. (2006). “DeLiClu: Boosting Robustness, Completeness, Usability, and Efficiency of Hierarchical Clustering by a Closest Pair Ranking”. LNCS: Advances in Knowledge Discovery and Data Mining. Lecture Notes in Computer Science 3918: 119–128. doi:10.1007/11731139_16. ISBN 9783540332060.
 13. S. Roy, D. K. Bhattacharyya (2005). “An Approach to find Embedded Clusters Using Density Based Techniques”. LNCS Vol. 3816. Springer Verlag. pp. 523–535.
 14. D. Sculley (2010). “Web-scale k-means clustering”. Proc. 19th WWW.
 15. Z. Huang. “Extensions to the k-means algorithm for clustering large data sets with categorical values”. Data Mining and Knowledge Discovery, 2:283–304, 1998.
 16. R. Ng and J. Han. “Efficient and effective clustering method for spatial data mining”. In: Proceedings of the 20th VLDB Conference, pages 144–155, Santiago, Chile, 1994.
 17. Tian Zhang, Raghu Ramakrishnan, Miron Livny. “An Efficient Data Clustering Method for Very Large Databases”. In: Proc. Int’l Conf. on Management of Data, ACM SIGMOD, pp. 103–114.
 18. Can, F.; Ozkarahan, E. A. (1990). “Concepts and effectiveness of the cover-coefficient-based clustering methodology for text databases”. ACM Transactions on Database Systems 15 (4): 483. doi:10.1145/99935.99938.
 19. Agrawal, R.; Gehrke, J.; Gunopulos, D.; Raghavan, P. (2005). “Automatic Subspace Clustering of High Dimensional Data”. Data Mining and Knowledge Discovery 11: 5. doi:10.1007/s10618-005-1396-1.
 20. Karin Kailing, Hans-Peter Kriegel and Peer Kröger. “Density-Connected Subspace Clustering for High-Dimensional Data”. In: Proc. SIAM Int. Conf. on Data Mining (SDM’04), pp. 246–257, 2004.
 21. Achtert, E.; Böhm, C.; Kriegel, H. P.; Kröger, P.; Müller-Gorman, I.; Zimek, A. (2006). “Finding Hierarchies of Subspace Clusters”. LNCS: Knowledge Discovery in Databases: PKDD 2006. Lecture Notes in Computer Science 4213: 446–453. doi:10.1007/11871637_42. ISBN 9783540453741.
 22. Achtert, E.; Böhm, C.; Kriegel, H. P.; Kröger, P.; Müller-Gorman, I.; Zimek, A. (2007). “Detection and Visualization of Subspace Cluster Hierarchies”. LNCS: Advances in Databases: Concepts, Systems and Applications. Lecture Notes in Computer Science 4443: 152–163. doi:10.1007/978-3-540-71703-4_15. ISBN 9783540717027.
 23. Achtert, E.; Böhm, C.; Kröger, P.; Zimek, A. (2006). “Mining Hierarchies of Correlation Clusters”. Proc. 18th International Conference on Scientific and Statistical Database Management (SSDBM): 119–128. doi:10.1109/SSDBM.2006.35. ISBN 0769525903.
 24. Böhm, C.; Kailing, K.; Kröger, P.; Zimek, A. (2004). “Computing Clusters of Correlation Connected objects”. Proceedings of the 2004 ACM SIGMOD international conference on Management of data – SIGMOD ’04. p. 455. doi:10.1145/1007568.1007620. ISBN 1581138598.
 25. Achtert, E.; Böhm, C.; Kriegel, H. P.; Kröger, P.; Zimek, A. (2007). “On Exploring Complex Relationships of Correlation Clusters”. 19th International Conference on Scientific and Statistical Database Management (SSDBM 2007). p. 7. doi:10.1109/SSDBM.2007.21. ISBN 0769528686.
 26. Meilă, Marina (2003). “Comparing Clusterings by the Variation of Information”. Learning Theory and Kernel Machines. Lecture Notes in Computer Science 2777: 173–187. doi:10.1007/978-3-540-45167-9_14. ISBN 9783540407201.
 27. Alexander Kraskov, Harald Stögbauer, Ralph G. Andrzejak, and Peter Grassberger, “Hierarchical Clustering Based on Mutual Information” (2003). arXiv:q-bio/0311039.
 28. Auffarth, B. (2010). “Clustering by a Genetic Algorithm with Biased Mutation Operator”. WCCI CEC. IEEE, July 18–23, 2010. http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.170.869
 29. B. J. Frey and D. Dueck (2007). “Clustering by Passing Messages Between Data Points”. Science 315 (5814): 972–976. doi:10.1126/science.1136800. PMID 17218491.
 30. Christopher D. Manning, Prabhakar Raghavan & Hinrich Schütze. Introduction to Information Retrieval. Cambridge University Press. ISBN 9780521865715.
 31. Dunn, J. (1974). “Well separated clusters and optimal fuzzy partitions”. Journal of Cybernetics 4: 95–104. doi:10.1080/01969727408546059.
 32. Ines Färber, Stephan Günnemann, Hans-Peter Kriegel, Peer Kröger, Emmanuel Müller, Erich Schubert, Thomas Seidl, Arthur Zimek (2010). “On Using Class-Labels in Evaluation of Clusterings”. In Xiaoli Z. Fern, Ian Davidson, Jennifer Dy. MultiClust: Discovering, Summarizing, and Using Multiple Clusterings. ACM SIGKDD.
 33. W. M. Rand (1971). “Objective criteria for the evaluation of clustering methods”. Journal of the American Statistical Association (American Statistical Association) 66 (336): 846–850. doi:10.2307/2284239. JSTOR 2284239.
 34. E. B. Fowlkes & C. L. Mallows (1983). “A Method for Comparing Two Hierarchical Clusterings”. Journal of the American Statistical Association 78: 553–569.
 35. L. Hubert and P. Arabie. “Comparing partitions”. Journal of Classification, 2(1), 1985.
 36. D. L. Wallace. “Comment”. Journal of the American Statistical Association, 78: 569–579, 1983.
 37. R. B. Zadeh, S. Ben-David. “A Uniqueness Theorem for Clustering”. In: Proceedings of the Conference on Uncertainty in Artificial Intelligence, 2009.
 38. J. Kleinberg. “An Impossibility Theorem for Clustering”. Proceedings of the Neural Information Processing Systems Conference, 2002.
 39. Bewley, A. et al. “Real-time volume estimation of a dragline payload”. IEEE International Conference on Robotics and Automation, 2011: 1571–1576.
 40. Basak, S. C.; Magnuson, V. R.; Niemi, C. J.; Regal, R. R. “Determining Structural Similarity of Chemicals Using Graph Theoretic Indices”. Discr. Appl. Math., 19, 1988: 17–44.
 41. Huth, R. et al. “Classifications of Atmospheric Circulation Patterns: Recent Advances and Applications”. Ann. N.Y. Acad. Sci., 1146, 2008: 105–152.