Clustering: Unveiling Patterns in Data

Clustering is a powerful data analysis technique with the primary objective of discovering groups, or clusters, of instances in a dataset. Each cluster consists of instances that exhibit high similarity to one another while being significantly different from instances in other clusters. This method is widely used in various fields such as marketing, biology, and social sciences to identify natural groupings within data.

The essence of clustering lies in its ability to reveal the underlying structure of a dataset. By grouping similar instances together, clustering helps in understanding and interpreting data more effectively. For example, in marketing, clustering can segment customers based on their purchasing behavior, allowing for more targeted and personalized marketing strategies. In biology, it can help identify groups of genes with similar functions or patterns of expression.

These techniques are a core component of unsupervised learning, a branch of machine learning that deals with analyzing and interpreting data without the need for labeled instances. Unlike supervised learning, which relies on predefined output labels for training models, clustering aims to discover natural groupings within the data based solely on the inherent similarities and differences among instances. This makes clustering particularly valuable for exploratory data analysis, where the objective is to identify patterns and relationships that are not immediately obvious.


The term "cluster" is inherently ambiguous, as its definition can vary greatly depending on the nature of the data and the desired outcomes of the analysis. This variability makes clustering both a flexible and a complex technique.

In some contexts, a cluster might be defined as a group of points in close proximity to each other in a multidimensional space. In others, it might refer to groups of instances sharing similar characteristics, regardless of their spatial relationships. For instance, in the following image, we can see different ways of creating valid groups over the same dataset.

Different grouping approaches over the same data. Source [2]

The specific goals of the analysis further influence the ambiguity in the definition of a cluster. Additionally, the choice of clustering algorithm can shape the definition of clusters. Each has its own methods for identifying clusters, which can lead to different results even with the same dataset.

Ultimately, the concept of a cluster is defined by the unique characteristics of the data and the specific objectives of the analysis. This ambiguity allows clustering to be a versatile tool adaptable to a wide range of applications, though it also requires careful consideration and customization to ensure meaningful and accurate results.


In this article, we will explore various clustering techniques used in unsupervised learning to discover the hidden structures within data. We will delve into three prominent methods: K-Means clustering, which partitions data into a predetermined number of clusters by minimizing within-cluster variance; hierarchical agglomerative clustering, which builds a tree of clusters by progressively merging or splitting them based on their similarities; and DBSCAN (Density-Based Spatial Clustering of Applications with Noise), which identifies clusters based on the density of data points, making it effective at discovering clusters of arbitrary shape and handling noise. By understanding these techniques, we can better appreciate how clustering can reveal patterns and insights from complex datasets.


K-Means Clustering

K-Means is one of the most widely used clustering algorithms in unsupervised learning. It is designed to partition a set of data points into K distinct clusters, where each data point belongs to the cluster with the nearest mean. The data is represented as instances {x_1,…,x_n} in a p-dimensional space.

The primary goal of K-Means is to partition the dataset into K clusters such that the within-cluster variance is minimized. For a given K, the algorithm aims to find the optimal clustering by iteratively refining the positions of the cluster centroids.
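Formally, if μ_k denotes the centroid of cluster C_k, the within-cluster variance (often called inertia) that K-Means minimizes can be written as:

```latex
J = \sum_{k=1}^{K} \sum_{x_i \in C_k} \lVert x_i - \mu_k \rVert^2
```

Finding the exact minimizer of J is NP-hard in general, so the iterative procedure converges only to a local minimum.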


The steps to apply this method could be summarized as follows:

  1. Initialization: Randomly choose K initial centroids μ_1,…,μ_k

  2. Iteration:

  • Assignment Step: Assign each instance to the nearest centroid. This forms K clusters based on proximity to the centroids.

  • Update Step: Recompute the centroids of each cluster as the mean of the instances assigned to it.

  3. Convergence: Repeat the assignment and update steps until the centroids no longer change.
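These steps can be sketched in plain Python (a minimal illustration on assumed toy data, not an optimized implementation; all names are my own):

```python
import random

def kmeans(points, k, max_iter=100, seed=0):
    """Minimal K-Means (Lloyd's algorithm) over tuples of floats."""
    rng = random.Random(seed)
    # 1. Initialization: pick K distinct instances as starting centroids.
    centroids = rng.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(max_iter):
        # 2a. Assignment step: each instance joins its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centroids[c])))
            clusters[j].append(p)
        # 2b. Update step: recompute each centroid as its cluster's mean
        #     (an empty cluster keeps its previous centroid).
        new_centroids = [
            tuple(sum(coords) / len(cl) for coords in zip(*cl))
            if cl else centroids[j]
            for j, cl in enumerate(clusters)
        ]
        # 3. Convergence: stop once the centroids no longer change.
        if new_centroids == centroids:
            break
        centroids = new_centroids
    return centroids, clusters

# Two well-separated toy groups; K-Means should recover them.
data = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.1),
        (5.0, 5.0), (5.1, 4.9), (4.9, 5.2)]
centroids, clusters = kmeans(data, k=2)
```

Because of the random initialization, production code typically runs several seeds and keeps the lowest-variance result (e.g. the n_init parameter of scikit-learn's KMeans).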

In the following picture, we can see how the process is implemented:

K-means algorithm for K=2. Source [1]


The complexity of K-Means clustering can be analyzed in terms of time and space:

  • Time Complexity: O(iter⋅n⋅K⋅p), where iter is the number of iterations, n the number of instances, K the number of clusters, and p the number of dimensions (attributes).

  • Space Complexity: O((n+K)⋅p), which accounts for storing the n instances and the K centroids in the p-dimensional space.

Advantages and Disadvantages

As with any algorithm, especially in this intricate area, K-Means has its strengths and weaknesses. Let's review them, starting with the advantages.

  • Simplicity: K-Means is straightforward to understand and implement. Its algorithmic steps are easy to follow, making it accessible for beginners.

  • Efficiency: The computational efficiency of K-Means is high, allowing it to handle large datasets effectively.

  • General Applicability: K-Means can be applied to a wide range of problems across various domains, from image segmentation to customer segmentation in marketing.

On the other hand, K-Means has some notable disadvantages:

  • Need to Specify K: The number of clusters, K, must be specified in advance, which can be challenging without prior knowledge of the data's structure.

  • Sensitivity to Noise and Outliers: K-Means is sensitive to noise and outliers, which can significantly distort the resulting clusters.

  • Initial Centroids Impact: The algorithm is highly sensitive to the initial placement of centroids, and poor initialization can lead to suboptimal clustering.

  • Multiple Initializations: While multiple initializations can sometimes mitigate poor results, they do not always guarantee an optimal solution.

  • Limited to Globular Clusters: K-Means assumes clusters to be globular and equally sized, which limits its ability to identify clusters of arbitrary shapes or varying sizes.

Hierarchical Agglomerative Clustering

HAC is a method of cluster analysis that seeks to build a hierarchy of clusters. It is an iterative process that starts by treating each data point as an individual cluster and then successively merges the closest pairs of clusters until all data points belong to a single cluster, forming a tree-like structure known as a dendrogram.


  1. Initialization: Each data point is initially considered as a separate cluster. If there are n data points, we start with n clusters.

  2. Distance Matrix Calculation: Compute a distance matrix that measures the distance between every pair of clusters. Common distance metrics include Euclidean distance, Manhattan distance, and cosine similarity.

  3. Merging Clusters:

  • Find Closest Clusters: Identify the pair of clusters with the smallest distance between them.

  • Merge Clusters: Combine these two clusters into a single cluster.

  • Update Distance Matrix: Recalculate the distances between the new cluster and all remaining clusters. Different linkage methods determine how the distance is recalculated, including:

    • Single Linkage: Minimum distance between points in the two clusters.

    • Complete Linkage: Maximum distance between points in the two clusters.

    • Average Linkage: Average distance between all pairs of points in the two clusters.

    • Ward's Method: Minimize the increase in total within-cluster variance after merging.

  4. Repeat: Continue merging the closest clusters and updating the distance matrix until all points are merged into a single cluster.
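The procedure can be sketched in plain Python with single linkage (an illustrative, unoptimized version on assumed toy data; names are my own):

```python
import math

def single_linkage_hac(points, n_clusters):
    """Agglomerative clustering with single linkage, stopped at n_clusters."""
    # 1. Initialization: every instance starts as its own cluster.
    clusters = [[p] for p in points]
    # 4. Repeat: merge the closest pair until n_clusters remain
    #    (merging all the way down to one cluster yields the full dendrogram).
    while len(clusters) > n_clusters:
        best = (0, 1, float("inf"))
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                # Single linkage: minimum distance between any two members.
                d = min(math.dist(a, b)
                        for a in clusters[i] for b in clusters[j])
                if d < best[2]:
                    best = (i, j, d)
        i, j, _ = best
        # 3. Merge the two closest clusters (j > i, so pop(j) is safe).
        clusters[i] += clusters.pop(j)
    return clusters

# Toy data: two tight pairs plus one isolated point.
data = [(0, 0), (0, 1), (5, 5), (5, 6), (10, 0)]
result = single_linkage_hac(data, n_clusters=3)
```

In practice, library routines such as scipy.cluster.hierarchy.linkage compute the full merge history needed to draw a dendrogram.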

Visualization with Dendrogram

The results of HAC are often visualized using a dendrogram, a tree-like diagram that shows the order and distances at which clusters were merged. The vertical axis represents the distance or dissimilarity between clusters, and the horizontal axis represents the individual data points. By cutting the dendrogram at a chosen level, a desired number of clusters can be obtained.

Example of HAC over a dataset of 4 items

Advantages and Disadvantages

Again, this method has its own strengths and weaknesses. Its advantages include:


  • No Need to Specify K: HAC does not require the number of clusters to be specified in advance, which is beneficial when the number of natural clusters in the data is unknown.

  • Deterministic: Unlike K-Means, HAC produces the same result with each run, as there is no random initialization.

  • Hierarchical Representation: The dendrogram provides a comprehensive view of the data's clustering structure, which can be used to understand relationships at different levels of granularity.

Its disadvantages include:

  • Computational Complexity: HAC can be computationally intensive, especially for large datasets, due to the repeated calculation and updating of the distance matrix.

  • Sensitivity to Noise and Outliers: Like many clustering methods, HAC can be sensitive to noise and outliers, which may distort the dendrogram.

  • Difficulty Handling Large Datasets: The memory and time requirements for calculating and storing the distance matrix can become prohibitive for very large datasets.


DBSCAN

Density-Based Spatial Clustering of Applications with Noise (DBSCAN)[3] is a clustering algorithm particularly well-suited for datasets with clusters of varying shapes and sizes, including the presence of noise (outliers). Unlike K-Means and hierarchical clustering, DBSCAN does not require the number of clusters to be specified in advance and is capable of discovering clusters of arbitrary shape.

In the next image, we can see an example of how DBSCAN can detect clusters of different shapes without being influenced by outliers.

DBSCAN can handle clusters of different shapes and sizes. Source [2]


It relies on two key parameters:

  • Epsilon (ε): The maximum distance between two points for one to be considered as in the neighborhood of the other.

  • MinPts: The minimum number of points required to form a dense region (a cluster).

The implementation can be summarized in the following steps:

  1. Initialization: Begin with an arbitrary starting point that has not been visited.

  2. Neighborhood Check: Retrieve all points within the ε-neighborhood of the starting point.

  3. Core Point Determination:

  • If the number of points in the ε-neighborhood is greater than or equal to MinPts, the point is a core point and a new cluster is formed.

  • If the point is not a core point but is within the ε-neighborhood of a core point, it is a border point.

  • If the point is neither a core point nor within the ε-neighborhood of any core point, it is labeled as noise.

  4. Expansion: Expand the new cluster by recursively visiting all points within the ε-neighborhood of each core point. If these points are also core points, their ε-neighborhood is further explored.

  5. Termination: Repeat the process until all points have been visited and assigned to a cluster or labeled as noise.
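The steps above can be sketched as follows (a simplified illustration on assumed toy data; the neighborhood counts include the point itself, and all names are my own):

```python
import math

def dbscan(points, eps, min_pts):
    """Minimal DBSCAN: returns one label per point, with -1 meaning noise."""
    labels = [None] * len(points)   # None = not yet visited
    cluster_id = -1

    def neighbors(i):
        # ε-neighborhood: indexes of all points within eps (incl. i itself).
        return [j for j in range(len(points))
                if math.dist(points[i], points[j]) <= eps]

    for i in range(len(points)):
        if labels[i] is not None:
            continue                     # already assigned or marked noise
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:
            labels[i] = -1               # provisionally noise (may become border)
            continue
        cluster_id += 1                  # i is a core point: start a new cluster
        labels[i] = cluster_id
        queue = [j for j in nbrs if j != i]
        while queue:                     # expansion: grow the cluster outward
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster_id   # noise reached from a core point = border
            if labels[j] is not None:
                continue
            labels[j] = cluster_id
            if len(neighbors(j)) >= min_pts:
                queue.extend(neighbors(j))  # j is core too: explore further
    return labels

# Two dense toy groups and one isolated outlier.
data = [(0, 0), (0.5, 0), (1, 0), (10, 10), (10.5, 10), (11, 10), (50, 50)]
labels = dbscan(data, eps=1.0, min_pts=2)
```

Library implementations (e.g. sklearn.cluster.DBSCAN) use spatial indexes to avoid the O(n²) neighborhood scans of this sketch.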

In the following image, we can see how DBSCAN labels the different kinds of points for a given epsilon and MinPts.

DBSCAN points classification

Advantages and Disadvantages


DBSCAN offers several advantages:

  • No Need for Predefining Clusters: It does not require specifying the number of clusters in advance, which is advantageous for exploratory data analysis.

  • Robust to Noise: DBSCAN can effectively handle noise and outliers, labeling them as such rather than forcing them into clusters.

  • Flexible Cluster Shapes: The algorithm can find clusters of arbitrary shape and size, unlike K-Means, which is limited to spherical clusters.


It also has some drawbacks:

  • Parameter Sensitivity: The performance heavily depends on the choice of ε and MinPts. Selecting inappropriate values can lead to poor clustering results.

  • Varying Density: It may struggle with datasets containing clusters of varying density, as a single ε value may not be suitable for all clusters.

  • Computational Complexity: With a spatial index, the average time complexity is O(n log n), where n is the number of points; without one, it degrades to O(n²). While generally efficient, it can become computationally intensive for very large datasets.


Evaluating Clustering Quality

Evaluating the quality of clusters produced by a clustering algorithm is crucial to understanding how well the algorithm has performed. Here are some common methods and metrics used for this purpose:

Similarity Matrix

A similarity matrix is a square, symmetric matrix that measures the similarity between each pair of instances in the dataset. The values in the matrix range from 0 to 1, where higher values indicate greater similarity.

  • Structure: When the rows and columns of the similarity matrix are ordered according to the cluster assignments of the instances, a good clustering result will show a block-diagonal structure. This means that instances within the same cluster have high similarity (appearing as blocks along the diagonal), while instances from different clusters have low similarity.

  • Evaluation Methods:

    • Visual Inspection: By visually inspecting the similarity matrix, one can assess whether the clustering results form clear block-diagonal patterns.

    • Correlation Calculation: Compute the correlation between the actual similarity matrix and an ideal block-diagonal similarity matrix to quantitatively measure clustering quality.
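The correlation-based check can be sketched as follows (the 1/(1 + distance) similarity, the toy data, and all names are illustrative assumptions):

```python
import math

def similarity_matrix(points):
    """Pairwise similarities in [0, 1]: 1 / (1 + Euclidean distance)."""
    n = len(points)
    return [[1.0 / (1.0 + math.dist(points[i], points[j]))
             for j in range(n)] for i in range(n)]

def pearson(xs, ys):
    """Plain Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Instances already ordered by cluster: three in cluster 0, three in cluster 1.
points = [(0, 0), (0, 1), (1, 0), (8, 8), (8, 9), (9, 8)]
labels = [0, 0, 0, 1, 1, 1]
n = len(points)

sim = similarity_matrix(points)
# Ideal block-diagonal matrix: 1 within a cluster, 0 across clusters.
ideal = [[1.0 if labels[i] == labels[j] else 0.0
          for j in range(n)] for i in range(n)]

# Correlate the off-diagonal entries: values near 1 mean clear blocks.
flat_sim = [sim[i][j] for i in range(n) for j in range(n) if i != j]
flat_ideal = [ideal[i][j] for i in range(n) for j in range(n) if i != j]
score = pearson(flat_sim, flat_ideal)
```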

Here you can see a case of a similarity matrix over a specific clustering:

Similarity matrix for a clustering with three well-separated groups

Metrics Based on Cohesion and Separation

Two key aspects of cluster quality are cohesion and separation:

  • Cohesion: Measures how closely related the instances within a cluster are. High cohesion indicates that instances within the same cluster are very similar to each other.

  • Separation: Measures how distinct or well-separated different clusters are. High separation means that instances in different clusters are very dissimilar.

Example Metric: Silhouette Coefficient

The Silhouette Coefficient[4] combines both cohesion and separation to provide an overall measure of clustering quality for each instance:

  • For a point x_i:

    • a: The average distance from x_i to all other points in its own cluster.

    • b: The minimum average distance from x_i to points in the nearest different cluster.

  • Silhouette Score s_i is calculated as:

    s_i = (b - a) / max(a, b)
  • Range: The Silhouette score ranges from -1 to 1, where a score closer to 1 indicates that the point is well clustered, a score around 0 indicates that the point is on or very close to the decision boundary between two neighboring clusters, and a score closer to -1 indicates that the point might have been assigned to the wrong cluster.
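The definition translates directly into plain Python (toy data assumed; in practice sklearn.metrics.silhouette_score computes the average over all points):

```python
import math

def silhouette_scores(points, labels):
    """Per-instance silhouette: s_i = (b - a) / max(a, b)."""
    scores = []
    for i, p in enumerate(points):
        # a: mean distance from p to the other members of its own cluster.
        own = [points[j] for j in range(len(points))
               if labels[j] == labels[i] and j != i]
        a = sum(math.dist(p, q) for q in own) / len(own)
        # b: smallest mean distance from p to the members of any other cluster.
        b = min(
            sum(math.dist(p, points[j]) for j in range(len(points))
                if labels[j] == other) / labels.count(other)
            for other in set(labels) if other != labels[i]
        )
        scores.append((b - a) / max(a, b))
    return scores

# Two tight, well-separated toy clusters: scores should be close to 1.
points = [(0, 0), (0, 1), (1, 0), (8, 8), (8, 9), (9, 8)]
labels = [0, 0, 0, 1, 1, 1]
scores = silhouette_scores(points, labels)
```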


Conclusion

Clustering is a crucial unsupervised learning technique for uncovering hidden patterns in data. Here we reviewed three key methods: K-Means, hierarchical agglomerative clustering, and DBSCAN, each with distinct advantages and challenges.

K-Means is simple and efficient but requires specifying the number of clusters and is sensitive to noise and initial centroids. HAC provides a detailed hierarchical structure without needing to predefine the number of clusters, though it can be computationally intensive. DBSCAN excels at identifying clusters of varying shapes and handling noise, but it is sensitive to parameter settings.

Effective cluster evaluation, using methods like the similarity matrix and the Silhouette Coefficient, ensures meaningful and accurate results. Each clustering technique offers unique insights, and the choice depends on the data characteristics and analysis goals. By understanding and applying these methods, we can derive valuable insights and make informed decisions across various domains.


References

[1] Bishop, C. M. (2006). Pattern Recognition and Machine Learning. Springer.

[2] Tan, P. N., Steinbach, M., & Kumar, V. (2013). Cluster analysis: basic concepts and algorithms. In Introduction to Data Mining (pp. 487-533).

[3] Ester, M., Kriegel, H.-P., Sander, J., & Xu, X. (1996). A density-based algorithm for discovering clusters in large spatial databases with noise. In Proceedings of the Second International Conference on Knowledge Discovery and Data Mining (KDD-96), pp. 226-231.

[4] Rousseeuw, P. J. (1987). Silhouettes: a graphical aid to the interpretation and validation of cluster analysis. Journal of Computational and Applied Mathematics, 20, 53-65.

