K-means is a partitional clustering algorithm that groups a set of objects into K clusters according to their attributes. The procedure is a simple way to classify a given data set into a certain number of clusters (say k), fixed a priori. The main idea is to define k centroids, one for each cluster. These centroids should be placed carefully, because different initial locations lead to different results; a good choice is to place them as far away from each other as possible. The next step is to take each point in the data set and associate it with the nearest centroid. When no point is left unassigned, the first step is complete and an initial grouping has been formed. At this point we recalculate k new centroids as the barycentres (means) of the clusters resulting from the previous step. With these k new centroids, a new assignment is made between the same data set points and the nearest new centroid, and the process repeats. Through this loop the k centroids change their location step by step until no further changes occur; in other words, the centroids no longer move. In practice K-means usually converges quickly, but be aware that its worst-case running time is superpolynomial.
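To make the assignment/update loop above concrete, here is a minimal sketch in Python using NumPy. It is illustrative rather than a reference implementation: the function name `kmeans` and its parameters are ours, and random selection of k distinct points stands in for the "place the centroids far apart" heuristic mentioned above.

```python
import numpy as np

def kmeans(points, k, max_iter=100, seed=0):
    """Minimal K-means sketch: assign each point to its nearest centroid,
    then recompute each centroid as the mean of its cluster, until stable."""
    rng = np.random.default_rng(seed)
    # Initialise centroids by picking k distinct points at random
    # (a simple stand-in for placing them far apart).
    centroids = points[rng.choice(len(points), size=k, replace=False)]

    for _ in range(max_iter):
        # Assignment step: index of the nearest centroid for every point.
        dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)

        # Update step: each centroid becomes the barycentre of its cluster
        # (clusters left empty keep their previous centroid).
        new_centroids = np.array([
            points[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])

        # Stop when the centroids no longer move.
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids

    return centroids, labels

# Example usage on two synthetic blobs.
data = np.vstack([np.random.randn(50, 2) + [0, 0],
                  np.random.randn(50, 2) + [5, 5]])
centroids, labels = kmeans(data, k=2)
```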