Computer Engineering and Applications ›› 2020, Vol. 56 ›› Issue (15): 172-178. DOI: 10.3778/j.issn.1002-8331.1910-0220

• Pattern Recognition and Artificial Intelligence •


Abstract:

An improved K-means clustering algorithm is proposed to address the traditional K-means algorithm's sensitivity to the initial centers and the resulting instability of its clustering results. The algorithm first computes the distances between samples and forms a set from the two nearest points. It then repeatedly adds the point nearest to the set, according to a point-to-set distance formula, until the number of points in the set is greater than or equal to α (α is the ratio of the number of data points in the sample set to the number of clusters), after which the set is deleted from the sample set. These steps are repeated until K sets are obtained (K is the number of clusters). The mean of each set is taken as an initial center, and the final clustering result is obtained with the standard K-means algorithm. On the Wine, Hayes-Roth, Iris, Tae, Heart-statlog, Ionosphere and Haberman datasets, the improved algorithm yields more stable clustering results than the traditional K-means and K-means++ algorithms. On the Wine, Iris and Tae datasets, it achieves higher clustering accuracy than the K-means algorithm that optimizes the initial clustering centers by minimum variance, and it obtains the largest silhouette coefficients and F1 scores on all seven datasets. For datasets with large density differences, the improved algorithm is more stable and accurate than the traditional K-means and K-means++ algorithms, and more efficient than the K-means algorithm that optimizes the initial clustering centers by minimum variance.
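The initial-center selection procedure described above can be sketched in code. This is a minimal illustration, not the paper's implementation: it assumes Euclidean distance and, since the abstract does not give the point-to-set formula, uses the minimum distance from a point to any member of the set as that measure; the function name `initial_centers` is ours.

```python
import numpy as np

def initial_centers(X, k):
    """Pick k initial centers by growing k dense point sets (sketch)."""
    X = np.asarray(X, dtype=float)
    alpha = len(X) // k            # threshold: n / K points per set
    remaining = list(range(len(X)))
    centers = []
    for _ in range(k):
        pts = X[remaining]
        # pairwise distances among the remaining points
        d = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)
        np.fill_diagonal(d, np.inf)
        # seed the set with the two nearest points
        i, j = np.unravel_index(np.argmin(d), d.shape)
        members = {int(i), int(j)}
        # grow the set with the nearest outside point until it holds >= alpha points
        while len(members) < alpha and len(members) < len(remaining):
            outside = [p for p in range(len(remaining)) if p not in members]
            inside = list(members)
            # assumed point-to-set distance: minimum distance to any set member
            dist = d[np.ix_(outside, inside)].min(axis=1)
            members.add(outside[int(np.argmin(dist))])
        # the set's mean becomes one initial center
        centers.append(pts[list(members)].mean(axis=0))
        # delete the set from the sample pool before the next round
        remaining = [remaining[p] for p in range(len(remaining)) if p not in members]
    return np.array(centers)
```

The returned array can then seed a standard K-means run, for example scikit-learn's `KMeans(n_clusters=k, init=centers, n_init=1)`.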

Key words: K-means clustering algorithm, algorithm optimization, initial clustering center