Computer Engineering and Applications ›› 2018, Vol. 54 ›› Issue (19): 1-9. DOI: 10.3778/j.issn.1002-8331.1808-0208

• Hot Topics and Reviews •


Width-weighted clustering kNN algorithm based on number of objects

CHEN Hui1, GUAN Kaisheng1, LI Jiaxing1,2   

  1. School of Computer Science and Technology, Guangdong University of Technology, Guangzhou 510006, China
    2. Guangdong Key Laboratory of Big Data Analysis and Processing, Guangzhou 510006, China
  • Online: 2018-10-01  Published: 2018-10-19



Abstract: The traditional k-Nearest Neighbor (kNN) algorithm is widely used in data analysis as a non-parametric classification technique, but it performs many redundant computations, which cost considerable time when processing data. Much current research therefore focuses on the data-preprocessing stage, reducing the computational cost of kNN queries by building a model over the data. This paper proposes NOWCkNN, a width-weighted clustering kNN algorithm based on the number of objects. The data set is first clustered with a global width; each generated sub-cluster then recursively computes a weight for its width according to its number of objects, and the algorithm adjusts the width value using this weight and a harmonic coefficient, finally producing clusters of different widths for kNN queries. This not only reduces clustering time, but also balances cluster sizes, reduces the number of iterations, and maximizes the triangle-inequality pruning rate. Experimental results show that NOWCkNN outperforms existing methods on data sets of all dimensionalities, and achieves especially high pruning efficiency on high-dimensional, large data sets.
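The pipeline the abstract describes — cluster with a global width, weight each cluster's width by its object count, then answer kNN queries with triangle-inequality pruning — can be sketched as below. This is a minimal illustration under stated assumptions, not the paper's implementation: the leader-style single-pass clustering, the `weighted_widths` formula, and the `lam` parameter (standing in for the harmonic coefficient) are all assumptions made for the example; only the triangle-inequality lower bound `dist(q, x) >= dist(q, c_j) - r_j` is the standard pruning rule.

```python
import heapq
import numpy as np

def leader_cluster(X, width):
    """Single-pass leader-style clustering: each point joins the nearest
    existing cluster whose center lies within `width`, otherwise it founds
    a new cluster (a simplification of the paper's global-width step)."""
    centers, members = [], []
    for i, x in enumerate(X):
        if centers:
            d = np.linalg.norm(np.asarray(centers) - x, axis=1)
            j = int(np.argmin(d))
            if d[j] <= width:
                members[j].append(i)
                continue
        centers.append(x)
        members.append([i])
    return np.asarray(centers), members

def weighted_widths(global_width, counts, lam=0.5):
    """Illustrative width weighting (NOT the paper's exact formula):
    clusters holding more objects than average get a smaller width, so a
    re-clustering pass would split them; `lam` stands in for the harmonic
    coefficient mentioned in the abstract."""
    counts = np.asarray(counts, dtype=float)
    return global_width * (counts.mean() / counts) ** lam

def knn_query(q, X, centers, members, radii, k):
    """Exact kNN with triangle-inequality pruning: for any point x in
    cluster j, dist(q, x) >= dist(q, c_j) - r_j, so the whole cluster can
    be skipped once that lower bound exceeds the current k-th distance."""
    dc = np.linalg.norm(centers - q, axis=1)
    best = []                                # max-heap of (-dist, index)
    for j in np.argsort(dc):                 # visit nearest clusters first
        if len(best) == k and dc[j] - radii[j] > -best[0][0]:
            continue                         # pruned by triangle inequality
        for i in members[j]:
            d = np.linalg.norm(X[i] - q)
            if len(best) < k:
                heapq.heappush(best, (-d, i))
            elif d < -best[0][0]:
                heapq.heapreplace(best, (-d, i))
    return sorted((-nd, i) for nd, i in best)
```

Here each cluster radius `r_j` would be the maximum distance from its center to any member; because the pruning rule is a true lower bound, the query remains exact, and the pruning rate grows as clusters become tighter — which is what the object-count width weighting aims at.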

Key words: clustering, k-Nearest Neighbors(kNN), triangle inequality, width-weighted, high-dimensional data