Computer Engineering and Applications ›› 2015, Vol. 51 ›› Issue (19): 152-157.

• Database, Data Mining, Machine Learning •

A Text Feature Selection Method Based on Information Entropy and Dynamic Clustering

唐立力   

  1. Rongzhi College of Chongqing Technology and Business University, Chongqing 400033, China
  • Online: 2015-09-30 Published: 2015-10-13

Text feature selection method based on information entropy and dynamic clustering

TANG Lili   

  1. Rongzhi College of Chongqing Technology and Business University, Chongqing 400033, China
  • Online:2015-09-30 Published:2015-10-13

Abstract: Based on the structural characteristics of scientific literature, a four-layer mining model is built and a text feature selection method for scientific literature classification is proposed. The method first divides a scientific document into four layers according to its structure, then extracts feature terms layer by layer for the first three layers using K-means clustering, and finally uses the Apriori algorithm to find the maximal frequent itemsets of the fourth layer, which serve as the fourth layer's feature term set. To address the sensitivity of K-means to the choice of initial centers, the method first weights the clustering objects by information entropy to correct the inter-object distance function, and then uses the weighting function values of the initial clustering to select suitable initial cluster centers. In addition, a threshold is set for the termination condition of K-means to reduce the number of iterations and thus the learning time, and redundant information produced by dynamically changing data is removed to reduce interference during dynamic clustering, so that the algorithm achieves a more accurate and more efficient clustering result. These measures allow the proposed method to locate feature terms in a literature corpus more accurately, improving substantially on previous methods, and make it especially suitable for scientific literature. Experimental results show that, on large data sets, the method combined with the improved K-means algorithm performs well in scientific literature classification.
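The abstract does not give the exact weighting formulas, so the following is only a minimal sketch of one common reading of the entropy-weighting idea: features with higher entropy (more evenly spread values) are taken to carry less discriminative information and receive lower weight in the distance function, and initial centers are then picked greedily under that weighted distance. The function names and the farthest-point seeding rule are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def entropy_weights(X):
    """Weight each feature by information entropy: a feature whose
    values are spread more evenly (higher entropy) is assumed to be
    less discriminative and gets a lower weight (sums to 1)."""
    # Normalize each column into a probability distribution
    P = X / (X.sum(axis=0, keepdims=True) + 1e-12)
    n = X.shape[0]
    # Per-feature entropy, scaled to [0, 1] by log(n)
    H = -np.where(P > 0, P * np.log(P + 1e-12), 0.0).sum(axis=0) / np.log(n)
    w = 1.0 - H
    return w / (w.sum() + 1e-12)

def weighted_distance(a, b, w):
    """Entropy-weighted Euclidean distance between two objects."""
    return np.sqrt(np.sum(w * (a - b) ** 2))

def pick_initial_centers(X, k, w):
    """Greedy seeding under the weighted distance: start from the
    object with the largest weighted coordinate sum, then repeatedly
    add the object farthest from all centers chosen so far."""
    centers = [X[np.argmax(X @ w)]]
    while len(centers) < k:
        d = np.array([min(weighted_distance(x, c, w) for c in centers)
                      for x in X])
        centers.append(X[np.argmax(d)])
    return np.array(centers)
```

These centers would then seed an otherwise standard K-means loop that uses `weighted_distance` for assignment and stops once the change in the objective falls below the threshold the abstract mentions.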

Keywords: K-means algorithm, dynamic clustering, feature selection, information entropy

Abstract: Based on a four-layer mining model constructed from the structural characteristics of scientific literature, a text feature selection method is proposed for the classification of scientific literature. The method first divides a scientific document into four layers according to its structure, then selects features progressively for the first three layers with the K-means algorithm, and finally finds the maximal frequent itemsets of the fourth layer with the Apriori algorithm, which serve as the fourth layer's feature set. The K-means algorithm itself is also improved: information entropy is used to weight the clustering objects and thereby correct the distance function, the weighting function values are used to select suitable initial cluster centers, the number of iterations and the learning time are reduced by setting a threshold for the termination condition, and the interference of dynamic clustering is reduced by removing redundant information arising from the changing data, so that the algorithm achieves a more accurate and efficient clustering result. As a result, the proposed method can locate features in a literature corpus more accurately. Experimental results show that the method is feasible and effective, and achieves higher performance in scientific literature classification than previous methods.
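The fourth-layer step relies on the standard Apriori notions of frequent and maximal itemsets; the sketch below, under the assumption of a plain transaction list and a relative minimum support, shows how maximal frequent itemsets (those with no frequent proper superset) can be extracted. The function name and candidate-generation style are illustrative, not the paper's implementation.

```python
def apriori_max_frequent(transactions, min_support):
    """Find frequent itemsets level by level (Apriori), then keep
    only the maximal ones, i.e. those with no frequent proper superset."""
    transactions = [frozenset(t) for t in transactions]
    n = len(transactions)

    def support(itemset):
        # Fraction of transactions containing the itemset
        return sum(1 for t in transactions if itemset <= t) / n

    items = sorted({i for t in transactions for i in t})
    level = [frozenset([i]) for i in items
             if support(frozenset([i])) >= min_support]
    frequent = list(level)
    while level:
        # Join frequent k-itemsets into candidate (k+1)-itemsets
        candidates = {a | b for a in level for b in level
                      if len(a | b) == len(a) + 1}
        level = [c for c in candidates if support(c) >= min_support]
        frequent.extend(level)
    # Keep itemsets that have no frequent proper superset
    return [s for s in frequent if not any(s < t for t in frequent)]
```

For example, with five "documents" over terms a, b, c where each pair {a,b}, {a,c}, {b,c} co-occurs in three of five documents but {a,b,c} only in two, a minimum support of 0.6 yields exactly the three pairs as the maximal frequent itemsets, which the method would then use as the fourth layer's feature set.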

Key words: K-means algorithm, dynamic clustering, feature selection, information entropy