Computer Engineering and Applications ›› 2020, Vol. 56 ›› Issue (14): 111-117.DOI: 10.3778/j.issn.1002-8331.1904-0273


Multi-label Text Classification Based on Joint Model

LIU Xinhui, CHEN Wenshi, ZHOU Ai, CHEN Fei, QU Wen, LU Mingyu   

  1. College of Information Science and Technology, Dalian Maritime University, Dalian, Liaoning 116026, China
  • Online: 2020-07-15  Published: 2020-07-14





Most current multi-label text classification algorithms ignore the varying importance of different words in a text sequence and the influence of text features at different levels. This paper proposes ATT-Capsule-BiLSTM, a joint model combining multi-head attention, a Capsule network (CapsuleNet), and a Bidirectional Long Short-Term Memory network (BiLSTM). First, the text sequence is vectorized, and multi-head attention learns a weight distribution over the words on top of the word vectors. The Capsule network and the BiLSTM then extract feature representations of local spatial information and contextual temporal information, respectively; these representations are combined by average fusion in a fusion layer, and a sigmoid classifier produces the final labels. Comparative experiments on two datasets, Reuters-21578 and AAPD, show that the proposed joint model achieves good performance with a simple architecture, reaching F1 values of 89.82% and 67.48%, respectively.
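As an illustration (not the paper's code), the multi-head attention step that learns a weight distribution over words can be sketched in NumPy. The projection matrices below are randomly initialised stand-ins for weights that would be learned during training, and the dimensions are hypothetical:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(X, num_heads, rng):
    """Scaled dot-product multi-head self-attention over one text sequence.

    X: (seq_len, d_model) matrix of word vectors.
    Returns the attended representation and the per-head attention weights.
    """
    seq_len, d_model = X.shape
    assert d_model % num_heads == 0
    d_k = d_model // num_heads
    # Random projections stand in for learned parameters
    Wq, Wk, Wv = (rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)
                  for _ in range(3))
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    # Split into heads: (num_heads, seq_len, d_k)
    split = lambda M: M.reshape(seq_len, num_heads, d_k).transpose(1, 0, 2)
    Qh, Kh, Vh = split(Q), split(K), split(V)
    # Scaled dot-product attention per head: (num_heads, seq_len, seq_len)
    weights = softmax(Qh @ Kh.transpose(0, 2, 1) / np.sqrt(d_k), axis=-1)
    heads = weights @ Vh                       # (num_heads, seq_len, d_k)
    # Concatenate heads back to (seq_len, d_model)
    return heads.transpose(1, 0, 2).reshape(seq_len, d_model), weights

rng = np.random.default_rng(0)
X = rng.standard_normal((6, 32))               # 6 tokens, 32-dim word vectors
out, w = multi_head_attention(X, num_heads=4, rng=rng)
print(out.shape)                               # (6, 32)
print(np.allclose(w.sum(axis=-1), 1.0))        # True: weights sum to 1 per token
```

In the model described above, the output of this step would feed both the Capsule-network branch and the BiLSTM branch in parallel.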

Key words: multi-label text classification, multi-head attention, CapsuleNet, Bidirectional Long Short-Term Memory network(BiLSTM), joint model
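The sigmoid classifier treats each label as an independent binary decision, which is what makes the model multi-label. A short sketch with hypothetical logits and labels shows the decision rule and an F1 computation (micro-averaging is shown as one common choice; the abstract does not specify which averaging was used):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict_labels(logits, threshold=0.5):
    # Each label is an independent Bernoulli decision, so several
    # labels (or none) can fire for the same document.
    return (sigmoid(logits) >= threshold).astype(int)

def micro_f1(y_true, y_pred):
    # Micro-averaged F1: pool true/false positives over all labels
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return (2 * precision * recall / (precision + recall)
            if precision + recall else 0.0)

# Hypothetical classifier outputs for 2 documents over 3 labels
logits = np.array([[2.0, -1.5, 0.3],
                   [-0.2, 1.1, -3.0]])
y_true = np.array([[1, 0, 1],
                   [0, 1, 0]])
y_pred = predict_labels(logits)
print(y_pred)                     # [[1 0 1] [0 1 0]]
print(micro_f1(y_true, y_pred))   # 1.0
```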


