Computer Engineering and Applications ›› 2024, Vol. 60 ›› Issue (4): 163-172.DOI: 10.3778/j.issn.1002-8331.2209-0472

• Pattern Recognition and Artificial Intelligence •

Extreme Multi-Label Text Classification Based on Balance Function

CHEN Zhaohong, HONG Zhiyong, YU Wenhua, ZHANG Xin   

  1. Faculty of Intelligent Manufacturing, Wuyi University, Jiangmen, Guangdong 529020, China
  • Online:2024-02-15 Published:2024-02-15


Abstract: Extreme multi-label text classification is a challenging task in natural language processing. The label data in this task follow a long-tailed distribution, under which the model learns tail-label classification poorly, degrading overall classification performance. To address this problem, an extreme multi-label text classification method based on a balance function is proposed. First, the BERT pre-trained model is used for word embedding. The concatenated outputs of the pre-trained model's multi-layer encoders are then used as the text vector representation, which captures richer text semantics and speeds up model convergence. Finally, the balance function assigns different attenuation weights to the training losses of different predicted labels, improving the method's ability to learn tail-label classification. Experimental results on the Eurlex-4K and Wiki10-31K datasets show that the method reaches 86.95%, 74.12% and 61.43% on P@1, P@3 and P@5 for Eurlex-4K, and 88.57%, 77.46% and 67.90% for Wiki10-31K, respectively.
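The abstract does not give the exact form of the balance function, but the described behavior (per-label attenuation weights that shrink the loss of easily classified labels so tail labels are not drowned out) resembles a focal-style weighting on the per-label binary cross-entropy. The sketch below is illustrative only: the function `balanced_bce_loss` and the hyper-parameters `gamma_pos`/`gamma_neg` are assumptions, not the paper's formulation, and the multi-layer concatenation is shown with random stand-in hidden states rather than a real BERT encoder.

```python
import numpy as np

def balanced_bce_loss(logits, targets, gamma_pos=0.0, gamma_neg=2.0):
    """Focal-style balance function (illustrative sketch, not the paper's
    exact formula): well-classified labels get small attenuation weights,
    so rare tail labels contribute relatively more to the total loss.
    gamma_pos / gamma_neg are hypothetical hyper-parameters."""
    p = 1.0 / (1.0 + np.exp(-logits))  # per-label sigmoid probability
    p = np.clip(p, 1e-8, 1.0 - 1e-8)
    # positive labels weighted by (1-p)^gamma_pos, negatives by p^gamma_neg
    pos_loss = targets * (1.0 - p) ** gamma_pos * np.log(p)
    neg_loss = (1.0 - targets) * p ** gamma_neg * np.log(1.0 - p)
    return -(pos_loss + neg_loss).mean()

# Text representation sketch: concatenate the last k encoder-layer outputs
# (random arrays stand in for BERT hidden states of width 768).
k = 4
hidden_states = [np.random.rand(1, 768) for _ in range(12)]
text_vec = np.concatenate(hidden_states[-k:], axis=-1)  # shape (1, 768*k)
```

With `gamma_neg > 0`, a confidently classified negative label (low predicted probability) contributes almost nothing to the loss, which is the attenuation behavior the abstract describes; setting both gammas to zero recovers plain binary cross-entropy.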

Key words: natural language processing (NLP), extreme multi-label text classification, BERT, balance function, deep learning
