Computer Engineering and Applications ›› 2022, Vol. 58 ›› Issue (23): 178-185.DOI: 10.3778/j.issn.1002-8331.2105-0220

• Pattern Recognition and Artificial Intelligence •

BSLA: An Improved Siamese-LSTM Text Similarity Model

MENG Jinxu, SHAN Hongtao, WAN Junjie, JIA Renxiang   

  1. School of Electronic and Electrical Engineering,Shanghai University of Engineering Science, Shanghai 201620, China
  • Online: 2022-12-01  Published: 2022-12-01


Abstract: To address the Siamese-LSTM model's poor ability to extract features from similar texts, an improved Siamese-LSTM text similarity model is proposed. The method introduces an attention mechanism to assign greater weight to similar words, enhancing the recognition of similar words within texts. It also incorporates the advanced pre-training model BERT to improve the interaction between different words in the context of similar texts and to strengthen word-to-word correlations, thereby enabling the discrimination of similar and dissimilar texts. Experimental results show that, compared with popular text similarity models such as Siamese-LSTM, ABCNN, ESIM and BIMPM, as well as Siamese-LSTM variants that introduce only BERT or only the attention mechanism, the Siamese-LSTM model combining both BERT and attention performs well on the accuracy, precision, recall and F1 metrics, achieving the best F1 values of 86.18% and 89.08% on the LCQMC and Quora Question Pairs datasets, respectively.
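The core idea described above, i.e. encoding each sentence of a pair with a shared (Siamese) encoder, weighting tokens by attention, and scoring the pooled vectors for similarity, can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: BERT and the LSTM are stubbed with placeholder hidden-state matrices, and the attention scheme (softmax affinity to the sentence mean) is a simplified stand-in for the paper's mechanism.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attentive_pool(h):
    # h: (seq_len, dim) hidden states -- a stand-in for Siamese-LSTM
    # outputs computed over BERT token embeddings.
    scores = softmax(h @ h.mean(axis=0))  # weight tokens by affinity to the sentence mean
    return scores @ h                     # (dim,) attention-weighted sentence vector

def similarity(h1, h2):
    # cosine similarity of the two pooled sentence vectors;
    # the shared pooling function plays the Siamese role.
    v1, v2 = attentive_pool(h1), attentive_pool(h2)
    return float(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-8))

rng = np.random.default_rng(0)
a = rng.normal(size=(5, 8))   # 5 tokens, 8-dim hidden states (hypothetical)
b = rng.normal(size=(6, 8))
sim_same = similarity(a, a)   # identical sentences score near 1
sim_diff = similarity(a, b)   # unrelated sentences score lower
```

In the full model, `h1` and `h2` would come from one shared LSTM reading BERT embeddings of each sentence, and the similarity score would feed the classification objective trained on the sentence-pair labels.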

Key words: Siamese-LSTM, text similarity, attention mechanism, BERT
