Computer Engineering and Applications ›› 2022, Vol. 58 ›› Issue (21): 223-231.DOI: 10.3778/j.issn.1002-8331.2103-0426

• Graphics and Image Processing •

Image Caption with ELMo Embedding and Multimodal Transformer

YANG Wenrui, SHEN Tao, ZHU Yan, ZENG Kai, LIU Yingli   

1. Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, China
2. Yunnan Key Laboratory of Computer Technologies Application, Kunming University of Science and Technology, Kunming 650500, China
Online: 2022-11-01  Published: 2022-11-01




Abstract: The task of image captioning aims to generate a description of a given image. To address the incomplete understanding of semantic information in existing algorithms, a multimodal Transformer model for image captioning is proposed. In its attention module, the model captures intra-modal and inter-modal interactions simultaneously; it further uses ELMo to obtain word embeddings that contain contextual information, so the model receives richer semantic input. The model can therefore better understand and reason over complex multimodal information and generate more accurate natural-language descriptions. Extensive experiments on the Microsoft COCO dataset show a substantial improvement over a baseline that uses bottom-up attention and an LSTM: gains of 0.7, 0.4, 0.9, 1.3, 0.6, and 4.9 percentage points on BLEU-1, BLEU-2, BLEU-3, BLEU-4, ROUGE-L, and CIDEr-D, respectively.
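The intra- and inter-modal attention described in the abstract can be illustrated with a minimal PyTorch sketch. This is a hypothetical simplification, not the paper's actual architecture: the module names (`MultimodalAttentionBlock`, `img_self`, `txt_self`, `cross`) and dimensions are assumptions, and residual connections, layer normalization, and feed-forward sublayers of a full Transformer block are omitted for brevity.

```python
import torch
import torch.nn as nn

class MultimodalAttentionBlock(nn.Module):
    """Hypothetical sketch of one multimodal attention block:
    intra-modal self-attention within image regions and within words,
    followed by inter-modal cross-attention (words attend to regions)."""

    def __init__(self, d_model=64, n_heads=4):
        super().__init__()
        self.img_self = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.txt_self = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.cross = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, img_feats, txt_feats):
        # intra-modal: each modality attends over itself
        img, _ = self.img_self(img_feats, img_feats, img_feats)
        txt, _ = self.txt_self(txt_feats, txt_feats, txt_feats)
        # inter-modal: text queries attend over image regions
        fused, _ = self.cross(txt, img, img)
        return fused

# usage: batch of 2 images with 7 region features, captions of 5 tokens
block = MultimodalAttentionBlock()
out = block(torch.randn(2, 7, 64), torch.randn(2, 5, 64))
print(out.shape)  # one fused feature per caption token
```

In a real model, such blocks would be stacked, with the bottom-up region features as the image input and the ELMo word embeddings as the text input.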

Key words: Transformer, image caption, ELMo, attention mechanism
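The role of ELMo here is to make each word's embedding depend on its sentence context, which a plain lookup table cannot do. A miniature sketch of the ELMo idea, bidirectional LSTM layers combined with learned scalar weights, is below; the class name, sizes, and two-layer depth are illustrative assumptions, not the pretrained ELMo model the paper actually uses.

```python
import torch
import torch.nn as nn

class TinyELMo(nn.Module):
    """Hypothetical miniature of ELMo: the token embedding plus the
    outputs of stacked bidirectional LSTMs are mixed with softmax-
    normalized learned scalars, so each word's final embedding is
    contextual (depends on the whole sentence)."""

    def __init__(self, vocab_size=100, d_model=32, num_layers=2):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, d_model)
        # bidirectional with hidden d_model//2 keeps the output width d_model
        self.lstms = nn.ModuleList(
            nn.LSTM(d_model, d_model // 2, batch_first=True, bidirectional=True)
            for _ in range(num_layers))
        self.scalars = nn.Parameter(torch.zeros(num_layers + 1))

    def forward(self, token_ids):
        reps = [self.emb(token_ids)]          # layer 0: context-free embedding
        x = reps[0]
        for lstm in self.lstms:
            x, _ = lstm(x)                    # contextual layers
            reps.append(x)
        weights = torch.softmax(self.scalars, dim=0)
        return sum(w * r for w, r in zip(weights, reps))

# usage: batch of 2 sentences, 4 tokens each
model = TinyELMo()
embeddings = model(torch.randint(0, 100, (2, 4)))
print(embeddings.shape)
```

In the actual pipeline, the pretrained ELMo representations would replace this toy network, and their outputs would feed the text side of the multimodal Transformer.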
