JING Li, YAO Ke. Research on Text Classification Based on Knowledge Graph and Multimodal[J]. Computer Engineering and Applications, 2023, 59(2): 102-109.
[1] HE Ming,SUN Jianjun,CHENG Ying.Text classification based on naive Bayes:a review[J].Information Science,2016,34(7):147-154.
[2] CUI Jianming,LIU Jianming,LIAO Zhouyu.Research of text categorization based on support vector machine[J].Computer Simulation,2013,30(2):299-302.
[3] ZHANG Ning,JIA Ziyan,SHI Zhongzhi.Text categorization with KNN algorithm[J].Computer Engineering,2005,31(8):171-172.
[4] HINTON G E,SALAKHUTDINOV R R.Reducing the dimensionality of data with neural networks[J].Science,2006,313(5786):504-507.
[5] LECUN Y,BOTTOU L,BENGIO Y,et al.Gradient-based learning applied to document recognition[J].Proceedings of the IEEE,1998,86(11):2278-2324.
[6] LIU P,QIU X,HUANG X.Recurrent neural network for text classification with multi-task learning[J].arXiv:1605.05101,2016.
[7] MIKOLOV T,CHEN K,CORRADO G,et al.Efficient estimation of word representations in vector space[J].arXiv:1301.3781,2013.
[8] PETERS M,NEUMANN M,IYYER M,et al.Deep contextualized word representations[C]//Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics:Human Language Technologies(HLT-NAACL),Volume 1(Long Papers),2018:2227-2237.
[9] RADFORD A,NARASIMHAN K,SALIMANS T,et al.Improving language understanding by generative pre-training[EB/OL].[2021-11-10].https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf.
[10] DEVLIN J,CHANG M W,LEE K,et al.BERT:pre-training of deep bidirectional transformers for language understanding[J].arXiv:1810.04805,2018.
[11] HOCHREITER S,SCHMIDHUBER J.Long short-term memory[J].Neural Computation,1997,9(8):1735-1780.
[12] KIM Y.Convolutional neural networks for sentence classification[J].arXiv:1408.5882,2014.
[13] KALCHBRENNER N,GREFENSTETTE E,BLUNSOM P.A convolutional neural network for modelling sentences[J].arXiv:1404.2188,2014.
[14] BAHDANAU D,CHO K,BENGIO Y.Neural machine translation by jointly learning to align and translate[J].arXiv:1409.0473,2014.
[15] VASWANI A,SHAZEER N,PARMAR N,et al.Attention is all you need[C]//Advances in Neural Information Processing Systems(NIPS),2017:5998-6008.
[16] VRANDEČIĆ D,KRÖTZSCH M.Wikidata:a free collaborative knowledgebase[J].Communications of the ACM,2014,57(10):78-85.
[17] SUCHANEK F M,KASNECI G,WEIKUM G.YAGO:a core of semantic knowledge unifying WordNet and Wikipedia[C]//Proceedings of the 16th International Conference on World Wide Web(WWW),2007:697-706.
[18] AUER S,BIZER C,KOBILAROV G,et al.DBpedia:a nucleus for a web of open data[C]//Proceedings of International Semantic Web Conference(ISWC),2007:722-735.
[19] MILLER G A.WordNet:a lexical database for English[J].Communications of the ACM,1995,38(11):39-41.
[20] WANG J,WANG Z,ZHANG D,et al.Combining knowledge with deep convolutional neural networks for short text classification[C]//Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence(IJCAI),2017:2915-2921.
[21] CHEN J,HU Y,LIU J,et al.Deep short text classification with knowledge powered attention[C]//Proceedings of the AAAI Conference on Artificial Intelligence,2019:6252-6259.
[22] ZHANG Z,HAN X,LIU Z,et al.ERNIE:enhanced language representation with informative entities[J].arXiv:1905.07129,2019.
[23] LIU W,ZHOU P,ZHAO Z,et al.K-BERT:enabling language representation with knowledge graph[J].arXiv:1909.07606,2019.
[24] ANASTASOPOULOS A,KUMAR S,LIAO H.Neural language modeling with visual features[J].arXiv:1903.02930,2019.
[25] ZADEH A,CHEN M,PORIA S,et al.Tensor fusion network for multimodal sentiment analysis[C]//Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing(EMNLP),2017:1103-1114.
[26] NAM H,HA J W,KIM J.Dual attention networks for multimodal reasoning and matching[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition(CVPR),2017:299-307.
[27] LU J,BATRA D,PARIKH D,et al.ViLBERT:pretraining task-agnostic visiolinguistic representations for vision-and-language tasks[J].arXiv:1908.02265,2019.
[28] LI L H,YATSKAR M,YIN D,et al.VisualBERT:a simple and performant baseline for vision and language[J].arXiv:1908.03557,2019.
[29] ALBERTI C,LING J,COLLINS M,et al.Fusion of detected objects in text for visual question answering[J].arXiv:1908.05054,2019.
[30] KIELA D,BHOOSHAN S,FIROOZ H,et al.Supervised multimodal bitransformers for classifying images and text[J].arXiv:1909.02950,2019.
[31] WU L,PETRONI F,JOSIFOSKI M,et al.Scalable zero-shot entity linking with dense entity retrieval[C]//Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing(EMNLP),2020:6397-6407.
[32] BORDES A,USUNIER N,GARCIA-DURAN A,et al.Translating embeddings for modeling multi-relational data[C]//Proceedings of the 26th International Conference on Neural Information Processing Systems(NIPS)-Volume 2,2013:2787-2795.
[33] HAN X,CAO S,LV X,et al.OpenKE:an open toolkit for knowledge embedding[C]//Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing:System Demonstrations(EMNLP),2018:139-144.
[34] HE K,ZHANG X,REN S,et al.Deep residual learning for image recognition[C]//2016 IEEE Conference on Computer Vision and Pattern Recognition(CVPR),2016:770-778.
[35] DOSOVITSKIY A,BEYER L,KOLESNIKOV A,et al.An image is worth 16x16 words:transformers for image recognition at scale[J].arXiv:2010.11929,2020.
[36] AREVALO J,SOLORIO T,MONTES-Y-GÓMEZ M,et al.Gated multimodal units for information fusion[J].arXiv:1702.01992,2017.
[37] YU J,JIANG J,XIA R.Entity-sensitive attention and fusion network for entity-level multimodal sentiment classification[J].IEEE/ACM Transactions on Audio,Speech,and Language Processing,2019,28:429-439.