[1] SÁNCHEZ-NÚÑEZ P, COBO M J, DE LAS HERAS-PEDROSA C, et al. Opinion mining, sentiment analysis and emotion understanding in advertising: a bibliometric analysis[J]. IEEE Access, 2020, 8: 134563-134576.
[2] RAZALI N A M, MALIZAN N A, HASBULLAH N A, et al. Opinion mining for national security: techniques, domain applications, challenges and research opportunities[J]. Journal of Big Data, 2021, 8(1): 1-46.
[3] WANG Z, WAN Z, WAN X. TransModality: an end2end fusion method with transformer for multimodal sentiment analysis[C]//Proceedings of The Web Conference 2020, 2020: 2514-2520.
[4] ZHU L, ZHU Z, ZHANG C, et al. Multimodal sentiment analysis based on fusion methods: a survey[J]. Information Fusion, 2023, 95: 306-325.
[5] 黄健, 王颖. 基于图像语义翻译的图文融合情感分析方法[J]. 计算机工程与应用, 2023, 59(11): 180-187.
HUANG J, WANG Y. Image-text fusion sentiment analysis method based on image semantic translation[J]. Computer Engineering and Applications, 2023, 59 (11): 180-187.
[6] ZHANG X, JIANG T, LV Y. Weibo short-text sentiment classification algorithm on serial hybrid network[C]//Proceedings of the 2022 7th International Conference on Intelligent Computing and Signal Processing (ICSP), 2022: 535-539.
[7] MAI S, XING S, HU H. Analyzing multimodal sentiment via acoustic- and visual-LSTM with channel-aware temporal convolution network[J]. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2021, 29: 1424-1437.
[8] 张亚洲, 戎璐, 宋大为, 等. 多模态情感分析研究综述[J]. 模式识别与人工智能, 2020, 33(5): 426-438.
ZHANG Y Z, RONG L, SONG D W, et al. A survey on multimodal sentiment analysis[J]. Pattern Recognition and Artificial Intelligence, 2020, 33(5): 426-438.
[9] 江涛, 黄昌昊, 孙斌. 基于文本挖掘的弹幕情绪分析研究[J]. 智能计算机与应用, 2022, 12(8): 60-64.
JIANG T, HUANG C H, SUN B. Research on sentiment analysis of Danmaku based on text mining[J]. Intelligent Computer and Applications, 2022, 12(8): 60-64.
[10] HUANG F, LI X, YUAN C, et al. Attention-emotion-enhanced convolutional LSTM for sentiment analysis[J]. IEEE Transactions on Neural Networks and Learning Systems, 2021, 33(9): 4332-4345.
[11] KUMAR P, RAMAN B. A BERT based dual-channel explainable text emotion recognition system[J]. Neural Networks, 2022, 150: 392-407.
[12] DEVLIN J, CHANG M W, LEE K, et al. BERT: pre-training of deep bidirectional transformers for language understanding[J]. arXiv:1810.04805, 2018.
[13] RUAN S, ZHANG K, WU L, et al. Color enhanced cross correlation net for image sentiment analysis[J]. IEEE Transactions on Multimedia, 2021, 26: 4097-4109.
[14] ZHANG J, LIU X, CHEN M, et al. Image sentiment classification via multi-level sentiment region correlation analysis[J]. Neurocomputing, 2022, 469: 221-233.
[15] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[C]//Advances in Neural Information Processing Systems, 2017.
[16] 黄宏展, 蒙祖强. 基于双向注意力机制的多模态情感分类方法[J]. 计算机工程与应用, 2021, 57(11): 119-127.
HUANG H Z, MENG Z Q. Bidirectional attention mechanism based multimodal sentiment classification method[J]. Computer Engineering and Applications, 2021, 57(11): 119-127.
[17] JIA L, MA T, RONG H, et al. Affective region recognition and fusion network for target-level multimodal sentiment classification[J]. IEEE Transactions on Emerging Topics in Computing, 2023: 1-11.
[18] PÉREZ-RÚA J M, VIELZEUF V, PATEUX S, et al. MFAS: multimodal fusion architecture search[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019: 6966-6975.
[19] MA M, REN J, ZHAO L, et al. SMIL: multimodal learning with severely missing modality[C]//Proceedings of the AAAI Conference on Artificial Intelligence, 2021: 2302-2310.
[20] MAJUMDER N, HAZARIKA D, GELBUKH A, et al. Multimodal sentiment analysis using hierarchical fusion with context modeling[J]. Knowledge-Based Systems, 2018, 161: 124-133.
[21] JIA Z, LIN Y, WANG J, et al. HetEmotionNet: two-stream heterogeneous graph recurrent neural network for multi-modal emotion recognition[C]//Proceedings of the 29th ACM International Conference on Multimedia, 2021: 1047-1056.
[22] YUAN Z, LI W, XU H, et al. Transformer-based feature reconstruction network for robust multimodal sentiment analysis[C]//Proceedings of the 29th ACM International Conference on Multimedia, 2021: 4400-4407.
[23] DELBROUCK J B, TITS N, BROUSMICHE M, et al. A transformer-based joint-encoding for emotion recognition and sentiment analysis[J]. arXiv:2006.15955, 2020.
[24] YANG K, XU H, GAO K. CM-BERT: cross-modal BERT for text-audio sentiment analysis[C]//Proceedings of the 28th ACM International Conference on Multimedia, 2020: 521-528.
[25] WU J, ZHU T, ZHU J, et al. A optimized BERT for multimodal sentiment analysis[J]. ACM Transactions on Multimedia Computing, Communications and Applications, 2023, 19(2S): 1-12.
[26] TSAI Y H H, BAI S, LIANG P P, et al. Multimodal transformer for unaligned multimodal language sequences[C]//Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 2019: 6558-6569.
[27] ZADEH A, ZELLERS R, PINCUS E, et al. Multimodal sentiment intensity analysis in videos: facial gestures and verbal messages[J]. IEEE Intelligent Systems, 2016, 31(6): 82-88.
[28] TSAI Y H H, LIANG P P, ZADEH A, et al. Learning factorized multimodal representations[C]//Proceedings of the International Conference on Representation Learning, 2019.
[29] PHAM H, LIANG P P, MANZINI T, et al. Found in translation: learning robust joint representations by cyclic translations between modalities[C]//Proceedings of the AAAI Conference on Artificial Intelligence, 2019: 6892-6899.
[30] 张峰, 李希城, 董春茹, 等. 基于深度情感唤醒网络的多模态情感分析与情绪识别[J]. 控制与决策, 2022, 37(11): 2984-2992.
ZHANG F, LI X C, DONG C R, et al. Deep emotional arousal network for multimodal sentiment analysis and emotion recognition[J]. Control and Decision, 2022, 37(11): 2984-2992.
[31] HAZARIKA D, ZIMMERMANN R, PORIA S. MISA: modality-invariant and -specific representations for multimodal sentiment analysis[C]//Proceedings of the 28th ACM International Conference on Multimedia, 2020: 1122-1131.
[32] 胡新荣, 陈志恒, 刘军平, 等. 基于多模态表示学习的情感分析框架[J]. 计算机科学, 2022, 49(S2): 631-636.
HU X R, CHEN Z H, LIU J P, et al. Sentiment analysis framework based on multimodal representation learning[J]. Computer Science, 2022, 49(S2): 631-636.
[33] 缪裕青, 杨爽, 刘同来, 等. 基于跨模态门控机制和改进融合方法的多模态情感分析[J]. 计算机应用研究, 2023, 40(7): 2025-2030.
MIAO Y Q, YANG S, LIU T L, et al. Multimodal sentiment analysis based on cross-modal gating mechanism and improved fusion method[J]. Application Research of Computers, 2023, 40(7): 2025-2030.