[1] 郭小宇, 马静, Arkaitz Zubiaga, 等. 互联网迷因研究: 现状与展望[J]. 情报理论与实践, 2021, 44(6): 199-207.
GUO X Y, MA J, ZUBIAGA A, et al. A review of internet meme studies: state of the art and outlook[J]. Information Studies: Theory & Application, 2021, 44(6): 199-207.
[2] 田恕存. 基于注意力机制的跨领域情感分析的应用研究[D]. 哈尔滨: 哈尔滨工业大学, 2019.
TIAN S C. Research on cross-domain sentiment analysis based on attention mechanism[D]. Harbin: Harbin Institute of Technology, 2019.
[3] DEVLIN J, CHANG M W, LEE K, et al. BERT: pre-training of deep bidirectional transformers for language understanding[C]//Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2019: 4171-4186.
[4] RADFORD A, NARASIMHAN K, SALIMANS T, et al. Improving language understanding by generative pre-training[EB/OL]. [2022-09-13]. https://www.cs.ubc.ca/~amuham01/LING530/papers/radford2018improving.pdf.
[5] 王腾, 张大伟, 王利琴, 等. 多模态特征自适应融合的虚假新闻检测[J]. 计算机工程与应用, 2024, 60(13): 102-112.
WANG T, ZHANG D W, WANG L Q, et al. Multimodal feature adaptive fusion for fake news detection[J]. Computer Engineering and Applications, 2024, 60(13): 102-112.
[6] 戚力鑫, 万书振, 唐斌, 等. 基于注意力机制的多模态融合谣言检测方法[J]. 计算机工程与应用, 2022, 58(19): 209-217.
QI L X, WAN S Z, TANG B, et al. Multimodal fusion rumor detection method based on attention mechanism[J]. Computer Engineering and Applications, 2022, 58(19): 209-217.
[7] 陈杰, 马静, 李晓峰, 等. 基于DR-Transformer模型的多模态情感识别研究[J]. 情报科学, 2022, 40(3): 117-125.
CHEN J, MA J, LI X F, et al. Multi-modal emotion recognition based on DR-Transformer model[J]. Information Science, 2022, 40(3): 117-125.
[8] GUO X Y, MA J, ZUBIAGA A. NUAA-QMUL at SemEval-2020 task 8: utilizing BERT and DenseNet for Internet meme emotion analysis[C]//Proceedings of the Fourteenth Workshop on Semantic Evaluation, 2020: 901-907.
[9] HUANG G, LIU Z, VAN DER MAATEN L, et al. Densely connected convolutional networks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, 2017: 2261-2269.
[10] 李卓容, 唐云祁. 基于深度学习的多模态生物特征融合模型[J]. 计算机工程与应用, 2023, 59(7): 180-189.
LI Z R, TANG Y Q. Multimodal biometric fusion model based on deep learning[J]. Computer Engineering and Applications, 2023, 59(7): 180-189.
[11] SUN W, MIN X K, TU D Y, et al. Blind quality assessment for in-the-wild images via hierarchical feature fusion and iterative mixed database training[J]. IEEE Journal of Selected Topics in Signal Processing, 2023, 17(6): 1178-1192.
[12] ABDAR M, SALARI S, QAHREMANI S, et al. Uncertainty FuseNet: robust uncertainty-aware hierarchical feature fusion model with ensemble Monte Carlo dropout for COVID-19 detection[J]. Information Fusion, 2023, 90: 364-381.
[13] LIU Y H, OTT M, GOYAL N, et al. RoBERTa: a robustly optimized BERT pretraining approach[J]. arXiv:1907.11692, 2019.
[14] HE K M, ZHANG X Y, REN S Q, et al. Deep residual learning for image recognition[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, 2016: 770-778.
[15] JAWAHAR G, SAGOT B, SEDDAH D. What does BERT learn about the structure of language?[C]//Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Florence, 2019: 3651-3657.
[16] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[C]//Proceedings of the 31st International Conference on Neural Information Processing Systems, 2017: 6000-6010.
[17] 苏剑林. 线性Transformer应该不是你要等的那个模型[EB/OL]. (2021-08-09) [2023-12-10]. https://kexue.fm/archives/8610.
SU J L. The linear Transformer is probably not the model you are waiting for[EB/OL]. (2021-08-09) [2023-12-10]. https://kexue.fm/archives/8610.
[18] 周志华. 机器学习[M]. 北京: 清华大学出版社, 2016: 25.
ZHOU Z H. Machine learning[M]. Beijing: Tsinghua University Press, 2016: 25.
[19] NIU T, ZHU S A, PANG L, et al. Sentiment analysis on multi-view social data[C]//Proceedings of the 22nd International Conference on Multimedia Modeling, Miami, 2016: 15-27.
[20] XU N, MAO W J. MultiSentiNet: a deep semantic network for multimodal sentiment analysis[C]//Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, 2017: 2399-2402.
[21] 韩燚笑, 马静. RCHFN模型: 一种多模态特征融合的情感分类方法[J]. 数据分析与知识发现, 2024, 8(12): 18-29.
HAN Y X, MA J. The RCHFN model: a multimodal feature fusion approach for sentiment classification[J]. Data Analysis and Knowledge Discovery, 2024, 8(12): 18-29.
[22] CHEEMA G S, HAKIMOV S, MÜLLER-BUDACK E, et al. A fair and comprehensive comparison of multimodal tweet sentiment analysis methods[C]//Proceedings of the 2021 Workshop on Multi-Modal Pre-Training for Multimedia Understanding, 2021: 37-45.
[23] GUO X Y, MA J, ZUBIAGA A. NUAA-QMUL-AIIT at memotion 3: multi-modal fusion with squeeze-and-excitation for Internet meme emotion analysis[J]. arXiv:2302.08326, 2023.
[24] YANG X C, FENG S, ZHANG Y F, et al. Multimodal sentiment detection based on multi-channel graph neural networks[C]//Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, 2021: 328-339.
[25] LI Z, XU B, ZHU C H, et al. CLMLF: a contrastive learning and multi-layer fusion method for multimodal sentiment detection[C]//Findings of the Association for Computational Linguistics: NAACL 2022, 2022: 2282-2294.
[26] XIAO X W, PU Y Y, ZHAO Z P, et al. Image-text sentiment analysis via context guided adaptive fine-tuning Transformer[J]. Neural Processing Letters, 2023, 55(3): 2103-2125.