[1] 王人玉, 项威, 王邦, 等. 文档级事件抽取研究综述[J]. 中文信息学报, 2023, 37(6): 1-14.
WANG R Y, XIANG W, WANG B, et al. Review of document-level event extraction research [J]. Journal of Chinese Information Processing, 2023, 37(6): 1-14.
[2] DEVLIN J, CHANG M W, LEE K, et al. BERT: pre-training of deep bidirectional transformers for language understanding[C]//Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2019: 4171-4186.
[3] DU X Y, CARDIE C. Document-level event role filler extraction using multi-granularity contextualized encoding[C]//Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 2020: 8010-8020.
[4] ZHENG S C, WANG F, BAO H Y, et al. Joint extraction of entities and relations based on a novel tagging scheme[C]//Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, 2017: 1227-1236.
[5] WADDEN D, WENNBERG U, LUAN Y, et al. Entity, relation, and event extraction with contextualized span representations[C]//Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, 2019: 5783-5788.
[6] LI F Y, PENG W H, CHEN Y G, et al. Event extraction as multi-turn question answering[C]//Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings, 2020: 829-838.
[7] CHEN Y M, CHEN T F, EBNER S, et al. Reading the manual: event extraction as definition comprehension[C]//Proceedings of the Fourth Workshop on Structured Prediction for NLP@EMNLP, 2020: 74-83.
[8] YANG S, FENG D W, QIAO L B, et al. Exploring pre-trained language models for event extraction and generation[C]//Proceedings of the Annual Meeting of the Association for Computational Linguistics, 2019: 5284-5294.
[9] MCCANN B, BRADBURY J, XIONG C M, et al. Learned in translation: contextualized word vectors[C]//Proceedings of the 31st International Conference on Neural Information Processing Systems, 2017: 6294-6305.
[10] PETERS M, NEUMANN M, IYYER M, et al. Deep contextualized word representations[C]//Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2018: 2227-2237.
[11] HOCHREITER S, SCHMIDHUBER J. Long short-term memory[J]. Neural Computation, 1997, 9(8): 1735-1780.
[12] HOWARD J, RUDER S. Universal language model fine-tuning for text classification[C]//Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, 2018: 328-339.
[13] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[C]//Proceedings of Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems, 2017: 5998-6008.
[14] RADFORD A, NARASIMHAN K, SALIMANS T, et al. Improving language understanding by generative pre-training[EB/OL]. [2023-05-21]. https://www.cs.ubc.ca/~amuham01/LING530/papers/radford2018improving.pdf.
[15] LI Q Z, ZHANG Q. A unified model for financial event classification, detection and summarization[C]//Proceedings of International Joint Conference on Artificial Intelligence, 2020.
[16] LU Y, LIN H, XU J, et al. Text2Event: controllable sequence-to-structure generation for end-to-end event extraction[J]. arXiv preprint arXiv:2106.09232, 2021.
[17] SHENG J W, GUO S, YU B W, et al. CasEE: a joint learning framework with cascade decoding for overlapping event extraction[C]//Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, 2021: 164-174.
[18] YU B W, ZHANG Z Y, SHENG J W, et al. Semi-open information extraction[C]//Proceedings of the Web Conference, 2021.
[19] 王腾, 张大伟, 王利琴, 等. 多模态特征自适应融合的虚假新闻检测[J]. 计算机工程与应用, 2024, 60(13): 102-112.
WANG T, ZHANG D W, WANG L Q, et al. Multimodal feature adaptive fusion for fake news detection[J]. Computer Engineering and Applications, 2024, 60(13): 102-112.
[20] CHO K, VAN MERRIENBOER B, GULCEHRE C, et al. Learning phrase representations using RNN encoder-decoder for statistical machine translation[C]//Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, 2014: 1724-1734.
[21] SRIVASTAVA R K, GREFF K, SCHMIDHUBER J. Training very deep networks[C]//Proceedings of Neural Information Processing Systems, 2015.
[22] XU D, OUYANG W L, WANG X G, et al. PAD-Net: multi-tasks guided prediction-and-distillation network for simultaneous depth estimation and scene parsing[C]//Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2018.
[23] KONG S, FOWLKES C. Recurrent scene parsing with perspective understanding in the loop[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018.
[24] LI X T, ZHAO H, HAN L, et al. GFF: gated fully fusion for semantic segmentation[C]//Proceedings of the AAAI Conference on Artificial Intelligence, 2020.
[25] LI Q, JI H, HUANG L. Joint event extraction via structured prediction with global features[C]//Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, 2013: 73-82.
[26] LIU S L, LI Y, ZHANG F, et al. Event detection without triggers[C]//Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2019: 735-744.
[27] LI X Y, LI F Y, PAN L, et al. DuEE: a large-scale dataset for Chinese event extraction in real-world scenarios[C]//Proceedings of the CCF International Conference on Natural Language Processing and Chinese Computing, 2020.
[28] ZHOU Y, CHEN Y B, ZHAO J, et al. What the role is vs. what plays the role: semi-supervised event argument extraction via dual question answering[C]//Proceedings of the AAAI Conference on Artificial Intelligence, 2021: 14638-14646.
[29] CHEN Y B, XU L H, LIU K, et al. Event extraction via dynamic multi-pooling convolutional neural networks[C]//Proceedings of the Annual Meeting of the Association for Computational Linguistics, 2015: 167-176.
[30] ZHANG Z Y, HAN X, LIU Z Y, et al. ERNIE: enhanced language representation with informative entities[C]//Proceedings of the Annual Meeting of the Association for Computational Linguistics, 2019.