[1] LI Q, JI H, HUANG L, et al. Joint event extraction via structured prediction with global features[C]//Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, Sofia, Aug 4-9, 2013. Stroudsburg: ACL, 2013: 73-82.
[2] CHEN Y B, XU L H, LIU K, et al. Event extraction via dynamic multi-pooling convolutional neural networks[C]//Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, Beijing, Jul 26-31, 2015. Stroudsburg: ACL, 2015: 167-176.
[3] LIU J, CHEN Y B, LIU K, et al. Event extraction as machine reading comprehension[C]//Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), Online, Nov 16-20, 2020. Stroudsburg: ACL, 2020: 1641-1651.
[4] CHEN Y B, YANG H, LIU K, et al. Collective event detection via a hierarchical and bias tagging networks with gated multi-level attention mechanisms[C]//Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Oct 31-Nov 4, 2018. Stroudsburg: ACL, 2018: 1267-1276.
[5] YANG S, FENG D W, QIAO L B, et al. Exploring pre-trained language models for event extraction and generation[C]//Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Florence, Jul 28-Aug 2, 2019. Stroudsburg: ACL, 2019: 5284-5294.
[6] LI F Y, PENG W H, CHEN Y G, et al. Event extraction as multi-turn question answering[C]//Findings of the Association for Computational Linguistics: EMNLP 2020, Online, Nov 16-20, 2020. Stroudsburg: ACL, 2020: 829-838.
[7] DU X Y, CARDIE C. Event extraction by answering (almost) natural questions[C]//Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), Online, Nov 16-20, 2020. Stroudsburg: ACL, 2020: 671-683.
[8] WANG J M, HAN B, WANG F, et al. Document-level core events extraction based on QA[J]. Journal of Physics: Conference Series, 2022, 2171(1): 012062.
[9] ZHENG S, CAO W, XU W, et al. Doc2EDAG: an end-to-end document-level framework for Chinese financial event extraction[C]//Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, Nov 3-7, 2019. Stroudsburg: ACL, 2019: 337-346.
[10] XU R X, LIU T Y, LI L, et al. Document-level event extraction via heterogeneous graph-based interaction model with a tracker[C]//Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, 2021. Stroudsburg: ACL, 2021: 3533-3546.
[11] HUANG Y S, JIA W J. Exploring sentence community for document-level event extraction[C]//Findings of the Association for Computational Linguistics: EMNLP 2021, 2021. Stroudsburg: ACL, 2021: 340-351.
[12] YANG H, CHEN Y B, LIU K, et al. DCFEE: a document-level Chinese financial event extraction system based on automatically labeled training data[C]//Proceedings of ACL 2018, System Demonstrations, Melbourne, Jul 15-20, 2018. Stroudsburg: ACL, 2018: 50-55.
[13] 王雷, 李瑞轩, 李玉华, 等. 文档级无触发词事件抽取联合模型[J]. 计算机科学与探索, 2021, 15(12): 2327-2334.
WANG L, LI R X, LI Y H, et al. Joint model for document-level event extraction without triggers[J]. Journal of Frontiers of Computer Science and Technology, 2021, 15(12): 2327-2334.
[14] 仲伟峰, 杨航, 陈玉博, 等. 基于联合标注和全局推理的篇章级事件抽取[J]. 中文信息学报, 2019, 33(9): 88-95.
ZHONG W F, YANG H, CHEN Y B, et al. Document-level event extraction based on joint labeling and global reasoning[J]. Journal of Chinese Information Processing, 2019, 33(9): 88-95.
[15] WANG P, DENG Z, CUI R. TDJEE: a document-level joint model for financial event extraction[J]. Electronics, 2021, 10(7): 824.
[16] DU X Y, CARDIE C. Document-level event role filler extraction using multi-granularity contextualized encoding[C]//Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Online, Jul 5-10, 2020. Stroudsburg: ACL, 2020: 8010-8020.
[17] DEVLIN J, CHANG M W, LEE K, et al. BERT: pre-training of deep bidirectional transformers for language understanding[C]//Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Minneapolis, Jun 2-7, 2019. Stroudsburg: ACL, 2019: 4171-4186.
[18] LAFFERTY J, MCCALLUM A, PEREIRA F C. Conditional random fields: probabilistic models for segmenting and labeling sequence data[C]//Proceedings of the 18th International Conference on Machine Learning, Williamstown, Jun 28-Jul 1, 2001. San Francisco: Morgan Kaufmann, 2001: 282-289.
[19] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[C]//Proceedings of the 31st Annual Conference on Neural Information Processing Systems, Long Beach, Dec 4-9, 2017: 5998-6008.
[20] HUANG Z H, XU W, YU K. Bidirectional LSTM-CRF models for sequence tagging[J]. arXiv:1508.01991, 2015.