[1] CHEN Y, XU L, LIU K, et al. Event extraction via dynamic multi-pooling convolutional neural networks[C]//Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, 2015: 167-176.
[2] LIU S, CHEN Y, LIU K, et al. Exploiting argument information to improve event detection via supervised attention mechanisms[C]//Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, 2017: 1789-1798.
[3] YANG S, FENG D, QIAO L, et al. Exploring pre-trained language models for event extraction and generation[C]//Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 2019: 5284-5294.
[4] DU X, CARDIE C. Event extraction by answering (almost) natural questions[C]//Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020: 671-683.
[5] MA Y, WANG Z, CAO Y, et al. Prompt for extraction? PAIE: prompting argument interaction for event argument extraction[C]//Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2022: 6759-6774.
[6] HUANG L, JI H, CHO K, et al. Zero-shot transfer learning for event extraction[C]//Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2018: 2160-2170.
[7] ZHANG H, WANG H, ROTH D. Zero-shot label-aware event trigger and argument classification[C]//Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, 2021: 1331-1340.
[8] LYU Q, ZHANG H, SULEM E, et al. Zero-shot event extraction via transfer learning: challenges and insights[C]//Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, 2021: 322-332.
[9] ZHANG S, JI T, JI W, et al. Zero-shot event detection based on ordered contrastive learning and prompt-based prediction[C]//Findings of the Association for Computational Linguistics: NAACL 2022, 2022: 2572-2580.
[10] LI S, JI H, HAN J. Open relation and event type discovery with type abstraction[C]//Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, 2022: 6864-6877.
[11] HUANG L, CASSIDY T, FENG X, et al. Liberal event extraction and event schema induction[C]//Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2016: 258-268.
[12] HUANG L, JI H. Semi-supervised new event type induction and event detection[C]//Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020: 718-724.
[13] SHEN J, ZHANG Y, JI H, et al. Corpus-based open-domain event type induction[C]//Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, 2021: 5427-5440.
[14] CAO P, HAO Y, CHEN Y, et al. Event ontology completion with hierarchical structure evolution networks[C]//Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, 2023: 306-320.
[15] ZHAO J, GUI T, ZHANG Q, et al. A relation-oriented clustering method for open relation extraction[C]//Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, 2021: 9707-9718.
[16] WALKER C, STRASSEL S, MEDERO J, et al. ACE 2005 multilingual training corpus LDC2006T06[EB/OL]. (2006). https://catalog.ldc.upenn.edu/LDC2006T06.
[17] GRISHMAN R. Information extraction: capabilities and challenges[R]. 2012 International Winter School in Language and Speech Technologies, 2012.
[18] 刘泽旖, 余文华, 洪智勇, 等. 基于问题回答模式的中文事件抽取[J]. 计算机工程与应用, 2023, 59(2): 153-160.
LIU Z Y, YU W H, HONG Z Y, et al. Chinese event extraction using question answering[J]. Computer Engineering and Applications, 2023, 59(2): 153-160.
[19] RILOFF E. Automatically constructing a dictionary for information extraction tasks[C]//Proceedings of the Eleventh National Conference on Artificial Intelligence, 1993: 811-816.
[20] YAKUSHIJI A, TATEISI Y, MIYAO Y, et al. Event extraction from biomedical papers using a full parser[C]//Proceedings of the Pacific Symposium on Biocomputing, 2001: 408-419.
[21] DEVLIN J, CHANG M W, LEE K, et al. BERT: pre-training of deep bidirectional transformers for language understanding[C]//Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), 2019: 4171-4186.
[22] RADFORD A, WU J, CHILD R, et al. Language models are unsupervised multitask learners[EB/OL]. [2023-09-01]. https://gwern.net/doc/ai/nn/transformer/gpt/2019-radford.pdf.
[23] BROWN T B, MANN B, RYDER N, et al. Language models are few-shot learners[C]//Proceedings of the 34th International Conference on Neural Information Processing Systems, 2020: 1877-1901.
[24] GAO T, FISCH A, CHEN D. Making pre-trained language models better few-shot learners[C]//Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), 2021: 3816-3830.
[25] SCHICK T, SCHÜTZE H. It’s not just size that matters: small language models are also few-shot learners[C]//Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2021: 2339-2352.
[26] 鲍彤, 章成志. ChatGPT中文信息抽取能力测评——以三种典型的抽取任务为例[J]. 数据分析与知识发现, 2023, 7(9): 1-11.
BAO T, ZHANG C Z. Extracting Chinese information with ChatGPT: an empirical study by three typical tasks[J]. Data Analysis and Knowledge Discovery, 2023, 7(9): 1-11.
[27] 王泽深, 杨云, 向鸿鑫, 等. 零样本学习综述[J]. 计算机工程与应用, 2021, 57(19): 1-17.
WANG Z S, YANG Y, XIANG H X, et al. Survey on zero-shot learning[J]. Computer Engineering and Applications, 2021, 57(19): 1-17.
[28] 张森辉. 基于有序对比学习的零样本事件检测技术[D]. 上海: 华东师范大学, 2023.
ZHANG S H. Zero-shot event detection based on ordered contrastive learning[D]. Shanghai: East China Normal University, 2023.
[29] XIAN Y, AKATA Z, SHARMA G, et al. Latent embeddings for zero-shot classification[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016: 69-77.
[30] CHEN Z, LI J, LUO Y, et al. CANZSL: cycle-consistent adversarial networks for zero-shot learning from natural language[C]//Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 2020: 874-883.
[31] 赵鹏, 汪纯燕, 张思颖, 等. 一种基于融合重构的子空间学习的零样本图像分类方法[J]. 计算机学报, 2021, 44(2): 409-421.
ZHAO P, WANG C Y, ZHANG S Y, et al. A zero-shot image classification method based on subspace learning with the fusion of reconstruction[J]. Chinese Journal of Computers, 2021, 44(2): 409-421.
[32] XIAN Y, LORENZ T, SCHIELE B, et al. Feature generating networks for zero-shot learning[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018: 5542-5551.
[33] FENG Y, HUANG X, YANG P, et al. Non-generative generalized zero-shot learning via task-correlated disentanglement and controllable samples synthesis[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022: 9346-9355.
[34] 张海涛, 苏琳. 结合知识图谱的变分自编码器零样本图像识别[J]. 计算机工程与应用, 2023, 59(1): 236-243.
ZHANG H T, SU L. Variational auto-encoder combined with knowledge graph zero-shot learning[J]. Computer Engineering and Applications, 2023, 59(1): 236-243.
[35] CHEN T, KORNBLITH S, NOROUZI M, et al. A simple framework for contrastive learning of visual representations[C]//Proceedings of the International Conference on Machine Learning, 2020: 1597-1607.
[36] FANG H, WANG S, ZHOU M, et al. CERT: contrastive self-supervised learning for language understanding[J]. arXiv:2005.12766, 2020.
[37] GAO T, YAO X, CHEN D. SimCSE: simple contrastive learning of sentence embeddings[C]//Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, 2021: 6894-6910.
[38] ZHANG D, NAN F, WEI X, et al. Supporting clustering with contrastive learning[C]//Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2021: 5419-5430.
[39] LIANG B, ZHU Q, LI X, et al. JointCL: a joint contrastive learning framework for zero-shot stance detection[C]//Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2022: 81-91.
[40] LI J, SHANG J, MCAULEY J. UCTopic: unsupervised contrastive learning for phrase representations and topic mining[C]//Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2022: 6159-6169.
[41] OpenAI. ChatGPT: optimizing language models for dialogue[EB/OL]. [2023-01-12]. https://openai.com/blog/chatgpt/.
[42] ZENG A, LIU X, DU Z, et al. GLM-130B: an open bilingual pre-trained model[C]//Proceedings of the Eleventh International Conference on Learning Representations, 2023.
[43] VAN DER MAATEN L, HINTON G. Visualizing data using t-SNE[J]. Journal of Machine Learning Research, 2008, 9(11): 2579-2605.