
Computer Engineering and Applications, 2025, Vol. 61, Issue (14): 1-19. DOI: 10.3778/j.issn.1002-8331.2412-0102
Survey on Research of Continual Relation Extraction Methods

HANG Tingting, GUO Ya, LI Desheng, FENG Jun

Online: 2025-07-15
Published: 2025-07-15
Abstract: Relation extraction aims to identify and extract relations between entities from text data. As data streams evolve dynamically, traditional relation extraction models often face the dual challenge of flexibility and effectiveness when handling newly emerging relation types. By learning continuously over time, continual relation extraction models can both adapt to the introduction of new relation types and effectively retain previously acquired knowledge, providing important support for the dynamic updating and expansion of knowledge graphs. This paper systematically reviews research progress in continual relation extraction. It describes the development history, basic concepts, and task definition of continual relation extraction; summarizes current approaches from four perspectives: relation prototypes, adversarial augmentation, contrastive learning, and other methods; introduces commonly used datasets and evaluation metrics, and comparatively evaluates the performance of mainstream models. Finally, it analyzes the limitations and challenges of existing methods and offers prospects for future research directions.
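To make the task definition above concrete: relations arrive in a sequence of tasks, and the model must classify a new instance against all relations seen so far without forgetting earlier ones. The following is a purely illustrative sketch of the relation-prototype idea surveyed in the paper, not the implementation of any specific model; the embeddings and relation names are hypothetical, and real systems would use a trained sentence encoder plus memory-replay strategies.

```python
import math

class PrototypeCRE:
    """Minimal prototype-based continual relation classifier (illustrative only)."""

    def __init__(self):
        # Episodic memory: relation label -> list of stored sample embeddings.
        self.memory = {}

    def learn_task(self, samples):
        """Learn one task: a batch of (embedding, relation) pairs.

        New relations are added without discarding memories of old ones,
        which is the mechanism that mitigates catastrophic forgetting here.
        """
        for emb, rel in samples:
            self.memory.setdefault(rel, []).append(list(emb))

    def _prototype(self, embs):
        # A relation prototype is the mean of its stored embeddings.
        n = len(embs)
        return [sum(col) / n for col in zip(*embs)]

    def predict(self, emb):
        # Nearest-prototype classification over all relations seen so far.
        def dist(p):
            return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, emb)))

        protos = {rel: self._prototype(e) for rel, e in self.memory.items()}
        return min(protos, key=lambda r: dist(protos[r]))
```

For example, after learning a first task containing only "founder_of" samples and a second task introducing "born_in", the classifier assigns a new embedding to whichever prototype lies closest, across both tasks.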
HANG Tingting, GUO Ya, LI Desheng, FENG Jun. Survey on Research of Continual Relation Extraction Methods[J]. Computer Engineering and Applications, 2025, 61(14): 1-19.