Computer Engineering and Applications ›› 2023, Vol. 59 ›› Issue (17): 266-274. DOI: 10.3778/j.issn.1002-8331.2210-0266

• Big Data and Cloud Computing •

Incorporating Relational Awareness and Temporal Attention for Temporal Knowledge Graph Completion

XU Zhihong, MAO Chen, WANG Liqin, DONG Yongfeng   

  1. School of Artificial Intelligence and Data Science, Hebei University of Technology, Tianjin 300401, China
    2. Hebei Key Laboratory of Big Data Computing, Tianjin 300401, China
    3. Hebei Engineering Research Center of Data-Driven Industrial Intelligence, Tianjin 300401, China
  • Online: 2023-09-01  Published: 2023-09-01

Abstract: Most existing temporal knowledge graph completion methods embed time information into triples and rely on static knowledge graph completion techniques to learn entity features, so they cannot holistically consider both the structural and the temporal information in the graph. To address this problem, this paper proposes a temporal knowledge graph completion model incorporating relational awareness and temporal attention (RATA). On the one hand, a graph convolutional network with a relation-aware aggregation mechanism integrates entity and relation features; its relation-specific parameters enhance the expressive power of the message function and encapsulate richer neighborhood context information. On the other hand, a long short-term memory network incorporating a self-attention mechanism learns the global and local features contained in the temporal data. Experimental results on the ICEWS18, ICEWS14, YAGO, and WIKI datasets show that RATA generally outperforms baseline models on MRR, Hits@1, Hits@3, and Hits@10, and that it is particularly advantageous on large-scale temporal datasets.
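For illustration, below is a minimal PyTorch sketch of the two components the abstract describes: a GCN layer whose messages use relation-specific parameters, and an LSTM whose hidden states are pooled by self-attention. All class names (RelationAwareGCNLayer, SelfAttentiveLSTM) and architectural details (additive entity-relation messages, mean normalization, single-head attention) are assumptions made for this sketch; the abstract does not specify RATA's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RelationAwareGCNLayer(nn.Module):
    """One GCN layer whose messages use a weight matrix per relation type."""
    def __init__(self, dim: int, num_relations: int):
        super().__init__()
        # Relation-specific parameters that strengthen the message function.
        self.rel_weight = nn.Parameter(torch.empty(num_relations, dim, dim))
        self.self_weight = nn.Linear(dim, dim, bias=False)
        nn.init.xavier_uniform_(self.rel_weight)

    def forward(self, ent_emb, rel_emb, edges):
        # edges: LongTensor of shape (E, 3) holding (subject, relation, object).
        s, r, o = edges[:, 0], edges[:, 1], edges[:, 2]
        # Message: relation-specific transform of the neighbor+relation feature.
        msg = torch.bmm((ent_emb[s] + rel_emb[r]).unsqueeze(1),
                        self.rel_weight[r]).squeeze(1)
        out = self.self_weight(ent_emb)        # self-loop term
        out = out.index_add(0, o, msg)         # sum incoming messages per object
        deg = torch.bincount(o, minlength=ent_emb.size(0)).float()
        return F.relu(out / deg.clamp(min=1).unsqueeze(1))  # mean-normalize

class SelfAttentiveLSTM(nn.Module):
    """LSTM over an entity's snapshot sequence; self-attention over the hidden
    states lets both local (recent) and global patterns shape the output."""
    def __init__(self, dim: int):
        super().__init__()
        self.lstm = nn.LSTM(dim, dim, batch_first=True)
        self.attn = nn.MultiheadAttention(dim, num_heads=1, batch_first=True)

    def forward(self, seq):             # seq: (batch, time, dim)
        h, _ = self.lstm(seq)           # local sequential features
        ctx, _ = self.attn(h, h, h)     # global dependencies across timesteps
        return ctx[:, -1]               # representation at the latest timestep

# Toy usage: 5 entities, 2 relations, 4 timestamped graph snapshots (the same
# toy edge list is reused for every snapshot purely to keep the demo short).
dim, T = 16, 4
gcn, temporal = RelationAwareGCNLayer(dim, 2), SelfAttentiveLSTM(dim)
ent_emb, rel_emb = torch.randn(5, dim), torch.randn(2, dim)
edges = torch.tensor([[0, 0, 1], [2, 1, 1], [3, 0, 4]])
snapshots = torch.stack([gcn(ent_emb, rel_emb, edges) for _ in range(T)], dim=1)
print(temporal(snapshots).shape)        # torch.Size([5, 16]), time-aware embeddings
```

In this sketch the GCN produces one entity representation per timestamped snapshot and the self-attentive LSTM fuses the sequence, mirroring the structural-then-temporal split the abstract describes; a real system would feed the fused embeddings to a scoring decoder, which is omitted here.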

Key words: temporal knowledge graph, graph convolutional network, long short-term memory, attention mechanism