Computer Engineering and Applications ›› 2024, Vol. 60 ›› Issue (2): 147-153. DOI: 10.3778/j.issn.1002-8331.2210-0023

• Pattern Recognition and Artificial Intelligence •


Knowledge Graph Completion Method Based on Neighborhood Hierarchical Perception

LIANG Meilin, DUAN Youxiang, CHANG Lunjie, SUN Qifeng   

  1. College of Computer Science and Technology, China University of Petroleum (East China), Qingdao, Shandong 266580, China
  2. Research Institute of Exploration & Development, PetroChina Tarim Oilfield Company, Korla, Xinjiang 841000, China
  • Online: 2024-01-15  Published: 2024-01-15


Abstract: Knowledge graph completion (KGC) aims to infer the missing elements of triples from the existing knowledge in a knowledge graph. Recent studies show that applying graph convolutional networks (GCNs) to the KGC task can improve a model's inference performance. However, most current GCN models treat neighborhood information equally, ignoring the different contributions of neighboring entities to the central entity, and update relation embeddings with a simple linear transformation. To address these problems, a neighborhood-aware hierarchical attention network, NAHAT, is proposed. To improve the expressive power of the model, NAHAT introduces entity feature information into relation updating and aggregates entity and relation representations to enrich heterogeneous relation semantics. In addition, NAHAT applies self-adversarial negative sampling to the loss calculation to train the model efficiently. Compared with the composition-based multi-relational graph convolutional network COMPGCN, the proposed model improves Hits@1 and Hits@10 by 3% and 2.6% respectively on the FB15K-237 dataset, and by 0.9% and 2.2% respectively on WN18RR. Experimental results demonstrate the effectiveness of the proposed model.
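The abstract states that NAHAT applies self-adversarial negative sampling in its loss calculation. The paper's exact loss is not reproduced here; the following is a minimal NumPy sketch of the standard self-adversarial negative sampling loss (as popularized for knowledge-graph embedding models), assuming triple scores where higher means more plausible. The function name and the margin/temperature values are illustrative, not taken from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def self_adversarial_loss(pos_score, neg_scores, gamma=9.0, alpha=1.0):
    """Illustrative self-adversarial negative sampling loss.

    pos_score:  scalar score of the positive triple (higher = more plausible)
    neg_scores: 1-D array of scores for sampled negative triples
    gamma:      margin; alpha: sampling temperature (both hypothetical values)
    """
    # Weight each negative by its current plausibility: harder
    # (higher-scoring) negatives receive larger weights, so training
    # focuses on them. Weights are treated as constants (no gradient).
    w = np.exp(alpha * neg_scores)
    w = w / w.sum()
    pos_term = -np.log(sigmoid(gamma + pos_score))
    neg_term = -(w * np.log(sigmoid(-gamma - neg_scores))).sum()
    return pos_term + neg_term
```

A well-separated positive (high positive score, low negative scores) yields a small loss, while hard negatives that outscore the positive dominate the weighted negative term.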

Key words: knowledge graph, knowledge representation learning, hierarchical attention mechanism (HAT), graph neural network (GNN)
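The abstract's central idea is that neighboring entities should contribute unequally to the central entity, with messages built from both entity and relation embeddings. The exact NAHAT formulation is not given on this page; the sketch below is a generic attention-weighted neighborhood aggregation of that kind, with all names and shapes hypothetical.

```python
import numpy as np

def softmax(x):
    z = np.exp(x - x.max())  # shift for numerical stability
    return z / z.sum()

def attentive_neighborhood_aggregation(center, neighbors, relations, W, a):
    """Generic attention-based neighbor aggregation (illustrative only).

    center:    (d,)  central-entity embedding
    neighbors: (n, d) neighboring-entity embeddings
    relations: (n, d) embeddings of the connecting relations
    W:         (d, 2d) projection for concatenated [neighbor ; relation]
    a:         (2d,)  attention vector scoring [center ; message] pairs
    """
    # Each neighbor's message fuses its entity and relation embeddings,
    # so heterogeneous relation semantics enter the aggregation.
    messages = (W @ np.concatenate([neighbors, relations], axis=1).T).T  # (n, d)
    # Attention scores let neighbors contribute unequally to the center.
    scores = np.array([a @ np.concatenate([center, m]) for m in messages])
    alpha = softmax(np.tanh(scores))
    return (alpha[:, None] * messages).sum(axis=0)  # (d,)
```

The attention weights replace the uniform averaging of plain GCN aggregation, which is the shortcoming the abstract identifies in most existing GCN-based KGC models.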