Computer Engineering and Applications ›› 2024, Vol. 60 ›› Issue (12): 136-143. DOI: 10.3778/j.issn.1002-8331.2303-0164

• Pattern Recognition and Artificial Intelligence •


Multi-Relational Graph Self-Attention Mechanism Enhanced Knowledge Representation Learning

LIU Dongshuai, AN Jingmin, MENG Fanchen, LI Guanyu   

  1. School of Computer and Software, Dalian Neusoft University of Information, Dalian, Liaoning 116023, China
  2. College of Information Science and Technology, Dalian Maritime University, Dalian, Liaoning 116026, China
  • Online:2024-06-15 Published:2024-06-14



Abstract: A knowledge graph is a heterogeneous graph that represents multi-relational data. Existing knowledge representation learning methods improve the expressiveness of knowledge triples by increasing the interaction between entity and relation embeddings, but they cannot make the triples carry multi-level semantics, that is, the multiple association attributes an entity has under a specific relation. Graph neural networks use structural information to assign weights to an entity's neighbor nodes, but they cannot perform more precise message passing over the complex interactions between an entity and its neighbors. To address this, a knowledge representation learning model based on a graph self-attention mechanism (CompESAT) is proposed to encode triples. The self-attention mechanism is introduced for composite entities generated by aggregating neighbors, and the entity representation is dynamically updated as the contributions of different neighbors change. The encoder defines multiple graph attention layers that handle the multiple local features of a composite entity representation and adaptively learns the composite entity embedding. The decoder complements the global features of the decoded triple. In the link prediction task, the model improves all evaluation metrics on the FB15k-237 dataset, with MRR and Hit@10 rising by 0.042 and 0.045 respectively; on the WN18RR dataset, Hit@10 improves by 0.069.
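The neighbor-aggregation step described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the paper's CompESAT implementation: the element-wise composition operator, the additive scoring vector `attn`, and all dimensions are assumptions for the sake of a runnable example.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

def aggregate_neighbors(entity, neighbors, relations, a):
    """One attention head: compose each neighbor with its relation, score
    the composites against the centre entity, softmax the scores, and
    return the attention-weighted sum as the composite entity embedding."""
    # Element-wise product as the (assumed) entity-relation composition.
    composed = neighbors * relations           # (n, d)
    scores = composed @ a + entity @ a         # (n,) additive attention scores
    alpha = softmax(scores)                    # per-neighbor contribution weights
    return alpha @ composed                    # (d,) aggregated embedding

rng = np.random.default_rng(0)
d, n = 8, 5
h = rng.normal(size=d)           # centre entity embedding
nbrs = rng.normal(size=(n, d))   # neighbor entity embeddings
rels = rng.normal(size=(n, d))   # corresponding relation embeddings
attn = rng.normal(size=d)        # learnable scoring vector (assumed)
h_new = aggregate_neighbors(h, nbrs, rels, attn)
print(h_new.shape)  # (8,)
```

Because the weights come from a softmax over per-neighbor scores, the updated representation shifts automatically as the relative contributions of different neighbors change, which is the dynamic-update behavior the abstract describes.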

Key words: knowledge representation learning, graph structure, self-attention mechanism, multi-level semantics, neighborhood aggregation, heterogeneous links
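For readers unfamiliar with the link-prediction metrics cited in the abstract, MRR and Hit@10 are computed from the rank of the correct entity among all candidates. The sketch below is a generic illustration of the metric definitions (the `ranks` values are made up), not the paper's evaluation code:

```python
def mrr_and_hits(ranks, k=10):
    """Mean reciprocal rank and Hits@k from a list of ranks,
    where rank 1 means the correct entity was scored highest."""
    mrr = sum(1.0 / r for r in ranks) / len(ranks)
    hits = sum(r <= k for r in ranks) / len(ranks)
    return mrr, hits

ranks = [1, 3, 12, 2, 50]        # hypothetical ranks over 5 test triples
mrr, hits10 = mrr_and_hits(ranks)
print(round(mrr, 3), hits10)     # 0.387 0.6
```

Both metrics lie in [0, 1] and are higher-is-better, so the reported gains of +0.042 MRR and +0.045 Hit@10 on FB15k-237 are absolute improvements on these scales.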