Computer Engineering and Applications ›› 2023, Vol. 59 ›› Issue (17): 152-158. DOI: 10.3778/j.issn.1002-8331.2205-0432

• Pattern Recognition and Artificial Intelligence •

Improved Graph Attention Mechanism Model for Graph Embedding

LI Zhijie, HAN Jinjin, LI Changhua, ZHANG Jie   

1. College of Information and Control Engineering, Xi'an University of Architecture and Technology, Xi'an 710055, China
  • Online: 2023-09-01  Published: 2023-09-01

Abstract: To address the loss of feature information and the incomplete preservation of graph topology that arise when graph neural network models learn node representations for graph embedding, an improved graph attention mechanism model is proposed. The model consists of two parts: a node-level bidirectional attention mechanism and graph-level self-attention graph pooling. When learning new feature-vector representations of graph nodes, attention weights are computed in both directions along each edge, which provides a reliable basis for selecting which neighboring nodes to retain while reinforcing the similarity between nodes. On the overall topology of the graph, self-attention graph pooling takes the node feature vectors as input and generates the graph embedding representation in the pooling layer from the self-attention weights supplied by an attention convolution layer. The model is evaluated on the Cora, Citeseer, and Pubmed datasets. The experimental results show that, compared with the baseline graph attention mechanism model, the improved model fully accounts for both the local and global structural features of the graph, effectively strengthens the model's ability to aggregate neighborhood information, reduces the loss of original features during graph embedding, and markedly improves performance on downstream tasks.
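The abstract names two components but not their formulas, so the following PyTorch sketch only illustrates the two ideas at a high level: scoring each edge in both directions before normalizing (node-level bidirectional attention), and ranking nodes by a self-attention score to build a pooled graph (graph-level self-attention pooling, in the style of SAGPool). The class names, the dense-adjacency interface, the rule for combining the two directional scores (simple averaging), and the pooling ratio are all illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class BidirectionalGraphAttention(nn.Module):
    """Node-level bidirectional attention (sketch): each edge (i, j) is
    scored in both directions, i->j and j->i, and the scores are combined
    so that neighbor retention reflects mutual rather than one-sided
    affinity. The combination rule (averaging) is an assumption."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)
        # Separate score vectors for the two edge directions.
        self.a_fwd = nn.Parameter(torch.empty(2 * out_dim))
        self.a_bwd = nn.Parameter(torch.empty(2 * out_dim))
        nn.init.normal_(self.a_fwd, std=0.1)
        nn.init.normal_(self.a_bwd, std=0.1)

    def forward(self, x, adj):
        # x: (N, in_dim) node features; adj: (N, N) dense 0/1 adjacency,
        # assumed to include self-loops so every softmax row is non-empty.
        h = self.W(x)                                # (N, out_dim)
        n = h.size(0)
        hi = h.unsqueeze(1).expand(n, n, -1)         # h_i broadcast over j
        hj = h.unsqueeze(0).expand(n, n, -1)         # h_j broadcast over i
        e_fwd = F.leaky_relu(torch.cat([hi, hj], -1) @ self.a_fwd)  # i -> j
        e_bwd = F.leaky_relu(torch.cat([hj, hi], -1) @ self.a_bwd)  # j -> i
        e = 0.5 * (e_fwd + e_bwd)                    # combine both directions
        e = e.masked_fill(adj == 0, float("-inf"))   # restrict to edges
        alpha = torch.softmax(e, dim=1)              # normalize over neighbors
        return alpha @ h                             # aggregated node features


class SelfAttentionGraphPooling(nn.Module):
    """Graph-level self-attention pooling (sketch): a light attention layer
    scores every node, the top-k nodes are kept, and their gated features
    plus the induced subgraph form the pooled graph."""

    def __init__(self, dim, ratio=0.5):
        super().__init__()
        self.score = nn.Linear(dim, 1)  # stand-in for an attention conv layer
        self.ratio = ratio              # fraction of nodes kept (assumption)

    def forward(self, x, adj):
        s = torch.tanh(self.score(adj @ x).squeeze(-1))   # (N,) node scores
        k = max(1, int(self.ratio * x.size(0)))
        idx = torch.topk(s, k).indices                    # keep top-k nodes
        x_pooled = x[idx] * s[idx].unsqueeze(-1)          # gate kept features
        adj_pooled = adj[idx][:, idx]                     # induced subgraph
        return x_pooled, adj_pooled


if __name__ == "__main__":
    n, d = 6, 8
    x = torch.randn(n, d)
    adj = (torch.rand(n, n) > 0.5).float()
    adj = ((adj + adj.T + torch.eye(n)) > 0).float()      # symmetric + loops
    att = BidirectionalGraphAttention(d, 16)
    pool = SelfAttentionGraphPooling(16, ratio=0.5)
    h = att(x, adj)
    g, adj_p = pool(h, adj)
    print(h.shape, g.shape)  # torch.Size([6, 16]) torch.Size([3, 16])
```

Averaging the forward and backward scores is just one plausible symmetric combination; the paper may weight or concatenate the two directions differently, and a production version would use sparse edge lists rather than dense N×N attention.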

Key words: graph embedding, bidirectional graph attention mechanism, self-attention graph pooling, feature representation, graph topology