Computer Engineering and Applications ›› 2023, Vol. 59 ›› Issue (11): 131-140. DOI: 10.3778/j.issn.1002-8331.2202-0229

• Pattern Recognition and Artificial Intelligence •

Graph Representation Learning Model for Multi-Level Feature Augmentation

FENG Yao, KONG Bing, ZHOU Lihua, BAO Chongming, WANG Chongyun   

  1. School of Information, Yunnan University, Kunming 650500, China
    2. School of Software, Yunnan University, Kunming 650500, China
    3. School of Ecology and Environmental Science, Yunnan University, Kunming 650504, China
  • Online: 2023-06-01    Published: 2023-06-01

Graph Representation Learning Model for Multi-Level Feature Augmentation

FENG Yao, KONG Bing, ZHOU Lihua, BAO Chongming, WANG Chongyun

  1. School of Information, Yunnan University, Kunming 650500
    2. School of Software, Yunnan University, Kunming 650500
    3. School of Ecology and Environmental Science, Yunnan University, Kunming 650504

Abstract: Representation learning on graph data has shown significant value for downstream graph tasks such as recommendation systems and link prediction. However, current mainstream methods have notable drawbacks: the fixed propagation scheme of graph neural networks limits the semantic expressiveness of node representations, and the regularized reconstruction used in encoder-decoder architectures prevents the learning of differentiated features between nodes, so the resulting representations may not adapt well to some downstream graph tasks. To address this, a multi-level feature augmented graph representation learning model based on mutual information maximization is proposed, which learns high-quality node representations in an unsupervised manner. The model first uses an extractor to preserve the distinguishable features contained in the original attributes, then feeds them to an attention-based aggregator that maintains the local relevance and global difference of nodes in the encoding space, and finally applies the Deep Graph Infomax strategy to unify the global encoding rules. Experimental results demonstrate that the model outperforms all mainstream comparison baselines on several benchmark classification datasets under both transductive and inductive learning.
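The abstract does not state the paper's exact loss, but the Deep Graph Infomax strategy it invokes is conventionally trained with a noise-contrastive, binary cross-entropy objective; purely as a reference point, that standard form is

\mathcal{L} = \frac{1}{N+M}\left(\sum_{i=1}^{N}\mathbb{E}\left[\log \mathcal{D}\left(\mathbf{h}_i,\mathbf{s}\right)\right]+\sum_{j=1}^{M}\mathbb{E}\left[\log\left(1-\mathcal{D}\left(\tilde{\mathbf{h}}_j,\mathbf{s}\right)\right)\right]\right)

where \mathbf{h}_i are encodings of the N real nodes, \tilde{\mathbf{h}}_j are encodings of M nodes from a corrupted graph, \mathbf{s} is a global summary vector read out from all node encodings, and \mathcal{D} is a discriminator scoring node-summary pairs. Maximizing this objective acts as a Jensen-Shannon-style estimator of the mutual information between local node representations and the global summary; how the proposed model departs from this baseline form is not specified here.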

Key words: graph representation learning, mutual information maximization, unsupervised learning, transductive learning, inductive learning

Abstract: Representation learning on graph data has shown important research value in downstream graph tasks such as recommendation systems and link prediction. However, current mainstream methods have several drawbacks: the fixed propagation pattern of graph convolutional networks limits the semantic expressiveness of node representations, and the regularized reconstruction in encoder-decoder structures hinders the learning of differentiated features between nodes, both of which may prevent node representations from adapting well to downstream graph tasks. To this end, a multi-level feature augmented graph representation learning model is proposed based on mutual information maximization, which can generate high-quality node representations in an unsupervised manner. The model uses an extractor to preserve the differentiated features in the original node attributes, employs an attention aggregator to maintain the local relevance and global difference of the node distribution in the encoding space, and applies a deep graph infomax strategy to unify the global encoding rules. Experimental results show that, on several benchmark graph datasets, the model's encoding performance exceeds all mainstream comparison baselines under both transductive and inductive learning.
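To make the extractor, attention aggregator, and infomax pipeline described above concrete, the following is a minimal PyTorch sketch of that kind of model; the module names, dimensions, and random toy graph are assumptions for illustration, not the authors' implementation.

# Minimal illustrative sketch of an extractor + attention aggregator + DGI-style
# mutual-information objective. Names, sizes, and the toy graph are assumptions,
# not the paper's released code.
import torch
import torch.nn as nn


class Extractor(nn.Module):
    """Keeps distinguishable per-node features from the raw attributes."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, hid_dim)

    def forward(self, x):
        return torch.relu(self.lin(x))


class AttentionAggregator(nn.Module):
    """Aggregates neighbour features with learned attention weights."""
    def __init__(self, hid_dim):
        super().__init__()
        self.att = nn.Linear(2 * hid_dim, 1)

    def forward(self, h, adj):
        n = h.size(0)
        hi = h.unsqueeze(1).expand(n, n, -1)          # feature of node i, broadcast
        hj = h.unsqueeze(0).expand(n, n, -1)          # feature of node j, broadcast
        e = self.att(torch.cat([hi, hj], dim=-1)).squeeze(-1)
        e = e.masked_fill(adj == 0, float("-inf"))    # attend only over neighbours
        alpha = torch.softmax(e, dim=-1)
        return torch.relu(alpha @ h)


class DGIStyleModel(nn.Module):
    """Contrasts (node, graph-summary) pairs against corrupted nodes, DGI-style."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.extractor = Extractor(in_dim, hid_dim)
        self.aggregator = AttentionAggregator(hid_dim)
        self.weight = nn.Parameter(torch.empty(hid_dim, hid_dim))
        nn.init.xavier_uniform_(self.weight)

    def encode(self, x, adj):
        return self.aggregator(self.extractor(x), adj)

    def forward(self, x, adj):
        h = self.encode(x, adj)                                  # real node encodings
        h_neg = self.encode(x[torch.randperm(x.size(0))], adj)   # corrupted graph
        s = torch.sigmoid(h.mean(dim=0))                         # global summary vector
        pos = torch.sigmoid(h @ self.weight @ s)                 # discriminator scores
        neg = torch.sigmoid(h_neg @ self.weight @ s)
        # Binary cross-entropy form of the mutual-information objective
        return -(torch.log(pos + 1e-8).mean() + torch.log(1 - neg + 1e-8).mean())


if __name__ == "__main__":
    n, in_dim, hid_dim = 32, 16, 8
    x = torch.randn(n, in_dim)                                   # toy node attributes
    adj = (torch.rand(n, n) < 0.1).float()
    adj = ((adj + adj.t() + torch.eye(n)) > 0).float()           # symmetric + self-loops
    model = DGIStyleModel(in_dim, hid_dim)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(5):                                           # a few unsupervised steps
        opt.zero_grad()
        loss = model(x, adj)
        loss.backward()
        opt.step()
    print("unsupervised loss:", loss.item())

In this sketch the shared summary vector and discriminator loosely stand in for the unified global encoding rules, while the per-node attention weights give each node a differentiated view of its neighbourhood.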

Key words: graph representation learning, mutual information maximization, unsupervised learning, transductive learning, inductive learning