Computer Engineering and Applications ›› 2024, Vol. 60 ›› Issue (14): 37-49.DOI: 10.3778/j.issn.1002-8331.2308-0216

• Research Hotspots and Reviews •

Overview of Research Progress in Graph Transformers

ZHOU Chengchen, YU Qiancheng, ZHANG Lisi, HU Zhiyong, ZHAO Mingzhi   

  1. School of Computer Science and Engineering, North Minzu University, Yinchuan 750021, China
  2. The Key Laboratory of Images & Graphics Intelligent Processing of State Ethnic Affairs Commission, Yinchuan 750021, China
  • Online: 2024-07-15  Published: 2024-07-15


Abstract: With the widespread application of graph-structured data in many practical scenarios, the demand for effectively modeling and processing such data is growing. Graph Transformers (GTs), a class of models that apply Transformers to graph data, can effectively alleviate the over-smoothing and over-squashing problems of traditional graph neural networks (GNNs) and thus learn better feature representations. Firstly, based on a survey of recent GT literature, existing model architectures are divided into two categories: the first category injects graph position and structure information into Transformers through absolute and relative encodings, enhancing the Transformer's ability to understand and process graph-structured data; the second category combines GNNs with Transformers in different ways (serial, alternating, parallel) to fully exploit the advantages of both. Secondly, applications of GTs in fields such as information security, drug discovery, and knowledge graphs are introduced, and the advantages and disadvantages of models for different purposes are compared and summarized. Finally, the challenges facing future GT research are analyzed in terms of scalability, complex graphs, and better integration methods.
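To illustrate the first category described above, the following is a minimal sketch of one common absolute encoding, the Laplacian eigenvector positional encoding, in which eigenvectors of the normalized graph Laplacian are concatenated to node features before they are fed to a Transformer. This is an illustrative example using NumPy, not the specific formulation of any model surveyed here; the function name `laplacian_pe` and the feature dimensions are assumptions made for the sketch.

```python
import numpy as np

def laplacian_pe(adj, k):
    """Return a k-dimensional Laplacian eigenvector positional encoding.

    adj: (n, n) symmetric adjacency matrix of an undirected graph.
    """
    n = adj.shape[0]
    deg = adj.sum(axis=1)
    # Symmetrically normalized Laplacian: L = I - D^{-1/2} A D^{-1/2}
    d_inv_sqrt = np.where(deg > 0, deg ** -0.5, 0.0)
    lap = np.eye(n) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    # Eigenvectors of the smallest eigenvalues act as graph "coordinates";
    # the first (constant) eigenvector carries no information, so skip it.
    eigvals, eigvecs = np.linalg.eigh(lap)
    return eigvecs[:, 1:k + 1]

# Example: a 4-node cycle graph
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]], dtype=float)
pe = laplacian_pe(adj, k=2)           # (4, 2) positional features
node_feats = np.random.rand(4, 8)     # hypothetical node features
# Concatenated tokens would be the input sequence to a Transformer encoder
tokens = np.concatenate([node_feats, pe], axis=1)
```

Because these eigenvectors are defined up to sign (and rotation within repeated eigenvalues), practical models typically add sign-flip augmentation or sign-invariant processing on top of this basic encoding.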

Key words: Graph Transformers (GTs), graph neural network, graph representation learning, heterogeneous graph
