Computer Engineering and Applications ›› 2024, Vol. 60 ›› Issue (4): 39-56. DOI: 10.3778/j.issn.1002-8331.2304-0139

• Hot Topics and Reviews •


Survey of Vision Transformer in Low-Level Computer Vision

ZHU Kai, LI Li, ZHANG Tong, JIANG Sheng, BIE Yiming   

  1. School of Physics, Changchun University of Science and Technology, Changchun 130022, China
    2. Centre for Opto/Bio-Nano Measurement and Manufacturing, Zhongshan Institute of Changchun University of Science and Technology, Zhongshan, Guangdong 528437, China
    3. School of Electronic and Information Engineering, Changchun University of Science and Technology, Changchun 130022, China
    4. Transportation College, Jilin University, Changchun 130012, China
  • Online: 2024-02-15 Published: 2024-02-15


Abstract: Transformer is a revolutionary neural network architecture initially designed for natural language processing. However, its outstanding performance and versatility have led to widespread applications in the field of computer vision. While there is a wealth of research and literature on Transformer applications in natural language processing, there remains a relative scarcity of specialized reviews focusing on low-level visual tasks. In light of this, this paper begins by providing a brief introduction to the principles of Transformer and analyzing several variants. Subsequently, the focus shifts to the application of Transformer in low-level visual tasks, specifically in the key areas of image restoration, image enhancement, and image generation. Through a detailed analysis of the performance of different models in these tasks, this paper explores the variations in their effectiveness on commonly used datasets. This includes achievements in restoring damaged images, improving image quality, and generating realistic images. Finally, this paper summarizes and forecasts the development trends of Transformer in the field of low-level visual tasks. It suggests directions for future research to further drive innovation and advancement in Transformer applications. The rapid progress in this field promises breakthroughs for computer vision and image processing, providing more powerful and efficient solutions for practical applications.
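For context on the mechanism underlying the Transformer variants this survey reviews: the core operation is scaled dot-product attention, Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V, which in vision models relates every image patch to every other patch. The following is a minimal illustrative NumPy sketch (not taken from the paper; the function name and toy shapes are chosen here for demonstration only):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Minimal scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # pairwise query-key similarities
    scores -= scores.max(axis=-1, keepdims=True)  # shift for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V                            # weighted sum of value vectors

# Toy example: 4 "tokens" (e.g. flattened image patches) of dimension 8;
# using the same matrix for Q, K, and V gives self-attention.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))
out = scaled_dot_product_attention(x, x, x)
print(out.shape)  # (4, 8)
```

In practice, vision Transformers wrap this operation with learned Q/K/V projections, multiple heads, and (in many of the surveyed variants) windowed or sparse attention to reduce the quadratic cost in the number of patches.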

Key words: Transformer, deep learning, attention mechanism, computer vision, low-level vision task