Computer Engineering and Applications, 2025, Vol. 61, Issue (18): 187-197. DOI: 10.3778/j.issn.1002-8331.2406-0114

• Graphics and Image Processing •

Coherent Semantic-Driven Approach for Thick Cloud Removal in Optical Remote Sensing Images

CHU Yuting, LUO Xiaobo, ZHOU Jianjun, GOU Yongcheng, GUO Haihong   

  1. School of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
    2. Chongqing Municipal Engineering Research Center for Intelligent Spatial Big Data Technology, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
    3. Jinan Urban Construction Group, Jinan 250031, China
  • Online: 2025-09-15    Published: 2025-09-15

Coherent Semantic-Driven Approach for Thick Cloud Removal in Optical Remote Sensing Images

Chu Yuting, Luo Xiaobo, Zhou Jianjun, Gou Yongcheng, Guo Haihong

  1. School of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
    2. Chongqing Municipal Engineering Research Center for Intelligent Spatial Big Data Technology, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
    3. Jinan Urban Construction Group Co., Ltd., Jinan 250031, China

Abstract: Thick cloud cover significantly degrades the quality of optical remote sensing images, limiting their practical applications. Deep learning methods have shown promise for the challenging task of thick cloud removal. However, existing approaches often produce blurry textures and distorted structures because they disregard the semantic correlations and feature continuity within cloud-covered areas. To tackle these challenges, a coherent semantic-based two-stage generative adversarial network for cloud removal (CSTGAN-CR) is proposed. The method models the semantic correlations both between cloud-covered and cloud-free regions and within the cloud-covered regions themselves, preserving contextual structure and improving the prediction of missing content. CSTGAN-CR uses a two-stage deep neural network, with a coherent semantic module and a multi-scale feature aggregation module embedded in the second stage. Experimental evaluations on the 38-Cloud synthetic dataset and the RICE2 real dataset demonstrate that the proposed method generates higher-quality images than existing approaches, providing strong support for optical remote sensing image applications.

Key words: optical remote sensing images, cloud removal, coherent semantics, multi-scale feature aggregation
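
To give a concrete picture of the architecture described in the abstract, the following is a minimal, hypothetical PyTorch sketch of a two-stage cloud-removal generator with a coherent-semantic attention module and a multi-scale feature aggregation module embedded in the second stage. All module names (MultiScaleAggregation, CoherentSemanticAttention, TwoStageGenerator), layer widths, and the attention formulation are illustrative assumptions under the abstract's description, not the authors' CSTGAN-CR implementation; the adversarial discriminator and training losses are omitted.

```python
# Hypothetical sketch only: module designs and hyper-parameters are assumptions
# for illustration, not the authors' CSTGAN-CR implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiScaleAggregation(nn.Module):
    """Fuse parallel dilated convolutions to aggregate multi-scale context (assumed design)."""
    def __init__(self, channels):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=d, dilation=d) for d in (1, 2, 4, 8))
        self.fuse = nn.Conv2d(4 * channels, channels, 1)

    def forward(self, x):
        return F.relu(self.fuse(torch.cat([F.relu(b(x)) for b in self.branches], dim=1)))


class CoherentSemanticAttention(nn.Module):
    """Patch-wise attention that fills cloud-covered features from semantically
    similar cloud-free features (a simplified stand-in for the coherent semantic module)."""
    def forward(self, feat, mask):
        b, c, h, w = feat.shape
        m = F.interpolate(mask, size=(h, w))                # 1 = cloud-covered
        f = feat.flatten(2)                                 # (b, c, h*w)
        fn = F.normalize(f, dim=1)
        attn = torch.bmm(fn.transpose(1, 2), fn)            # pairwise cosine similarity
        attn = attn.masked_fill(m.flatten(2).bool(), -1e4)  # do not attend to cloudy keys
        attn = attn.softmax(dim=-1)
        filled = torch.bmm(f, attn.transpose(1, 2)).view(b, c, h, w)
        # Keep original features where the view is clear, attended features under clouds.
        return feat * (1 - m) + filled * m


def conv_block(cin, cout, stride=1):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, stride, 1), nn.ReLU(inplace=True))


class TwoStageGenerator(nn.Module):
    """Stage 1 makes a coarse prediction; stage 2 refines it with the two modules above."""
    def __init__(self, base=32):
        super().__init__()
        self.coarse = nn.Sequential(
            conv_block(4, base), conv_block(base, 2 * base, 2), conv_block(2 * base, 2 * base),
            nn.Upsample(scale_factor=2), conv_block(2 * base, base),
            nn.Conv2d(base, 3, 3, 1, 1), nn.Tanh())
        self.enc = nn.Sequential(conv_block(4, base), conv_block(base, 2 * base, 2))
        self.csa = CoherentSemanticAttention()
        self.msa = MultiScaleAggregation(2 * base)
        self.dec = nn.Sequential(nn.Upsample(scale_factor=2), conv_block(2 * base, base),
                                 nn.Conv2d(base, 3, 3, 1, 1), nn.Tanh())

    def forward(self, cloudy, mask):
        coarse = self.coarse(torch.cat([cloudy, mask], dim=1))
        merged = coarse * mask + cloudy * (1 - mask)        # keep observed (cloud-free) pixels
        feat = self.enc(torch.cat([merged, mask], dim=1))
        feat = self.msa(self.csa(feat, mask))
        return coarse, self.dec(feat)


if __name__ == "__main__":
    g = TwoStageGenerator()
    img = torch.rand(1, 3, 128, 128) * 2 - 1                # cloudy image, normalized to [-1, 1]
    msk = (torch.rand(1, 1, 128, 128) > 0.7).float()        # binary cloud mask, 1 = cloud
    coarse, refined = g(img, msk)
    print(coarse.shape, refined.shape)                      # both torch.Size([1, 3, 128, 128])
```

In this sketch the attention deliberately excludes cloud-covered positions as keys, so missing features are reconstructed only from reliable cloud-free context, mirroring the abstract's idea of modeling correlations between cloud-covered and cloud-free regions as well as within the cloud-covered areas.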

Abstract: Thick clouds severely degrade the quality of optical remote sensing images and limit their applications. Deep learning methods have shown good performance on the challenging task of thick cloud removal. However, because existing methods ignore the semantic correlations and feature continuity of cloud-covered regions, the generated content often exhibits blurry textures and distorted structures. To address these problems, a coherent semantic two-stage generative adversarial network for cloud removal is proposed. By modeling the semantic correlations between cloud-covered and cloud-free regions, as well as within the cloud-covered regions, the method preserves contextual structure and predicts the missing parts more effectively. It adopts a two-stage deep neural network in which a coherent semantic module and a multi-scale feature aggregation module are embedded in the second stage. Experiments were conducted on the 38-Cloud synthetic dataset and the RICE2 real dataset. The results show that, compared with existing methods, the proposed method generates higher-quality images and provides strong support for optical remote sensing image applications.

Key words: optical remote sensing images, cloud removal, coherent semantics, multi-scale feature aggregation