Computer Engineering and Applications ›› 2022, Vol. 58 ›› Issue (2): 289-294. DOI: 10.3778/j.issn.1002-8331.2008-0117

• Engineering and Application •

SAR Ship Target Detection Algorithm Based on Deep Representation in Complex Scenes

YUAN Guowen, ZHANG Caixia, YANG Yang, ZHANG Wensheng, BAI Jiangbo

  1. School of Mechatronic Engineering and Automation, Foshan University, Foshan, Guangdong 528000, China
  2. Institute of Automation, Chinese Academy of Sciences, Beijing 100080, China
  3. Guangdong Province Smart City Infrastructure Health Monitoring and Evaluation Engineering Technology Research Center, Foshan, Guangdong 528000, China
  • Online: 2022-01-15    Published: 2022-01-18

Abstract: Radar image target detection is a key research area for national maritime military and economic development. Compared with passively imaging optical radar, synthetic aperture radar (SAR) offers high resolution together with active, all-weather, day-and-night imaging, and it has therefore been an important branch of radar research in many countries since the 20th century. Target detection in images is the basis of radar image interpretation. This paper proposes a SAR ship target detection algorithm based on deep representation for complex scenes. Because existing SAR target detection models can neither focus on hard samples nor resolve the multi-scale fusion problem of the FPN pyramid, the Libra R-CNN network is combined with the NAS-FPN feature extraction network. Libra R-CNN provides IoU-balanced sampling, a balanced feature pyramid, and a balanced L1 loss at the sample, feature, and objective levels, respectively. In addition, the FPN feature extraction network in the Libra R-CNN model is replaced with a 7-layer NAS-FPN network obtained by neural architecture search (NAS) on the COCO dataset. Comparative experiments on a SAR ship dataset show that, compared with the original NAS-FPN network, the combined network improves average precision by 4.4 percentage points, demonstrating the effectiveness of the fused model.
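
To make one of the balancing components mentioned in the abstract concrete, the sketch below implements the balanced L1 loss in plain PyTorch. It is a minimal illustration following the published Libra R-CNN formulation, not the authors' code; the hyperparameters alpha = 0.5, gamma = 1.5 and the switch point beta = 1.0 are assumptions taken from that paper's reported defaults.

```python
# Minimal sketch of the balanced L1 loss from Libra R-CNN (Pang et al., 2019).
# Illustrative only; alpha, gamma, beta defaults are assumptions from the paper.
import math
import torch

def balanced_l1_loss(pred: torch.Tensor, target: torch.Tensor,
                     beta: float = 1.0, alpha: float = 0.5,
                     gamma: float = 1.5) -> torch.Tensor:
    """Element-wise balanced L1 loss, averaged over all elements.

    Compared with smooth L1, the gradient of inlier samples (|diff| < beta)
    is promoted by a logarithmic factor, so easy samples contribute a more
    balanced share of the regression gradient relative to hard outliers.
    """
    diff = torch.abs(pred - target)
    # b is chosen so the gradient is continuous at |diff| == beta:
    # alpha * log(b + 1) == gamma
    b = math.e ** (gamma / alpha) - 1
    loss = torch.where(
        diff < beta,
        alpha / b * (b * diff + 1) * torch.log(b * diff / beta + 1) - alpha * diff,
        gamma * diff + gamma / b - alpha * beta,
    )
    return loss.mean()

# Toy usage: regression offsets for a handful of proposals.
pred = torch.tensor([0.1, 0.8, 2.5])
target = torch.tensor([0.0, 1.0, 0.0])
print(balanced_l1_loss(pred, target))
```

In the pipeline described above, such a loss would sit in the bounding-box regression head of Libra R-CNN, while the FPN neck is swapped for a 7-stage NAS-FPN; the exact wiring depends on the detection framework used and is not specified in the abstract.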

Key words: synthetic aperture radar (SAR) images, object detection, Libra R-CNN network, NAS-FPN network