Computer Engineering and Applications ›› 2023, Vol. 59 ›› Issue (21): 187-194. DOI: 10.3778/j.issn.1002-8331.2206-0088

• Graphics and Image Processing •

Skin Lesion Segmentation Based on Class Activation Mapping and Visual Field Attention

ZHANG Yu, LIANG Fengmei, LIU Jianxia   

  1. College of Information and Computer, Taiyuan University of Technology, Jinzhong, Shanxi 030600, China
  • Online: 2023-11-01  Published: 2023-11-01

Abstract: In dermoscopy image segmentation, accuracy is affected by multiple factors, including image contrast, lesion size, and interference from foreign objects. To improve segmentation accuracy and address the inaccurate delineation of lesion boundaries, an improved DeepLab V3+ network is proposed. First, the network generates a class activation map of the original image and fuses it into the encoder as prior information, providing accurate localization cues and suppressing some of the interfering factors. Second, a visual field attention mechanism is incorporated into the atrous spatial pyramid pooling (ASPP) module to realize local cross-field interaction. In addition, the Dice loss and a ranking loss are combined as the loss function of the network, so that the network pays more attention to errors on hard pixels and the segmentation model is further optimized. The proposed model is evaluated on the ISIC-2017 and PH2 datasets, where its Jaccard index (JA) reaches 82.6% and 89.2% and its accuracy reaches 95.2% and 96.5%, respectively. The experimental results show that the proposed model achieves higher segmentation sensitivity, and its overall segmentation performance is improved compared with other state-of-the-art networks.
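The combined loss can be illustrated with a short sketch. The following PyTorch-style snippet is a minimal, hypothetical implementation assuming a standard soft Dice loss plus a margin-based ranking loss computed over the hardest foreground and background pixels; the function names, the number of selected pixels k, the margin, and the weighting rank_weight are illustrative assumptions, not the authors' exact formulation.

import torch
import torch.nn.functional as F

def dice_loss(prob, target, eps=1e-6):
    # Soft Dice loss; prob holds probabilities in [0, 1], target is a binary mask.
    prob = prob.flatten(1)
    target = target.flatten(1)
    inter = (prob * target).sum(dim=1)
    union = prob.sum(dim=1) + target.sum(dim=1)
    return 1.0 - (2.0 * inter + eps) / (union + eps)

def hard_pixel_rank_loss(prob, target, k=30, margin=0.3):
    # Margin-based ranking loss over hard pixels: for each image, select the k
    # background pixels with the highest predicted probability and the k
    # foreground pixels with the lowest predicted probability, and require every
    # selected foreground score to exceed every selected background score by `margin`.
    losses = []
    for p, t in zip(prob.flatten(1), target.flatten(1)):
        fg, bg = p[t > 0.5], p[t <= 0.5]
        if fg.numel() == 0 or bg.numel() == 0:
            continue
        hard_fg = torch.topk(fg, min(k, fg.numel()), largest=False).values
        hard_bg = torch.topk(bg, min(k, bg.numel()), largest=True).values
        diff = hard_bg.unsqueeze(0) - hard_fg.unsqueeze(1) + margin  # pairwise hinge terms
        losses.append(F.relu(diff).mean())
    if not losses:
        return prob.new_zeros(())
    return torch.stack(losses).mean()

def combined_loss(logits, target, rank_weight=0.1):
    # Total loss = Dice loss + rank_weight * hard-pixel ranking loss (weight is illustrative).
    prob = torch.sigmoid(logits)
    return dice_loss(prob, target).mean() + rank_weight * hard_pixel_rank_loss(prob, target)

In training, combined_loss(logits, masks) would serve as the objective in place of a plain cross-entropy term; the ranking component only adds gradient signal around the most ambiguous pixels, which matches the stated goal of focusing the network on hard-pixel errors.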

Key words: medical image processing, skin lesion segmentation, class activation mapping, visual field attention mechanism, hybrid loss function, DeepLab V3+