Computer Engineering and Applications ›› 2018, Vol. 54 ›› Issue (4): 192-198.DOI: 10.3778/j.issn.1002-8331.1609-0274


Fusion of infrared and visible images based on adaptive PCNN and information extraction

WANG Lie, LUO Wen, CHEN Junhong, QIN Weimeng   

  1. School of Computer and Electronics Information Engineering, Guangxi University, Nanning 530004, China
  Online: 2018-02-15    Published: 2018-03-07


Abstract: A novel method based on the Non-Subsampled Contourlet Transform (NSCT) is presented for fusing infrared and visible images, with the aim of retaining thermal target information and spatial background information and improving the observability and visual effect of the fused image. First, the infrared and visible images are decomposed by the NSCT into lowpass subband coefficients and bandpass directional subband coefficients. The lowpass subband coefficients are fused by an adaptive Pulse Coupled Neural Network (PCNN) to extract targets, while the bandpass directional subband coefficients are fused by a choose-max rule based on region variance; an intermediate fused image is then obtained through the inverse NSCT. Next, the Xydeas-Petrovic edge-preservation index and the entropy of the source images and of the intermediate fused image are computed. Finally, guided by the Xydeas-Petrovic index and the entropy, the source images are fused a second time to obtain the final fused image. Experimental results show that the method outperforms several current multi-resolution-transform-based fusion methods in both subjective visual quality and objective evaluation. Compared with the NSCT-based method on two groups of test images, the quality indexes are improved by 261.06%, 48.31%, 5.15%, 142.95%, 21.62% and 372.85%, 54.62%, 4.73%, 163.07%, 25.40%, respectively. The algorithm yields fused images with clearer details such as edges, and the results better conform to the characteristics of human vision.

Key words: image fusion, Non-Subsampled Contourlet Transform (NSCT), Pulse Coupled Neural Network (PCNN), Xydeas-Petrovic index
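Two of the information-extraction quantities used by the method can be sketched in a few lines of NumPy: the Shannon entropy of a grayscale image, and a choose-max fusion of two subbands driven by local region variance. This is a minimal illustration only; the function names, the 3x3 window, and the reflect padding are assumptions for the sketch, not the paper's implementation, and the NSCT decomposition and PCNN stages are omitted.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def entropy(img):
    """Shannon entropy (bits/pixel) of an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins so log2 is defined
    return float(-(p * np.log2(p)).sum())

def region_variance_fuse(a, b, win=3):
    """Choose-max fusion of two subbands by local (regional) variance.

    For each coefficient, keep the one whose win x win neighbourhood
    has the larger variance -- a simple stand-in for a region-variance
    choose-max rule on the bandpass directional subbands."""
    pad = win // 2

    def local_var(x):
        xp = np.pad(x, pad, mode="reflect")
        w = sliding_window_view(xp, (win, win))  # (H, W, win, win)
        return w.var(axis=(-1, -2))

    va, vb = local_var(a), local_var(b)
    return np.where(va >= vb, a, b)
```

A fused image with higher entropy carries more information, which is how the entropy figures into the second fusion stage's choice of strategy.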
