
Computer Engineering and Applications ›› 2025, Vol. 61 ›› Issue (9): 25-40. DOI: 10.3778/j.issn.1002-8331.2410-0012
LI Shuhui, CAI Wei, WANG Xin, GAO Weijie, DI Xingyu
Online: 2025-05-01
Published: 2025-04-30
Abstract: Infrared and visible image fusion (IVIF) combines the complementary information of infrared and visible images to improve image quality and support downstream tasks. Given the advantages of deep learning in image fusion, applying it to IVIF has become a research hotspot. This paper reviews and analyzes IVIF methods built on deep learning frameworks, categorizing them by fusion framework into autoencoder-based, convolutional neural network-based, generative adversarial network-based, and Transformer-based methods, and compares the characteristics of these four categories. It also surveys the main application areas of IVIF, six commonly used datasets, and eight evaluation metrics, and evaluates mainstream IVIF methods qualitatively and quantitatively on representative datasets. Finally, it summarizes the limitations of existing IVIF methods and outlines future research directions.
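Among the eight evaluation metrics the review surveys, information entropy (EN) is one widely used no-reference measure of how much information a fused image carries. A minimal sketch with NumPy, assuming an 8-bit grayscale input (the function name `entropy_metric` is illustrative, not from the paper):

```python
import numpy as np

def entropy_metric(img: np.ndarray) -> float:
    """Shannon entropy (EN) of an 8-bit grayscale image:
    EN = -sum_i p_i * log2(p_i), where p_i is the normalized
    histogram count of gray level i. A higher EN suggests the
    fused image retains more information from the sources."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins (0 * log 0 is taken as 0)
    return float(-(p * np.log2(p)).sum())

# A uniform random image approaches the 8-bit maximum EN of 8;
# a constant image has EN = 0.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)
en = entropy_metric(img)
```

In practice EN is reported alongside reference-based metrics (e.g. mutual information between the fused image and each source), since a noisy image can also score high on entropy alone.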
LI Shuhui, CAI Wei, WANG Xin, GAO Weijie, DI Xingyu. Review of Infrared and Visible Image Fusion Methods in Deep Learning Frameworks[J]. Computer Engineering and Applications, 2025, 61(9): 25-40.