
Computer Engineering and Applications ›› 2025, Vol. 61 ›› Issue (17): 17-32.DOI: 10.3778/j.issn.1002-8331.2501-0206
• Research Hotspots and Reviews •
ZHU Ziwen, SONG Xiao’ou, CUI Wei, QI Fengli
Online: 2025-09-01
Published: 2025-09-01
ZHU Ziwen, SONG Xiao’ou, CUI Wei, QI Fengli. Review of Visible and Infrared Image Fusion for Intelligent Object Detection[J]. Computer Engineering and Applications, 2025, 61(17): 17-32.
URL: http://cea.ceaj.org/EN/10.3778/j.issn.1002-8331.2501-0206
[1] QI Y, HOU W, YANG L Q, et al. GMBox: box-supervised remote sensing images instance segmentation based on multi-scale gradient prior fusion and mask correction[C]//Proceedings of the 11th International Conference on Information Systems and Computing Technology. Piscataway: IEEE, 2023: 268-273.
[2] LI L C, CHEN W, QI J. VB-SOLO: single-stage instance segmentation of overlapping epithelial cells[J]. IEEE Access, 2024, 12: 52555-52564.
[3] WANG X, SUN Z J, CHEHRI A, et al. Deep learning and multi-modal fusion for real-time multi-object tracking: algorithms, challenges, datasets, and comparative study[J]. Information Fusion, 2024, 105: 102247.
[4] DU Z H, JIANG H, YANG X, et al. Deep learning-assisted near-Earth asteroid tracking in astronomical images[J]. Advances in Space Research, 2024, 73(10): 5349-5362.
[5] CHEN H, YAN H Q, YANG X, et al. Efficient adversarial attack strategy against 3D object detection in autonomous driving systems[J]. IEEE Transactions on Intelligent Transportation Systems, 2024, 25(11): 16118-16132.
[6] SONG Z Y, LIU L, JIA F Y, et al. Robustness-aware 3D object detection in autonomous driving: a review and outlook[J]. IEEE Transactions on Intelligent Transportation Systems, 2024, 25(11): 15407-15436.
[7] ABOUOUF M, SINGH S, MIZOUNI R, et al. Explainable AI for event and anomaly detection and classification in healthcare monitoring systems[J]. IEEE Internet of Things Journal, 2024, 11(2): 3446-3457.
[8] ZHAO Q, WANG Y, WANG B Y, et al. MSC-AD: a multiscene unsupervised anomaly detection dataset for small defect detection of casting surface[J]. IEEE Transactions on Industrial Informatics, 2024, 20(4): 6041-6052.
[9] WANG Q, GAO S, XIONG L, et al. A casting surface dataset and benchmark for subtle and confusable defect detection in complex contexts[J]. IEEE Sensors Journal, 2024, 24(10): 16721-16733.
[10] LIANG L M, LONG P W, LU B H, et al. EHH-YOLOv8s: a lightweight algorithm for strip surface defect detection[J/OL]. Journal of Beijing University of Aeronautics and Astronautics, 2024: 1-15(2024-08-08)[2025-01-05]. https://kns.cnki.net/KCMS/detail/detail.aspx?filename=BJHK20240806002&dbname=CJFD&dbcode=CJFQ.
[11] WANG Y Z, LIANG T F, ZENG Y Q, et al. Overview of multispectral target detection[J]. Information and Control, 2024, 53(3): 287-301.
[12] LUO Y Y, LUO Z Q. Infrared and visible image fusion: methods, datasets, applications, and prospects[J]. Applied Sciences, 2023, 13(19): 10891.
[13] MA W, WANG K, LI J, et al. Infrared and visible image fusion technology and application: a review[J]. Sensors (Basel), 2023, 23(2): 599.
[14] JIAO T Z, GUO C P, FENG X Y, et al. A comprehensive survey on deep learning multi-modal fusion: methods, technologies and applications[J]. Computers, Materials & Continua, 2024, 80(1): 1-35.
[15] WANG Z A, LIAO X H, YUAN J, et al. CDC-YOLOFusion: leveraging cross-scale dynamic convolution fusion for visible-infrared object detection[J]. IEEE Transactions on Intelligent Vehicles, 2024: 1-14.
[16] LEE W Y, JOVANOV L, PHILIPS W. Multimodal pedestrian detection based on cross-modality reference search[J]. IEEE Sensors Journal, 2024, 24(10): 17291-17306.
[17] LI Q, ZHANG C Q, HU Q H, et al. Stabilizing multispectral pedestrian detection with evidential hybrid fusion[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2024, 34(4): 3017-3029.
[18] JIA X Y, ZHU C, LI M Z, et al. LLVIP: a visible-infrared paired dataset for low-light vision[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops. Piscataway: IEEE, 2021: 3489-3497.
[19] HUANG N C, LIU J N, MIAO Y Q, et al. Deep learning for visible-infrared cross-modality person re-identification: a comprehensive review[J]. Information Fusion, 2023, 91: 396-411.
[20] LIU J Y, FAN X, JIANG J, et al. Learning a deep multi-scale feature ensemble and an edge-attention guidance for image fusion[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2022, 32(1): 105-119.
[21] ZHANG X, ZHANG X H, WANG J T, et al. TFDet: target-aware fusion for RGB-T pedestrian detection[J]. IEEE Transactions on Neural Networks and Learning Systems, 2025, 36(7): 13276-13290.
[22] GIRSHICK R, DONAHUE J, DARRELL T, et al. Rich feature hierarchies for accurate object detection and semantic segmentation[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2014: 580-587.
[23] WANG S W, LI Y, QIAO S H. ALF-YOLO: enhanced YOLOv8 based on multiscale attention feature fusion for ship detection[J]. Ocean Engineering, 2024, 308: 118233.
[24] GIRSHICK R. Fast R-CNN[C]//Proceedings of the IEEE International Conference on Computer Vision. Piscataway: IEEE, 2015: 1440-1448.
[25] REN S, HE K, GIRSHICK R, et al. Faster R-CNN: towards real-time object detection with region proposal networks[J]. IEEE Transactions on Pattern Analysis & Machine Intelligence, 2017, 39(6): 1137-1149.
[26] REDMON J, DIVVALA S, GIRSHICK R, et al. You only look once: unified, real-time object detection[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2016: 779-788.
[27] REDMON J, FARHADI A. YOLO9000: better, faster, stronger[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2017: 6517-6525.
[28] REDMON J, FARHADI A. YOLOv3: an incremental improvement[J]. arXiv:1804.02767, 2018.
[29] BOCHKOVSKIY A, WANG C Y, LIAO H. YOLOv4: optimal speed and accuracy of object detection[J]. arXiv:2004.10934, 2020.
[30] CARION N, MASSA F, SYNNAEVE G, et al. End-to-end object detection with transformers[C]//Proceedings of the European Conference on Computer Vision. Cham: Springer International Publishing, 2020: 213-229.
[31] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[C]//Advances in Neural Information Processing Systems, 2017: 6000-6010.
[32] LI C, HEI Y Q, XI L H, et al. GL-DETR: global-to-local transformers for small ship detection in SAR images[J]. IEEE Geoscience and Remote Sensing Letters, 2024, 21: 3461212.
[33] ZHAO Y A, LV W Y, XU S L, et al. DETRs beat YOLOs on real-time object detection[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2024: 16965-16974.
[34] NIE J Y, SUN H, SUN X, et al. Cross-modal feature fusion and interaction strategy for CNN-transformer-based object detection in visual and infrared remote sensing imagery[J]. IEEE Geoscience and Remote Sensing Letters, 2024, 21: 1-5.
[35] ALSHEHRI M, OUADOU A, SCOTT G J. Deep transformer-based network deforestation detection in the Brazilian Amazon using Sentinel-2 imagery[J]. IEEE Geoscience and Remote Sensing Letters, 2024, 21: 1-5.
[36] KEDDOUS F E, LLANZA A, SHVAI N, et al. Vision transformers inference acceleration based on adaptive layer normalization[J]. Neurocomputing, 2024, 610: 128524.
[37] WANG J, LI X, CHEN R F, et al. Infrared and visible image fusion based on co-gradient edge-attention gate network[C]//Proceedings of the 9th International Conference on Control and Robotics Engineering. Piscataway: IEEE, 2024: 339-344.
[38] FU H, WANG S, DUAN P, et al. LRAF-Net: long-range attention fusion network for visible-infrared object detection[J]. IEEE Transactions on Neural Networks and Learning Systems, 2024, 35(10): 13232-13245.
[39] DING R, YANG M, ZHENG N N. Selective transfer learning of cross-modality distillation for monocular 3D object detection[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2024, 34(10): 9925-9938.
[40] BURT P J. The pyramid as a structure for efficient computation[M]. Cham: Springer, 1984.
[41] YU M, CUI T, LU H Y, et al. VIFNet: an end-to-end visible-infrared fusion network for image dehazing[J]. Neurocomputing, 2024, 599: 128105.
[42] LI X, HE H, SHI J. HDCCT: hybrid densely connected CNN and transformer for infrared and visible image fusion[J]. Electronics, 2024, 13(17): 3470.
[43] CHEN X X, XU S W, HU S H, et al. ACFNet: an adaptive cross-fusion network for infrared and visible image fusion[J]. Pattern Recognition, 2025, 159: 111098.
[44] TANG W, HE F Z, LIU Y. ITFuse: an interactive transformer for infrared and visible image fusion[J]. Pattern Recognition, 2024, 156: 110822.
[45] ZHANG H G, YANG H T, ZHENG F J, et al. Review of feature-level infrared and visible image fusion[J]. Computer Engineering and Applications, 2024, 60(18): 17-31.
[46] LI Z, PAN H, ZHANG K, et al. MambaDFuse: a Mamba-based dual-phase model for multi-modality image fusion[J]. arXiv:2404.08406, 2024.
[47] ZHANG X, DEMIRIS Y. Visible and infrared image fusion using deep learning[J]. IEEE Transactions on Pattern Analysis & Machine Intelligence, 2023, 45(8): 10535-10554.
[48] SHOPOVSKA I, JOVANOV L, PHILIPS W. Deep visible and thermal image fusion for enhanced pedestrian visibility[J]. Sensors (Basel), 2019, 19(17): e3727.
[49] LIU J Y, FAN X, HUANG Z B, et al. Target-aware dual adversarial learning and a multi-scenario multi-modality benchmark to fuse infrared and visible for object detection[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2022: 5792-5801.
[50] HOU Z Q, LI X Y, YANG C, et al. Dual-branch network object detection algorithm based on dual-modality fusion of visible and infrared images[J]. Multimedia Systems, 2024, 30(6): 333.
[51] HWANG S, PARK J, KIM N, et al. Multispectral pedestrian detection: benchmark dataset and baseline[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2015: 1037-1045.
[52] FREE Teledyne FLIR thermal dataset for algorithm training[EB/OL]. (2018-02-22)[2024-11-21]. https://www.flir.com/oem/adas/adas-dataset-form/.
[53] RAZAKARIVONY S, JURIE F. Vehicle detection in aerial imagery: a small target detection benchmark[J]. Journal of Visual Communication and Image Representation, 2016, 34: 187-203.
[54] SUN Y M, CAO B, ZHU P F, et al. Drone-based RGB-infrared cross-modality vehicle detection via uncertainty-aware learning[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2022, 32(10): 6700-6713.
[55] SONG K C, XUE X T, WEN H W, et al. Misaligned visible-thermal object detection: a drone-based benchmark and baseline[J]. IEEE Transactions on Intelligent Vehicles, 2024, 9(11): 7449-7460.
[56] LI C Y, SONG D, TONG R F, et al. Illumination-aware faster R-CNN for robust multispectral pedestrian detection[J]. Pattern Recognition, 2019, 85: 161-171.
[57] YAN C Q, ZHANG H, LI X L, et al. Cross-modality complementary information fusion for multispectral pedestrian detection[J]. Neural Computing and Applications, 2023, 35(14): 10361-10386.
[58] ZHANG L, LIU Z, ZHU X, et al. Weakly aligned feature fusion for multimodal object detection[J]. IEEE Transactions on Neural Networks and Learning Systems, 2025, 36(3): 4145-4159.
[59] ZENG Y, LIANG T, JIN Y, et al. MMI-Det: exploring multi-modal integration for visible and infrared object detection[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2024, 34(11): 11198-11213.
[60] SUN Y X, MENG Y Q, WANG Q B, et al. Visible and infrared image fusion for object detection: a survey[C]//Proceedings of the International Conference on Image, Vision and Intelligent Systems, 2024: 236-248.
[61] WAGNER J, FISCHER V, HERMAN M, et al. Multispectral pedestrian detection using deep fusion convolutional neural networks[J]. arXiv:1611.02644, 2016.
[62] PENG R H, LAI J, YANG X T, et al. Camouflaged target detection based on multimodal image input pixel-level fusion[J]. Frontiers of Information Technology & Electronic Engineering, 2024, 25(9): 1226-1239.
[63] SHEN J F, CHEN Y F, LIU Y, et al. ICAFusion: iterative cross-attention guided feature fusion for multispectral object detection[J]. Pattern Recognition, 2024, 145: 109913.
[64] XIE Y M, ZHANG L W, YU X Y, et al. YOLO-MS: multispectral object detection via feature interaction and self-attention guided fusion[J]. IEEE Transactions on Cognitive and Developmental Systems, 2023, 15(4): 2132-2143.
[65] ZHANG Y, YU H, HE Y J, et al. Illumination-guided RGBT object detection with inter- and intra-modality fusion[J]. IEEE Transactions on Instrumentation and Measurement, 2023, 72: 3251414.
[66] LI Q, ZHANG C Q, HU Q H, et al. Confidence-aware fusion using Dempster-Shafer theory for multispectral pedestrian detection[J]. IEEE Transactions on Multimedia, 2023, 25: 3420-3431.
[67] HU Z H, JING Y G, WU G Q. Decision-level fusion detection method of visible and infrared images under low light conditions[J]. EURASIP Journal on Advances in Signal Processing, 2023, 2023(1): 38.
[68] KANG X D, YIN H, DUAN P H. Global local feature fusion network for visible infrared vehicle detection[J]. IEEE Geoscience and Remote Sensing Letters, 2024, 21: 1-5.
[69] YU H Y, YANG H, GAO L R, et al. Hyperspectral image change detection based on gated spectral spatial temporal attention network with spectral similarity filtering[J]. IEEE Transactions on Geoscience and Remote Sensing, 2024, 62: 1-13.
[70] SHI M N, LI H T, YAO Q, et al. Vision based nighttime pavement cracks pixel level detection by integrating infrared visible fusion and deep learning[J]. Construction and Building Materials, 2024, 442: 137662.
[71] LIU J J, ZHANG S T, WANG S, et al. Multispectral deep neural networks for pedestrian detection[J]. arXiv:1611.02644, 2016.
[72] ZHANG H, FROMONT E, LEFEVRE S, et al. Multispectral fusion for object detection with cyclic fuse-and-refine blocks[C]//Proceedings of the IEEE International Conference on Image Processing. Piscataway: IEEE, 2020: 276-280.
[73] XIAO X W, WANG B, MIAO L J, et al. Infrared and visible image object detection via focused feature enhancement and cascaded semantic extension[J]. Remote Sensing, 2021, 13(13): 2538.
[74] FENG Y, LUO E B, LU H, et al. Cross-modality feature fusion for night pedestrian detection[J]. Frontiers in Physics, 2024, 12: 1356248.
[75] JADERBERG M, SIMONYAN K, ZISSERMAN A, et al. Spatial transformer networks[J]. arXiv:1506.02025, 2015.
[76] HU J, SHEN L, SUN G. Squeeze-and-excitation networks[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2018: 7132-7141.
[77] WOO S, PARK J, LEE J Y, et al. CBAM: convolutional block attention module[C]//Proceedings of the European Conference on Computer Vision. Cham: Springer International Publishing, 2018: 3-19.
[78] YANG Y, XU K X, WANG K Z. Cascaded information enhancement and cross-modal attention feature fusion for multispectral pedestrian detection[J]. Frontiers in Physics, 2023, 11: 1121311.
[79] LIU Z, LIN Y T, CAO Y, et al. Swin Transformer: hierarchical vision transformer using shifted windows[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. Piscataway: IEEE, 2021: 9992-10002.
[80] LI R M, XIANG J J, SUN F X, et al. Multiscale cross-modal homogeneity enhancement and confidence-aware fusion for multispectral pedestrian detection[J]. IEEE Transactions on Multimedia, 2024, 26: 852-863.
[81] HU S J, BONARDI F, BOUCHAFA S, et al. Rethinking self-attention for multispectral object detection[J]. IEEE Transactions on Intelligent Transportation Systems, 2024, 25(11): 16300-16311.
[82] LIU X W, XU X Y, XIE J, et al. FDENet: fusion depth semantics and edge-attention information for multispectral pedestrian detection[J]. IEEE Robotics and Automation Letters, 2024, 9(6): 5441-5448.
[83] CHENG Q H, JIAN H F, ZHENG S K, et al. Illumination-aware infrared/visible fusion for object detection[J]. Computer Science, 2025, 52(2): 173-182.
[84] ZHANG L, ZHU X Y, CHEN X Y, et al. Weakly aligned cross-modal learning for multispectral pedestrian detection[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. Piscataway: IEEE, 2019: 5126-5136.
[85] CHEN Y X, GUAN Y, SHAO Z Z. Real-time multispectral pedestrian detection with weakly aligned cross-modal learning[C]//Proceedings of the IEEE International Conference on Real-time Computing and Robotics. Piscataway: IEEE, 2023: 829-834.
[86] TIAN C, ZHOU Z K, HUANG Y Q, et al. Cross-modality proposal-guided feature mining for unregistered RGB-thermal pedestrian detection[J]. IEEE Transactions on Multimedia, 2024, 26: 6449-6461.
[87] FU H L, LIU H H, YUAN J, et al. YOLO-Adaptor: a fast adaptive one-stage detector for non-aligned visible-infrared object detection[J]. IEEE Transactions on Intelligent Vehicles, 2024, 9(11): 7070-7083.
[88] ZHOU K L, CHEN L S, CAO X. Improving multispectral pedestrian detection by addressing modality imbalance problems[C]//Proceedings of the European Conference on Computer Vision. Cham: Springer International Publishing, 2020: 787-803.
[89] DOSOVITSKIY A, BEYER L, KOLESNIKOV A, et al. An image is worth 16×16 words: transformers for image recognition at scale[J]. arXiv:2010.11929, 2020.
[90] CHEN X, YAN B, ZHU J, et al. High-performance transformer tracking[J]. IEEE Transactions on Pattern Analysis & Machine Intelligence, 2023, 45(7): 8507-8523.
[91] FANG Q, HAN D, WANG Z K. Cross-modality fusion transformer for multispectral object detection[J]. arXiv:2111.00273, 2021.
[92] LEE W Y, JOVANOV L, PHILIPS W. Cross-modality attention and multimodal fusion transformer for pedestrian detection[C]//Proceedings of the European Conference on Computer Vision. Cham: Springer International Publishing, 2023: 608-623.
[93] YOU S, XIE X D, FENG Y J, et al. Multi-scale aggregation transformers for multispectral object detection[J]. IEEE Signal Processing Letters, 2023, 30: 1172-1176.
[94] HAN K, XIAO A, WU E, et al. Transformer in transformer[C]//Advances in Neural Information Processing Systems, 2021: 15908-15919.
[95] XIAO Y M, MENG F M, WU Q B, et al. GM-DETR: generalized multispectral detection transformer with efficient fusion encoder for visible-infrared detection[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops. Piscataway: IEEE, 2024: 5541-5549.
[96] GAO H W, WANG Y T, SUN J, et al. Efficient multi-level cross-modal fusion and detection network for infrared and visible image[J]. Alexandria Engineering Journal, 2024, 108: 306-318.
[97] GU A, DAO T. Mamba: linear-time sequence modeling with selective state spaces[J]. arXiv:2312.00752, 2023.
[98] YU W, WANG X. MambaOut: do we really need Mamba for vision?[J]. arXiv:2405.07992, 2024.
[99] ZHU L, LIAO B, ZHANG Q, et al. Vision Mamba: efficient visual representation learning with bidirectional state space model[J]. arXiv:2401.09417, 2024.
[100] LIU Y, TIAN Y, ZHAO Y, et al. VMamba: visual state space model[C]//Advances in Neural Information Processing Systems, 2024: 103031-103063.
[101] DONG W, ZHU H, LIN S, et al. Fusion-Mamba for cross-modality object detection[J]. arXiv:2404.09146, 2024.
[102] LIANG J Y, CAO J Z, SUN G L, et al. SwinIR: image restoration using Swin transformer[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops. Piscataway: IEEE, 2021: 1833-1844.
[103] WANG S M, WANG C P, SHI C Y, et al. Mask-guided Mamba fusion for drone-based visible-infrared vehicle detection[J]. IEEE Transactions on Geoscience and Remote Sensing, 2024, 62: 3452550.
[104] LI H Y, HU Q, YAO Y, et al. CFMW: cross-modality fusion Mamba for multispectral object detection under adverse weather conditions[J]. arXiv:2404.16302, 2024.
[105] HE X H, CAO K, ZHANG J, et al. Pan-Mamba: effective pan-sharpening with state space model[J]. Information Fusion, 2025, 115: 102779.
[106] REN K J, WU X, XU L M, et al. RemoteDet-Mamba: a hybrid Mamba-CNN network for multi-modal object detection in remote sensing images[J]. arXiv:2410.13532, 2024.
[107] LIU C, MA X, YANG X C, et al. COMO: cross-Mamba interaction and offset-guided fusion for multimodal object detection[J]. arXiv:2412.18076, 2024.
[108] ZHOU M, LI T, QIAO C, et al. DMM: disparity-guided multispectral Mamba for oriented object detection in remote sensing[J]. arXiv:2407.08132, 2024.
Related Articles
[1] LI Shuhui, CAI Wei, WANG Xin, GAO Weijie, DI Xingyu. Review of Infrared and Visible Image Fusion Methods in Deep Learning Frameworks[J]. Computer Engineering and Applications, 2025, 61(9): 25-40.
[2] CHEN Zhuo, LIU Dongqing, TANG Pinghua, HUANG Yan, ZHANG Wenxia, JIA Yan, CHENG Haifeng. Research Progress on Physical Adversarial Attacks for Target Detection[J]. Computer Engineering and Applications, 2025, 61(9): 80-101.
[3] ZHANG Heng, HUANG Nongsen, DING Jiasong, HANG Qin. Physical Adversarial Attack Method for UAV Visual Recognition System[J]. Computer Engineering and Applications, 2025, 61(9): 211-220.
[4] LI Ming, HE Zhiqi, DANG Qingxia, ZHU Shengli. Road Object Detection Algorithm for Outdoor Blind Navigation Scenarios[J]. Computer Engineering and Applications, 2025, 61(9): 242-254.
[5] WANG Jing, LI Yunxia. Research on Stock Return Forecast by NS-FEDformer Model[J]. Computer Engineering and Applications, 2025, 61(9): 334-342.
[6] ZHOU Jiani, LIU Chunyu, LIU Jiapeng. Stock Price Trend Prediction Model Integrating Channel and Multi-Head Attention Mechanisms[J]. Computer Engineering and Applications, 2025, 61(8): 324-338.
[7] ZHEN Tong, ZHANG Weizhen, LI Zhihui. Review of Classification Methods for Crop Structure in Remote Sensing Imagery[J]. Computer Engineering and Applications, 2025, 61(8): 35-48.
[8] LI Tongwei, QIU Dawei, LIU Jing, LU Yinghang. Review of Human Behavior Recognition Based on RGB and Skeletal Data[J]. Computer Engineering and Applications, 2025, 61(8): 62-82.
[9] WEN Hao, YANG Yang. Research on Clinical Short Text Classification by Integrating ERNIE with Knowledge Enhancement[J]. Computer Engineering and Applications, 2025, 61(8): 108-116.
[10] XIE Binhong, TANG Biao, ZHANG Rui. UBA-OWDT: Novel Network of Open World Object Detection[J]. Computer Engineering and Applications, 2025, 61(8): 215-225.
[11] WANG Yan, LU Pengyi, TA Xue. Normalized Convolutional Image Dehazing Network Combined with Feature Fusion Attention[J]. Computer Engineering and Applications, 2025, 61(8): 226-238.
[12] XING Suxia, LI Kexian, FANG Junze, GUO Zheng, ZHAO Shihang. Survey of Medical Image Segmentation in Deep Learning[J]. Computer Engineering and Applications, 2025, 61(7): 25-41.
[13] CHEN Yu, QUAN Jichuan. Camouflaged Object Detection: Developments and Challenges[J]. Computer Engineering and Applications, 2025, 61(7): 42-60.
[14] ZHAI Huiying, HAO Han, LI Junli, ZHAN Zhifeng. Review of Research on Unmanned Aerial Vehicle Autonomous Inspection Algorithms for Railway Facilities[J]. Computer Engineering and Applications, 2025, 61(7): 61-80.
[15] JIANG Wangyu, WANG Le, YAO Yepeng, MAO Guojun. Multi-Scale Feature Aggregation Diffusion and Edge Information Enhancement Small Object Detection Algorithm[J]. Computer Engineering and Applications, 2025, 61(7): 105-116.