[1] DENG T M, ZHANG X Y, CHENG X X. A new vehicle detection framework based on feature-guided in the road scene[J]. Computers, Materials & Continua, 2024, 78(1): 533-549.
[2] ZHENG Q H, TIAN X Y, YU Z G, et al. MobileRaT: a lightweight radio transformer method for automatic modulation classification in drone communication systems[J]. Drones, 2023, 7(10): 596.
[3] LIANG T J, BAO H, PAN W G, et al. DetectFormer: category-assisted transformer for traffic scene object detection[J]. Sensors, 2022, 22(13): 4833.
[4] SUN Y, ZHANG Y H, WANG H Y, et al. SES-YOLOv8n: automatic driving object detection algorithm based on improved YOLOv8[J]. Signal, Image and Video Processing, 2024, 18(5): 3983-3992.
[5] REN S Q, HE K M, GIRSHICK R, et al. Faster R-CNN: towards real-time object detection with region proposal networks[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(6): 1137-1149.
[6] HE K M, GKIOXARI G, DOLLÁR P, et al. Mask R-CNN[C]//Proceedings of the 2017 IEEE International Conference on Computer Vision. Piscataway: IEEE, 2017: 2980-2988.
[7] LIU W, ANGUELOV D, ERHAN D, et al. SSD: single shot MultiBox detector[C]//Proceedings of the European Conference on Computer Vision. Cham: Springer International Publishing, 2016: 21-37.
[8] REDMON J, FARHADI A. YOLOv3: an incremental improvement[J]. arXiv:1804.02767, 2018.
[9] BOCHKOVSKIY A, WANG C Y, LIAO H Y M. YOLOv4: optimal speed and accuracy of object detection[J]. arXiv:2004.10934, 2020.
[10] LI C, LI L L, JIANG H L, et al. YOLOv6: a single-stage object detection framework for industrial applications[J]. arXiv:2209.02976, 2022.
[11] WANG C Y, BOCHKOVSKIY A, LIAO H Y M. YOLOv7: trainable bag-of-freebies sets new state-of-the-art for real-time object detectors[C]//Proceedings of the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2023: 7464-7475.
[12] WANG C Y, YEH I H, LIAO H Y M. YOLOv9: learning what you want to learn using programmable gradient information[J]. arXiv:2402.13616, 2024.
[13] GE Z, LIU S T, WANG F, et al. YOLOX: exceeding YOLO series in 2021[J]. arXiv:2107.08430, 2021.
[14] ZHENG Q H, SAPONARA S, TIAN X Y, et al. A real-time constellation image classification method of wireless communication signals based on the lightweight network MobileViT[J]. Cognitive Neurodynamics, 2024, 18(2): 659-671.
[15] MA J Y, CHANG Y N. MFE-YOLOX: dense small target detection algorithm under UAV aerial photography[J]. Journal of Chongqing University of Posts and Telecommunications (Natural Science Edition), 2024, 36(1): 128-135.
[16] WANG G B, ZHOU K, WANG L Z, et al. Context-aware and attention-driven weighted fusion traffic sign detection network[J]. IEEE Access, 2023, 11: 42104-42112.
[17] YE H C, WANG Y N. Residual transformer YOLO for detecting multi-scale crowded pedestrian[J]. Applied Sciences, 2023, 13(21): 12032.
[18] LI X, HE M, LIU Y, et al. SPCS: a spatial pyramid convolutional shuffle module for YOLO to detect occluded object[J]. Complex & Intelligent Systems, 2023, 9(1): 301-315.
[19] SHAO X T, WANG Q, YANG W, et al. Multi-scale feature pyramid network: a heavily occluded pedestrian detection network based on ResNet[J]. Sensors, 2021, 21(5): 1820.
[20] HUAN H, SUN Y W. Target detection method in complex traffic scene based on improved YOLO algorithm[J]. Journal of Xi'an Polytechnic University, 2022, 36(6): 86-92.
[21] JIANG H B, REN J H, LI A X. 3D object detection under urban road traffic scenarios based on dual-layer voxel features fusion augmentation[J]. Sensors, 2024, 24(11): 3267.
[22] LIN T Y, DOLLÁR P, GIRSHICK R, et al. Feature pyramid networks for object detection[C]//Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2017: 936-944.
[23] LIU S, QI L, QIN H F, et al. Path aggregation network for instance segmentation[C]//Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2018: 8759-8768.
[24] WU T Y, TANG S, ZHANG R, et al. CGNet: a light-weight context guided network for semantic segmentation[J]. IEEE Transactions on Image Processing, 2020, 30: 1169-1179.
[25] GEIGER A, LENZ P, URTASUN R. Are we ready for autonomous driving? The KITTI vision benchmark suite[C]//Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2012: 3354-3361.
[26] LIN T Y, GOYAL P, GIRSHICK R, et al. Focal loss for dense object detection[C]//Proceedings of the 2017 IEEE International Conference on Computer Vision. Piscataway: IEEE, 2017: 2999-3007.
[27] TAN M X, PANG R M, LE Q V. EfficientDet: scalable and efficient object detection[C]//Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2020: 10778-10787.
[28] ZHOU X Y, WANG D Q, KRÄHENBÜHL P. Objects as points[J]. arXiv:1904.07850, 2019.