[1] 禹文奇, 程塨, 王美君, 等. MAR20: 遥感图像军用飞机目标识别数据集[J]. 遥感学报, 2023, 27(12): 2688-2696.
YU W Q, CHENG G, WANG M J, et al. MAR20: a benchmark for military aircraft recognition in remote sensing images[J]. National Remote Sensing Bulletin, 2023, 27(12): 2688-2696.
[2] REN S Q, HE K M, GIRSHICK R, et al. Faster R-CNN: towards real-time object detection with region proposal networks[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(6): 1137-1149.
[3] HE K M, GKIOXARI G, DOLLAR P, et al. Mask R-CNN[C]//Proceedings of the IEEE International Conference on Computer Vision, 2017: 2961-2969.
[4] CAI Z W, VASCONCELOS N. Cascade R-CNN: delving into high quality object detection[C]//Proceedings of 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2018: 6154-6162.
[5] DAI J F, LI Y, HE K M, et al. R-FCN: object detection via region-based fully convolutional networks[C]//Advances in Neural Information Processing Systems, 2016.
[6] TERVEN J, CORDOVA-ESPARZA D. A comprehensive review of YOLO: from YOLOv1 to YOLOv8 and beyond[J]. arXiv:2304.00501, 2023.
[7] LIU W, ANGUELOV D, ERHAN D, et al. SSD: single shot multibox detector[C]//Proceedings of the European Conference on Computer Vision, 2016.
[8] LIN T Y, GOYAL P, GIRSHICK R, et al. Focal loss for dense object detection[C]//Proceedings of the IEEE International Conference on Computer Vision, 2017: 2980-2988.
[9] ZHOU X Y, WANG D Q, KRÄHENBÜHL P, et al. Objects as points[J]. arXiv:1904.07850, 2019.
[10] TAN M X, PANG R, LE Q V. EfficientDet: scalable and efficient object detection[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020: 10778-10787.
[11] CARION N, MASSA F, SYNNAEVE G, et al. End-to-end object detection with transformers[J]. arXiv:2005.12872, 2020.
[12] 苗茹, 岳明, 周珂, 等. 基于改进YOLOv7的遥感图像小目标检测方法[J]. 计算机工程与应用, 2024, 60(10): 246-255.
MIAO R, YUE M, ZHOU K, et al. Small target detection method in remote sensing images based on improved YOLOv7[J]. Computer Engineering and Applications, 2024, 60(10): 246-255.
[13] 梁燕, 饶星晨. 改进YOLOX的遥感图像目标检测算法[J]. 计算机工程与应用, 2024, 60(12): 181-188.
LIANG Y, RAO X C. Remote sensing image object detection algorithm with improved YOLOX[J]. Computer Engineering and Applications, 2024, 60(12): 181-188.
[14] 张秀再, 沈涛, 许岱. 改进YOLOv8算法的遥感图像目标检测[J]. 激光与光电子学进展, 2024, 61(10): 1028001.
ZHANG X Z, SHEN T, XU D. Remote-sensing image object detection based on improved YOLOv8 algorithm[J]. Laser & Optoelectronics Progress, 2024, 61(10): 1028001.
[15] 张天骏, 刘玉怀, 李苏晨. 基于改进YOLOv4的遥感影像飞机目标检测[J]. 电光与控制, 2022, 29(12): 101-105.
ZHANG T J, LIU Y H, LI S C. Detection of aircrafts in remote sensing images based on improved YOLOv4[J]. Electronics Optics & Control, 2022, 29(12): 101-105.
[16] 王成龙, 赵倩, 赵琰, 等. 基于结构化剪枝的遥感飞机检测算法[J]. 电光与控制, 2022, 29(6): 37-41.
WANG C L, ZHAO Q, ZHAO Y, et al. Remote sensing aircraft detection algorithm based on structural pruning[J]. Electronics Optics & Control, 2022, 29(6): 37-41.
[17] 吴杰, 高策, 余毅, 等. 改进LDS_YOLO网络的遥感飞机检测算法研究[J]. 计算机工程与应用, 2022, 58(15): 210-219.
WU J, GAO C, YU Y, et al. Research on improved LDS_YOLO network remote sensing aircraft detection algorithm[J]. Computer Engineering and Applications, 2022, 58(15): 210-219.
[18] 党玉龙, 叶成绪. 基于Faster R-CNN的轻量化遥感图像军用飞机检测模型[J/OL]. 激光杂志: 1-8[2024-02-04].http://kns.cnki.net/kcms/detail/50.1085.TN.20230920.0924.002.html.
DANG Y L, YE C X. A lightweight remote sensing image military aircraft detection model based on Faster R-CNN[J/OL]. Laser Journal: 1-8[2024-02-04]. http://kns.cnki.net/kcms/detail/50.1085.TN.20230920.0924.002.html.
[19] 王杰, 张上, 张岳, 等. 改进YOLOv5的军事飞机检测算法[J]. 无线电工程, 2024, 54(3): 589-596.
WANG J, ZHANG S, ZHANG Y, et al. Improved YOLOv5’s military aircraft detection algorithm[J]. Radio Engineering, 2024, 54(3): 589-596.
[20] CHEN J, KAO S, HE H, et al. Run, don't walk: chasing higher FLOPS for faster neural networks[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023: 12021-12031.
[21] GEVORGYAN Z. SIoU loss: more powerful learning for bounding box regression[J]. arXiv:2205.12740, 2022.
[22] ZHANG H, XU C, ZHANG S J. Inner-IoU: more effective intersection over union loss with auxiliary bounding box[J]. arXiv:2311.02877, 2023.
[23] LEE J, PARK S, MO S, et al. Layer-adaptive sparsity for the magnitude-based pruning[C]//Proceedings of the International Conference on Learning Representations, 2021.
[24] SHU C Y, LIU W F, GAO J F, et al. Channel-wise knowledge distillation for dense prediction[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021: 5291-5300.
[25] SANDLER M, HOWARD A, ZHU M, et al. MobileNetV2: inverted residuals and linear bottlenecks[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2018: 4510-4520.
[26] HOWARD A, SANDLER M, CHU G, et al. Searching for MobileNetV3[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019: 1314-1324.
[27] HAN K, WANG Y, TIAN Q, et al. GhostNet: more features from cheap operations[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020: 1580-1589.
[28] MA N N, ZHANG X Y, ZHENG H T, et al. ShuffleNet V2: practical guidelines for efficient CNN architecture design[C]//Proceedings of the European Conference on Computer Vision, 2018.
[29] LIU X Y, PENG H W, ZHENG N X, et al. EfficientViT: memory efficient vision transformer with cascaded group attention[C]//Proceedings of the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023: 14420-14430.
[30] REZATOFIGHI H, TSOI N, GWAK J, et al. Generalized intersection over union: a metric and a loss for bounding box regression[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019: 658-666.
[31] ZHENG Z, WANG P, LIU W, et al. Distance-IoU loss: faster and better learning for bounding box regression[C]//Proceedings of the 2020 AAAI Conference on Artificial Intelligence, 2020: 12993-13000.
[32] ZHANG Y F, REN W Q, ZHANG Z, et al. Focal and efficient IOU loss for accurate bounding box regression[J]. arXiv:2101.08158, 2021.
[33] SELVARAJU R R, COGSWELL M, DAS A, et al. Grad-CAM: visual explanations from deep networks via gradient-based localization[C]//Proceedings of the 2017 IEEE International Conference on Computer Vision, 2017: 618-626.