[1] ZHOU X, DING W, JIN W. Microwave-assisted extraction of lipids, carotenoids, and other compounds from marine resources[M]//Innovative and emerging technologies in the bio-marine food sector. [S.l.]: Academic Press, 2022: 375-394.
[2] CHRISTENSEN L, DE GEA FERNÁNDEZ J, HILDEBRANDT M, et al. Recent advances in AI for navigation and control of underwater robots[J]. Current Robotics Reports, 2022, 3(4): 165-175.
[3] LECUN Y, BENGIO Y, HINTON G. Deep learning[J]. Nature, 2015, 521(7553): 436-444.
[4] GIRSHICK R. Fast R-CNN[C]//Proceedings of 2015 IEEE International Conference on Computer Vision. Santiago: IEEE, 2015: 1440-1448.
[5] REN S, HE K, GIRSHICK R, et al. Faster R-CNN: towards real-time object detection with region proposal networks[C]//Proceedings of Advances in Neural Information Processing Systems, 2015: 91-99.
[6] LIU W, ANGUELOV D, ERHAN D, et al. SSD: single shot multibox detector[C]//European Conference on Computer Vision. Cham: Springer, 2016: 21-37.
[7] REDMON J, DIVVALA S, GIRSHICK R, et al. You only look once: unified, real-time object detection[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016: 779-788.
[8] ZHANG M, XU S, SONG W, et al. Lightweight underwater object detection based on YOLO v4 and multi-scale attentional feature fusion[J]. Remote Sensing, 2021, 13(22): 4706.
[9] CHEN X, YUAN M, YANG Q, et al. Underwater-YCC: underwater target detection optimization algorithm based on YOLOv7[J]. Journal of Marine Science and Engineering, 2023, 11(5): 995.
[10] LI Y, BAI X, XIA C. An improved YOLOV5 based on triplet attention and prediction head optimization for marine organism detection on underwater mobile platforms[J]. Journal of Marine Science and Engineering, 2022, 10(9): 1230.
[11] 叶赵兵, 段先华, 赵楚. 改进YOLOv3-SPP水下目标检测研究[J]. 计算机工程与应用, 2023, 59(6): 231-240.
YE Z B, DUAN X H, ZHAO C. Research on underwater target detection by improved YOLOv3-SPP[J]. Computer Engineering and Applications, 2023, 59(6): 231-240.
[12] WANG C Y, BOCHKOVSKIY A, LIAO H. YOLOv7: trainable bag-of-freebies sets new state-of-the-art for real-time object detectors[J]. arXiv:2207.02696, 2022.
[13] MA N, ZHANG X, ZHENG H T, et al. ShuffleNet V2: practical guidelines for efficient CNN architecture design[J]. arXiv:1807.11164, 2018.
[14] HOWARD A, ZHU M L, CHEN B, et al. MobileNets: efficient convolutional neural networks for mobile vision applications[J]. arXiv:1704.04861, 2017.
[15] TAN M, PANG R, LE Q V. EfficientDet: scalable and efficient object detection[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020: 10778-10787.
[16] ZHU L, WANG X, KE Z, et al. BiFormer: vision transformer with bi-level routing attention[J]. arXiv:2303.08810, 2023.
[17] DING X H, ZHANG X Y, MA N N, et al. RepVGG: making VGG-style ConvNets great again[J]. arXiv:2101.03697, 2021.
[18] DOLLÁR P, SINGH M, GIRSHICK R. Fast and accurate model scaling[J]. arXiv:2103.06877, 2021.
[19] BIAN P, ZHENG Z, ZHANG D. Light-weight multi-channel aggregation network for image super-resolution[C]//Chinese Conference on Pattern Recognition and Computer Vision, 2021.
[20] 楚玉春, 龚航, 王学芳, 等. 基于YOLOv4的目标检测知识蒸馏算法研究[J]. 计算机科学, 2022, 49(6A): 337-344.
CHU Y C, GONG H, WANG X F, et al. Study on knowledge distillation of target detection algorithm based on YOLOv4[J]. Computer Science, 2022, 49(6A): 337-344.
[21] NIU Z Y, ZHONG G Q, YU H. A review on the attention mechanism of deep learning[J]. Neurocomputing, 2021, 452: 48-62.
[22] ZHANG D, ZHENG Z, LI M, et al. CSART: channel and spatial attention-guided residual learning for real-time object tracking[J]. Neurocomputing, 2021, 436: 260-272.
[23] LIU Z, LIN Y, CAO Y, et al. Swin transformer: hierarchical vision transformer using shifted windows[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021: 10012-10022.
[24] LIU S, QI L, QIN H F, et al. Path aggregation network for instance segmentation[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018: 8759-8768.
[25] GUO Y, CHEN S, ZHAN R, et al. SAR ship detection based on YOLOv5 using CBAM and BiFPN[C]//IGARSS 2022—2022 IEEE International Geoscience and Remote Sensing Symposium, 2022: 2147-2150.
[26] CHEN J, MAI H S, LUO L, et al. Effective feature fusion network in BIFPN for small object detection[C]//2021 IEEE International Conference on Image Processing (ICIP), 2021: 699-703.
[27] 李翔, 张涛, 张哲, 等. Transformer在计算机视觉领域的研究综述[J]. 计算机工程与应用, 2023, 59(1): 1-14.
LI X, ZHANG T, ZHANG Z, et al. Survey of Transformer research in computer vision[J]. Computer Engineering and Applications, 2023, 59(1): 1-14.
[28] HU J, SHEN L, SUN G. Squeeze-and-excitation networks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018: 7132-7141.
[29] WANG Q L, WU B G, ZHU P F, et al. ECA-Net: efficient channel attention for deep convolutional neural networks[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, June 13-19, 2020. Piscataway, NJ: IEEE, 2020: 11531-11539.
[30] HOU Q, ZHOU D, FENG J. Coordinate attention for efficient mobile network design[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021: 13713-13722.
[31] WOO S, PARK J, LEE J Y, et al. CBAM: convolutional block attention module[C]//Proceedings of the European Conference on Computer Vision (ECCV), 2018: 3-19.
[32] PAN X, GE C, LU R, et al. On the integration of self-attention and convolution[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022: 815-825.
[33] XI D, QIN Y, WANG S. YDRSNet: an integrated YOLOv5-Deeplabv3+ real-time segmentation network for gear pitting measurement[J]. Journal of Intelligent Manufacturing, 2023, 34: 1585-1599.
[34] TIAN Z, SHEN C, CHEN H, et al. FCOS: fully convolutional one-stage object detection[C]//Proceedings of the IEEE International Conference on Computer Vision, 2019: 9627-9636.
[35] GE Z, LIU S, WANG F, et al. YOLOX: exceeding YOLO series in 2021[J]. arXiv:2107.08430, 2021.