[1] CHEN M Y,TANG Y C,ZOU X J,et al.Three-dimensional perception of orchard banana central stock enhanced by adaptive multi-vision technology[J].Computers and Electronics in Agriculture,2020,174:105508.
[2] LOPEZ-MARIN J,GALVEZ A.Selecting vegetative/generative/dwarfing rootstocks for improving fruit yield and quality in water stressed sweet peppers[J].Scientia Horticulturae,2017,214:9-17.
[3] VOULODIMOS A,DOULAMIS N,DOULAMIS A,et al.Deep learning for computer vision:a brief review[J].Computational Intelligence and Neuroscience,2018.
[4] GIRSHICK R,DONAHUE J,DARRELL T,et al.Rich feature hierarchies for accurate object detection and semantic segmentation[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,Columbus,OH,USA,23-28 June 2014:580-587.
[5] GIRSHICK R.Fast R-CNN[C]//Proceedings of the IEEE International Conference on Computer Vision,Santiago,Chile,7-13 December 2015:1440-1448.
[6] REN S,HE K,GIRSHICK R,et al.Faster R-CNN:towards real-time object detection with region proposal networks[J].IEEE Transactions on Pattern Analysis and Machine Intelligence,2017,39(6):1137-1149.
[7] 熊俊涛,刘振,汤林越,等.自然环境下绿色柑橘视觉检测技术研究[J].农业机械学报,2018,49(4):45-52.
XIONG J T,LIU Z,TANG L Y,et al.Visual detection technology of green citrus under natural environment[J].Transactions of the Chinese Society for Agricultural Machinery,2018,49(4):45-52.
[8] SA I,GE Z,DAYOUB F,et al.DeepFruits:a fruit detection system using deep neural networks[J].Sensors,2016,16(8):1222.
[9] LIU W,ANGUELOV D,ERHAN D,et al.SSD:single shot multibox detector[C]//European Conference on Computer Vision.Cham:Springer,2016:21-37.
[10] GE Z,LIU S T,WANG F,et al.YOLOX:exceeding YOLO series in 2021[J].arXiv:2107.08430,2021.
[11] REDMON J,FARHADI A.YOLOv3:an incremental improvement[J].arXiv:1804.02767,2018.
[12] BOCHKOVSKIY A,WANG C Y,LIAO H Y M.YOLOv4:optimal speed and accuracy of object detection[J].arXiv:2004.10934,2020.
[13] TAN M,PANG R,LE Q V.EfficientDet:scalable and efficient object detection[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,Seattle,WA,USA,13-19 June 2020:10781-10790.
[14] WANG F,SUN Z,CHEN Y,et al.Xiaomila green pepper target detection method under complex environment based on improved YOLOv5s[J].Agronomy,2022,12:1477.
[15] ZHENG Z H,XIONG J T,LIN H,et al.A method of green citrus detection in natural environments using a deep convolutional neural network[J].Frontiers in Plant Science,2021,12:705737.
[16] CAO Z,MEI F,ZHANG D,et al.Recognition and detection of persimmon in a natural environment based on an improved YOLOv5 model[J].Electronics,2023,12:785.
[17] 高新阳,魏晟,温志庆,等.改进YOLOv5轻量级网络的柑橘检测方法[J].计算机工程与应用,2023,59(11):212-221.
GAO X Y,WEI S,WEN Z Q,et al.Citrus detection method based on improved YOLOv5 lightweight network[J].Computer Engineering and Applications,2023,59(11):212-221.
[18] 何全令,杨静文,梁晋欣,等.面向嵌入式除草机器人的玉米田间杂草识别方法[J/OL].计算机工程与应用:1-12[2023-04-04].https://kns-cnki-net.webvpn.zafu.edu.cn/kcms/detail/11.2127.tp.20230328.1044.010.html.
HE Q L,YANG J W,LIANG J X,et al.Weed identification method in corn fields applied to embedded weeding robots[J/OL].Computer Engineering and Applications:1-12[2023-04-04].https://kns-cnki-net.webvpn.zafu.edu.cn/kcms/detail/11.2127.tp.20230328.1044.010.html.
[19] 王卓,王健,王枭雄,等.基于改进YOLO v4的自然环境苹果轻量级检测方法[J].农业机械学报,2022,53(8):294-302.
WANG Z,WANG J,WANG X X,et al.Lightweight real-time apple detection method based on improved YOLOv4[J].Transactions of the Chinese Society for Agricultural Machinery,2022,53(8):294-302.
[20] WANG C Y,BOCHKOVSKIY A,LIAO H Y M.YOLOv7:trainable bag-of-freebies sets new state-of-the-art for real-time object detectors[J].arXiv:2207.02696,2022.
[21] LIN T Y,DOLLAR P,GIRSHICK R,et al.Feature pyramid networks for object detection[C]//30th IEEE/CVF Conference on Computer Vision and Pattern Recognition(CVPR),2017:936-944.
[22] LIU S,QI L,QIN H,et al.Path aggregation network for instance segmentation[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,2018.
[23] GHIASI G,LIN T Y,LE Q V.NAS-FPN:learning scalable feature pyramid architecture for object detection[C]//32nd IEEE/CVF Conference on Computer Vision and Pattern Recognition(CVPR),2019:7029-7038.
[24] DING X H,ZHANG X Y,HAN J G,et al.Diverse branch block:building a convolution as an inception-like unit[C]//2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition,2021:10881-10890.
[25] YANG L X,ZHANG R Y,LI L D,et al.SimAM:a simple,parameter-free attention module for convolutional neural networks[C]//Proceedings of the 38th International Conference on Machine Learning,Virtual Event,2021:11863-11874.
[26] ZHANG X D,ZENG H,GUO S,et al.Efficient long-range attention network for image super-resolution[J].arXiv:2203.06697,2022.
[27] MA N N,ZHANG X Y,ZHENG H T,et al.ShuffleNet V2:practical guidelines for efficient CNN architecture design[C]//Proceedings of the European Conference on Computer Vision.Munich,Germany:Springer,2018:116-131.
[28] DING X,GUO Y,DING G,et al.ACNet:strengthening the kernel skeletons for powerful CNN via asymmetric convolution blocks[C]//IEEE/CVF International Conference on Computer Vision,2019:1911-1920.
[29] WANG P Q,CHEN P F,YUAN Y,et al.Understanding convolution for semantic segmentation[C]//2018 IEEE Winter Conference on Applications of Computer Vision(WACV).Lake Tahoe,NV,USA:IEEE,2018:1451-1460.
[30] CHEN L C,ZHU Y K,PAPANDREOU G,et al.Encoder-decoder with atrous separable convolution for semantic image segmentation[C]//Proceedings of the 15th European Conference on Computer Vision.Munich,Germany:Springer,2018:833-851.
[31] CHEN L C,PAPANDREOU G,SCHROFF F,et al.Rethinking atrous convolution for semantic image segmentation[J].arXiv:1706.05587,2017.
[32] 宋立业,刘帅,王凯,等.基于改进EfficientDet的电网元件及缺陷识别方法[J].电工技术学报,2022,37(9):2241-2251.
SONG L Y,LIU S,WANG K,et al.Identification method of power grid components and defects based on improved EfficientDet[J].Transactions of China Electrotechnical Society,2022,37(9):2241-2251.
[33] 崔卓栋,陈玮,尹钟.基于增强特征融合网络的安全帽佩戴检测[J].电子科技,2023(4):44-51.
CUI Z D,CHEN W,YIN Z.Helmet wearing detection based on enhanced feature fusion network[J].Electronic Science and Technology,2023(4):44-51.