Computer Engineering and Applications ›› 2024, Vol. 60 ›› Issue (22): 1-17. DOI: 10.3778/j.issn.1002-8331.2404-0372
Research Progress on Designing Lightweight Deep Convolutional Neural Networks
ZHOU Zhifei, LI Hua, FENG Yixiong, LU Jianguang, QIAN Songrong, LI Shaobo
Online: 2024-11-15
Published: 2024-11-14
Abstract: Lightweight design has become a popular paradigm for reducing the dependence of deep convolutional neural networks (DCNNs) on device performance and hardware resources; its goal is to increase computational speed and reduce memory footprint without sacrificing network performance. This paper surveys lightweight design methods for DCNNs, focusing on recent research progress along the two major strategies of architecture design and model compression. It compares the innovations, strengths, and limitations of these two classes of methods in depth and discusses the underlying frameworks that support lightweight models. In addition, it describes scenarios in which lightweight networks have already been applied successfully and forecasts future trends in DCNN lightweighting, aiming to provide useful insights and references for research on lightweight deep convolutional neural networks.
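As a concrete illustration of the architecture-design strategy the survey reviews: the depthwise separable convolution popularized by MobileNets factorizes a standard convolution into a per-channel (depthwise) 3×3 filter followed by a 1×1 (pointwise) channel mixer, cutting parameters and FLOPs by roughly the kernel-area factor. Below is a minimal PyTorch sketch; the module name, channel sizes, and BatchNorm/ReLU placement are our own illustrative choices, not details taken from the survey.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise 3x3 convolution (one filter per input channel,
    groups=in_channels) followed by a pointwise 1x1 convolution
    that mixes channels -- the MobileNet-style building block."""
    def __init__(self, in_channels: int, out_channels: int, stride: int = 1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_channels, in_channels, kernel_size=3,
                                   stride=stride, padding=1,
                                   groups=in_channels, bias=False)
        self.pointwise = nn.Conv2d(in_channels, out_channels, kernel_size=1,
                                   bias=False)
        self.bn = nn.BatchNorm2d(out_channels)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

# Parameter count versus a standard 3x3 convolution (64 -> 128 channels):
std = nn.Conv2d(64, 128, kernel_size=3, padding=1, bias=False)
sep = DepthwiseSeparableConv(64, 128)
print(sum(p.numel() for p in std.parameters()))  # 73728
print(sum(p.numel() for p in sep.parameters()))  # 9024 (576 + 8192 + 256 BN)
```

For this 64→128 channel layer the separable block needs roughly 8× fewer parameters than the standard convolution, before any model compression is applied.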
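On the model-compression side, magnitude-based weight pruning is one of the classic techniques surveyed: weights with the smallest absolute values are zeroed out, after which the network is fine-tuned to recover accuracy. A hedged sketch using PyTorch's torch.nn.utils.prune utilities; the toy network and the 30% sparsity level are illustrative assumptions, not values from the paper.

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

# A small stand-in network; the layer sizes are arbitrary.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1),
)

# Zero the 30% of weights with the smallest L1 magnitude in each conv layer.
for module in model.modules():
    if isinstance(module, nn.Conv2d):
        prune.l1_unstructured(module, name="weight", amount=0.3)

# Report the resulting sparsity per layer.
for name, module in model.named_modules():
    if isinstance(module, nn.Conv2d):
        w = module.weight
        print(f"layer {name}: {(w == 0).float().mean().item():.0%} zeros")

# Fold the pruning masks into the weight tensors permanently; in practice
# this step is followed by fine-tuning to recover the lost accuracy.
for module in model.modules():
    if isinstance(module, nn.Conv2d):
        prune.remove(module, "weight")
```

Note that unstructured pruning like this yields sparse tensors rather than smaller dense ones, so real speedups depend on sparse-aware kernels or hardware; structured (channel or filter) pruning trades finer granularity for dense layers that are directly faster.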
ZHOU Zhifei, LI Hua, FENG Yixiong, LU Jianguang, QIAN Songrong, LI Shaobo. Research Progress on Designing Lightweight Deep Convolutional Neural Networks[J]. Computer Engineering and Applications, 2024, 60(22): 1-17.