
Computer Engineering and Applications, 2025, Vol. 61, Issue (11): 204-215. DOI: 10.3778/j.issn.1002-8331.2402-0221
• Pattern Recognition and Artificial Intelligence •
Automatic Channel Pruning Method Based on Clustering and Swarm Intelligence Optimization Algorithm
LIU Zhoufeng, WU Wentao, LI Huanyu, SHAO Xinnan, LI Chunlei
Online: 2025-06-01
Published: 2025-05-30
LIU Zhoufeng, WU Wentao, LI Huanyu, SHAO Xinnan, LI Chunlei. Automatic Channel Pruning Method Based on Clustering and Swarm Intelligence Optimization Algorithm[J]. Computer Engineering and Applications, 2025, 61(11): 204-215.
URL: http://cea.ceaj.org/EN/10.3778/j.issn.1002-8331.2402-0221