[1] CHOI J, SIM H, OH S, et al. MLogNet: a logarithmic quantization-based accelerator for depthwise separable convolution[J]. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2022, 41(12): 5220-5231.
[2] JANG J G, QUAN C, LEE H D, et al. Falcon: lightweight and accurate convolution based on depthwise separable convolution[J]. Knowledge and Information Systems, 2023, 65(5): 2225-2249.
[3] DING X H, ZHANG X Y, MA N N, et al. RepVGG: making VGG-style ConvNets great again[C]//Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2021: 13728-13737.
[4] YOUNG S I, ZHE W, TAUBMAN D, et al. Transform quantization for CNN compression[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022, 44(9): 5700-5714.
[5] ZHOU D Q, WANG K, GU J Y, et al. Dataset quantization[C]//Proceedings of the 2023 IEEE/CVF International Conference on Computer Vision. Piscataway: IEEE, 2023: 17159-17170.
[6] LEE D, KIM C, KIM S, et al. Autoregressive image generation using residual quantization[C]//Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2022: 11513-11522.
[7] XU J R, ZHAO Y F, XU F. RDPNet: a single-path lightweight CNN with re-parameterization for CPU-type edge devices[J]. Journal of Cloud Computing, 2022, 11(1): 54.
[8] NIU X Y, MAO P J, DUAN Y T, et al. Research on lightweight improved algorithm for indoor target detection based on YOLOv5s[J]. Computer Engineering and Applications, 2024, 60(3): 109-118.
[9] LUO W, LI T, YANG W D, et al. Depthwise separable convolution based lightweight HSRRS image classification method[C]//Proceedings of the 2020 International Conference on Wireless Communications and Signal Processing. Piscataway: IEEE, 2020: 586-590.
[10] ZHAO B R, CUI Q, SONG R J, et al. Decoupled knowledge distillation[C]//Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2022: 11943-11952.
[11] YANG Z D, LI Z, JIANG X H, et al. Focal and global knowledge distillation for detectors[C]//Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2022: 4633-4642.
[12] BENATIA M A, AMARA Y, BOULAHIA S Y, et al. Block pruning residual networks using multi-armed bandits[J]. Journal of Experimental & Theoretical Artificial Intelligence, 2023. DOI: 10.1080/0952813X.2023.2247412.
[13] NIU W, MA X L, LIN S, et al. PatDNN: achieving real-time DNN execution on mobile devices with pattern-based weight pruning[C]//Proceedings of the 25th International Conference on Architectural Support for Programming Languages and Operating Systems. New York: ACM, 2020: 907-922.
[14] CHEN C P, GUO Z C, ZENG H E, et al. RepGhost: a hardware-efficient ghost module via re-parameterization[J]. arXiv:2211.06088, 2022.
[15] YANG H J, SHEN Z, ZHAO Y C. AsymmNet: towards ultralight convolution neural networks using asymmetrical bottlenecks[C]//Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops. Piscataway: IEEE, 2021: 2339-2348.
[16] LIAO Z, QUÉTU V, NGUYEN V T, et al. Can unstructured pruning reduce the depth in deep neural networks?[C]//Proceedings of the 2023 IEEE/CVF International Conference on Computer Vision Workshops. Piscataway: IEEE, 2023: 1394-1398.
[17] ZENG C Y, LIU L, ZHAO H C, et al. Causal unstructured pruning in linear networks using effective information[C]//Proceedings of the 2022 International Conference on Cyber-Enabled Distributed Computing and Knowledge Discovery. Piscataway: IEEE, 2022: 294-302.
[18] SHAHHOSSEINI S, ALBAQSAMI A, JASEMI M, et al. Partition pruning: parallelization-aware pruning for dense neural networks[C]//Proceedings of the 2020 28th Euromicro International Conference on Parallel, Distributed and Network-Based Processing. Piscataway: IEEE, 2020: 307-311.
[19] CHANG J F, LU Y, XUE P, et al. Global balanced iterative pruning for efficient convolutional neural networks[J]. Neural Computing and Applications, 2022, 34(23): 21119-21138.
[20] YE H C, ZHANG B, CHEN T, et al. Performance-aware approximation of global channel pruning for multitask CNNs[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023, 45(8): 10267-10284.
[21] KWON S J, LEE D, KIM B, et al. Structured compression by weight encryption for unstructured pruning and quantization[C]//Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2020: 1906-1915.
[22] PIETROŃ M, ŻUREK D, ŚNIEŻYŃSKI B. Speedup deep learning models on GPU by taking advantage of efficient unstructured pruning and bit-width reduction[J]. Journal of Computational Science, 2023, 67: 101971.
[23] CHANG J F, LU Y, XUE P, et al. Iterative clustering pruning for convolutional neural networks[J]. Knowledge-Based Systems, 2023, 265: 110386.
[24] MALLYA A, LAZEBNIK S. PackNet: adding multiple tasks to a single network by iterative pruning[C]//Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2018: 7765-7773.
[25] ZHAO C L, ZHANG Y X, NI B B. Exploiting channel similarity for network pruning[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2023, 33(9): 5049-5061.
[26] CHENG Y J, WANG X Q, XIE X L, et al. Channel pruning guided by global channel relation[J]. Applied Intelligence, 2022, 52(14): 16202-16213.
[27] HU W Z, CHE Z P, LIU N, et al. CATRO: channel pruning via class-aware trace ratio optimization[J]. IEEE Transactions on Neural Networks and Learning Systems, 2024, 35(8): 11595-11607.
[28] ZHANG Y X, LIN M B, LIN C W, et al. Carrying out CNN channel pruning in a white box[J]. IEEE Transactions on Neural Networks and Learning Systems, 2023, 34(10): 7946-7955.
[29] HAN F X, LI Y, WANG C S. Multi-threshold channel pruning method based on L1 regularization[J]. Journal of Physics: Conference Series, 2021, 1948(1): 012051.
[30] SHAIKH A M, ZHAO Y B, KUMAR A, et al. Efficient Bayesian CNN model compression using Bayes by backprop and L1-norm regularization[J]. Neural Processing Letters, 2024, 56(2): 140.
[31] WANG H, QIN C, ZHANG Y L, et al. Neural pruning via growing regularization[J]. arXiv:2012.09243, 2020.
[32] LI X N. Research on acceleration method of convolutional neural network model[D]. Jinan: Shandong Normal University, 2023.
[33] JIANG D, CAO Y, YANG Q. On the channel pruning using graph convolution network for convolutional neural network acceleration[C]//Proceedings of the 31st International Joint Conference on Artificial Intelligence, 2022: 3107-3113.
[34] CHEN T Y, DING T Y, ZHU Z H, et al. OTOv3: automatic architecture-agnostic neural network training and compression from structured pruning to erasing operators[J]. arXiv:2312.09411, 2023.
[35] LIU X C, CAO J, YAO H Y, et al. AdaPruner: adaptive channel pruning and effective weights inheritance[J]. arXiv:2109.06397, 2021.
[36] LIN L B, CHEN S J, YANG Y J, et al. AACP: model compression by accurate and automatic channel pruning[C]//Proceedings of the 2022 26th International Conference on Pattern Recognition. Piscataway: IEEE, 2022: 2049-2055.
[37] XIE Z Y, FU Y, TIAN S Z, et al. Pruning with compensation: efficient channel pruning for deep convolutional neural networks[J]. arXiv:2108.13728, 2021.
[38] LIN M B, CAO L J, LI S J, et al. Filter sketch for network pruning[J]. IEEE Transactions on Neural Networks and Learning Systems, 2022, 33(12): 7091-7100.
[39] CHENG H J, WANG Z D, MA L F, et al. Multi-task pruning via filter index sharing: a many-objective optimization approach[J]. Cognitive Computation, 2021, 13(4): 1070-1084.
[40] ELKERDAWY S, ELHOUSHI M, ZHANG H, et al. Fire together wire together: a dynamic pruning approach with self-supervised mask prediction[C]//Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2022: 12444-12453.
[41] RUAN X F, LIU Y F, LI B, et al. DPFPS: dynamic and progressive filter pruning for compressing convolutional neural networks from scratch[C]//Proceedings of the 35th AAAI Conference on Artificial Intelligence. Palo Alto: AAAI, 2021: 2495-2503.
[42] ALWANI M, WANG Y, MADHAVAN V. DECORE: deep compression with reinforcement learning[C]//Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2022: 12339-12349.
[43] HE Y, LIU P, ZHU L C, et al. Filter pruning by switching to neighboring CNNs with good attributes[J]. IEEE Transactions on Neural Networks and Learning Systems, 2023, 34(10): 8044-8056.
[44] LIU L, ZHANG S, KUANG Z, et al. Group fisher pruning for practical network compression[C]//Proceedings of the 38th International Conference on Machine Learning. [S.l.]: PMLR, 2021: 7021-7032.
[45] CHENG H R, ZHANG M, SHI J Q. Influence function based second-order channel pruning: evaluating true loss changes for pruning is possible without retraining[J]. arXiv:2308.06755, 2023.
[46] LAI B L, XIANG H R, SHEN F R. Inf-CP: a reliable channel pruning based on channel influence[J]. arXiv:2112.02521, 2021.
[47] LI G Q, LIU B W, CHEN A B. DDFP: a data driven filter pruning method with pruning compensation[J]. Journal of Visual Communication and Image Representation, 2023, 94: 103833.
[48] GUO S P, WANG Y J, LI Q Q, et al. DMCP: differentiable Markov channel pruning for neural networks[C]//Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2020: 1536-1544.
[49] LIU Z C, MU H Y, ZHANG X Y, et al. MetaPruning: meta learning for automatic neural network channel pruning[C]//Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision. Piscataway: IEEE, 2019: 3295-3304.
[50] TMAMNA J, AYED E B, FOURATI R, et al. A CNN pruning approach using constrained binary particle swarm optimization with a reduced search space for image classification[J]. Applied Soft Computing, 2024, 164: 111978.
[51] LIN M B, JI R R, WANG Y, et al. HRank: filter pruning using high-rank feature map[C]//Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2020: 1526-1535.
[52] YEOM S K, SHIM K H, HWANG J H. Toward compact deep neural networks via energy-aware pruning[J]. arXiv:2103.10858, 2021.