[1] PHUONG M, LAMPERT C. Towards understanding knowledge distillation[C]//Proceedings of the International Conference on Machine Learning, 2019: 5142-5151.
[2] JI G, ZHU Z. Knowledge distillation in wide neural networks: risk bound, data efficiency and imperfect teacher[C]//Advances in Neural Information Processing Systems, 2020: 20823-20833.
[3] DENTON E L, ZAREMBA W, BRUNA J, et al. Exploiting linear structure within convolutional networks for efficient evaluation[J]. arXiv:1404.0736, 2014.
[4] HOWARD A G, ZHU M L, CHEN B, et al. MobileNets: efficient convolutional neural networks for mobile vision applications[J]. arXiv:1704.04861, 2017.
[5] ZHANG X Y, ZHOU X Y, LIN M X, et al. ShuffleNet: an extremely efficient convolutional neural network for mobile devices[C]//Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2018: 6848-6856.
[6] LI H, KADAV A, DURDANOVIC I, et al. Pruning filters for efficient convnets[J]. arXiv:1608.08710, 2016.
[7] LIU Z, SUN M, ZHOU T, et al. Rethinking the value of network pruning[J]. arXiv:1810.05270, 2018.
[8] HE Y H, LIN J, LIU Z J, et al. AMC: AutoML for model compression and acceleration on mobile devices[C]//Proceedings of the European Conference on Computer Vision. Cham: Springer International Publishing, 2018: 815-832.
[9] LIU Z C, MU H Y, ZHANG X Y, et al. MetaPruning: meta learning for automatic neural network channel pruning[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. Piscataway: IEEE, 2019: 3295-3304.
[10] LIN M, JI R, ZHANG Y, et al. Channel pruning via automatic structure search[J]. arXiv:2001.08565, 2020.
[11] WANG Y C, GUO S, GUO J C, et al. Towards performance-maximizing neural network pruning via global channel attention[J]. Neural Networks, 2024, 171: 104-113.
[12] HOU Y N, MA Z, LIU C X, et al. Network pruning via resource reallocation[J]. Pattern Recognition, 2024, 145: 109886.
[13] ZHENG X W, YANG C Y, ZHANG S K, et al. DDPNAS: efficient neural architecture search via dynamic distribution pruning[J]. International Journal of Computer Vision, 2023, 131(5): 1234-1249.
[14] FRANKLE J, CARBIN M. The lottery ticket hypothesis: finding sparse, trainable neural networks[J]. arXiv:1803.03635, 2018.
[15] ZHANG Y, LIN M, ZHONG Y, et al. Lottery jackpots exist in pre-trained models[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023, 45(12): 14990-15004.
[16] KAUFMAN L, ROUSSEEUW P J. Finding groups in data: an introduction to cluster analysis[M]. New York: John Wiley & Sons, 2009.
[17] BOUCHARD-CÔTÉ A, PETROV S, KLEIN D. Randomized pruning: efficiently calculating expectations in large dynamic programs[C]//Proceedings of the 23rd International Conference on Neural Information Processing Systems, 2009: 144-152.
[18] YE J, LU X, LIN Z, et al. Rethinking the smaller-norm-less-informative assumption in channel pruning of convolution layers[J]. arXiv:1802.00124, 2018.
[19] HE Y, LIU P, WANG Z W, et al. Filter pruning via geometric median for deep convolutional neural networks acceleration[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2019: 4335-4344.
[20] MOLCHANOV P, MALLYA A, TYREE S, et al. Importance estimation for neural network pruning[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2019: 11256-11264.
[21] LUO J H, WU J X. Neural network pruning with residual-connections and limited-data[C]//Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2020: 1455-1464.
[22] LIEBENWEIN L, BAYKAL C, LANG H, et al. Provable filter pruning for efficient neural networks[J]. arXiv:1911.07412, 2019.
[23] HE Y, LIU P, ZHU L, et al. Filter pruning by switching to neighboring CNNs with good attributes[J]. IEEE Transactions on Neural Networks and Learning Systems, 2023, 34(10): 8044-8056.
[24] GUO S, ZHANG L, ZHENG X, et al. Automatic network pruning via Hilbert-Schmidt independence criterion lasso under information bottleneck principle[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision, 2023: 17458-17469.
[25] MA W K, LEWIS J P, KLEIJN W B. The HSIC bottleneck: deep learning without back-propagation[J]. arXiv:1908.01580, 2019.
[26] LUO J H, WU J X. AutoPruner: an end-to-end trainable filter pruning method for efficient deep model inference[J]. Pattern Recognition, 2020, 107: 107461.
[27] HUANG Z H, WANG N Y. Data-driven sparse structure selection for deep neural networks[C]//Proceedings of the European Conference on Computer Vision. Cham: Springer International Publishing, 2018: 317-334.
[28] LIU J, ZHUANG B, ZHUANG Z, et al. Discrimination-aware network pruning for deep model compression[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021, 44(8): 4035-4051.
[29] LIN M B, JI R R, WANG Y, et al. HRank: filter pruning using high-rank feature map[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2020: 1526-1535.
[30] YE M, GONG C, NIE L, et al. Good subnetworks provably exist: pruning via greedy forward selection[C]//Proceedings of the International Conference on Machine Learning, 2020: 10820-10830.
[31] TANCIK M, MILDENHALL B, WANG T, et al. Learned initializations for optimizing coordinate-based neural representations[C]//Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2021: 2845-2854.
[32] WU B C, KEUTZER K, DAI X L, et al. FBNet: hardware-aware efficient ConvNet design via differentiable neural architecture search[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2019: 10726-10734.
[33] ZOPH B, LE Q V. Neural architecture search with reinforcement learning[J]. arXiv:1611.01578, 2016.
[34] BAKER B, GUPTA O, NAIK N, et al. Designing neural network architectures using reinforcement learning[J]. arXiv:1611.02167, 2016.
[35] REAL E, MOORE S, SELLE A, et al. Large-scale evolution of image classifiers[C]//Proceedings of the International Conference on Machine Learning, 2017: 2902-2911.
[36] SHIBU A, KUMAR A, JUNG H, et al. Rewarded meta-pruning: meta learning with rewards for channel pruning[J]. arXiv:2301.11063, 2023.
[37] CAI H, ZHU L, HAN S. ProxylessNAS: direct neural architecture search on target task and hardware[J]. arXiv:1812.00332, 2018.
[38] BROCK A, LIM T, RITCHIE J M, et al. SMASH: one-shot model architecture search through hypernetworks[J]. arXiv:1708.05344, 2017.
[39] GAO Y Y, YU Z H, DU F, et al. Unlabeled network pruning algorithm based on Bayesian optimization[J]. Journal of Computer Applications, 2023, 43(1): 30-36.
[40] LI J Q, LI H R, CHEN Y R, et al. ABCP: automatic blockwise and channelwise network pruning via joint search[J]. IEEE Transactions on Cognitive and Developmental Systems, 2023, 15(3): 1560-1573.
[41] SON S, NAH S, LEE K M. Clustering convolutional kernels to compress deep neural networks[C]//Proceedings of the European Conference on Computer Vision, 2018: 216-232.
[42] DUGGAL R, XIAO C, VUDUC R, et al. CUP: cluster pruning for compressing deep neural networks[C]//Proceedings of the IEEE International Conference on Big Data. Piscataway: IEEE, 2021: 5102-5106.
[43] LI B, WU B, SU J, et al. EagleEye: fast sub-net evaluation for efficient neural network pruning[C]//Proceedings of the European Conference on Computer Vision, 2020: 639-654.
[44] KANG M, HAN B. Operation-aware soft channel pruning using differentiable masks[C]//Proceedings of the International Conference on Machine Learning, 2020: 5122-5131.
[45] YAN Y C, GUO R Z, LI C, et al. Channel pruning via multi-criteria based on weight dependency[C]//Proceedings of the International Joint Conference on Neural Networks. Piscataway: IEEE, 2021: 1-8.
[46] CAI L H, AN Z L, YANG C G, et al. Prior gradient mask guided pruning-aware fine-tuning[C]//Proceedings of the AAAI Conference on Artificial Intelligence, 2022: 140-148.
[47] LIN S H, JI R R, YAN C Q, et al. Towards optimal structured CNN pruning via generative adversarial learning[C]//Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2019: 2785-2794.
[48] LIN M, JI R, LI S, et al. Network pruning using adaptive exemplar filters[J]. IEEE Transactions on Neural Networks and Learning Systems, 2021, 33(12): 7357-7366.
[49] LIN M, CAO L, LI S, et al. Filter sketch for network pruning[J]. IEEE Transactions on Neural Networks and Learning Systems, 2021, 33(12): 7091-7100.
[50] LIN L, CHEN S, YANG Y, et al. AACP: model compression by accurate and automatic channel pruning[C]//Proceedings of the 26th International Conference on Pattern Recognition, 2022: 2049-2055.
[51] YU S, MAZAHERI A, JANNESARI A. Auto graph encoder-decoder for neural network pruning[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021: 6362-6372.
[52] LUO J H, WU J, LIN W. ThiNet: a filter level pruning method for deep neural network compression[C]//Proceedings of the IEEE International Conference on Computer Vision, 2017: 5058-5066.
[53] LIN S, JI R, LI Y, et al. Accelerating convolutional networks via global & dynamic filter pruning[C]//Proceedings of the 27th International Joint Conference on Artificial Intelligence, 2018: 2425-2432.
[54] GUAN Y, LIU N, ZHAO P, et al. DAIS: automatic channel pruning via differentiable annealing indicator search[J]. IEEE Transactions on Neural Networks and Learning Systems, 2023, 34(12): 9847-9858.
[55] YANG H X, LIANG Y S, LIU W, et al. Filter pruning via attention consistency on feature maps[J]. Applied Sciences, 2023, 13(3): 1964.