[1] SINGH B, NAJIBI M, DAVIS L S. SNIPER: efficient multi-scale training[C]//Proceedings of the 32nd International Conference on Neural Information Processing Systems, 2018.
[2] HE K M, ZHANG X, REN S, et al. Deep residual learning for image recognition[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016: 770-778.
[3] WANG J, XU C, YANG X, et al. A novel pruning algorithm for smoothing feedforward neural networks based on group lasso method[J]. IEEE Transactions on Neural Networks and Learning Systems, 2017, 29(5): 2012-2024.
[4] FRANKLE J, CARBIN M. The lottery ticket hypothesis: finding sparse, trainable neural networks[EB/OL]. (2018-03-09). https://doi.org/10.48550/arXiv.1803.03635.
[5] KRISHNAMOORTHI R. Quantizing deep convolutional networks for efficient inference: a whitepaper[EB/OL]. (2018-06-21). https://doi.org/10.48550/arXiv.1806.08342.
[6] KIM H, KARIM M U, KYUNG C M. Efficient neural network compression[EB/OL]. (2019-04-12). https://doi.org/10.48550/arXiv.1811.12781.
[7] CROWLEY E J, GRAY G, STORKEY A J. Moonshine: distilling with cheap convolutions[C]//Proceedings of the 32nd International Conference on Neural Information Processing Systems, 2018: 2893-2903.
[8] LAN X, ZHU X, GONG S. Knowledge distillation by on-the-fly native ensemble[C]//Proceedings of the 32nd International Conference on Neural Information Processing Systems, 2018: 7517-7527.
[9] HOWARD A G, ZHU M, CHEN B, et al. MobileNets: efficient convolutional neural networks for mobile vision applications[EB/OL]. (2017-04-17). https://doi.org/10.48550/arXiv.1704.04861.
[10] MURAVEV A, RAITOHARJU J, GABBOUJ M. Neural architecture search by estimation of network structure distributions[J]. IEEE Access, 2021, 9: 15304-15319.
[11] HE Y M, ZHANG X, SUN J. Channel pruning for accelerating very deep neural networks[C]//Proceedings of the IEEE International Conference on Computer Vision, 2017: 1389-1397.
[12] SRINIVAS S, SUBRAMANYA A. Training sparse neural networks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2017: 138-145.
[13] LIU Z, LI J, SHEN Z, et al. Learning efficient convolutional networks through network slimming[C]//Proceedings of the IEEE International Conference on Computer Vision, 2017: 2736-2744.
[14] HAN S, MAO H, DALLY W J. Deep compression: compressing deep neural networks with pruning, trained quantization and Huffman coding[EB/OL]. (2015-10-01). https://doi.org/10.48550/arXiv.1510.00149.
[15] LI H, KADAV A, DURDANOVIC I, et al. Pruning filters for efficient convnets[EB/OL]. (2016-08-31). https://doi.org/10.48550/arXiv.1608.08710.
[16] LIU Z, SUN M, ZHOU T, et al. Rethinking the value of network pruning[EB/OL]. (2018-10-11). https://doi.org/10.48550/arXiv.1810.05270.
[17] HAN B, ZHANG Z, XU C, et al. Deep face model compression using entropy-based filter selection[C]//International Conference on Image Analysis and Processing. Cham: Springer, 2017: 127-136.
[18] HE Y M, ZHANG X, SUN J. Channel pruning for accelerating very deep neural networks[C]//Proceedings of the IEEE International Conference on Computer Vision, 2017: 1389-1397.
[19] LUO J H, WU J, LIN W. ThiNet: a filter level pruning method for deep neural network compression[C]//Proceedings of the IEEE International Conference on Computer Vision, 2017: 5058-5066.
[20] HUANG J, SUN W, HUANG L. Deep neural networks compression learning based on multi-objective evolutionary algorithms[J]. Neurocomputing, 2020, 378: 260-269.
[21] LI T, WU B, YANG Y, et al. Compressing convolutional neural networks via factorized convolutional filters[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019: 3977-3986.
[22] ZHUANG Z W, TAN M K, ZHUANG B. Discrimination-aware channel pruning for deep neural networks[C]//Proceedings of the 32nd International Conference on Neural Information Processing Systems, 2018: 883-894.
[23] DONG X, HUANG J, YANG Y, et al. More is less: a more complicated network with less inference complexity[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017: 5840-5848.
[24] KRIZHEVSKY A, SUTSKEVER I, HINTON G E. ImageNet classification with deep convolutional neural networks[J]. Communications of the ACM, 2017, 60(6): 84-90.
[25] LIN S, JI R, YAN C, et al. Towards optimal structured CNN pruning via generative adversarial learning[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019: 2790-2799.
[26] HE Y, LIU P, WANG Z, et al. Filter pruning via geometric median for deep convolutional neural networks acceleration[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019: 4340-4349.
[27] LIN M, JI R, WANG Y, et al. HRank: filter pruning using high-rank feature map[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020: 1529-1538.
[28] LIN M, JI R, LI S, et al. Network pruning using adaptive exemplar filters[J]. IEEE Transactions on Neural Networks and Learning Systems, 2022, 33(12): 7357-7366.
[29] LIN M, CAO L, ZHANG Y, et al. Pruning networks with cross-layer ranking & k-reciprocal nearest filters[J]. IEEE Transactions on Neural Networks and Learning Systems, 2023, 34(11): 9139-9148.