GE Haibo, ZHOU Ting, HUANG Chaofeng, LI Qiang. Research on Feature Distribution Distillation Algorithm Under Multiple Tasks[J]. Computer Engineering and Applications, 2023, 59(21): 83-90.
[1] CHEN J Y,LIN X,GAO S T D.A fast evolutionary learning to optimize CNN[J].Chinese Journal of Electronics,2020,29(6):1061-1073.
[2] ZHANG L,HUANG S,LIU W.Intra-class part swapping for fine-grained image classification[C]//2021 IEEE Winter Conference on Applications of Computer Vision(WACV),2021:3208-3217.
[3] GONG X,XIA X,ZHU W,et al.Deformable gabor feature networks for biomedical image classification[C]//2021 IEEE Winter Conference on Applications of Computer Vision(WACV),2021:4003-4011.
[4] MÜLLER-BUDACK E,SPRINGSTEIN M,HAKIMOV S,et al.Ontology-driven event type classification in images[C]//2021 IEEE Winter Conference on Applications of Computer Vision(WACV),2021:2927-2937.
[5] GE Z,LIU S,LI Z,et al.OTA:optimal transport assignment for object detection[C]//2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition(CVPR),2021:303-312.
[6] WANG C Y,BOCHKOVSKIY A,LIAO H Y.Scaled-YOLOv4:scaling cross stage partial network[C]//2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition(CVPR),2021:13024-13033.
[7] SU T,LIANG Q,ZHANG J,et al.Attention-based feature interaction for efficient online knowledge distillation[C]//2021 IEEE International Conference on Data Mining(ICDM),2021:579-588.
[8] 唐武海,董博,陈华,等.深度神经网络模型压缩方法综述[J].智能物联技术,2021,53(6):1-15.
TANG W H,DONG B,CHEN H,et al.Survey of model compression methods for deep neural networks[J].Technology of IoT & AI,2021,53(6):1-15.
[9] CHOUDHARY T,MISHRA V,GOSWAMI A,et al.A comprehensive survey on model compression and acceleration[J].Artificial Intelligence Review,2020,53(3):5113-5155.
[10] HINTON G,VINYALS O,DEAN J.Distilling the knowledge in a neural network[J].arXiv:1503.02531,2015.
[11] ROMERO A,BALLAS N,KAHOU S E,et al.FitNets:hints for thin deep nets[C]//International Conference on Learning Representations(ICLR),San Diego,CA,USA,2015:1018-1039.
[12] ZAGORUYKO S,KOMODAKIS N.Paying more attention to attention:improving the performance of convolutional neural networks via attention transfer[C]//International Conference on Learning Representations(ICLR),Toulon,France,2017:573-587.
[13] LIU Y,ZHANG W,WANG J.Adaptive multi-teacher multi-level knowledge distillation[J].Neurocomputing,2020,415:106-113.
[14] CHUNG I,PARK S U,KIM J,et al.Feature-map-level online adversarial knowledge distillation[C]//International Conference on Machine Learning(ICML).New York:ACM,2020:2006-2015.
[15] LIU W,ANGUELOV D,ERHAN D,et al.SSD:single shot multibox detector[C]//European Conference on Computer Vision(ECCV),Amsterdam,the Netherlands,2016:21-37.