[1] 杨朋波, 桑基韬, 张彪, 等. 面向图像分类的深度模型可解释性研究综述[J]. 软件学报, 2023, 34(1): 230-254.
YANG P B, SANG J T, ZHANG B, et al. Survey on interpretability of deep models for image classification[J]. Journal of Software, 2023, 34(1): 230-254.
[2] ZHOU B, KHOSLA A, LAPEDRIZA A, et al. Learning deep features for discriminative localization[C]// 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, 2016: 2921-2929.
[3] SELVARAJU R R, COGSWELL M, DAS A, et al. Grad-CAM: visual explanations from deep networks via gradient-based localization[C]//International Conference on Computer Vision, Piscataway, 2017: 618-626.
[4] SPRINGENBERG J T, DOSOVITSKIY A, BROX T, et al. Striving for simplicity: the all convolutional net[C]//International Conference on Learning Representations, San Diego, 2015: 1.
[5] CHATTOPADHYAY A, SARKAR A, HOWLADER P, et al. Grad-CAM++: improved visual explanations for deep convolutional networks[C]//Conference on Computer Vision and Pattern Recognition, Salt Lake City, 2018: 839-847.
[6] OMEIZA D, SPEAKMAN S, CINTAS C, et al. Smooth Grad-CAM++: an enhanced inference level visualization technique for deep convolutional neural network models[EB/OL]. (2019-08-03). https://arxiv.org/pdf/1908.01224.pdf.
[7] WANG H, WANG Z, DU M, et al. Score-CAM: score-weighted visual explanations for convolutional neural networks[C]//Conference on Computer Vision and Pattern Recognition, Seattle, 2020: 24-25.
[8] NAIDU R, GHOSH A, MAURYA Y, et al. IS-CAM: integrated Score-CAM for axiomatic-based explanations[EB/OL]. (2020-10-06). https://arxiv.org/pdf/2010.03023v1.pdf.
[9] LEE J R, KIM S, PARK I, et al. Relevance-CAM: your model already knows where to look[C]//Conference on Computer Vision and Pattern Recognition, 2021: 14944-14953.
[10] ZHANG Q, RAO L, YANG Y. Group-CAM: group score-weighted visual explanations for deep convolutional networks[EB/OL]. (2021-06-19). https://arxiv.org/pdf/2103.13859.pdf.
[11] ZHENG Q, WANG Z, ZHOU J, et al. Shap-CAM: visual explanations for convolutional neural networks based on Shapley value[C]//European Conference on Computer Vision, Tel Aviv, 2022: 459-474.
[12] 梁先明, 倪帆, 陈文洁, 等. 基于时频Grad-CAM的调制识别网络可解释研究[J/OL]. 西南交通大学学报: 1-9(2022-06-08). https://kns.cnki.net/kcms/detail/51.1277.u.20220608.1636.008.html.
LIANG X M, NI F, CHEN W J, et al. Interpretability of modulation recognition network based on time-frequency Grad-CAM[J/OL]. Journal of Southwest Jiaotong University: 1-9(2022-06-08). https://kns.cnki.net/kcms/detail/51.1277.u.20220608.1636.008.html.
[13] 张宇, 梁凤梅, 刘建霞. 融合类激活映射和视野注意力的皮肤病变分割[J]. 计算机工程与应用, 2023, 59(21): 187-194.
ZHANG Y, LIANG F M, LIU J X. Skin lesion segmentation based on class activation mapping and visual field attention[J]. Computer Engineering and Applications, 2023, 59(21): 187-194.
[14] RUSSAKOVSKY O, DENG J, SU H, et al. ImageNet large scale visual recognition challenge[J]. International Journal of Computer Vision, 2015, 115(3): 211-252.
[15] ZEILER M D, FERGUS R. Visualizing and understanding convolutional networks[C]//European Conference on Computer Vision. Switzerland: Springer, 2014: 818-833.
[16] YANG L, JIANG H, CAI R, et al. CondenseNet V2: sparse feature reactivation for deep networks[EB/OL]. [2021-04-09]. https://arxiv.org/pdf/2104.04382.pdf.
[17] SUNDARARAJAN M, TALY A, YAN Q. Axiomatic attribution for deep networks[C]//International Conference on Machine Learning, Sydney, 2017: 3319-3328.
[18] STURMFELS P, LUNDBERG S, LEE S I. Visualizing the impact of feature attribution baselines[J]. Distill, 2020, 5(1): 1.
[19] SIMONYAN K, ZISSERMAN A. Very deep convolutional networks for large-scale image recognition[C]//International Conference on Learning Representations, San Diego, 2015: 4.
[20] POPPI S, CORNIA M, BARALDI L, et al. Revisiting the evaluation of class activation mapping for explainability: a novel metric and experimental analysis[C]//Conference on Computer Vision and Pattern Recognition, 2021: 2299-2304.
[21] JIANG P T, ZHANG C B, HOU Q, et al. LayerCAM: exploring hierarchical class activation maps for localization[J]. IEEE Transactions on Image Processing, 2021, 30: 5875-5888.
[22] PETSIUK V, DAS A, SAENKO K. RISE: randomized input sampling for explanation of black-box models[C]//British Machine Vision Conference, Newcastle, 2018: 151.
[23] NIE W, YANG Z, PATEL A. A theoretical explanation for perplexing behaviors of backpropagation-based visualizations[C]//International Conference on Machine Learning, Stockholm, 2018: 3809-3818.