[1] YUAN Y, ZHOU X F, PAN S R, et al. A relation-specific attention network for joint entity and relation extraction[C]//Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, 2020: 4054-4060.
[2] DEVLIN J, CHANG M W, LEE K, et al. BERT: pre-training of deep bidirectional transformers for language understanding[J]. arXiv:1810.04805, 2018.
[3] HO N H, YANG H J, KIM S H, et al. Multimodal approach of speech emotion recognition using multi-level multi-head fusion attention-based recurrent neural network[J]. IEEE Access, 2020, 8: 61672-61686.
[4] WANG G. A perspective on deep imaging[J]. IEEE Access, 2016, 4: 8914-8924.
[5] ZAMIR S W, ARORA A, KHAN S, et al. Restormer: efficient transformer for high-resolution image restoration[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022: 5728-5739.
[6] 赵延玉, 赵晓永, 王磊, 等. 可解释人工智能研究综述[J]. 计算机工程与应用, 2023, 59(14): 1-14.
ZHAO Y Y, ZHAO X Y, WANG L, et al. Review of explainable artificial intelligence[J]. Computer Engineering and Applications, 2023, 59(14): 1-14.
[7] 曾春艳, 严康, 王志锋, 等. 深度学习模型可解释性研究综述[J]. 计算机工程与应用, 2021, 57(8): 1-9.
ZENG C Y, YAN K, WANG Z F, et al. Survey of interpretability research on deep learning models[J]. Computer Engineering and Applications, 2021, 57(8): 1-9.
[8] FAN F L, XIONG J J, LI M Z, et al. On interpretability of artificial neural networks: a survey[J]. IEEE Transactions on Radiation and Plasma Medical Sciences, 2021, 5(6): 741-760.
[9] GUO Z, LI X, HUANG H, et al. Deep learning-based image segmentation on multimodal medical imaging[J]. IEEE Transactions on Radiation and Plasma Medical Sciences, 2019, 3(2): 162-169.
[10] HATT M, PARMAR C, QI J Y, et al. Machine (deep) learning methods for image processing and radiomics[J]. IEEE Transactions on Radiation and Plasma Medical Sciences, 2019, 3(2): 104-108.
[11] DOSHI-VELEZ F, KIM B. Towards a rigorous science of interpretable machine learning[J]. arXiv:1702.08608, 2017.
[12] RUDIN C. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead[J]. Nature Machine Intelligence, 2019, 1(5): 206-215.
[13] GOODMAN B, FLAXMAN S. European Union regulations on algorithmic decision-making and a “right to explanation”[J]. AI Magazine, 2017, 38(3): 50-57.
[14] CHU L Y, HU X, HU J H, et al. Exact and consistent interpretation for piecewise linear neural networks: a closed form solution[C]//Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. New York: ACM, 2018: 1244-1253.
[15] HOLZINGER A, GOEBEL R, FONG R, et al. xxAI: beyond explainable artificial intelligence[M]//xxAI: beyond explainable AI. Cham: Springer International Publishing, 2022: 3-10.
[16] ANDREWS R, DIEDERICH J, TICKLE A B. Survey and critique of techniques for extracting rules from trained artificial neural networks[J]. Knowledge-Based Systems, 1995, 8(6): 373-389.
[17] DASH S, GÜNLÜK O, WEI D. Boolean decision rules via column generation[C]//Advances in Neural Information Processing Systems, 2018.
[18] WANG T, RUDIN C, DOSHI-VELEZ F, et al. A Bayesian framework for learning rule sets for interpretable classification[J]. Journal of Machine Learning Research, 2017, 18: 1-37.
[19] KE G L, MENG Q, FINLEY T, et al. LightGBM: a highly efficient gradient boosting decision tree[C]//Advances in Neural Information Processing Systems, 2017.
[20] YANG Y, MORILLO I G, HOSPEDALES T M. Deep neural decision trees[J]. arXiv:1806.06988, 2018.
[21] YANG H, RUDIN C, SELTZER M. Scalable Bayesian rule lists[C]//Proceedings of the International Conference on Machine Learning, 2017: 3921-3930.
[22] FROSST N, HINTON G. Distilling a neural network into a soft decision tree[J]. arXiv:1711.09784, 2017.
[23] RIBEIRO M T, SINGH S, GUESTRIN C. “Why should I trust you?”: explaining the predictions of any classifier[C]//Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. New York: ACM, 2016: 1135-1144.
[24] WANG Z, ZHANG W, LIU N, et al. Transparent classification with multilayer logical perceptrons and random binarization[C]//Proceedings of the AAAI Conference on Artificial Intelligence, 2020: 6331-6339.
[25] BECK F, FÜRNKRANZ J. An investigation into mini-batch rule learning[J]. arXiv:2106.10202, 2021.
[26] BECK F, FÜRNKRANZ J. An empirical investigation into deep and shallow rule learning[J]. Frontiers in Artificial Intelligence, 2021, 4: 689398.
[27] DIERCKX L, VERONEZE R, NIJSSEN S. RL-Net: interpretable rule learning with neural networks[C]//Advances in Knowledge Discovery and Data Mining. Cham: Springer, 2023: 95-107.
[28] HUANG K, AVIYENTE S. Sparse representation for signal classification[C]//Advances in Neural Information Processing Systems, 2007: 609-616.
[29] QIAO L T, WANG W J, LIN B. Learning accurate and interpretable decision rule sets from neural networks[C]//Proceedings of the AAAI Conference on Artificial Intelligence, 2021: 4303-4311.
[30] CHAKRABORTY M, BISWAS S K, PURKAYASTHA B. Rule extraction from neural network trained using deep belief network and back propagation[J]. Knowledge and Information Systems, 2020, 62(9): 3753-3781.
[31] LAL G R, MITHAL V. NN2Rules: extracting rule list from neural networks[J]. arXiv:2207.12271, 2022.
[32] BENGIO Y, LEONARD N, COURVILLE A. Estimating or propagating gradients through stochastic neurons for conditional computation[J]. arXiv:1308.3432, 2013.
[33] GANTER B, WILLE R. Formal concept analysis: mathematical foundations[M]. Cham: Springer Nature Switzerland, 2024.
[34] LOUIZOS C, WELLING M, KINGMA D P. Learning sparse neural networks through L_0 regularization[J]. arXiv:1712.01312, 2017.
[35] ARYA V, BELLAMY R K E, CHEN P Y, et al. One explanation does not fit all: a toolkit and taxonomy of AI explainability techniques[J]. arXiv:1909.03012, 2019.