[1] ZHAO Y Y, ZHAO X Y, WANG L, et al. Review of explainable artificial intelligence[J]. Computer Engineering and Applications, 2023, 59(14): 1-14.
[2] TJOA E, GUAN C T. A survey on explainable artificial intelligence (XAI): toward medical XAI[J]. IEEE Transactions on Neural Networks and Learning Systems, 2021, 32(11): 4793-4813.
[3] BREIMAN L. Random forests[J]. Machine Learning, 2001, 45(1): 5-32.
[4] ZHOU Z H, FENG J. Deep forest[J]. National Science Review, 2019, 6(1): 74-86.
[5] XU Y Y, GAO B Y, GUO J L, et al. Model robustness enhancement algorithm with scale invariant condition number constraint[J]. Computer Engineering and Applications, 2024, 60(8): 140-147.
[6] COHEN J M, ROSENFELD E, KOLTER J Z. Certified adversarial robustness via randomized smoothing[J]. arXiv:1902.02918, 2019.
[7] MADRY A, MAKELOV A, SCHMIDT L, et al. Towards deep learning models resistant to adversarial attacks[J]. arXiv:1706.06083, 2017.
[8] SALMAN H, LI J, RAZENSHTEYN I, et al. Provably robust deep learning via adversarially trained smoothed classifiers[C]//Advances in Neural Information Processing Systems, 2019: 11289-11300.
[9] ZHANG H Y, YU Y D, JIAO J T, et al. Theoretically principled trade-off between robustness and accuracy[J]. arXiv:1901.08573, 2019.
[10] RAGHUNATHAN A, XIE S M, YANG F, et al. Adversarial training can hurt generalization[J]. arXiv:1906.06032, 2019.
[11] YANG Y Y, RASHTCHIAN C, ZHANG H, et al. A closer look at accuracy vs. robustness[C]//Advances in Neural Information Processing Systems, 2020: 8588-8601.
[12] KANTCHELIAN A, TYGAR J D, JOSEPH A D. Evasion and hardening of tree ensemble classifiers[J]. arXiv:1509.07892, 2015.
[13] CHEN H G, ZHANG H, BONING D, et al. Robust decision trees against adversarial examples[J]. arXiv:1902.10660, 2019.
[14] ANDRIUSHCHENKO M, HEIN M. Provably robust boosted decision stumps and trees against adversarial attacks[C]//Advances in Neural Information Processing Systems, 2019.
[15] VOS D, VERWER S. Robust optimal classification trees against adversarial examples[C]//Proceedings of the AAAI Conference on Artificial Intelligence, 2022: 8520-8528.
[16] CHEN Y Z, WANG S Q, JIANG W F, et al. Cost-aware robust tree ensembles for security applications[J]. arXiv:1912.01149, 2019.
[17] GUO J Q, TENG M Z, GAO W, et al. Fast provably robust decision trees and boosting[C]//Proceedings of the 39th International Conference on Machine Learning, 2022: 8127-8144.
[18] RANZATO F, ZANELLA M. Genetic adversarial training of decision trees[C]//Proceedings of the Genetic and Evolutionary Computation Conference. New York: ACM, 2021: 358-367.
[19] ŻYCHOWSKI A, PERRAULT A, MAŃDZIUK J. Coevolutionary algorithm for building robust decision trees under minimax regret[C]//Proceedings of the AAAI Conference on Artificial Intelligence, 2024: 21869-21877.
[20] BIGGIO B, CORONA I, MAIORCA D, et al. Evasion attacks against machine learning at test time[C]//Proceedings of the Machine Learning and Knowledge Discovery in Databases. Berlin: Springer, 2013: 387-402.
[21] QUINLAN J R. Induction of decision trees[J]. Machine Learning, 1986, 1(1): 81-106.
[22] BREIMAN L. Classification and regression trees[M]. New York: Chapman and Hall, 2017.
[23] ZHANG K B, CHEN Y M, WU K S, et al. Research on granule vectors random forest classification algorithm[J]. Computer Engineering and Applications, 2024, 60(3): 148-156.
[24] DIETTERICH T G. An experimental comparison of three methods for constructing ensembles of decision trees: bagging, boosting, and randomization[J]. Machine Learning, 2000, 40(2): 139-157.
[25] CHENG M H, LE T, CHEN P Y, et al. Query-efficient hard-label black-box attack: an optimization-based approach[J]. arXiv:1807.04457, 2018.