[1] ZHANG J, ZHANG K, AN Y, et al. An integrated multitasking intelligent bearing fault diagnosis scheme based on representation learning under imbalanced sample condition[J]. IEEE Transactions on Neural Networks and Learning Systems, 2023, 34(1): 1-12.
[2] DEPTO D S, RIZVEE M M, RAHMAN A, et al. Quantifying imbalanced classification methods for leukemia detection[J]. Computers in Biology and Medicine, 2023, 152: 106372.
[3] ZHU H, ZHOU M C, LIU G, et al. NUS: noisy-sample-removed undersampling scheme for imbalanced classification and application to credit card fraud detection[J]. IEEE Transactions on Computational Social Systems, 2023, 53(5): 624-629.
[4] 王乐, 韩萌, 李小娟, 等. 不平衡数据集分类方法综述[J]. 计算机工程与应用, 2021, 57(22): 42-52.
WANG L, HAN M, LI X J, et al. Review of classification methods for unbalanced data sets[J]. Computer Engineering and Applications, 2021, 57(22): 42-52.
[5] TAO H, YUN L, KE W, et al. A new weighted SVDD algorithm for outlier detection[C]//Proceedings of the 2016 Chinese Control and Decision Conference, Yinchuan, May 28-30, 2016. Piscataway: IEEE, 2016: 5456-5461.
[6] ZHAO X, WU Y, LEE D L, et al. iForest: interpreting random forests via visual analytics[J]. IEEE Transactions on Visualization and Computer Graphics, 2018, 25(1): 407-416.
[7] YANG Y, HUANG S, HUANG W, et al. Privacy-preserving cost-sensitive learning[J]. IEEE Transactions on Neural Networks and Learning Systems, 2020, 32(5): 2105-2116.
[8] SVETNIK V, LIAW A, TONG C, et al. Random forest: a classification and regression tool for compound classification and QSAR modeling[J]. Journal of Chemical Information and Computer Sciences, 2003, 43(6): 1947-1958.
[9] FRIEDMAN J H. Greedy function approximation: a gradient boosting machine[J]. Annals of Statistics, 2001, 29(5): 1189-1232.
[10] CHEN T, GUESTRIN C. XGBoost: a scalable tree boosting system[C]//Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, Aug 13-17, 2016. New York: ACM, 2016: 785-794.
[11] KRAWCZYK B, GALAR M, JELEN L, et al. Evolutionary undersampling boosting for imbalanced classification of breast cancer malignancy[J]. Applied Soft Computing, 2016, 38: 714-726.
[12] YEN S J, LEE Y S. Cluster-based under-sampling approaches for imbalanced data distributions[J]. Expert Systems with Applications, 2009, 36(3): 5718-5727.
[13] DAI Q, LIU J, YANG J P. SWSEL: sliding window-based selective ensemble learning for class-imbalance problems[J]. Engineering Applications of Artificial Intelligence, 2023, 121: 105959.
[14] CHAWLA N V, BOWYER K W, HALL L O, et al. SMOTE: synthetic minority over-sampling technique[J]. Journal of Artificial Intelligence Research, 2002, 16(1): 321-357.
[15] SANDHAN T, CHOI J Y. Handling imbalanced datasets by partially guided hybrid sampling for pattern recognition[C]//Proceedings of the 2014 22nd International Conference on Pattern Recognition, Stockholm, Aug 24-28, 2014. Piscataway: IEEE, 2014: 1449-1453.
[16] DOUZAS G, BACAO F, LAST F. Improving imbalanced learning through a heuristic oversampling method based on k-means and SMOTE[J]. Information Sciences, 2018, 465: 1-20.
[17] LI J, ZHU Q, WU Q, et al. SMOTE-NaN-DE: addressing the noisy and borderline examples problem in imbalanced classification by natural neighbors and differential evolution[J]. Knowledge-Based Systems, 2021, 223: 107056.
[18] WEI Z, ZHANG L, ZHAO L. Minority-prediction-probability-based oversampling technique for imbalanced learning[J]. Information Sciences, 2023, 622: 1273-1295.
[19] VUTTIPITTAYAMONGKOL P, ELYAN E. Neighbourhood-based undersampling approach for handling imbalanced and overlapped data[J]. Information Sciences, 2020, 509: 47-70.
[20] KINGMA D P, WELLING M. Auto-encoding variational Bayes[C]//Proceedings of the 2nd International Conference on Learning Representations, Banff, Apr 14-16, 2014.
[21] CRESWELL A, WHITE T, DUMOULIN V, et al. Generative adversarial networks: an overview[J]. IEEE Signal Processing Magazine, 2018, 35(1): 53-65.
[22] LARSEN A B L, SONDERBY S K, LAROCHELLE H, et al. Autoencoding beyond pixels using a learned similarity metric[C]//Proceedings of the 33rd International Conference on Machine Learning, New York, 2016, 48: 1558-1566.
[23] ZHENG M, LI T, ZHU R, et al. Conditional Wasserstein generative adversarial network-gradient penalty-based approach to alleviating imbalanced data classification[J]. Information Sciences, 2020, 512: 1009-1023.
[24] HUANG K, WANG X. ADA-INCVAE: improved data generation using variational autoencoder for imbalanced classification[J]. Applied Intelligence, 2022, 52(3): 2838-2853.
[25] DING H, SUN Y, HUANG N, et al. RVGAN-TL: a generative adversarial networks and transfer learning-based hybrid approach for imbalanced data classification[J]. Information Sciences, 2023, 629: 184-203.
[26] AI Q, WANG P, HE L, et al. Generative oversampling for imbalanced data via majority-guided VAE[C]//Proceedings of the 2023 International Conference on Artificial Intelligence and Statistics, 2023: 3315-3330.
[27] WANG S, LUO H, HUANG S, et al. Counterfactual-based minority oversampling for imbalanced classification[J]. Engineering Applications of Artificial Intelligence, 2023, 122: 106024.
[28] HOSMER D W, LEMESHOW S, STURDIVANT R X. Applied logistic regression[M]. 3rd ed. Hoboken: John Wiley & Sons, 2013.
[29] JANIK P, LOBOS T. Automated classification of power-quality disturbances using SVM and RBF networks[J]. IEEE Transactions on Power Delivery, 2006, 21(3): 1663-1669.
[30] SVETNIK V, LIAW A, TONG C, et al. Random forest: a classification and regression tool for compound classification and QSAR modeling[J]. Journal of Chemical Information and Computer Sciences, 2003, 43(6): 1947-1958.
[31] GARCIA S, FERNANDEZ A, LUENGO J, et al. Advanced nonparametric tests for multiple comparisons in the design of experiments in computational intelligence and data mining: experimental analysis of power[J]. Information Sciences, 2010, 180(10): 2044-2064.
[32] TAHERI S M, HESAMIAN G. A generalization of the Wilcoxon signed-rank test and its applications[J]. Statistical Papers, 2013, 54(2): 457-470.
[33] PEREIRA D G, AFONSO A, MEDEIROS F M. Overview of Friedman’s test and post-hoc analysis[J]. Communications in Statistics-Simulation and Computation, 2015, 44(10): 2636-2653.
[34] PEDREGOSA F, VAROQUAUX G, GRAMFORT A, et al. Scikit-learn: machine learning in Python[J]. The Journal of Machine Learning Research, 2011, 12: 2825-2830.