[1] SUN L, LI M M, DING W P, et al. AFNFS: adaptive fuzzy neighborhood-based feature selection with adaptive synthetic over-sampling for imbalanced data[J]. Information Sciences, 2022, 612: 724-744.
[2] 严远亭, 马迎澳, 任艳平, 等. 基于构造性神经网络与全局密度信息的不平衡数据欠采样方法[J]. 计算机科学, 2023, 50(10): 48-58.
YAN Y T, MA Y A, REN Y P, et al. Imbalanced undersampling based on constructive neural network and global density information[J]. Computer Science, 2023, 50(10): 48-58.
[3] 李京泰, 王晓丹. 基于代价敏感激活函数XGBoost的不平衡数据分类方法[J]. 计算机科学, 2022, 49(5): 135-143.
LI J T, WANG X D. XGBoost for imbalanced data based on cost-sensitive activation function[J]. Computer Science, 2022, 49(5): 135-143.
[4] 侯天宝, 王爱银. 基于Stacking特征增强多粒度联级Logistic的个人信用评估[J]. 河南师范大学学报(自然科学版), 2023, 51(3): 111-122.
HOU T B, WANG A Y. Personal credit evaluation based on Stacking feature enhancing multi-grained cascade logistic[J]. Journal of Henan Normal University (Natural Science Edition), 2023, 51(3): 111-122.
[5] SUN L, ZHANG J X, DING W P, et al. Feature reduction for imbalanced data classification using similarity-based feature clustering with adaptive weighted k-nearest neighbors[J]. Information Sciences, 2022, 593: 591-613.
[6] 杨洁, 匡俊成, 王国胤, 等. 代价敏感的多粒度邻域粗糙模糊集的近似表示[J]. 计算机科学, 2023, 50(5): 137-145.
YANG J, KUANG J C, WANG G Y, et al. Cost-sensitive multigranulation approximation of neighborhood rough fuzzy sets[J]. Computer Science, 2023, 50(5): 137-145.
[7] 刘艳, 程璐, 孙林. 基于K-S检验和邻域粗糙集的特征选择方法[J]. 河南师范大学学报(自然科学版), 2019, 47(2): 21-28.
LIU Y, CHENG L, SUN L. Feature selection method based on K-S test and neighborhood rough set[J]. Journal of Henan Normal University (Natural Science Edition), 2019, 47(2): 21-28.
[8] 孙林, 徐枫, 李硕, 等. 基于ReliefF和最大相关最小冗余的多标记特征选择[J]. 河南师范大学学报(自然科学版), 2023, 51(6): 21-29.
SUN L, XU F, LI S, et al. Multilabel feature selection algorithm using ReliefF and mRMR[J]. Journal of Henan Normal University (Natural Science Edition), 2023, 51(6): 21-29.
[9] 陈盼盼, 林梦雷, 刘景华, 等. 基于邻域粗糙集的多标记属性约简算法[J]. 闽南师范大学学报(自然科学版), 2018, 31(4): 1-11.
CHEN P P, LIN M L, LIU J H, et al. Multi-label attribute reduction algorithm based on neighborhood rough set[J]. Journal of Minnan Normal University (Natural Science), 2018, 31(4): 1-11.
[10] SUN L, YIN T Y, DING W P, et al. Feature selection with missing labels using multilabel fuzzy neighborhood rough sets and maximum relevance minimum redundancy[J]. IEEE Transactions on Fuzzy Systems, 2021, 30(5): 1197-1211.
[11] 李顺勇, 王改变, 余曼. 基于相似性特征聚类的加权无监督特征选择算法[J]. 贵州师范大学学报(自然科学版), 2021, 39(1): 49-57.
LI S Y, WANG G B, YU M. Weighted unsupervised feature selection algorithm based on similarity feature clustering[J]. Journal of Guizhou Normal University (Natural Sciences), 2021, 39(1): 49-57.
[12] CAPO M, PEREZ A, LOZANO J A. A cheap feature selection approach for the k-means algorithm[J]. IEEE Transactions on Neural Networks and Learning Systems, 2021, 32(5): 2195-2208.
[13] 王琛, 董永权. 基于二进制灰狼优化的特征选择及文本聚类[J]. 计算机工程与设计, 2021, 42(9): 2526-2535.
WANG C, DONG Y Q. Feature selection based on binary grey wolf optimization and text clustering[J]. Computer Engineering and Design, 2021, 42(9): 2526-2535.
[14] CHATTERJEE I, GHOSH M, SINGH P K, et al. A clustering-based feature selection framework for handwritten Indic script classification[J]. Expert Systems, 2019, 36(6): 12459.
[15] ZHANG Y, WANG Y H, GONG D W, et al. Clustering-guided particle swarm feature selection algorithm for high-dimensional imbalanced data with missing values[J]. IEEE Transactions on Evolutionary Computation, 2022, 26(4): 616-630.
[16] SUN L, WANG L Y, DING W P, et al. Feature selection using fuzzy neighborhood entropy-based uncertainty measures for fuzzy neighborhood multigranulation rough sets[J]. IEEE Transactions on Fuzzy Systems, 2021, 29(1): 19-33.
[17] KENNEDY J. Bare bones particle swarm[C]//Proceedings of the IEEE Swarm Intelligence Symposium, 2003: 80-87.
[18] 李冰晓, 万睿之, 朱永杰, 等. 基于种群分区的多策略综合粒子群优化算法[J]. 河南师范大学学报(自然科学版), 2022, 50(3): 85-94.
LI B X, WAN R Z, ZHU Y J, et al. Multi-strategy comprehensive particle swarm optimization algorithm based on population partition[J]. Journal of Henan Normal University (Natural Science Edition), 2022, 50(3): 85-94.
[19] SUN L, QIN X Y, DING W P. Nearest neighbors-based adaptive density peaks clustering with optimized allocation strategy[J]. Neurocomputing, 2022, 473: 159-181.
[20] 谢娟英, 丁丽娟. 完全自适应的谱聚类算法[J]. 电子学报, 2019, 47(5): 1000-1008.
XIE J Y, DING L J. The true self-adaptive spectral clustering algorithms[J]. Acta Electronica Sinica, 2019, 47(5): 1000-1008.
[21] 刘晓金, 陈文武, 王庆锋. 基于优化核函数带宽SVDD的机械振动预警模型[J]. 机电工程, 2023, 40(11): 1641-1654.
LIU X J, CHEN W W, WANG Q F. Mechanical vibration warning model based on optimized kernel bandwidth SVDD[J]. Journal of Mechanical & Electrical Engineering, 2023, 40(11): 1641-1654.
[22] SONG X F, ZHANG Y, GONG D W, et al. A fast hybrid feature selection based on correlation-guided clustering and particle swarm optimization for high-dimensional data[J]. IEEE Transactions on Cybernetics, 2022, 52(9): 9573-9586.
[23] LI A D, XUE B, ZHANG M J. Improved binary particle swarm optimization for feature selection with new initialization and search space reduction strategies[J]. Applied Soft Computing, 2021, 106: 107302.
[24] CHEN H M, LI T R, FAN X, et al. Feature selection for imbalanced data based on neighborhood rough sets[J]. Information Sciences, 2019, 483: 1-20.
[25] YU K, WU X D, DING W. Scalable and accurate online feature selection for big data[J]. ACM Transactions on Knowledge Discovery from Data, 2016, 11(2): 1-39.
[26] ZHOU P, HU X G, LI P P, et al. Online streaming feature selection using adapted neighborhood rough set[J]. Information Sciences, 2019, 481: 258-279.
[27] ZHOU P, HU X G, LI P P, et al. OFS-density: a novel online streaming feature selection method[J]. Pattern Recognition, 2018, 86(2): 48-61.
[28] ZHOU J, FOSTER D P, STINE R A, et al. Streamwise feature selection[J]. The Journal of Machine Learning Research, 2006, 7(9): 1861-1885.
[29] HU Q H, YU D R, LIU J F, et al. Neighborhood rough set based heterogeneous feature subset selection[J]. Information Sciences, 2008, 178(18): 3577-3594.
[30] LIU J F, HU Q H, YU D R. A weighted rough set based method developed for class imbalance learning[J]. Information Sciences, 2008, 178(4): 1235-1256.
[31] FRANK E, HALL M A, WITTEN I H. The WEKA workbench[M]//Data mining and knowledge discovery handbook. Cham: Springer, 2016.
[32] MOAYEDIKIA A, ONG K L, BOO Y L, et al. Feature selection for high dimensional imbalanced class data using harmony search[J]. Engineering Applications of Artificial Intelligence, 2017, 57: 38-49.
[33] WU X D, YU K, DING W, et al. Online feature selection with streaming features[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2013, 35(5): 1178-1192.
[34] SUN L, ZHANG J X, DING W P, et al. Mixed measure-based feature selection using the fisher score and neighborhood rough sets[J]. Applied Intelligence, 2022, 52(15): 17264-17288.