[1] CAMBRIA E, HOWARD N. Common and common-sense knowledge integration for concept-level sentiment analysis[C]//Proceedings of the Twenty-Seventh International Florida Artificial Intelligence Research Society Conference (FLAIRS 2014), 2014.
[2] YANG Z, YANG D, DYER C, et al. Hierarchical attention networks for document classification[C]//Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2016: 1480-1489.
[3] SUN R, JIANG J, TAN Y F, et al. Using syntactic and semantic relation analysis in question answering[C]//Proceedings of the Fourteenth Text REtrieval Conference (TREC 2005), 2005.
[4] KUMAR A, IRSOY O, ONDRUSKA P, et al. Ask me anything: dynamic memory networks for natural language processing[J]. arXiv:1506.07285, 2015.
[5] 肖琳, 陈博理, 黄鑫, 等. 基于标签语义注意力的多标签文本分类[J]. 软件学报, 2020, 31(4): 1079-1089.
XIAO L, CHEN B L, HUANG X, et al. Multi-label text classification based on label semantic attention[J]. Journal of Software, 2020, 31(4): 1079-1089.
[6] WEI T, LI Y F. Does tail label help for large-scale multi-label learning?[J]. IEEE Transactions on Neural Networks and Learning Systems, 2020, 31(7): 2315-2324.
[7] LIU Z, MIAO Z, ZHAN X, et al. Large-scale long-tailed recognition in an open world[J]. arXiv:1904.05160, 2019.
[8] CAO K, WEI C, GAIDON A, et al. Learning imbalanced datasets with label-distribution-aware margin loss[C]//Advances in Neural Information Processing Systems, 2019: 1567-1578.
[9] YUAN M, XU J, LI Z. Long tail multi-label learning[C]//2019 IEEE Second International Conference on Artificial Intelligence and Knowledge Engineering (AIKE), 2019.
[10] XIAO L, ZHANG X, JING L, et al. Does head label help for long-tailed multi-label text classification?[J]. arXiv:2101.09704, 2021.
[11] 王浩镔, 胡平. 采用多级特征的多标签长文本分类算法[J]. 计算机工程与应用, 2021, 57(15): 193-199.
WANG H B, HU P. Multi-label long text classification algorithm using multi-level features[J]. Computer Engineering and Applications, 2021, 57(15): 193-199.
[12] KURATA G, XIANG B, ZHOU B. Improved neural network-based multi-label classification with better initialization leveraging label co-occurrence[C]//Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2016: 521-526.
[13] ZHANG W, YAN J, WANG X, et al. Deep extreme multi-label learning[J]. arXiv:1704.03718, 2017.
[14] DU C, CHEN Z, FENG F, et al. Explicit interaction model towards text classification[C]//Proceedings of the AAAI Conference on Artificial Intelligence, 2019: 6359-6366.
[15] PAPPAS N, HENDERSON J. GILE: a generalized input-label embedding for text classification[J]. Transactions of the Association for Computational Linguistics, 2019, 7(1): 139-155.
[16] MACAVANEY S, DERNONCOURT F, CHANG W, et al. Interaction matching for long-tail multi-label classification[J]. arXiv:2005.08805, 2020.
[17] BYRD J, LIPTON Z C. What is the effect of importance weighting in deep learning?[C]//Proceedings of the 36th International Conference on Machine Learning, 2019.
[18] CHAWLA N V, BOWYER K W, HALL L O, et al. SMOTE: synthetic minority over-sampling technique[J]. Journal of Artificial Intelligence Research, 2002, 16(1): 321-357.
[19] CUI Y, JIA M, LIN T Y, et al. Class-balanced loss based on effective number of samples[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019: 9268-9277.
[20] ZHOU B, CUI Q, WEI X S, et al. BBN: bilateral-branch network with cumulative learning for long-tailed visual recognition[C]//2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020: 9719-9728.
[21] KANG B, XIE S, ROHRBACH M, et al. Decoupling representation and classifier for long-tailed recognition[C]//International Conference on Learning Representations, 2020.
[22] HARIHARAN B, GIRSHICK R. Low-shot visual recognition by shrinking and hallucinating features[J]. arXiv:1606.02819, 2016.
[23] GIDARIS S, KOMODAKIS N. Dynamic few-shot visual learning without forgetting[J]. arXiv:1804.09458, 2018.
[24] PENG Z, QI Z, ZHENG S, et al. Text classification improved by integrating bidirectional LSTM with two-dimensional max pooling[J]. arXiv:1611.06639, 2016.
[25] TAN Z, WANG M, XIE J, et al. Deep semantic role labeling with self-attention[C]//Thirty-Second AAAI Conference on Artificial Intelligence, 2018.
[26] SNELL J, SWERSKY K, ZEMEL R S. Prototypical networks for few-shot learning[J]. arXiv:1703.05175, 2017.
[27] QI H, BROWN M, LOWE D G. Low-shot learning with imprinted weights[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018: 5822-5830.