[1] BAEVSKI A, ZHOU H, MOHAMED A, et al. wav2vec 2.0: a framework for self-supervised learning of speech representations[C]//Advances in Neural Information Processing Systems 33, 2020: 12449-12460.
[2] HIGUCHI Y, OGAWA T, KOBAYASHI T, et al. BECTRA: transducer-based end-to-end ASR with BERT-enhanced encoder[C]//Proceedings of the 2023 IEEE International Conference on Acoustics, Speech and Signal Processing. Piscataway: IEEE, 2023: 1-5.
[3] TIAN J C, YU J W, WENG C, et al. Improving Mandarin end-to-end speech recognition with word N-gram language model[J]. IEEE Signal Processing Letters, 2022, 29: 812-816.
[4] XUE J, WANG P D, LI J Y, et al. Large-scale streaming end-to-end speech translation with neural transducers[J]. arXiv:2204.05352, 2022.
[5] HAN M L, DONG L H, LIANG Z L, et al. Improving end-to-end contextual speech recognition with fine-grained contextual knowledge selection[C]//Proceedings of the 2022 IEEE International Conference on Acoustics, Speech and Signal Processing. Piscataway: IEEE, 2022: 8532-8536.
[6] RADFORD A, KIM J W, XU T, et al. Robust speech recognition via large-scale weak supervision[C]//Proceedings of the 40th International Conference on Machine Learning, 2023: 28492-28518.
[7] LIU D N, SPANAKIS G, NIEHUES J. Low-latency sequence-to-sequence speech recognition and translation by partial hypothesis selection[J]. arXiv:2005.11185, 2020.
[8] GULATI A, QIN J, CHIU C C, et al. Conformer: convolution-augmented transformer for speech recognition[J]. arXiv:2005.08100, 2020.
[9] HE Y Z, SAINATH T N, PRABHAVALKAR R, et al. Streaming end-to-end speech recognition for mobile devices[C]//Proceedings of the 2019 IEEE International Conference on Acoustics, Speech and Signal Processing. Piscataway: IEEE, 2019: 6381-6385.
[10] LIU A H, HSU W N, AULI M, et al. Towards end-to-end unsupervised speech recognition[C]//Proceedings of the 2022 IEEE Spoken Language Technology Workshop. Piscataway: IEEE, 2023: 221-228.
[11] CHAN W, PARK D, LEE C, et al. SpeechStew: simply mix all available speech recognition data to train one large neural network[J]. arXiv:2104.02133, 2021.
[12] KIM K, WU F, PENG Y F, et al. E-branchformer: branchformer with enhanced merging for speech recognition[C]//Proceedings of the 2022 IEEE Spoken Language Technology Workshop. Piscataway: IEEE, 2023: 84-91.
[13] 沈逸文, 孙俊. 结合Transformer的轻量化中文语音识别[J]. 计算机应用研究, 2023, 40(2): 424-429.
SHEN Y W, SUN J. Lightweight Chinese speech recognition with transformer[J]. Application Research of Computers, 2023, 40(2): 424-429.
[14] 张瑞珍, 韩跃平, 张晓通. 基于深度LSTM的端到端的语音识别[J]. 中北大学学报(自然科学版), 2020, 41(3): 244-248.
ZHANG R Z, HAN Y P, ZHANG X T. End-to-end speech recognition based on depth-gated LSTM[J]. Journal of North University of China (Natural Science Edition), 2020, 41(3): 244-248.
[15] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[C]//Advances in Neural Information Processing Systems 30, 2017: 5998-6008.
[16] VANHOLDER H. Efficient inference with TensorRT[C]//GPU Technology Conference, 2016: 1-24.
[17] FANG J R, YU Y, ZHAO C D, et al. TurboTransformers: an efficient GPU serving system for transformer models[C]//Proceedings of the 26th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming. New York: ACM, 2021: 389-402.
[18] WANG X H, XIONG Y, WEI Y, et al. LightSeq: a high performance inference library for transformers[J]. arXiv:2010.13887, 2020.
[19] ASUDANI D S, NAGWANI N K, SINGH P. Impact of word embedding models on text analytics in deep learning environment: a review[J]. Artificial Intelligence Review, 2023, 56(9): 10345-10425.
[20] SELVA BIRUNDA S, KANNIGA DEVI R. A review on word embedding techniques for text classification[C]//Proceedings of the 2020 International Conference on Innovative Data Communication Technologies and Application. Singapore: Springer, 2021: 267-281.
[21] 黄诚, 赵倩锐. 基于语言模型词嵌入和注意力机制的敏感信息检测方法[J]. 计算机应用, 2022, 42(7): 2009-2014.
HUANG C, ZHAO Q R. Sensitive information detection method based on attention mechanism-based ELMo[J]. Journal of Computer Applications, 2022, 42(7): 2009-2014.
[22] SHARMIN S, CHAKMA D. Attention-based convolutional neural network for Bangla sentiment analysis[J]. AI & Society, 2021, 36(1): 381-396.
[23] LIU Y, YANG C Y, YANG J. A graph convolutional network-based sensitive information detection algorithm[J]. Complexity, 2021(1): 6631768.
[24] 张泽锋, 毛存礼, 余正涛, 等. 融入领域术语词典的司法舆情敏感信息识别[J]. 中文信息学报, 2022, 36(9): 76-83.
ZHANG Z F, MAO C L, YU Z T, et al. Sensitive judicial public opinion information recognition with the domain terminology dictionary[J]. Journal of Chinese Information Processing, 2022, 36(9): 76-83.
[25] 金秋, 林馥, 裴斐. 基于层次聚类的敏感信息安全过滤模型研究[J]. 计算机仿真, 2023, 40(10): 296-299.
JIN Q, LIN F, PEI F. Research on security filtering model of sensitive information based on hierarchical clustering[J]. Computer Simulation, 2023, 40(10): 296-299.
[26] 刘聪, 王永利, 周子韬, 等. 结合触发事件及词性分析的敏感信息识别方法[J]. 计算机工程与应用, 2020, 56(20): 132-137.
LIU C, WANG Y L, ZHOU Z T, et al. Sensitive information recognition method combining trigger event and part of speech analysis[J]. Computer Engineering and Applications, 2020, 56(20): 132-137.