[1] 杨梦晨, 陈旭栋, 蔡鹏, 等. 早期时间序列分类方法研究综述[J]. 华东师范大学学报 (自然科学版), 2021(5): 115-133.
YANG M C, CHEN X D, CAI P, et al. An overview of early time series classification methods[J]. Journal of East China Normal University (Natural Science), 2021(5): 115-133.
[2] 张雅雯, 王志海, 刘海洋, 等. 基于多尺度残差FCN的时间序列分类算法[J]. 软件学报, 2022, 33(2): 555-570.
ZHANG Y W, WANG Z H, LIU H Y, et al. Time series classification algorithm based on multi-scale residual FCN[J]. Journal of Software, 2022, 33(2): 555-570.
[3] 李海林, 贾瑞颖, 谭观音. 基于K-Shape的时间序列模糊分类方法[J]. 电子科技大学学报, 2021, 50(6): 899-906.
LI H L, JIA R Y, TAN G Y. Fuzzy classification method of time series based on K-Shape[J]. Journal of University of Electronic Science and Technology of China, 2021, 50(6): 899-906.
[4] XIAO Z, XU X, XING H, et al. RTFN: a robust temporal feature network for time series classification[J]. Information Sciences, 2021, 571: 65-86.
[5] KWON D H, KIM J B, HEO J S, et al. Time series classification of cryptocurrency price trend based on a recurrent LSTM neural network[J]. Journal of Information Processing Systems, 2019, 15(3): 694-706.
[6] ELSAYED N, MAIDA A S, BAYOUMI M. Deep gated recurrent and convolutional network hybrid model for univariate time series classification[J]. International Journal of Advanced Computer Science and Applications, 2019, 10(5).
[7] 李向伟, 刘思言, 高昆仑. 基于双向长短时记忆网络和卷积神经网络的电力系统暂态稳定评估[J]. 科学技术与工程, 2020, 20(7): 2733-2739.
LI X W, LIU S Y, GAO K L. Transient stability assessment of power system based on bidirectional long short-term memory network and convolutional neural network[J]. Science Technology and Engineering, 2020, 20(7): 2733-2739.
[8] LI T, ZHANG Y, WANG T. SRPM-CNN: a combined model based on slide relative position matrix and CNN for time series classification[J]. Complex & Intelligent Systems, 2021, 7(3): 1619-1631.
[9] DE SANTANA CORREIA A, COLOMBINI E L. Attention, please! A survey of neural attention models in deep learning[J]. Artificial Intelligence Review, 2022, 55: 6037-6124.
[10] TAY Y, DEHGHANI M, BAHRI D, et al. Efficient transformers: a survey[J]. ACM Computing Surveys, 2022, 55(6): 109.
[11] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[C]//Advances in Neural Information Processing Systems 30, 2017.
[12] YANG Z, DAI Z, YANG Y, et al. XLNet: generalized autoregressive pretraining for language understanding[C]//Advances in Neural Information Processing Systems 32, 2019.
[13] SHAW P, USZKOREIT J, VASWANI A. Self-attention with relative position representations[C]//Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2, Short Papers), 2018: 464-468.
[14] HE P, LIU X, GAO J, et al. DeBERTa: decoding-enhanced BERT with disentangled attention[C]//Proceedings of the 2021 International Conference on Learning Representations, 2021.
[15] KE G, HE D, LIU T Y. Rethinking positional encoding in language pre-training[C]//Proceedings of the 2021 International Conference on Learning Representations, 2021.
[16] GOODFELLOW I, BENGIO Y, COURVILLE A. Deep learning[M]. Cambridge: MIT Press, 2016.
[17] YU F, KOLTUN V, FUNKHOUSER T. Dilated residual networks[C]//Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition, 2017: 472-480.