Computer Engineering and Applications ›› 2024, Vol. 60 ›› Issue (4): 57-74. DOI: 10.3778/j.issn.1002-8331.2305-0102
• Research Hotspots and Reviews •
Survey of Neural Machine Translation
ZHANG Junjin, TIAN Yonghong, SONG Zheyu, HAO Yufeng
Online: 2024-02-15
Published: 2024-02-15
章钧津,田永红,宋哲煜,郝宇峰
ZHANG Junjin, TIAN Yonghong, SONG Zheyu, HAO Yufeng. Survey of Neural Machine Translation[J]. Computer Engineering and Applications, 2024, 60(4): 57-74.
章钧津, 田永红, 宋哲煜, 郝宇峰. 神经机器翻译综述[J]. 计算机工程与应用, 2024, 60(4): 57-74.
URL: http://cea.ceaj.org/EN/10.3778/j.issn.1002-8331.2305-0102
• Related Articles •
[1] ZHANG Huiyun, HUANG Heming. Speech Emotion Recognition for Imbalanced Datasets[J]. Computer Engineering and Applications, 2024, 60(4): 122-132.
[2] SONG Yu, WANG Banghai, CAO Ganggang. Cross-Modality Person Re-identification Combined with Data Augmentation and Feature Fusion[J]. Computer Engineering and Applications, 2024, 60(4): 133-141.
[3] YANG Wei, ZHONG Mingfeng, YANG Gen, HOU Zhicheng, WANG Weijun, YUAN Hai. Few Samples Data Augmentation Method Based on NVAE and OB-Mix[J]. Computer Engineering and Applications, 2024, 60(2): 103-112.
[4] CHEN Jingxia, TANG Zhezhe, LIN Wentao, HU Kailei, XIE Jia. Self-Attention GAN for EEG Data Augmentation and Emotion Recognition[J]. Computer Engineering and Applications, 2023, 59(5): 160-168.
[5] DING Kai, YANG Jiaxi, YANG Yao, NA Chongning. Multi-Label Car Damage Image Generation Based on Few Shot StyleGAN[J]. Computer Engineering and Applications, 2023, 59(23): 202-210.
[6] LI Conglin, WANG Qibing, LU Jiawei, ZHAO Guojun, HU Hao, XIAO Gang. Modeling and Recognition Method of Elevator Passenger Abnormal Behavior Based on Digital Twin[J]. Computer Engineering and Applications, 2023, 59(19): 274-284.
[7] WANG Xinpeng, WANG Xiaoqiang, LIN Hao, LI Leixiao, LI Kecen, TAO Yihao. Driver’s Mobile Phone Usage Detection Model: Optimizing YOLOv5n Algorithm[J]. Computer Engineering and Applications, 2023, 59(18): 129-136.
[8] ZHANG Jialin, Mairidan Wushouer, Gulanbaier Tuerhong. Review of Speech Synthesis Methods Under Low-Resource Condition[J]. Computer Engineering and Applications, 2023, 59(15): 1-16.
[9] LI Shuo, GU Yijun, TAN Hao, PENG Shufan. Research on Voiceprint Adversarial Detection of Improved Xception Network[J]. Computer Engineering and Applications, 2023, 59(14): 232-241.
[10] LIU Tao, DING Xueyan, ZHANG Bingbing, ZHANG Jianxin. Improved YOLOv5 for Remote Sensing Image Detection[J]. Computer Engineering and Applications, 2023, 59(10): 253-261.
[11] DENG Xue, ZHAO Hao, ZHANG Jing, MEI Boping, ZHANG Hua. Research on Offline Data Augmentation Method Jointed with Cannikin’s Law[J]. Computer Engineering and Applications, 2023, 59(1): 207-212.
[12] Alim Samat, Sirajahmat Ruzmamat, Maihefureti, Aishan Wumaier, Wushuer Silamu, Turgun Ebrayim. Research on Sentence Length Sensitivity in Neural Network Machine Translation[J]. Computer Engineering and Applications, 2022, 58(9): 195-200.
[13] CHEN Yidong, LU Zhonghua. Forecasting CPI Based on Convolutional Neural Network and Long Short-Term Memory Network[J]. Computer Engineering and Applications, 2022, 58(9): 256-262.
[14] SUN Xiaodong, YANG Dongqiang. Review of Application of Data Augmentation Strategy in English Grammar Error Correction[J]. Computer Engineering and Applications, 2022, 58(7): 43-54.
[15] ZHANG Ming, LU Qinghua, HUANG Yuanzhong, LI Ruixuan. Recent Advances and Challenges on Grammatical Error Correction in Natural Language[J]. Computer Engineering and Applications, 2022, 58(6): 29-41.