Computer Engineering and Applications ›› 2024, Vol. 60 ›› Issue (12): 18-33. DOI: 10.3778/j.issn.1002-8331.2309-0497
• Research Hotspots and Reviews •
Survey of Deep Learning Based Approaches for Gaze Estimation
WEN Mingqi, REN Luqian, CHEN Zhenqin, YANG Zhuo, ZHAN Yinwei (温铭淇, 任路乾, 陈镇钦, 杨卓, 战荫伟)
Online: 2024-06-15
Published: 2024-06-14
WEN Mingqi, REN Luqian, CHEN Zhenqin, YANG Zhuo, ZHAN Yinwei. Survey of Deep Learning Based Approaches for Gaze Estimation[J]. Computer Engineering and Applications, 2024, 60(12): 18-33.
温铭淇, 任路乾, 陈镇钦, 杨卓, 战荫伟. 基于深度学习的视线估计方法综述[J]. 计算机工程与应用, 2024, 60(12): 18-33.
URL: http://cea.ceaj.org/EN/10.3778/j.issn.1002-8331.2309-0497
Related Articles
[1] LI Houjun, WEI Boquan. Attribute Distillation for Zero-Shot Recognition[J]. Computer Engineering and Applications, 2024, 60(9): 219-227.
[2] CHE Yunlong, YUAN Liang, SUN Lihui. 3D Object Detection Based on Strong Semantic Key Point Sampling[J]. Computer Engineering and Applications, 2024, 60(9): 254-260.
[3] QIU Yunfei, WANG Yifan. Multi-Level 3D Point Cloud Completion with Dual-Branch Structure[J]. Computer Engineering and Applications, 2024, 60(9): 272-282.
[4] YE Bin, ZHU Xingshuai, YAO Kang, DING Shangshang, FU Weiwei. Binocular Depth Measurement Method for Desktop Interaction Scene[J]. Computer Engineering and Applications, 2024, 60(9): 283-291.
[5] WANG Cailing, YAN Jingjing, ZHANG Zhidong. Review on Human Action Recognition Methods Based on Multimodal Data[J]. Computer Engineering and Applications, 2024, 60(9): 1-18.
[6] LIAN Lu, TIAN Qichuan, TAN Run, ZHANG Xiaohang. Research Progress of Image Style Transfer Based on Neural Network[J]. Computer Engineering and Applications, 2024, 60(9): 30-47.
[7] YANG Chenxi, ZHUANG Xufei, CHEN Junnan, LI Heng. Review of Research on Bus Travel Trajectory Prediction Based on Deep Learning[J]. Computer Engineering and Applications, 2024, 60(9): 65-78.
[8] SONG Jianping, WANG Yi, SUN Kaiwei, LIU Qilie. Short Text Classification Combined with Hyperbolic Graph Attention Networks and Labels[J]. Computer Engineering and Applications, 2024, 60(9): 188-195.
[9] ZHOU Dingwei, HU Jing, ZHANG Liangrui, DUAN Feiya. Collaborative Correction Technology of Label Omission in Dataset for Object Detection[J]. Computer Engineering and Applications, 2024, 60(8): 267-273.
[10] ZHOU Bojun, CHEN Zhiyu. Survey of Few-Shot Image Classification Based on Deep Meta-Learning[J]. Computer Engineering and Applications, 2024, 60(8): 1-15.
[11] SUN Shilei, LI Ming, LIU Jing, MA Jingang, CHEN Tianzhen. Research Progress on Deep Learning in Field of Diabetic Retinopathy Classification[J]. Computer Engineering and Applications, 2024, 60(8): 16-30.
[12] WANG Weitai, WANG Xiaoqiang, LI Leixiao, TAO Yihao, LIN Hao. Review of Construction and Applications of Spatio-Temporal Graph Neural Network in Traffic Flow Prediction[J]. Computer Engineering and Applications, 2024, 60(8): 31-45.
[13] XIE Weiyu, ZHANG Qiang. Review on Detection of Drones and Birds in Photoelectric Images Based on Deep Learning Convolutional Neural Network[J]. Computer Engineering and Applications, 2024, 60(8): 46-55.
[14] SHEN Haiyun, HUANG Zhongyi, WANG Haichuan, YU Honghao. Improved Tracktor-Based Pedestrian Multi-Objective Tracking Algorithm[J]. Computer Engineering and Applications, 2024, 60(8): 242-249.
[15] CHANG Xilong, LIANG Kun, LI Wentao. Review of Development of Deep Learning Optimizer[J]. Computer Engineering and Applications, 2024, 60(7): 1-12.