Computer Engineering and Applications ›› 2024, Vol. 60 ›› Issue (12): 18-33. DOI: 10.3778/j.issn.1002-8331.2309-0497

Survey of Deep Learning Based Approaches for Gaze Estimation

WEN Mingqi, REN Luqian, CHEN Zhenqin, YANG Zhuo, ZHAN Yinwei

Online: 2024-06-15
Published: 2024-06-14
Abstract: Gaze estimation is a technique for predicting where a person is looking or the direction of their gaze, and it plays an important role in human-computer interaction and computer vision applications. In recent years, the rapid development of deep learning has transformed many computer vision tasks, and appearance-based gaze estimation with deep learning has become a research hotspot. Organized around the training pipeline of deep learning models, this paper surveys and analyzes recent deep learning based gaze estimation methods from four aspects: gaze data preprocessing, gaze feature extraction, gaze learning strategies, and gaze estimation model architectures. It then introduces the mainstream public datasets in the field and compares 2D and 3D gaze estimation methods on the commonly used datasets. Finally, it discusses the current research difficulties and challenges in gaze estimation and summarizes future development trends.
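To ground the appearance-based formulation that the survey is organized around, here is a minimal sketch in PyTorch; all names and layer sizes (GazeRegressor, the toy backbone, the 64×96 crop) are illustrative assumptions, not any specific method from the paper. A small CNN regresses a gaze direction parameterized as (pitch, yaw) from a normalized eye crop, and predictions are scored with the angular error between 3D gaze vectors, the standard metric for comparing 3D methods; 2D methods are instead compared by Euclidean distance to the on-screen gaze point.

```python
# Minimal sketch of appearance-based gaze estimation (illustrative only):
# a CNN maps a normalized face/eye crop to (pitch, yaw), and models are
# compared by the angular error between predicted and true gaze vectors.
import torch
import torch.nn as nn


class GazeRegressor(nn.Module):
    """Toy appearance-based estimator: image -> (pitch, yaw) in radians."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 2)  # regress pitch and yaw

    def forward(self, x):
        return self.head(self.features(x).flatten(1))


def pitchyaw_to_vector(py):
    """(pitch, yaw) -> unit 3D gaze vector, one common normalization convention."""
    pitch, yaw = py[:, 0], py[:, 1]
    return torch.stack([
        -torch.cos(pitch) * torch.sin(yaw),
        -torch.sin(pitch),
        -torch.cos(pitch) * torch.cos(yaw),
    ], dim=1)


def angular_error_deg(pred_py, true_py):
    """Mean angular error in degrees: the standard 3D gaze metric."""
    v1 = pitchyaw_to_vector(pred_py)
    v2 = pitchyaw_to_vector(true_py)
    cos = (v1 * v2).sum(dim=1).clamp(-1.0, 1.0)
    return torch.rad2deg(torch.acos(cos)).mean()


# Usage: score one batch of random crops against dummy labels.
model = GazeRegressor()
imgs = torch.randn(8, 3, 64, 96)   # batch of normalized eye crops
labels = torch.zeros(8, 2)          # ground-truth (pitch, yaw)
err = angular_error_deg(model(imgs), labels)
```

In practice, the methods surveyed replace this toy backbone with deep networks (e.g., ResNet-style models) trained on the public datasets compared in the paper, but the input-to-angles mapping and the angular-error comparison follow this same pattern.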
WEN Mingqi, REN Luqian, CHEN Zhenqin, YANG Zhuo, ZHAN Yinwei. Survey of Deep Learning Based Approaches for Gaze Estimation[J]. Computer Engineering and Applications, 2024, 60(12): 18-33.