Computer Engineering and Applications ›› 2024, Vol. 60 ›› Issue (13): 66-80. DOI: 10.3778/j.issn.1002-8331.2310-0290
CHEN Jianwei, YU Lu, HAN Changzhi, LI Lin
Online: 2024-07-01
Published: 2024-07-01
Abstract: As an important branch of transfer learning, domain adaptation aims to address the sharp performance drop that traditional machine learning algorithms suffer when training and test samples follow different data distributions. The Transformer is a deep learning architecture built on the self-attention mechanism, with strong global feature extraction and modeling capabilities, and combining Transformers with domain adaptation has become a research hotspot in recent years. Although many such methods have been published, no survey of Transformer applications in domain adaptation has yet been reported. To fill this gap and provide a reference for related research, this paper summarizes and analyzes typical Transformer-based domain adaptation methods that have emerged in recent years. It outlines the core concepts of domain adaptation and the basic structure of the Transformer; surveys Transformer-based domain adaptation methods across four application areas: image classification, image semantic segmentation, object detection, and medical image analysis; compares the domain adaptation methods for image classification; and summarizes the current challenges of domain adaptation Transformer models while discussing promising future research directions.
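The self-attention mechanism the abstract refers to, which gives the Transformer its global receptive field (every token attends to every other token), can be illustrated with a minimal single-head sketch. This is an illustrative NumPy implementation under standard assumptions; the names `Wq`, `Wk`, `Wv` are hypothetical projection matrices, not taken from the paper:

```python
import numpy as np

def scaled_dot_product_attention(X, Wq, Wk, Wv):
    """Single-head self-attention: each token's output is a weighted
    mix of ALL tokens, which is the 'global' modeling the abstract notes."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)           # (n_tokens, n_tokens) similarities
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V                        # one updated vector per token

rng = np.random.default_rng(0)
n, d = 4, 8                                   # 4 tokens, model width 8
X = rng.normal(size=(n, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out = scaled_dot_product_attention(X, Wq, Wk, Wv)
print(out.shape)  # (4, 8)
```

In domain-adaptation settings, methods such as cross-attention variants reuse this same operation but let queries come from one domain and keys/values from the other, aligning source and target features.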
CHEN Jianwei, YU Lu, HAN Changzhi, LI Lin. Review of Research on Application of Transformer in Domain Adaptation[J]. Computer Engineering and Applications, 2024, 60(13): 66-80.