Computer Engineering and Applications, 2024, Vol. 60, Issue (13): 66-80. DOI: 10.3778/j.issn.1002-8331.2310-0290
• Research Hotspots and Reviews •
Review of Research on Application of Transformer in Domain Adaptation
CHEN Jianwei, YU Lu, HAN Changzhi, LI Lin
Online: 2024-07-01
Published: 2024-07-01
CHEN Jianwei, YU Lu, HAN Changzhi, LI Lin. Review of Research on Application of Transformer in Domain Adaptation[J]. Computer Engineering and Applications, 2024, 60(13): 66-80.
URL: http://cea.ceaj.org/EN/10.3778/j.issn.1002-8331.2310-0290