[1] 冯亚强, 宋龙, 张公平. 基于粒子滤波的三维转弯目标跟踪方法[J]. 航空兵器, 2022, 29(3): 28-32.
FENG Y Q, SONG L, ZHANG G P. 3D turning target tracking method based on particle filter[J]. Aero Weaponry, 2022, 29(3): 28-32.
[2] 刘向前, 闫娟, 杨慧斌, 等. 基于改进光流法的目标跟踪技术研究[J]. 上海工程技术大学学报, 2021, 35(3): 237-242.
LIU X Q, YAN J, YANG H B, et al. Research on target tracking based on improved optical flow method[J]. Journal of Shanghai University of Engineering Science, 2021, 35(3): 237-242.
[3] 李彪, 孙瑾, 李星达, 等. 自适应特征融合的相关滤波跟踪算法[J]. 计算机工程与应用, 2022, 58(9): 208-218.
LI B, SUN J, LI X D, et al. Correlation filter target tracking based on adaptive multi-feature fusion[J]. Computer Engineering and Applications, 2022, 58(9): 208-218.
[4] 曹雯雯, 康彬, 颜俊, 等. 面向多源数据融合的稀疏表示目标跟踪[J]. 计算机工程与应用, 2019, 55(6): 1-7.
CAO W W, KANG B, YAN J, et al. Sparse representation target tracking via multi-source data fusion[J]. Computer Engineering and Applications, 2019, 55(6): 1-7.
[5] DOSOVITSKIY A, BEYER L, KOLESNIKOV A, et al. An image is worth 16x16 words: Transformers for image recognition at scale[J]. arXiv:2010.11929, 2020.
[6] LIN L T, FAN H, ZHANG Z P, et al. SwinTrack: a simple and strong baseline for transformer tracking[C]//Advances in Neural Information Processing Systems, 2022, 35: 16743-16754.
[7] XU Y D, WANG Z Y, LI Z X, et al. SiamFC++: towards robust and accurate visual tracking with target estimation guidelines[C]//Proceedings of the AAAI Conference on Artificial Intelligence, 2020, 34(7): 12549-12556.
[8] ONDRAŠOVIČ M, TARÁBEK P. Siamese visual object tracking: a survey[J]. IEEE Access, 2021, 9: 110149-110172.
[9] CHEN X, YAN B, ZHU J W, et al. Transformer tracking[C]//Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2021: 8126-8135.
[10] WANG N, ZHOU W G, WANG J, et al. Transformer meets tracker: exploiting temporal context for robust visual tracking[C]//Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2021: 1571-1580.
[11] ZHANG L C, DANELLJAN M, GONZALEZ-GARCIA A, et al. Multi-modal fusion for end-to-end RGB-T tracking[C]//Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision Workshop. Piscataway: IEEE, 2019: 2252-2261.
[12] BHAT G, DANELLJAN M, VAN GOOL L, et al. Learning discriminative model prediction for tracking[C]//Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision. Piscataway: IEEE, 2019: 6182-6191.
[13] ZHANG T L, LIU X R, ZHANG Q, et al. SiamCDA: complementarity- and distractor-aware RGB-T tracking based on Siamese network[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2022, 32(3): 1403-1417.
[14] LI C L, LIU L, LU A D, et al. Challenge-aware RGBT tracking[C]//Proceedings of the European Conference on Computer Vision. Cham: Springer, 2020: 222-237.
[15] XIAO Y, YANG M M, LI C L, et al. Attribute-based progressive fusion network for RGBT tracking[C]//Proceedings of the AAAI Conference on Artificial Intelligence, 2022, 36(3): 2831-2838.
[16] ZHANG Q, YANG Y B. ResT: an efficient transformer for visual recognition[C]//Advances in Neural Information Processing Systems, 2021, 34: 15475-15485.
[17] TOLSTIKHIN I O, HOULSBY N, KOLESNIKOV A, et al. MLP-Mixer: an all-MLP architecture for vision[C]//Advances in Neural Information Processing Systems, 2021, 34: 24261-24272.
[18] CHEN S, GE C, TONG Z, et al. AdaptFormer: adapting vision transformers for scalable visual recognition[C]//Advances in Neural Information Processing Systems, 2022, 35: 16664-16678.
[19] HUI T R, XUN Z Z, PENG F G, et al. Bridging search region interaction with template for RGB-T tracking[C]//Proceedings of the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2023: 13630-13639.
[20] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[C]//Advances in Neural Information Processing Systems, 2017, 30.
[21] ULYANOV D, VEDALDI A, LEMPITSKY V. Instance normalization: the missing ingredient for fast stylization[J]. arXiv:1607.08022, 2016.
[22] LI C L, XUE W L, JIA Y Q, et al. LasHeR: a large-scale high-diversity benchmark for RGBT tracking[J]. IEEE Transactions on Image Processing, 2022, 31: 392-404.
[23] LI C L, LIANG X Y, LU Y J, et al. RGB-T object tracking: benchmark and baseline[J]. Pattern Recognition, 2019, 96: 106977.
[24] ZHU Y B, LI C L, LUO B, et al. Dense feature aggregation and pruning for RGBT tracking[C]//Proceedings of the 27th ACM International Conference on Multimedia. New York: ACM, 2019: 465-472.
[25] GAO Y, LI C L, ZHU Y B, et al. Deep adaptive fusion network for high performance RGBT tracking[C]//Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision Workshop. Piscataway: IEEE, 2019: 91-99.
[26] LI C L, LU A D, ZHENG A H, et al. Multi-adapter RGBT tracking[C]//Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision Workshop. Piscataway: IEEE, 2019: 2262-2270.
[27] LU A D, LI C L, YAN Y Q, et al. RGBT tracking via multi-adapter network with hierarchical divergence loss[J]. IEEE Transactions on Image Processing, 2021, 30: 5613-5625.
[28] ZHANG H, ZHANG L, ZHUO L, et al. Object tracking in RGB-T videos using modal-aware attention network and competitive learning[J]. Sensors, 2020, 20(2): 393.
[29] LU A D, QIAN C, LI C L, et al. Duality-gated mutual condition network for RGBT tracking[J]. IEEE Transactions on Neural Networks and Learning Systems, 2022.
[30] ZHAI S L, WU Y, LIU L, et al. RGBT tracking based on modality feature enhancement[J]. Multimedia Tools and Applications, 2024, 83(10): 29311-29330.
[31] HOU R C, XU B Y, REN T W, et al. MTNet: learning modality-aware representation with transformer for RGBT tracking[C]//Proceedings of the 2023 IEEE International Conference on Multimedia and Expo. Piscataway: IEEE, 2023: 1163-1168.
[32] LIU L, LI C L, XIAO Y, et al. Quality-aware RGBT tracking via supervised reliability learning and weighted residual guidance[C]//Proceedings of the 31st ACM International Conference on Multimedia. New York: ACM, 2023: 3129-3137.
[33] LUO Y, GUO X Q, DONG M T, et al. Learning modality complementary features with mixed attention mechanism for RGB-T tracking[J]. Sensors, 2023, 23(14): 6609.
[34] WANG X, SHU X J, ZHANG S L, et al. MFGNet: dynamic modality-aware filter generation for RGB-T tracking[J]. IEEE Transactions on Multimedia, 2023, 25: 4335-4348.
[35] ZHANG P Y, ZHAO J, BO C J, et al. Jointly modeling motion and appearance cues for robust RGB-T tracking[J]. IEEE Transactions on Image Processing, 2021, 30: 3335-3347.
[36] ZHAO Y J, LAI H C, GAO G X. RMFNet: redetection multimodal fusion network for RGBT tracking[J]. Applied Sciences, 2023, 13(9): 5793.
[37] TU Z Z, LIN C, ZHAO W, et al. M5L: multi-modal multi-margin metric learning for RGBT tracking[J]. IEEE Transactions on Image Processing, 2022, 31: 85-98.
[38] ZHANG P Y, WANG D, LU H C, et al. Learning adaptive attribute-driven representation for real-time RGB-T tracking[J]. International Journal of Computer Vision, 2021, 129(9): 2714-2729.
[39] XIAO X B, XIONG X Z, MENG F Q, et al. Multi-scale feature interactive fusion network for RGBT tracking[J]. Sensors, 2023, 23(7): 3410.
[40] WANG C Q, XU C Y, CUI Z, et al. Cross-modal pattern-propagation for RGB-T tracking[C]//Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2020: 7064-7073.