[1] BHAT G, JOHNANDER J, DANELLJAN M, et al. Unveiling the power of deep tracking[C]//Proceedings of the European Conference on Computer Vision. Cham: Springer International Publishing, 2018: 493-509.
[2] XING J L, AI H Z, LAO S H. Multiple human tracking based on multi-view upper-body detection and discriminative learning[C]//Proceedings of the 20th International Conference on Pattern Recognition. Piscataway: IEEE, 2010: 1698-1701.
[3] ZHANG G C, VELA P A. Good features to track for visual SLAM[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2015: 1373-1382.
[4] BOZEK K, HEBERT L, MIKHEYEV A S, et al. Towards dense object tracking in a 2D honeybee hive[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2018: 4185-4193.
[5] BERTINETTO L, VALMADRE J, GOLODETZ S, et al. Staple: complementary learners for real-time tracking[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2016: 1401-1409.
[6] BERTINETTO L, VALMADRE J, HENRIQUES J F, et al. Fully-convolutional Siamese networks for object tracking[C]//Proceedings of the European Conference on Computer Vision. Cham: Springer International Publishing, 2016: 850-865.
[7] LI B, WU W, WANG Q, et al. SiamRPN++: evolution of Siamese visual tracking with very deep networks[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2019: 4277-4286.
[8] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[J]. arXiv:1706.03762, 2017.
[9] YE B T, CHANG H, MA B P, et al. Joint feature learning and relation modeling for tracking: a one-stream framework[C]//Proceedings of the European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2022: 341-357.
[10] YAN B, PENG H W, FU J L, et al. Learning spatio-temporal transformer for visual tracking[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. Piscataway: IEEE, 2021: 10428-10437.
[11] LI B, YAN J J, WU W, et al. High performance visual tracking with Siamese region proposal network[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2018: 8971-8980.
[12] ZHANG Z P. Ocean: object-aware anchor-free tracking[C]//Proceedings of the European Conference on Computer Vision. Cham: Springer International Publishing, 2020: 771-787.
[13] DANELLJAN M, BHAT G, KHAN F S, et al. ATOM: accurate tracking by overlap maximization[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2019: 4655-4664.
[14] SANDLER M, HOWARD A, ZHU M L, et al. MobileNetV2: inverted residuals and linear bottlenecks[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2018: 4510-4520.
[15] WANG Q L, WU B G, ZHU P F, et al. ECA-Net: efficient channel attention for deep convolutional neural networks[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2020: 11531-11539.
[16] TAN M X, LE Q V. EfficientNet: rethinking model scaling for convolutional neural networks[J]. arXiv:1905.11946, 2019.
[17] ZHANG R. Making convolutional networks shift-invariant again[J]. arXiv:1904.11486, 2019.
[18] 杨晓强, 刘文昊. 融合低通滤波器的孪生网络跟踪算法[J]. 计算机工程与应用, 2023, 59(23): 237-245.
YANG X Q, LIU W H. Siamese network tracking algorithm of fused low pass filter[J]. Computer Engineering and Applications, 2023, 59(23): 237-245.
[19] ZHANG H Y, WANG Y, DAYOUB F, et al. VarifocalNet: an IoU-aware dense object detector[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2021: 8510-8519.
[20] HUANG L H, ZHAO X, HUANG K Q. GOT-10k: a large high-diversity benchmark for generic object tracking in the wild[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021, 43(5): 1562-1577.
[21] LI B, WU W, WANG Q, et al. SiamRPN++: evolution of Siamese visual tracking with very deep networks[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2019: 4277-4286.
[22] CHEN X, YAN B, ZHU J W, et al. Transformer tracking[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2021: 8122-8131.
[23] CUI Y T, JIANG C, WANG L M, et al. MixFormer: end-to-end tracking with iterative mixed attention[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2022: 13598-13608.
[24] LIN L T, FAN H, XU Y, et al. SwinTrack: a simple and strong baseline for transformer tracking[J]. arXiv:2112.00995, 2021.
[25] GAO S Y, ZHOU C L, MA C, et al. AiATrack: attention in attention for Transformer visual tracking[C]//Proceedings of the European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2022: 146-164.
[26] WU Y, LIM J, YANG M H. Object tracking benchmark[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015, 37(9): 1834-1848.
[27] MUELLER M, SMITH N, GHANEM B. A benchmark and simulator for UAV tracking[C]//Proceedings of the European Conference on Computer Vision. Cham: Springer International Publishing, 2016: 445-461.
[28] BHAT G, DANELLJAN M, VAN GOOL L, et al. Learning discriminative model prediction for tracking[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. Piscataway: IEEE, 2019: 6181-6190.
[29] YAN B, PENG H W, WU K, et al. LightTrack: finding lightweight neural networks for object tracking via one-shot architecture search[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2021: 15175-15184.