[1] 严飞, 马可, 刘佳, 等. 无人机目标实时自适应跟踪系统[J]. 计算机工程与应用, 2022, 58(10): 178-184.
YAN F, MA K, LIU J, et al. UAV target real-time adaptive tracking system[J]. Computer Engineering and Applications, 2022, 58(10): 178-184.
[2] 苑玉彬, 吴一全, 赵朗月, 等. 基于深度学习的无人机航拍视频多目标检测与跟踪研究进展[J]. 航空学报, 2023, 44(18): 6-36.
YUAN Y B, WU Y Q, ZHAO L Y, et al. Research progress of UAV aerial video multi-object detection and tracking based on deep learning[J]. Acta Aeronautica et Astronautica Sinica, 2023, 44(18): 6-36.
[3] ZHOU K, XIANG T. Torchreid: a library for deep learning person re-identification in PyTorch[J]. arXiv:1910.10093, 2019.
[4] 单兆晨, 黄丹丹, 耿振野, 等. 免锚检测的行人多目标跟踪算法[J]. 计算机工程与应用, 2022, 58(10): 145-152.
SHAN Z C, HUANG D D, GENG Z Y, et al. Pedestrian multi-object tracking algorithm of anchor-free detection[J]. Computer Engineering and Applications, 2022, 58(10): 145-152.
[5] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[C]//Advances in Neural Information Processing Systems, 2017.
[6] BEWLEY A, GE Z, OTT L, et al. Simple online and realtime tracking[C]//2016 IEEE International Conference on Image Processing (ICIP), 2016: 3464-3468.
[7] KALMAN R E. A new approach to linear filtering and prediction problems[J]. Journal of Basic Engineering, 1960, 82(1): 35-45.
[8] AKSHITHA S, KUMAR K S A, NETHRITHAMEDA M, et al. Implementation of hungarian algorithm to obtain optimal solution for travelling salesman problem[C]//2018 IEEE International Conference on Recent Trends in Electronics, Information & Communication Technology (RTEICT), 2018: 2470-2474.
[9] WOJKE N, BEWLEY A, PAULUS D. Simple online and realtime tracking with a deep association metric[C]//2017 IEEE International Conference on Image Processing (ICIP), 2017: 3645-3649.
[10] WANG Z, ZHENG L, LIU Y, et al. Towards real-time multi-object tracking[C]//European Conference on Computer Vision. Cham: Springer International Publishing, 2020: 107-122.
[11] ZHANG Y, WANG C, WANG X, et al. Fairmot: on the fairness of detection and re-identification in multiple object tracking[J]. International Journal of Computer Vision, 2021, 129: 3069-3087.
[12] DU Y, ZHAO Z, SONG Y, et al. Strongsort: make deepsort great again[J]. IEEE Transactions on Multimedia, 2023, 25: 8725-8737.
[13] AHARON N, ORFAIG R, BOBROVSKY B Z. BoT-SORT: robust associations multi-pedestrian tracking[J]. arXiv:2206.14651, 2022.
[14] ZHANG Y, SUN P, JIANG Y, et al. Bytetrack: multi-object tracking by associating every detection box[C]//European Conference on Computer Vision. Cham: Springer, 2022: 1-21.
[15] CAO J, PANG J, WENG X, et al. Observation-centric sort: rethinking sort for robust multi-object tracking[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023: 9686-9696.
[16] LI M, ZHAI D, YANG D, et al. BVTracker: multi-vehicle tracking based on behavioural-visual features[J]. IEEE Sensors Journal, 2023, 23(11): 11815-11824.
[17] GRAVES A. Long short-term memory[M]//Supervised sequence labelling with recurrent neural networks. [S.l.]: Springer, 2012: 37-45.
[18] LI Y, FAN Q, HUANG H, et al. A modified YOLOv8 detection network for UAV aerial image recognition[J]. Drones, 2023, 7(5): 304.
[19] ZHU L, WANG X, KE Z, et al. BiFormer: vision transformer with bi-level routing attention[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023: 10323-10333.
[20] WANG J, CHEN K, XU R, et al. Carafe: content-aware reassembly of features[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019: 3007-3016.
[21] 李震霄, 孙伟, 刘明明, 等. 交通监控场景中的车辆检测与跟踪算法研究[J]. 计算机工程与应用, 2021, 57(8): 103-111.
LI Z X, SUN W, LIU M M, et al. Research on vehicle detection and tracking algorithms in traffic monitoring scenes[J]. Computer Engineering and Applications, 2021, 57(8): 103-111.
[22] FENG C, ZHONG Y, GAO Y, et al. Tood: task-aligned one-stage object detection[C]//2021 IEEE/CVF International Conference on Computer Vision (ICCV), 2021: 3490-3499.
[23] 刘文婷, 卢新明. 基于计算机视觉的Transformer研究进展[J]. 计算机工程与应用, 2022, 58(6): 1-16.
LIU W T, LU X M. Research progress of Transformer based on computer vision[J]. Computer Engineering and Applications, 2022, 58(6): 1-16.
[24] LIU Z, LIN Y, CAO Y, et al. Swin transformer: hierarchical vision transformer using shifted windows[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021: 10012-10022.
[25] WANG W, YAO L, CHEN L, et al. CrossFormer: a versatile vision transformer hinging on cross-scale attention[J]. arXiv:2108.00154, 2021.
[26] TU Z, TALEB H, ZHANG H, et al. Maxvit: multi-axis vision transformer[C]//European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2022: 459-479.
[27] CHEN Z, ZHU Y, ZHAO C, et al. Dpt: deformable patch-based transformer for visual recognition[C]//Proceedings of the 29th ACM International Conference on Multimedia, 2021: 2899-2907.
[28] XIA Z, PAN X, SONG S, et al. Vision transformer with deformable attention[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022: 4794-4803.
[29] WEN L, DU D, CAI Z, et al. UA-DETRAC: a new benchmark and protocol for multi-object detection and tracking[J]. Computer Vision and Image Understanding, 2020, 193: 102907.
[30] DU D, QI Y, YU H, et al. The unmanned aerial vehicle benchmark: object detection and tracking[C]//Proceedings of the European Conference on Computer Vision (ECCV), 2018: 370-386.
[31] MILAN A, LEAL-TAIXÉ L, REID I, et al. MOT16: a benchmark for multi-object tracking[J]. arXiv:1603.00831, 2016.
[32] LUITEN J, OSEP A, DENDORFER P, et al. Hota: a higher order metric for evaluating multi-object tracking[J]. International Journal of Computer Vision, 2021, 129: 548-578.
[33] 毕鹏程, 罗健欣, 陈卫卫. 轻量化卷积神经网络技术研究[J]. 计算机工程与应用, 2019, 55(16): 25-35.
BI P C, LUO J X, CHEN W W. Research on lightweight convolutional neural network technology[J]. Computer Engineering and Applications, 2019, 55(16): 25-35.
[34] 叶子勋, 张红英. YOLOv4口罩检测算法的轻量化改进[J]. 计算机工程与应用, 2021, 57(17): 157-168.
YE Z X, ZHANG H Y. Lightweight improvement of YOLOv4 mask detection algorithm[J]. Computer Engineering and Applications, 2021, 57(17): 157-168.