[1] RISTEA N C, MADAN N, IONESCU R T, et al. Self-supervised predictive convolutional attentive block for anomaly detection[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022: 13576-13586.
[2] LIU W, CHANG H, MA B, et al. Diversity-measurable anomaly detection[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023: 12147-12156.
[3] PANG G, SHEN C, CAO L, et al. Deep learning for anomaly detection: a review[J]. ACM Computing Surveys (CSUR), 2021, 54(2): 1-38.
[4] CHALAPATHY R, CHAWLA S. Deep learning for anomaly detection: a survey[J]. arXiv:1901.03407, 2019.
[5] 张晓平, 纪佳慧, 王力, 等. 基于视频的人体异常行为识别与检测方法综述[J]. 控制与决策, 2022, 37(1): 14-27.
ZHANG X P, JI J H, WANG L, et al. Overview of video based human abnormal behavior recognition and detection methods[J]. Control and Decision, 2022, 37(1): 14-27.
[6] REISS T, HOSHEN Y. Attribute-based representations for accurate and interpretable video anomaly detection[J]. arXiv:2212.00789, 2022.
[7] PARK H, NOH J, HAM B. Learning memory-guided normality for anomaly detection[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020: 14372-14381.
[8] RAVANBAKHSH M, NABI M, SANGINETO E, et al. Abnormal event detection in videos using generative adversarial nets[C]//2017 IEEE International Conference on Image Processing (ICIP), 2017: 1577-1581.
[9] LV H, CHEN C, CUI Z, et al. Learning normal dynamics in videos with meta prototype network[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021: 15425-15434.
[10] HASAN M, CHOI J, NEUMANN J, et al. Learning temporal regularity in video sequences[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016: 733-742.
[11] LI Y J, DAI Z J. Abnormal behavior detection in crowd scene using YOLO and Conv-AE[C]//2021 33rd Chinese Control and Decision Conference (CCDC), 2021: 1720-1725.
[12] LIU W, LUO W, LIAN D, et al. Future frame prediction for anomaly detection—a new baseline[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018: 6536-6545.
[13] DONG F, ZHANG Y, NIE X. Dual discriminator generative adversarial network for video anomaly detection[J]. IEEE Access, 2020, 8: 88170-88176.
[14] ZAHEER M Z, MAHMOOD A, KHAN M H, et al. Generative cooperative learning for unsupervised video anomaly detection[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022: 14744-14754.
[15] LEE S, KIM H G, RO Y M. BMAN: bidirectional multi-scale aggregation networks for abnormal event detection[J]. IEEE Transactions on Image Processing, 2019, 29: 2395-2408.
[16] ULLAH W, HUSSAIN T, ULLAH F U M, et al. TransCNN: hybrid CNN and transformer mechanism for surveillance anomaly detection[J]. Engineering Applications of Artificial Intelligence, 2023, 123: 106173.
[17] GEORGESCU M I, BARBALAU A, IONESCU R T, et al. Anomaly detection in video via self-supervised and multi-task learning[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021: 12742-12752.
[18] JI H, ZENG X, LI H, et al. Human abnormal behavior detection method based on T-TINY-YOLO[C]//Proceedings of the 5th International Conference on Multimedia and Image Processing, 2020: 1-5.
[19] 徐守坤, 顾佳楠, 庄丽华, 等. 基于两阶段计算Transformer的小目标检测[J]. 计算机科学与探索, 2023, 17(12): 2967-2983.
XU S K, GU J N, ZHUANG L H, et al. Small object detection based on two-stage calculation transformer[J]. Journal of Frontiers of Computer Science and Technology, 2023, 17(12): 2967-2983.
[20] GLENN J, AYUSH C, LAUGHING, et al. ultralytics/ultralytics-main[EB/OL]. [2023-01-10]. https://github.com/ultralytics/ultralytics.
[21] YANG L, ZHANG R Y, LI L, et al. SimAM: a simple, parameter-free attention module for convolutional neural networks[C]//International Conference on Machine Learning, 2021: 11863-11874.
[22] TONG Z, CHEN Y, XU Z, et al. Wise-IoU: bounding box regression loss with dynamic focusing mechanism[J]. arXiv:2301.10051, 2023.
[23] HE K, ZHANG X, REN S, et al. Spatial pyramid pooling in deep convolutional networks for visual recognition[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015, 37(9): 1904-1916.
[24] WANG C Y, LIAO H Y M, WU Y H, et al. CSPNet: a new backbone that can enhance learning capability of CNN[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2020: 390-391.
[25] CHEN J, KAO S, HE H, et al. Run, don’t walk: chasing higher FLOPS for faster neural networks[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023: 12021-12031.
[26] MAHADEVAN V, LI W, BHALODIA V, et al. Anomaly detection in crowded scenes[C]//2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2010: 1975-1981.
[27] TAO R, WEI Y, LI H, et al. Over-sampling de-occlusion attention network for prohibited items detection in noisy X-ray images[J]. arXiv:2103.00809, 2021.
[28] LIU W, ANGUELOV D, ERHAN D, et al. SSD: single shot multibox detector[C]//Proceedings of the 14th European Conference on Computer Vision, 2016: 21-37.
[29] REN S, HE K, GIRSHICK R, et al. Faster R-CNN: towards real-time object detection with region proposal networks[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(6): 1137-1149.
[30] ZHENG Z, YE R, WANG P, et al. Localization distillation for dense object detection[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022: 9407-9416.
[31] YANG Z, LI Z, JIANG X, et al. Focal and global knowledge distillation for detectors[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022: 4643-4652.
[32] CHEN F, ZHANG H, HU K, et al. Enhanced training of query-based object detection via selective query recollection[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023: 23756-23765.
[33] GLENN J, AYUSH C, JIRKA B, et al. ultralytics/yolov5: v6.1-YOLOv5[EB/OL]. [2022-02-22]. https://github.com/ultralytics/yolov5.
[34] WANG C Y, BOCHKOVSKIY A, LIAO H Y M. YOLOv7: trainable bag-of-freebies sets new state-of-the-art for real-time object detectors[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023: 7464-7475.
[35] TAO R, WEI Y, JIANG X, et al. Towards real-world X-ray security inspection: a high-quality benchmark and lateral inhibition module for prohibited items detection[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021: 10923-10932.
[36] WANG M, DU H, MEI W, et al. Material-aware cross-channel interaction attention (MCIA) for occluded prohibited item detection[J]. The Visual Computer, 2023, 39: 2865-2877.
[37] WEI Y, WANG Y, SONG H. CFPA-Net: cross-layer feature fusion and parallel attention network for detection and classification of prohibited items in X-ray baggage images[C]//2021 IEEE 7th International Conference on Cloud Computing and Intelligent Systems (CCIS), 2021: 203-207.
[38] 董乙杉, 郭靖圆, 李明泽, 等. 基于反向瓶颈和LCBAM设计的X光违禁品检测[J]. 计算机科学与探索, 2024, 18(5): 1259-1270.
DONG Y S, GUO J Y, LI M Z, et al. X-ray prohibited items detection based on inverted bottleneck and light convolution block attention module[J]. Journal of Frontiers of Computer Science and Technology, 2024, 18(5): 1259-1270.
[39] 孙嘉傲, 董乙杉, 郭靖圆, 等. 自适应与多尺度特征融合的X光违禁品检测[J]. 计算机工程与应用, 2024, 60(2): 96-102.
SUN J A, DONG Y S, GUO J Y, et al. Detection of X-ray contraband by adaptive and multi-scale feature fusion[J]. Computer Engineering and Applications, 2024, 60(2): 96-102.
[40] ZHENG Z, WANG P, LIU W, et al. Distance-IoU loss: faster and better learning for bounding box regression[C]//Proceedings of the AAAI Conference on Artificial Intelligence, 2020: 12993-13000.
[41] ZHANG Y F, REN W, ZHANG Z, et al. Focal and efficient IOU loss for accurate bounding box regression[J]. Neurocomputing, 2022, 506: 146-157.
[42] GEVORGYAN Z. SIoU loss: more powerful learning for bounding box regression[J]. arXiv:2205.12740, 2022.