[1] 赵洋, 王潇, 蔡柠泽, 等. 自动驾驶目标检测不确定性估计方法综述[J]. 汽车工程学报, 2024, 14(5): 760-771.
ZHAO Y, WANG X, CAI N Z, et al. A review of uncertainty estimation methods in autonomous driving object detection[J]. Chinese Journal of Automotive Engineering, 2024, 14(5): 760-771.
[2] 张亚丽, 田启川, 唐超林. 基于事件相机的目标检测算法研究[J]. 计算机工程与应用, 2024, 60(13): 23-35.
ZHANG Y L, TIAN Q C, TANG C L. Review of object detection based on event cameras[J]. Computer Engineering and Applications, 2024, 60(13): 23-35.
[3] JYOTHI D N, REDDY G H, PRASHANTH B, et al. Collaborative training of object detection and re-identification in multi-object tracking using YOLOv8[C]//Proceedings of the 2024 International Conference on Computing and Data Science. Piscataway: IEEE, 2024: 1-6.
[4] FAYAZ S, PARAH S A, QURESHI G J, et al. Intelligent underwater object detection and image restoration for autonomous underwater vehicles[J]. IEEE Transactions on Vehicular Technology, 2024, 73(2): 1726-1735.
[5] 邓天民, 谭思奇, 蒲龙忠. 基于改进YOLOv5s的交通信号灯识别方法[J]. 计算机工程, 2022, 48(9): 55-62.
DENG T M, TAN S Q, PU L Z. Traffic light recognition method based on improved YOLOv5s[J]. Computer Engineering, 2022, 48(9): 55-62.
[6] 邱嘉钰, 张雅声, 方宇强, 等. 基于事件相机的目标检测与跟踪算法综述[J]. 激光与光电子学进展, 2025, 62(4): 42-58.
QIU J Y, ZHANG Y S, FANG Y Q, et al. Review of event camera-based target detection and tracking algorithms[J]. Laser & Optoelectronics Progress, 2025, 62(4): 42-58.
[7] SHARIFF W, DILMAGHANI M S, KIELTY P, et al. Event cameras in automotive sensing: a review[J]. IEEE Access, 2024, 12: 51275-51306.
[8] GOYAL G, DI PIETRO F, CARISSIMI N, et al. MoveEnet: online high-frequency human pose estimation with an event camera[C]//Proceedings of the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops. Piscataway: IEEE, 2023: 4024-4033.
[9] LI J N, LI J, ZHU L, et al. Asynchronous spatio-temporal memory network for continuous event-based object detection[J]. IEEE Transactions on Image Processing, 2022, 31: 2975-2987.
[10] LUO C, WU J H, SUN S X, et al. TransCODNet: underwater transparently camouflaged object detection via RGB and event frames collaboration[J]. IEEE Robotics and Automation Letters, 2024, 9(2): 1444-1451.
[11] ZHANG X Y, GONG Y, LU J L, et al. Multi-modal fusion technology based on vehicle information: a survey[J]. IEEE Transactions on Intelligent Vehicles, 2023, 8(6): 3605-3619.
[12] ZHAO J D, WU D, YU Z X, et al. DRMNet: a multi-task detection model based on image processing for autonomous driving scenarios[J]. IEEE Transactions on Vehicular Technology, 2023, 72(12): 15341-15355.
[13] YANG N, LIU Z W, MA S, et al. Joint intensity and event framework for vehicle detection in degraded conditions[C]//Proceedings of the 2023 7th International Conference on Transportation Information and Safety. Piscataway: IEEE, 2023: 1568-1574.
[14] JIANG Z Y, XIA P F, HUANG K, et al. Mixed frame-/event-driven fast pedestrian detection[C]//Proceedings of the 2019 International Conference on Robotics and Automation. Piscataway: IEEE, 2019: 8332-8338.
[15] LI J N, DONG S W, YU Z F, et al. Event-based vision enhanced: a joint detection framework in autonomous driving[C]//Proceedings of the 2019 IEEE International Conference on Multimedia and Expo. Piscataway: IEEE, 2019: 1396-1401.
[16] CAO H, CHEN G, XIA J H, et al. Fusion-based feature attention gate component for vehicle detection based on event camera[J]. IEEE Sensors Journal, 2021, 21(21): 24540-24548.
[17] LIU M Y, QI N, SHI Y H, et al. An attention fusion network for event-based vehicle object detection[C]//Proceedings of the 2021 IEEE International Conference on Image Processing. Piscataway: IEEE, 2021: 3363-3367.
[18] GEHRIG D, RÜEGG M, GEHRIG M, et al. Combining events and frames using recurrent asynchronous multimodal networks for monocular depth prediction[J]. IEEE Robotics and Automation Letters, 2021, 6(2): 2822-2829.
[19] 郑宇亮, 陈云华, 白伟杰, 等. 融合事件数据和图像帧的车辆目标检测[J]. 计算机应用, 2024, 44(3): 931-937.
ZHENG Y L, CHEN Y H, BAI W J, et al. Vehicle target detection by fusing event data and image frames[J]. Journal of Computer Applications, 2024, 44(3): 931-937.
[20] QIAO G C, NING N, ZUO Y, et al. Spatio-temporal fusion spiking neural network for frame-based and event-based camera sensor fusion[J]. IEEE Transactions on Emerging Topics in Computational Intelligence, 2024, 8(3): 2446-2456.
[21] WANG X, LI J N, ZHU L, et al. VisEvent: reliable object tracking via collaboration of frame and event flows[J]. IEEE Transactions on Cybernetics, 2024, 54(3): 1997-2010.
[22] BAI W J, CHEN Y H, FENG R, et al. Accurate and efficient frame-based event representation for AER object recognition[C]//Proceedings of the 2022 International Joint Conference on Neural Networks. Piscataway: IEEE, 2022: 1-6.
[23] BARCHID S, MENNESSON J, DJÉRABA C. Bina-rep event frames: a simple and effective representation for event-based cameras[C]//Proceedings of the 2022 IEEE International Conference on Image Processing. Piscataway: IEEE, 2022: 3998-4002.
[24] BINAS J, NEIL D, LIU S C, et al. DDD17: end-to-end DAVIS driving dataset[J]. arXiv:1711.01458, 2017.
[25] ZHU A Z, THAKUR D, ÖZASLAN T, et al. The multivehicle stereo event camera dataset: an event camera dataset for 3D perception[J]. IEEE Robotics and Automation Letters, 2018, 3(3): 2032-2039.