[1] JAIN S. DeepSeaNet: improving underwater object detection using EfficientDet[J]. arXiv:2306.06075, 2023.
[2] WANG S, WU W, WANG X, et al. Underwater optical image object detection based on YOLOv7 algorithm[C]//Proceedings of the OCEANS 2023-Limerick, 2023: 1-5.
[3] WANG G, HWANG J N, WILLIAMS K, et al. Closed-loop tracking-by-detection for ROV-based multiple fish tracking[C]//Proceedings of the 2016 ICPR 2nd Workshop on Computer Vision for Analysis of Underwater Imagery (CVAUI), 2016: 7-12.
[4] LIN H Y, TSENG S L, LI J Y. SUR-Net: a deep network for fish detection and segmentation with limited training data[J]. IEEE Sensors Journal, 2022, 22(18): 18035-18044.
[5] LI Z, MUREZ Z, KRIEGMAN D, et al. Learning to see through turbulent water[C]//Proceedings of the 2018 IEEE Winter Conference on Applications of Computer Vision (WACV), 2018: 512-520.
[6] QIAN Y, ZHENG Y, GONG M, et al. Simultaneous 3D reconstruction for water surface and underwater scene[C]//Proceedings of the European Conference on Computer Vision (ECCV), 2018: 754-770.
[7] XIONG J, HEIDRICH W. In-the-wild single camera 3D reconstruction through moving water surfaces[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021: 12558-12567.
[8] MILDENHALL B, SRINIVASAN P, TANCIK M, et al. NeRF: representing scenes as neural radiance fields for view synthesis[J]. Communications of the ACM, 2021, 65(1): 99-106.
[9] CURLESS B, LEVOY M. A volumetric method for building complex models from range images[C]//Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques, 1996: 303-312.
[10] MÜLLER T, EVANS A, SCHIED C, et al. Instant neural graphics primitives with a multiresolution hash encoding[J]. ACM Transactions on Graphics (TOG), 2022, 41(4): 1-15.
[11] CHEN A, XU Z, GEIGER A, et al. TensoRF: tensorial radiance fields[C]//Proceedings of the European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2022: 333-350.
[12] BARRON J T, MILDENHALL B, VERBIN D, et al. Mip-NeRF 360: unbounded anti-aliased neural radiance fields[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022: 5470-5479.
[13] GARBIN S J, KOWALSKI M, JOHNSON M, et al. FastNeRF: high-fidelity neural rendering at 200FPS[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021: 14346-14355.
[14] 马汉声, 祝玉华, 李智慧, 等. 神经辐射场多视图合成技术综述[J]. 计算机工程与应用, 2024, 60(4): 21-38.
MA H S, ZHU Y H, LI Z H, et al. Survey of neural radiance fields for multi-view synthesis technologies[J]. Computer Engineering and Applications, 2024, 60(4): 21-38.
[15] KERBL B, KOPANAS G, LEIMKÜHLER T, et al. 3D Gaussian splatting for real-time radiance field rendering[J]. ACM Transactions on Graphics, 2023, 42(4): 1-14.
[16] SETHURAMAN A V, RAMANAGOPAL M S, SKINNER K A. WaterNeRF: neural radiance fields for underwater scenes[C]//Proceedings of the OCEANS 2023-MTS/IEEE US Gulf Coast, 2023: 1-7.
[17] LEVY D, PELEG A, PEARL N, et al. SeaThru-NeRF: neural radiance fields in scattering media[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023: 56-65.
[18] AKKAYNAK D, TREIBITZ T. A revised underwater image formation model[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018: 6723-6732.
[19] AKKAYNAK D, TREIBITZ T, SHLESINGER T, et al. What is the space of attenuation coefficients in underwater computer vision?[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017: 4931-4940.
[20] SCHONBERGER J L, FRAHM J M. Structure-from-motion revisited[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016: 4104-4113.
[21] KOZLOV I, RZHANOV Y. Uncertainty in 3D reconstruction of underwater objects due to refraction[C]//Proceedings of the OCEANS 2017-Anchorage, 2017: 1-4.
[22] TIAN Y, NARASIMHAN S G. Seeing through water: image restoration using model-based tracking[C]//Proceedings of the 2009 IEEE 12th International Conference on Computer Vision, 2009: 2303-2310.
[23] OREIFEJ O, SHU G, PACE T, et al. A two-stage reconstruction approach for seeing through water[C]//Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR), 2011: 1153-1160.
[24] SUN D, YANG X, LIU M Y, et al. PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018: 8934-8943.
[25] TEED Z, DENG J. RAFT: recurrent all-pairs field transforms for optical flow[C]//Proceedings of the 16th European Conference on Computer Vision (ECCV 2020), 2020: 402-419.
[26] ZHAO S, SHENG Y, DONG Y, et al. MaskFlownet: asymmetric feature matching with learnable occlusion mask[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020: 6278-6287.
[27] DOSOVITSKIY A, FISCHER P, ILG E, et al. FlowNet: learning optical flow with convolutional networks[C]//Proceedings of the IEEE International Conference on Computer Vision, 2015: 2758-2766.