[1] CHEN L, TANG W, WAN T R, et al. Self-supervised monocular image depth learning and confidence estimation[J]. Neurocomputing, 2020, 381: 272-281.
[2] 高悦, 戴蒙, 张晴. 基于多模态特征交互的RGB-D显著性目标检测[J]. 计算机工程与应用, 2024, 60(2): 211-220.
GAO Y, DAI M, ZHANG Q. RGB-D salient object detection based on multi-modal feature interaction[J]. Computer Engineering and Applications, 2024, 60(2): 211-220.
[3] 秦超, 闫子飞. 基于图像对齐和不确定估计的深度视觉里程计[J]. 计算机工程与应用, 2022, 58(22): 101-107.
QIN C, YAN Z F. Deep visual odometry based on image alignment and uncertainty estimation[J]. Computer Engineering and Applications, 2022, 58(22): 101-107.
[4] ZHAO C, SUN Q, ZHANG C, et al. Monocular depth estimation based on deep learning: an overview[J]. Science China Technological Sciences, 2020, 63(9): 1612-1627.
[5] MING Y, MENG X, FAN C, et al. Deep learning for monocular depth estimation: a review[J]. Neurocomputing, 2021, 438: 14-33.
[6] LUO X, HUANG J B, SZELISKI R, et al. Consistent video depth estimation[J]. ACM Transactions on Graphics, 2020, 39(4): 71:1-71:13.
[7] EIGEN D, FERGUS R. Predicting depth, surface normals and semantic labels with a common multi-scale convolutional architecture[C]//Proceedings of the IEEE International Conference on Computer Vision, 2015: 2650-2658.
[8] LAINA I, RUPPRECHT C, BELAGIANNIS V, et al. Deeper depth prediction with fully convolutional residual networks[C]//Proceedings of the 2016 4th International Conference on 3D Vision, 2016: 239-248.
[9] RUDOLPH M, DAWOUD Y, GÜLDENRING R, et al. Lightweight monocular depth estimation through guided decoding[C]//Proceedings of the 2022 International Conference on Robotics and Automation, 2022: 2344-2350.
[10] SONG M, LIM S, KIM W. Monocular depth estimation using Laplacian pyramid-based depth residuals[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2021, 31(11): 4381-4393.
[11] BHAT S F, ALHASHIM I, WONKA P. AdaBins: depth estimation using adaptive bins[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021: 4009-4018.
[12] MA F, KARAMAN S. Sparse-to-dense: depth prediction from sparse depth samples and a single image[C]//Proceedings of the 2018 IEEE International Conference on Robotics and Automation, 2018: 4796-4803.
[13] CHENG X, WANG P, YANG R. Learning depth with convolutional spatial propagation network[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020, 42(10): 2361-2379.
[14] XIONG X, XIONG H, XIAN K, et al. Sparse-to-dense depth completion revisited: sampling strategy and graph construction[C]//Proceedings of the 16th European Conference on Computer Vision, 2020: 682-699.
[15] LO C C, VANDEWALLE P. Depth estimation from monocular images and sparse radar using deep ordinal regression network[C]//Proceedings of the 2021 IEEE International Conference on Image Processing, 2021: 3343-3347.
[16] JIAO J, CAO Y, SONG Y, et al. Look deeper into depth: monocular depth estimation with semantic booster and attention-driven loss[C]//Proceedings of the European Conference on Computer Vision, 2018: 53-69.
[17] LI J, JI W, ZHANG M, et al. Delving into calibrated depth for accurate RGB-D salient object detection[J]. International Journal of Computer Vision, 2023, 131(4): 855-876.
[18] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[C]//Advances in Neural Information Processing Systems, 2017: 5998-6008.
[19] LIU L, SONG X, LYU X, et al. FCFR-Net: feature fusion based coarse-to-fine residual learning for depth completion[C]//Proceedings of the AAAI Conference on Artificial Intelligence, 2021: 2136-2144.
[20] TANG J, TIAN F P, FENG W, et al. Learning guided convolutional network for depth completion[J]. IEEE Transactions on Image Processing, 2020, 30: 1116-1129.
[21] HE K, ZHANG X, REN S, et al. Deep residual learning for image recognition[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2016: 770-778.
[22] SILBERMAN N, HOIEM D, KOHLI P, et al. Indoor segmentation and support inference from RGBD images[C]//Proceedings of the 12th European Conference on Computer Vision, 2012: 746-760.
[23] DAI A, CHANG A X, SAVVA M, et al. ScanNet: richly-annotated 3D reconstructions of indoor scenes[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2017: 5828-5839.
[24] KOCH T, LIEBEL L, FRAUNDORFER F, et al. Evaluation of CNN-based single-image depth estimation methods[C]//Proceedings of the European Conference on Computer Vision Workshops, 2018: 331-348.
[25] VASILJEVIC I, KOLKIN N, ZHANG S, et al. DIODE: a dense indoor and outdoor depth dataset[J]. arXiv:1908.00463, 2019.
[26] LEE J H, HAN M K, KO D W, et al. From big to small: multi-scale local planar guidance for monocular depth estimation[J]. arXiv:1907.10326, 2019.
[27] 张竞澜, 魏敏, 文武. 基于DSPP的单目图像深度估计[J]. 计算机应用研究, 2022, 39(12): 3837-3840.
ZHANG J L, WEI M, WEN W. Monocular depth estimation based on DSPP[J]. Application Research of Computers, 2022, 39(12): 3837-3840.
[28] CHEN Y, ZHAO H, HU Z, et al. Attention-based context aggregation network for monocular depth estimation[J]. International Journal of Machine Learning and Cybernetics, 2021, 12(6): 1583-1596.
[29] PATIL V, SAKARIDIS C, LINIGER A, et al. P3Depth: monocular depth estimation with a piecewise planarity prior[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022: 1610-1621.
[30] ELDESOKEY A, FELSBERG M, KHAN F S. Confidence propagation through CNNs for guided sparse depth regression[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020, 42(10): 2423-2436.
[31] XIA Z, SULLIVAN P, CHAKRABARTI A. Generating and exploiting probabilistic monocular depth estimates[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020: 65-74.
[32] 贾瑞明, 李阳, 李彤, 等. 多层级特征融合结构的单目图像深度估计网络[J]. 计算机工程, 2020, 46(12): 207-214.
JIA R M, LI Y, LI T, et al. Monocular image depth estimation network based on multiple level feature fusion structure[J]. Computer Engineering, 2020, 46(12): 207-214.
[33] BHAT S F, BIRKL R, WOFK D, et al. ZoeDepth: zero-shot transfer by combining relative and metric depth[J]. arXiv:2302.12288, 2023.
[34] LI Z, WANG X, LIU X, et al. BinsFormer: revisiting adaptive bins for monocular depth estimation[J]. arXiv:2204.00987, 2022.
[35] THOMPSON J L, PHUNG S L, BOUZERDOUM A. D-Net: a generalised and optimised deep network for monocular depth estimation[J]. IEEE Access, 2021, 9: 134543-134555.
[36] AGARWAL A, ARORA C. DepthFormer: multiscale vision transformer for monocular depth estimation with global local information fusion[C]//Proceedings of the 2022 IEEE International Conference on Image Processing, 2022: 3873-3877.
[37] QIU J, CUI Z, ZHANG Y, et al. DeepLiDAR: deep surface normal guided depth prediction for outdoor scene from sparse LiDAR data and single color image[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019: 3313-3322.