[1] LI H C, XIONG P F, FAN H Q, et al. DFANet: deep feature aggregation for real-time semantic segmentation[C]//Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2019: 9514-9523.
[2] LIU W Y, WEN Y D, YU Z D, et al. SphereFace: deep hypersphere embedding for face recognition[C]//Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2017: 6738-6746.
[3] REDMON J, DIVVALA S, GIRSHICK R, et al. You only look once: unified, real-time object detection[C]//Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2016: 779-788.
[4] ZUIDERVELD K. Contrast limited adaptive histogram equalization[M]//Graphics Gems IV. San Diego: Academic Press Professional, 1994: 474-485.
[5] LAND E H, MCCANN J J. Lightness and retinex theory[J]. Journal of the Optical Society of America, 1971, 61(1): 1-11.
[6] IBRAHIM H, KONG N S P. Brightness preserving dynamic histogram equalization for image contrast enhancement[J]. IEEE Transactions on Consumer Electronics, 2007, 53(4): 1752-1758.
[7] WANG C, YE Z F. Brightness preserving histogram equalization with maximum entropy: a variational perspective[J]. IEEE Transactions on Consumer Electronics, 2005, 51(4): 1326-1334.
[8] CHEN S D, RAMLI A R. Minimum mean brightness error bi-histogram equalization in contrast enhancement[J]. IEEE Transactions on Consumer Electronics, 2003, 49(4): 1310-1319.
[9] JOBSON D J, RAHMAN Z, WOODELL G A. A multiscale retinex for bridging the gap between color images and the human observation of scenes[J]. IEEE Transactions on Image Processing, 1997, 6(7): 965-976.
[10] WANG S H, ZHENG J, HU H M, et al. Naturalness preserved enhancement algorithm for non-uniform illumination images[J]. IEEE Transactions on Image Processing, 2013, 22(9): 3538-3548.
[11] LI M D, LIU J Y, YANG W H, et al. Structure-revealing low-light image enhancement via robust retinex model[J]. IEEE Transactions on Image Processing, 2018, 27(6): 2828-2841.
[12] WEI C, WANG W J, YANG W H, et al. Deep retinex decomposition for low-light enhancement[J]. arXiv:1808.04560, 2018.
[13] ZHANG Y H, ZHANG J W, GUO X J. Kindling the darkness: a practical low-light image enhancer[C]//Proceedings of the 27th ACM International Conference on Multimedia. New York: ACM, 2019: 1632-1640.
[14] ZHANG Y H, GUO X J, MA J Y, et al. Beyond brightening low-light images[J]. International Journal of Computer Vision, 2021, 129(4): 1013-1037.
[15] WU W H, WENG J, ZHANG P P, et al. URetinex-Net: retinex-based deep unfolding network for low-light image enhancement[C]//Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2022: 5891-5900.
[16] HAI J, XUAN Z, YANG R, et al. R2RNet: low-light image enhancement via real-low to real-normal network[J]. Journal of Visual Communication and Image Representation, 2023, 90: 103712.
[17] GUO C L, LI C Y, GUO J C, et al. Zero-reference deep curve estimation for low-light image enhancement[C]//Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2020: 1777-1786.
[18] JIANG Y F, GONG X Y, LIU D, et al. EnlightenGAN: deep light enhancement without paired supervision[J]. IEEE Transactions on Image Processing, 2021, 30: 2340-2349.
[19] MA L, MA T Y, LIU R S, et al. Toward fast, flexible, and robust low-light image enhancement[C]//Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2022: 5627-5636.
[20] YANG W H, WANG S Q, FANG Y M, et al. From fidelity to perceptual quality: a semi-supervised approach for low-light image enhancement[C]//Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2020: 3060-3069.
[21] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[C]//Advances in Neural Information Processing Systems, 2017: 5998-6008.
[22] DOSOVITSKIY A, BEYER L, KOLESNIKOV A, et al. An image is worth 16x16 words: Transformers for image recognition at scale[J]. arXiv:2010.11929, 2020.
[23] CHEN Z, ZHANG Y, GU J, et al. Cross aggregation transformer for image restoration[C]//Advances in Neural Information Processing Systems, 2022: 25478-25490.
[24] WANG T, ZHANG K H, SHEN T R, et al. Ultra-high-definition low-light image enhancement: a benchmark and transformer based method[J]. Proceedings of the AAAI Conference on Artificial Intelligence, 2023, 37(3): 2654-2662.
[25] 杜晓刚, 路文杰, 雷涛, 等. 亮度信噪比引导Transformer的低照度图像增强[J]. 计算机工程与应用, 2025, 61(6): 263-272.
DU X G, LU W J, LEI T, et al. Low-light image enhancement using brightness and signal-to-noise ratio guided transformer[J]. Computer Engineering and Applications, 2025, 61(6): 263-272.
[26] BYCHKOVSKY V, PARIS S, CHAN E, et al. Learning photographic global tonal adjustment with a database of input/output image pairs[C]//Proceedings of the 2011 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2011: 97-104.
[27] LEE C, KIM C S. Contrast enhancement based on layered difference representation[C]//Proceedings of the 2012 19th IEEE International Conference on Image Processing. Piscataway: IEEE, 2012: 965-968.
[28] YAO S S, LIN W S, ONG E, et al. Contrast signal-to-noise ratio for image quality assessment[C]//Proceedings of the 2005 IEEE International Conference on Image Processing. Piscataway: IEEE, 2005: 397-400.
[29] WANG Z, BOVIK A C, SHEIKH H R, et al. Image quality assessment: from error visibility to structural similarity[J]. IEEE Transactions on Image Processing, 2004, 13(4): 600-612.
[30] MITTAL A, SOUNDARARAJAN R, BOVIK A C. Making a “completely blind” image quality analyzer[J]. IEEE Signal Processing Letters, 2013, 20(3): 209-212.
[31] MA C, YANG C Y, YANG X K, et al. Learning a no-reference quality metric for single-image super-resolution[J]. Computer Vision and Image Understanding, 2017, 158: 1-16.