[1] ZHANG Y G, YI B S, WU C Y, et al. Low-dose CT image denoising method based on convolutional neural network[J]. Acta Optica Sinica, 2018, 38(4): 123-129.
[2] CHEN H, ZHANG Y, KALRA M K, et al. Low-dose CT with a residual encoder-decoder convolutional neural network[J]. IEEE Transactions on Medical Imaging, 2017, 36(12): 2524-2535.
[3] YANG Q S, YAN P K, ZHANG Y B, et al. Low-dose CT image denoising using a generative adversarial network with Wasserstein distance and perceptual loss[J]. IEEE Transactions on Medical Imaging, 2018, 37(6): 1348-1357.
[4] HUANG Z Z, ZHANG J P, ZHANG Y, et al. DU-GAN: generative adversarial networks with dual-domain U-net-based discriminators for low-dose CT denoising[J]. IEEE Transactions on Instrumentation and Measurement, 2021, 71: 4500512.
[5] HO J, JAIN A, ABBEEL P. Denoising diffusion probabilistic models[C]//Advances in Neural Information Processing Systems (NeurIPS), 2020: 6840-6851.
[6] DHARIWAL P, NICHOL A. Diffusion models beat GANs on image synthesis[C]//Advances in Neural Information Processing Systems (NeurIPS), 2021: 8780-8794.
[7] HO J, SAHARIA C, CHAN W, et al. Cascaded diffusion models for high fidelity image generation[J]. Journal of Machine Learning Research, 2022, 23(47): 1-33.
[8] ROMBACH R, BLATTMANN A, LORENZ D, et al. High-resolution image synthesis with latent diffusion models[C]//Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2022: 10674-10685.
[9] NICHOL A, DHARIWAL P, RAMESH A, et al. GLIDE: towards photorealistic image generation and editing with text-guided diffusion models[J]. arXiv:2112.10741, 2021.
[10] HO J, SALIMANS T. Classifier-free diffusion guidance[C]//Proceedings of the NeurIPS 2021 Workshop on Deep Generative Models and Downstream Applications, 2021.
[11] ZHANG B W, GU S Y, ZHANG B, et al. StyleSwin: transformer-based GAN for high-resolution image generation[C]//Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2022: 11294-11304.
[12] HOOGEBOOM E, SALIMANS T. Blurring diffusion models[J]. arXiv:2209.05557, 2022.
[13] NICHOL A Q, DHARIWAL P. Improved denoising diffusion probabilistic models[C]//Proceedings of the International Conference on Machine Learning, 2021: 8162-8171.
[14] SONG J, MENG C, ERMON S. Denoising diffusion implicit models[C]//Proceedings of the International Conference on Learning Representations, 2021.
[15] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[C]//Advances in Neural Information Processing Systems (NeurIPS), 2017.
[16] ZHANG H, GOODFELLOW I, METAXAS D, et al. Self-attention generative adversarial networks[C]//Proceedings of the International Conference on Machine Learning, 2019: 7354-7363.
[17] JIANG Y, CHANG S, WANG Z. TransGAN: two pure transformers can make one strong GAN, and that can scale up[C]//Advances in Neural Information Processing Systems (NeurIPS), 2021: 14745-14758.
[18] PEEBLES W, XIE S. Scalable diffusion models with transformers[J]. arXiv:2212.09748, 2022.
[19] CHEN L C, PAPANDREOU G, KOKKINOS I, et al. DeepLab: semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018, 40(4): 834-848.
[20] HU J, SHEN L, SUN G. Squeeze-and-excitation networks[C]//Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2018: 7132-7141.
[21] WANG Q L, WU B G, ZHU P F, et al. ECA-net: efficient channel attention for deep convolutional neural networks[C]//Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2020: 11531-11539.
[22] LIU W, RABINOVICH A, BERG A C. ParseNet: looking wider to see better[J]. arXiv:1506.04579, 2015.
[23] DAI Y M, GIESEKE F, OEHMCKE S, et al. Attentional feature fusion[C]//Proceedings of the 2021 IEEE Winter Conference on Applications of Computer Vision. Piscataway: IEEE, 2021: 3559-3568.
[24] CHEN B Y, LENG S, YU L F, et al. An open library of CT patient projection data[C]//Proceedings of SPIE, 2016: 330-335.
[25] WANG D Y, FAN F L, WU Z, et al. CTformer: convolution-free Token2Token dilated vision transformer for low-dose CT denoising[J]. Physics in Medicine and Biology, 2023, 68(6): 065012.
[26] LIU X, XIE Y, CHENG J, et al. Diffusion probabilistic priors for zero-shot low-dose CT image denoising[J]. arXiv:2305.15887, 2023.