[1] SHURRAB S, DUWAIRI R. Self-supervised learning methods and applications in medical imaging analysis: a survey[J]. PeerJ Computer Science, 2022, 8: e1045.
[2] WOLF D, PAYER T, LISSON C S, et al. Self-supervised pre-training with contrastive and masked autoencoder methods for dealing with small datasets in deep learning for medical imaging[J]. Scientific Reports, 2023, 13: 20260.
[3] HUANG S C, PAREEK A, JENSEN M, et al. Self-supervised learning for medical image classification: a systematic review and implementation guidelines[J]. NPJ Digital Medicine, 2023, 6: 74.
[4] WU Z R, XIONG Y J, YU S X, et al. Unsupervised feature learning via non-parametric instance discrimination[C]//Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2018: 3733-3742.
[5] YE M, ZHANG X, YUEN P C, et al. Unsupervised embedding learning via invariant and spreading instance feature[C]//Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2019: 6203-6212.
[6] TIAN Y L, KRISHNAN D, ISOLA P. Contrastive multiview coding[C]//Proceedings of the 16th European Conference on Computer Vision. Cham: Springer International Publishing, 2020: 776-794.
[7] HE K M, FAN H Q, WU Y X, et al. Momentum contrast for unsupervised visual representation learning[C]//Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2020: 9726-9735.
[8] CHEN T, KORNBLITH S, NOROUZI M, et al. A simple framework for contrastive learning of visual representations[C]//Proceedings of the International Conference on Machine Learning, 2020: 1597-1607.
[9] HE K M, CHEN X L, XIE S N, et al. Masked autoencoders are scalable vision learners[C]//Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2022: 15979-15988.
[10] ZHOU L, LIU H, BAE J, et al. Self pre-training with masked autoencoders for medical image classification and segmentation[J]. arXiv:2203.05573, 2022.
[11] XU Z A, DAI Y, LIU F Y, et al. Swin MAE: masked autoencoders for small datasets[J]. Computers in Biology and Medicine, 2023, 161: 107037.
[12] GUO K H, CHEN J, QIU T, et al. MedGAN: an adaptive GAN approach for medical image generation[J]. Computers in Biology and Medicine, 2023, 163: 107119.
[13] SCHLEGL T, SEEBÖCK P, WALDSTEIN S M, et al. Unsupervised anomaly detection with generative adversarial networks to guide marker discovery[C]//Proceedings of the International Conference on Information Processing in Medical Imaging. Cham: Springer International Publishing, 2017: 146-157.
[14] WANG H Q, TANG Y H, WANG Y H, et al. Masked image modeling with local multi-scale reconstruction[C]//Proceedings of the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2023: 2122-2131.
[15] GAO P, MA T, LI H, et al. ConvMAE: masked convolution meets masked autoencoders[J]. arXiv:2205.03892, 2022.
[16] TIAN Y J, XIE L X, WANG Z Z, et al. Integrally pre-trained transformer pyramid networks[C]//Proceedings of the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2023: 18610-18620.
[17] ZBONTAR J, JING L, MISRA I, et al. Barlow twins: self-supervised learning via redundancy reduction[C]//Proceedings of the International Conference on Machine Learning, 2021: 12310-12320.
[18] VAN DEN OORD A, LI Y, VINYALS O. Representation learning with contrastive predictive coding[J]. arXiv:1807.03748, 2018.
[19] CARON M, MISRA I, MAIRAL J, et al. Unsupervised learning of visual features by contrasting cluster assignments[C]//Advances in Neural Information Processing Systems, 2020: 9912-9924.
[20] GRILL J B, STRUB F, ALTCHÉ F, et al. Bootstrap your own latent: a new approach to self-supervised learning[C]//Advances in Neural Information Processing Systems, 2020: 21271-21284.
[21] CHEN X L, XIE S N, HE K M. An empirical study of training self-supervised vision transformers[C]//Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision. Piscataway: IEEE, 2021: 9620-9629.
[22] DOSOVITSKIY A, BEYER L, KOLESNIKOV A, et al. An image is worth 16×16 words: transformers for image recognition at scale[J]. arXiv:2010.11929, 2020.
[23] CARON M, TOUVRON H, MISRA I, et al. Emerging properties in self-supervised vision transformers[C]//Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision. Piscataway: IEEE, 2021: 9630-9640.
[24] PUNN N S, AGARWAL S. BT-Unet: a self-supervised learning framework for biomedical image segmentation using Barlow twins with U-Net models[J]. Machine Learning, 2022, 111(12): 4585-4600.
[25] VAN GANSBEKE W, VANDENHENDE S, GEORGOULIS S, et al. Revisiting contrastive methods for unsupervised learning of visual representations[C]//Advances in Neural Information Processing Systems, 2021: 16238-16250.
[26] TIAN Y, LIU F B, PANG G S, et al. Self-supervised pseudo multi-class pre-training for unsupervised anomaly detection and segmentation in medical images[J]. Medical Image Analysis, 2023, 90: 102930.
[27] HAMILTON M, ZHANG Z, HARIHARAN B, et al. Unsupervised semantic segmentation by distilling feature correspondences[J]. arXiv:2203.08414, 2022.
[28] KHOSLA P, TETERWAK P, WANG C, et al. Supervised contrastive learning[C]//Advances in Neural Information Processing Systems, 2020: 18661-18673.
[29] KALAPOS A, GYIRES-TÓTH B. Self-supervised pretraining for 2D medical image segmentation[C]//Proceedings of the European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2023: 472-484.
[30] CHO J H, MALL U, BALA K, et al. PiCIE: unsupervised semantic segmentation using invariance and equivariance in clustering[C]//Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2021: 16789-16799.
[31] WEN X, ZHAO B, ZHENG A, et al. Self-supervised visual representation learning with semantic grouping[C]//Advances in Neural Information Processing Systems, 2022: 16423-16438.
[32] XIE Z D, LIN Y T, ZHANG Z, et al. Propagate yourself: exploring pixel-level consistency for unsupervised visual representation learning[C]//Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2021: 16679-16688.
[33] WANG X L, ZHANG R F, SHEN C H, et al. Dense contrastive learning for self-supervised visual pre-training[C]//Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2021: 3023-3032.
[34] LIU Z, LIN Y T, CAO Y, et al. Swin transformer: hierarchical vision transformer using shifted windows[C]//Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision. Piscataway: IEEE, 2021: 9992-10002.
[35] CHEN K, LIU Z L, HONG L Q, et al. Mixed autoencoder for self-supervised visual representation learning[C]//Proceedings of the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2023: 22742-22751.
[36] PRABHAKAR C, LI H B, YANG J, et al. ViT-AE++: improving vision transformer autoencoder for self-supervised medical image representations[J]. arXiv:2301.07382, 2023.
[37] BERNARD O, LALANDE A, ZOTTI C, et al. Deep learning techniques for automatic MRI cardiac multi-structures segmentation and diagnosis: is the problem solved?[J]. IEEE Transactions on Medical Imaging, 2018, 37(11): 2514-2525.
[38] ARMATO S G, MCLENNAN G, BIDAUT L, et al. The lung image database consortium (LIDC) and image database resource initiative (IDRI): a completed reference database of lung nodules on CT scans[J]. Medical Physics, 2011, 38(2): 915-931.
[39] CODELLA N C F, GUTMAN D, CELEBI M E, et al. Skin lesion analysis toward melanoma detection: a challenge at the 2017 International Symposium on Biomedical Imaging (ISBI), hosted by the International Skin Imaging Collaboration (ISIC)[C]//Proceedings of the 2018 IEEE 15th International Symposium on Biomedical Imaging. Piscataway: IEEE, 2018: 168-172.