[1] SUN C, SHRIVASTAVA A, SINGH S, et al. Revisiting unreasonable effectiveness of data in deep learning era[C]//Proceedings of the IEEE International Conference on Computer Vision, 2017: 843-852.
[2] WANG X, HUANG T E, DARRELL T, et al. Frustratingly simple few-shot object detection[C]//Proceedings of the 37th International Conference on Machine Learning, 2020: 9919-9928.
[3] NORTHCUTT C G, ATHALYE A, MUELLER J. Pervasive label errors in test sets destabilize machine learning benchmarks[J]. arXiv:2103.14749, 2021.
[4] ZHANG H, CHEN F, SHEN Z, et al. Solving missing-annotation object detection with background recalibration loss[C]//Proceedings of the 2020 IEEE International Conference on Acoustics, Speech and Signal Processing, 2020: 1888-1892.
[5] ZHANG Y, CHENG Y, HUANG X, et al. Simple and robust loss design for multi-label learning with missing labels[J]. arXiv:2112.07368, 2021.
[6] CHENG Y, QIAN K, MIN F. Global and local attention-based multi-label learning with missing labels[J]. Information Sciences, 2022, 594: 20-42.
[7] TAN A, LIANG J, WU W Z, et al. Semi-supervised partial multi-label classification via consistency learning[J]. Pattern Recognition, 2022, 131: 108839.
[8] WANG T, YANG T, CAO J, et al. Co-mining: self-supervised learning for sparsely annotated object detection[C]//Proceedings of the AAAI Conference on Artificial Intelligence, 2021: 2800-2808.
[9] GOLDBERGER J, BEN-REUVEN E. Training deep neural-networks using a noise adaptation layer[C]//Proceedings of the International Conference on Learning Representations, 2017.
[10] JINDAL I, NOKLEBY M, CHEN X. Learning deep networks from noisy labels with dropout regularization[C]//Proceedings of the 2016 IEEE 16th International Conference on Data Mining, 2016: 967-972.
[11] SRIVASTAVA N, HINTON G, KRIZHEVSKY A, et al. Dropout: a simple way to prevent neural networks from overfitting[J]. Journal of Machine Learning Research, 2014, 15(1): 1929-1958.
[12] GOODFELLOW I J, SHLENS J, SZEGEDY C. Explaining and harnessing adversarial examples[J]. arXiv:1412.6572, 2014.
[13] MIYATO T, DAI A M, GOODFELLOW I. Adversarial training methods for semi-supervised text classification[J]. arXiv:1605.07725, 2021.
[14] MADRY A, MAKELOV A, SCHMIDT L, et al. Towards deep learning models resistant to adversarial attacks[J]. arXiv:1706.06083, 2019.
[15] SHAFAHI A, NAJIBI M, GHIASI M A, et al. Adversarial training for free![C]//Proceedings of the 33rd International Conference on Neural Information Processing Systems, 2019: 3358-3369.
[16] ZHANG D, ZHANG T, LU Y, et al. You only propagate once: accelerating adversarial training via maximal principle[C]//Proceedings of the 33rd International Conference on Neural Information Processing Systems, 2019: 227-238.
[17] ZHU C, CHENG Y, GAN Z, et al. FreeLB: enhanced adversarial training for natural language understanding[J]. arXiv:1909.11764, 2019.
[18] MIYATO T, MAEDA S, KOYAMA M, et al. Virtual adversarial training: a regularization method for supervised and semi-supervised learning[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019, 41(8): 1979-1993.
[19] DEVRIES T, TAYLOR G W. Improved regularization of convolutional neural networks with cutout[J]. arXiv:1708.04552, 2017.
[20] GE Z, LIU S, WANG F, et al. YOLOX: exceeding YOLO series in 2021[J]. arXiv:2107.08430, 2021.
[21] REN S, HE K, GIRSHICK R, et al. Faster R-CNN: towards real-time object detection with region proposal networks[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(6): 1137-1149.