[1] ZENG H, SHU X, WANG Y, et al. EmotionCues: emotion-oriented visual summarization of classroom videos[J]. IEEE Transactions on Visualization and Computer Graphics, 2021, 27(7): 3168-3181.
[2] SAADI I, CUNNINGHAM D W, TALEB-AHMED A, et al. Driver’s facial expression recognition: a comprehensive survey[J]. Expert Systems with Applications, 2024, 242: 122784.
[3] ZHANG J J, ZHENG K, MAZHAR S, et al. Trusted emotion recognition based on multiple signals captured from video[J]. Expert Systems with Applications, 2023, 233: 120948.
[4] LI Y T, WEI J S, LIU Y, et al. Deep learning for micro-expression recognition: a survey[J]. IEEE Transactions on Affective Computing, 2022, 13(4): 2028-2046.
[5] LI Y, HUANG X, ZHAO G. Joint local and global information learning with single apex frame detection for micro-expression recognition[J]. IEEE Transactions on Image Processing, 2021, 30: 249-263.
[6] FANG Y C, LUO J, LOU C S. Fusion of multi-directional rotation invariant uniform LBP features for face recognition[C]//Proceedings of the 3rd International Symposium on Intelligent Information Technology Application. Piscataway: IEEE, 2009: 332-335.
[7] ZHANG T, ZHENG W M, CUI Z, et al. A deep neural network-driven feature learning method for multi-view facial expression recognition[J]. IEEE Transactions on Multimedia, 2016, 18(12): 2528-2536.
[8] KUMAR P, HAPPY S L, ROUTRAY A. A real-time robust facial expression recognition system using HOG features[C]//Proceedings of the International Conference on Computing, Analytics and Security Trends. Piscataway: IEEE, 2016: 289-293.
[9] SHOJAEILANGARI S, YAU W Y, NANDAKUMAR K, et al. Robust representation and recognition of facial emotions using extreme sparse learning[J]. IEEE Transactions on Image Processing, 2015, 24(7): 2140-2152.
[10] CHEN Y D, YANG X, CHAM T J, et al. Towards unbiased visual emotion recognition via causal intervention[C]//Proceedings of the 30th ACM International Conference on Multimedia. New York: ACM, 2022: 60-69.
[11] WANG L J, JIA G L, JIANG N, et al. EASE: robust facial expression recognition via emotion ambiguity-sensitive cooperative networks[C]//Proceedings of the 30th ACM International Conference on Multimedia. New York: ACM, 2022: 218-227.
[12] BARROS P, BARAKOVA E, WERMTER S. Adapting the interplay between personalized and generalized affect recognition based on an unsupervised neural framework[J]. IEEE Transactions on Affective Computing, 2022, 13(3): 1349-1365.
[13] KIM D H, BADDAR W J, JANG J, et al. Multi-objective based spatio-temporal feature representation learning robust to expression intensity variations for facial expression recognition[J]. IEEE Transactions on Affective Computing, 2019, 10(2): 223-236.
[14] VERMA M, VIPPARTHI S K, SINGH G, et al. LEARNet: dynamic imaging network for micro expression recognition[J]. IEEE Transactions on Image Processing, 2020, 29: 1618-1627.
[15] SONG B L, LI K, ZONG Y, et al. Recognizing spontaneous micro-expression using a three-stream convolutional neural network[J]. IEEE Access, 2019, 7: 184537-184551.
[16] XIA Z Q, HONG X P, GAO X Y, et al. Spatiotemporal recurrent convolutional networks for recognizing spontaneous micro-expressions[J]. IEEE Transactions on Multimedia, 2020, 22(3): 626-640.
[17] XIE Y, CHEN T S, PU T, et al. Adversarial graph representation adaptation for cross-domain facial expression recognition[C]//Proceedings of the 28th ACM International Conference on Multimedia. New York: ACM, 2020: 1255-1264.
[18] XU X, RUAN Z, YANG L. Facial expression recognition based on graph neural network[C]//Proceedings of the IEEE 5th International Conference on Image, Vision and Computing. Piscataway: IEEE, 2020: 211-214.
[19] LIU D Z, ZHANG H T, ZHOU P. Video-based facial expression recognition using graph convolutional networks[C]//Proceedings of the 25th International Conference on Pattern Recognition. Piscataway: IEEE, 2021: 607-614.
[20] ZHOU J Z, ZHANG X M, LIU Y, et al. Facial expression recognition using spatial-temporal semantic graph network[C]//Proceedings of the 2020 IEEE International Conference on Image Processing. Piscataway: IEEE, 2020: 1961-1965.
[21] HUANG M M, WANG H Y, WANG M X, et al. Survey of graph embedding learning: from simple graphs to complex graphs[J/OL]. Computer Science, 1-29[2025-10-11]. https://link.cnki.net/urlid/50.1075.tp.20250613.1315.030.
[22] ZHOU C C, YU Q C, ZHANG L S, et al. Overview of research progress in graph Transformers[J]. Computer Engineering and Applications, 2024, 60(14): 37-49.
[23] GORI M, MONFARDINI G, SCARSELLI F. A new model for learning in graph domains[C]//Proceedings of the IEEE International Joint Conference on Neural Networks. Piscataway: IEEE, 2005: 729-734.
[24] MICHELI A. Neural network for graphs: a contextual constructive approach[J]. IEEE Transactions on Neural Networks, 2009, 20(3): 498-511.
[25] BRUNA J, ZAREMBA W, SZLAM A, et al. Spectral networks and locally connected networks on graphs[J]. arXiv:1312.6203, 2013.
[26] KIPF T, WELLING M. Semi-supervised classification with graph convolutional networks[J]. arXiv:1609.02907, 2016.
[27] EKMAN P, FRIESEN W V. Measuring facial movement[J]. Environmental Psychology and Nonverbal Behavior, 1976, 1(1): 56-75.
[28] ZHANG J J, FEI C, ZHENG Y Q, et al. Trusted emotion recognition based on multiple signals captured from video and its application in intelligent education[J]. Electronic Research Archive, 2024, 32(5): 3477-3521.
[29] LUCEY P, COHN J F, KANADE T, et al. The extended Cohn-Kanade dataset (CK+): a complete dataset for action unit and emotion-specified expression[C]//Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2010: 94-101.
[30] LYONS M, AKAMATSU S, KAMACHI M, et al. Coding facial expressions with Gabor wavelets[C]//Proceedings of the 3rd IEEE International Conference on Automatic Face and Gesture Recognition. Piscataway: IEEE, 1998: 200-205.
[31] CHEN L F, YEN Y S. Taiwanese facial expression image database[EB/OL]. (2020-05-12) [2024-08-19]. http://bml.ym.edu.tw/tfeid/.
[32] LI S, DENG W H. Reliable crowdsourcing and deep locality-preserving learning for unconstrained facial expression recognition[J]. IEEE Transactions on Image Processing, 2019, 28(1): 356-370.
[33] YAN W J, WU Q, LIU Y J, et al. CASME database: a dataset of spontaneous micro-expressions collected from neutralized faces[C]//Proceedings of the 10th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition. Piscataway: IEEE, 2013: 1-7.
[34] YAN W J, LI X, WANG S J, et al. CASME II: an improved spontaneous micro-expression database and the baseline evaluation[J]. PLoS One, 2014, 9(1): e86041.
[35] LI X B, PFISTER T, HUANG X H, et al. A spontaneous micro-expression database: inducement, collection and baseline[C]//Proceedings of the 10th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition. Piscataway: IEEE, 2013: 1-6.
[36] DAVISON A K, LANSLEY C, COSTEN N, et al. SAMM: a spontaneous micro-facial movement dataset[J]. IEEE Transactions on Affective Computing, 2018, 9(1): 116-129.
[37] PFISTER T, LI X B, ZHAO G Y, et al. Differentiating spontaneous from posed facial expressions within a generic facial expression recognition framework[C]//Proceedings of the IEEE International Conference on Computer Vision Workshops. Piscataway: IEEE, 2011: 868-875.
[38] WANG F P, LI J, QI C, et al. JGULF: joint global and unilateral local feature network for micro-expression recognition[J]. Image and Vision Computing, 2024, 147: 105091.
[39] LIONG S T, GAN Y S, SEE J, et al. Shallow triple stream three-dimensional CNN (STSTNet) for micro-expression recognition[C]//Proceedings of the 14th IEEE International Conference on Automatic Face & Gesture Recognition. Piscataway: IEEE, 2019: 1-5.
[40] GAN Y S, LIONG S T, YAU W C, et al. OFF-ApexNet on micro-expression recognition system[J]. Signal Processing: Image Communication, 2019, 74: 129-139.
[41] KRIZHEVSKY A, SUTSKEVER I, HINTON G E. ImageNet classification with deep convolutional neural networks[J]. Communications of the ACM, 2017, 60(6): 84-90.
[42] IANDOLA F N, HAN S, MOSKEWICZ M W, et al. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size[J]. arXiv:1602.07360, 2016.
[43] SZEGEDY C, LIU W, JIA Y Q, et al. Going deeper with convolutions[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2015: 1-9.
[44] SIMONYAN K, ZISSERMAN A. Very deep convolutional networks for large-scale image recognition[J]. arXiv:1409.1556, 2014.
[45] YANG J J, WANG S F. Capturing spatial and temporal patterns for distinguishing between posed and spontaneous expressions[C]//Proceedings of the 25th ACM International Conference on Multimedia. New York: ACM, 2017: 469-477.
[46] WANG S F, HAO L F, JI Q. Posed and spontaneous expression distinction using latent regression Bayesian networks[J]. ACM Transactions on Multimedia Computing, Communications, and Applications, 2020, 16(3): 1-18.
[47] WANG S F, WU C L, HE M H, et al. Posed and spontaneous expression recognition through modeling their spatial patterns[J]. Machine Vision and Applications, 2015, 26(2): 219-231.
[48] WANG S F, WU C L, JI Q. Capturing global spatial patterns for distinguishing posed and spontaneous expressions[J]. Computer Vision and Image Understanding, 2016, 147: 69-76.
[49] KARTHEEK M N, PRASAD M V N K, BHUKYA R. DRCP: dimensionality reduced chess pattern for person independent facial expression recognition[J]. International Journal of Pattern Recognition and Artificial Intelligence, 2022, 36(11): 2256016.
[50] BAYGIN M, TUNCER I, DOGAN S, et al. Automated facial expression recognition using exemplar hybrid deep feature generation technique[J]. Soft Computing, 2023, 27(13): 8721-8737.
[51] CHIRRA V R R, UYYALA S R, KOLLI V K K. Virtual facial expression recognition using deep CNN with ensemble learning[J]. Journal of Ambient Intelligence and Humanized Computing, 2021, 12(12): 10581-10599.
[52] WANG S F, ZHENG Z Q, YIN S, et al. A novel dynamic model capturing spatial and temporal patterns for facial expression analysis[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020, 42(9): 2082-2095.
[53] MOEINI A, FAEZ K, SADEGHI H, et al. 2D facial expression recognition via 3D reconstruction and feature fusion[J]. Journal of Visual Communication and Image Representation, 2016, 35: 1-14.
[54] MEHTA A, RASTEGARI M. Separable self-attention for mobile vision transformers[J]. arXiv:2206.02680, 2022.
[55] MAAZ M, SHAKER A, CHOLAKKAL H, et al. EdgeNeXt: efficiently amalgamated CNN-transformer architecture for mobile vision applications[C]//Proceedings of the European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2023: 3-20.
[56] CHEN J R, KAO S H, HE H, et al. Run, don’t walk: chasing higher FLOPS for faster neural networks[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2023: 12021-12031.
[57] SUN M M, YAN C M. FGENet: a lightweight facial expression recognition algorithm based on FasterNet[J]. Signal, Image and Video Processing, 2024, 18(8): 5939-5956.
[58] BORING E G. Titchener’s experimentalists[J]. Journal of the History of the Behavioral Sciences, 1967, 3(4): 315-325.
[59] HUGHES M A. Emotions revealed: recognizing faces and feelings to improve communication and emotional life[J]. Library Journal, 2003, 128(8): 140.
[60] IZARD C, FINE S, SCHULTZ D, et al. Emotion knowledge as a predictor of social behavior and academic competence in children at risk[J]. Psychological Science, 2001, 12(1): 18-23.