Computer Engineering and Applications, 2024, Vol. 60, Issue (14): 14-25. DOI: 10.3778/j.issn.1002-8331.2312-0227
• Research Hotspots and Reviews •
Survey on Recent Advances in Context Awareness of Augmented Reality
YANG Zhuo, MAI Eryuan, LI Huicong, MO Jianqing
Online: 2024-07-15
Published: 2024-07-15
YANG Zhuo, MAI Eryuan, LI Huicong, MO Jianqing. Survey on Recent Advances in Context Awareness of Augmented Reality[J]. Computer Engineering and Applications, 2024, 60(14): 14-25.
URL: http://cea.ceaj.org/EN/10.3778/j.issn.1002-8331.2312-0227
Related Articles
[1] LIU Jianhua, WANG Nan, BAI Mingchen. Progress of Instantiated Reality Augmentation Method for Smart Phone Indoor Scene Elements[J]. Computer Engineering and Applications, 2024, 60(7): 58-69.
[2] WEN Mingqi, REN Luqian, CHEN Zhenqin, YANG Zhuo, ZHAN Yinwei. Survey of Deep Learning Based Approaches for Gaze Estimation[J]. Computer Engineering and Applications, 2024, 60(12): 18-33.
[3] JIA Xiaohui, FENG Chongyang, LIU Jinyue. Tracking and Registration Method Based on Point Cloud Matching for Augmented Reality Facing Work System[J]. Computer Engineering and Applications, 2023, 59(6): 291-298.
[4] WANG Aohui, ZHANG Long, SONG Wenyu, MENG Jie. Review of End-to-End Streaming Speech Recognition[J]. Computer Engineering and Applications, 2023, 59(2): 22-33.
[5] PENG Ruinan, WAN Taoruan, GONG Linming. Research on Rapid Reconstruction System for Augmented Reality Environment[J]. Computer Engineering and Applications, 2023, 59(15): 290-299.
[6] CHEN Qiuchang, ZHAO Hui, ZUO Enguang, ZHAO Yuxia, WEI Wenyu. Implicit Sentiment Analysis Based on Context Aware Tree Recurrent Neural Network[J]. Computer Engineering and Applications, 2022, 58(7): 167-175.
[7] CUI Hu, HUANG Renjing, CHEN Qingmei, HUANG Chuhua. Dynamic Gesture Recognition Method Based on Asynchronous Multi-Time Domain Features[J]. Computer Engineering and Applications, 2022, 58(21): 163-171.
[8] QIAO Min, ZHANG Deyu, LIU Siyu, YAN Tianyi, XIANG Jie. Novel Brain-Computer Interface System Based on Steady-State Visual Evoked Potential[J]. Computer Engineering and Applications, 2021, 57(8): 153-159.
[9] MAO Zhengchong, CHEN Haidong. Adaptive Scale Context-Aware Correlation Filter Tracking Algorithm[J]. Computer Engineering and Applications, 2021, 57(3): 168-174.
[10] TAN Lixing, LU Jiaqi, ZHANG Xiaonan, LIU Yuhong, ZHANG Rongfen. Improved Ghost Machine Gesture Interaction System Based on Lightweight OpenPose[J]. Computer Engineering and Applications, 2021, 57(16): 159-166.
[11] LIN Yanan, CHEN Wanqing, ZHENG Shijue, YANG Qing. Research on Application of AR in Display of Chinese Historical Allusions[J]. Computer Engineering and Applications, 2021, 57(14): 275-280.
[12] ZHANG Zhenhai, ZHANG Xiangting. Context-Aware Information Service Recommendation Method for High-Speed Rail[J]. Computer Engineering and Applications, 2021, 57(12): 231-236.
[13] LIU Jia, GUO Bin, ZHANG Jingjing, YAN Dong. 3D Registration Method for Augmented Reality Based on Visual and Haptic Integration[J]. Computer Engineering and Applications, 2021, 57(11): 70-76.
[14] LI Tingting, WANG Xianghai. Research on Children’s Intelligence Development System Based on AR-VR Hybrid Technology[J]. Computer Engineering and Applications, 2020, 56(23): 259-264.
[15] LI Ling, GU Xiaomei, LIU Zihao. Application Research of Multi-subdomain Random Forest in Context-Aware Recommendation[J]. Computer Engineering and Applications, 2020, 56(22): 132-141.