Computer Engineering and Applications ›› 2024, Vol. 60 ›› Issue (2): 19-31. DOI: 10.3778/j.issn.1002-8331.2305-0056
• Research Hotspots and Reviews •
Overview of 360-Degree Video and Viewport Prediction
LI Zhenhuai, ZHAN Yinwei (李镇淮, 战荫伟)
Online: 2024-01-15
Published: 2024-01-15
LI Zhenhuai, ZHAN Yinwei. Overview of 360-Degree Video and Viewport Prediction[J]. Computer Engineering and Applications, 2024, 60(2): 19-31.
李镇淮, 战荫伟. 360度视频与视口预测方法综述[J]. 计算机工程与应用, 2024, 60(2): 19-31.
URL: http://cea.ceaj.org/EN/10.3778/j.issn.1002-8331.2305-0056
[1] PODBORSKI D, SON J, BHULLAR G S, et al. HTML5 MSE playback of MPEG 360 VR tiled streaming: JavaScript implementation of MPEG-OMAF viewport-dependent video profile with HEVC tiles[C]//Proceedings of the 10th ACM Multimedia Systems Conference, 2019: 324-327.
[2] BAO Y, WU H, ZHANG T, et al. Shooting a moving target: motion-prediction-based transmission for 360-degree videos[C]//Proceedings of the 2016 IEEE International Conference on Big Data (Big Data), 2016: 1161-1170.
[3] FAN C L, LO W C, PAI Y T, et al. A survey on 360° video streaming: acquisition, transmission, and display[J]. ACM Computing Surveys, 2019, 52(4): 1-36.
[4] YAQOOB A, BI T, MUNTEAN G M. A survey on adaptive 360 video streaming: solutions, challenges and opportunities[J]. IEEE Communications Surveys & Tutorials, 2020, 22(4): 2801-2838.
[5] CHIARIOTTI F. A survey on 360-degree video: coding, quality of experience and streaming[J]. Computer Communications, 2021, 177: 133-155.
[6] WONG E S, WAHAB N H A, SAEED F, et al. 360-degree video bandwidth reduction: technique and approaches comprehensive review[J]. Applied Sciences, 2022, 12(15): 7581.
[7] 叶成英, 李建微, 陈思喜. VR全景视频传输研究进展[J]. 计算机应用研究, 2022, 39(6): 1601-1607. YE C Y, LI J W, CHEN S X. Research progress of VR panoramic video transmission[J]. Application Research of Computers, 2022, 39(6): 1601-1607.
[8] 缪辰启, 罗铖. 全景视频视口预测方法综述[J]. 电视技术, 2022, 46(2): 10-13. MIAO C Q, LUO C. An overview of panoramic video viewport prediction methods[J]. Video Engineering, 2022, 46(2): 10-13.
[9] SODAGAR I. The MPEG-DASH standard for multimedia streaming over the Internet[J]. IEEE MultiMedia, 2011, 18(4): 62-67.
[10] GRAF M, TIMMERER C, MUELLER C. Towards bandwidth efficient adaptive streaming of omnidirectional video over HTTP: design, implementation, and evaluation[C]//Proceedings of the 8th ACM on Multimedia Systems Conference. New York, NY, USA: Association for Computing Machinery, 2017: 261-271.
[11] GURRIERI L E, DUBOIS E. Acquisition of omnidirectional stereoscopic images and videos of dynamic scenes: a review[J]. Journal of Electronic Imaging, 2013, 22(3): 030902.
[12] SZELISKI R. Image alignment and stitching: a tutorial[J]. Foundations and Trends in Computer Graphics and Vision, 2007, 2(1): 1-59.
[13] YU M, LAKSHMAN H, GIROD B. A framework to evaluate omnidirectional video coding schemes[C]//Proceedings of the 2015 IEEE International Symposium on Mixed and Augmented Reality, 2015: 31-36.
[14] EL-GANAINY T, HEFEEDA M. Streaming virtual reality content[J]. arXiv:1612.08350, 2016.
[15] MAUGEY T, LE MEUR O, LIU Z. Saliency-based navigation in omnidirectional image[C]//Proceedings of the 2017 IEEE 19th International Workshop on Multimedia Signal Processing (MMSP), 2017: 1-6.
[16] LI L, LI Z, BUDAGAVI M, et al. Projection based advanced motion model for cubic mapping for 360-degree video[C]//Proceedings of the 2017 IEEE International Conference on Image Processing (ICIP), 2017: 1427-1431.
[17] AZEVEDO R G D A, BIRKBECK N, DE SIMONE F, et al. Visual distortions in 360° videos[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2020, 30(8): 2524-2537.
[18] YOUVALARI R G, AMINLOU A, HANNUKSELA M M, et al. Efficient coding of 360-degree pseudo-cylindrical panoramic video for virtual reality applications[C]//Proceedings of the 2016 IEEE International Symposium on Multimedia (ISM), 2016: 525-528.
[19] WANG Y, WANG R, WANG Z, et al. Polar square projection for panoramic video[C]//Proceedings of the 2017 IEEE Visual Communications and Image Processing (VCIP), 2017: 1-4.
[20] WU C, ZHAO H, SHANG X. Rhombic mapping scheme for panoramic video encoding[M]//Digital TV and wireless multimedia communication. Singapore: Springer Singapore, 2018: 443-453.
[21] HE Y, XIU X, HANHART P, et al. Content-adaptive 360-degree video coding using hybrid cubemap projection[C]//Proceedings of the 2018 Picture Coding Symposium (PCS), 2018: 313-317.
[22] LIN J L, LEE Y H, SHIH C H, et al. Efficient projection and coding tools for 360° video[J]. IEEE Journal on Emerging and Selected Topics in Circuits and Systems, 2019, 9(1): 84-97.
[23] CAI Y, LI X, WANG Y, et al. An overview of panoramic video projection schemes in the IEEE 1857.9 standard for immersive visual content coding[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2022, 32(9): 6400-6413.
[24] POURAZAD M T, DOUTRE C, AZIMI M, et al. HEVC: the new gold standard for video compression: how does HEVC compare with H.264/AVC?[J]. IEEE Consumer Electronics Magazine, 2012, 1(3): 36-46.
[25] MUKHERJEE D, BANKOSKI J, GRANGE A, et al. The latest open-source video codec VP9: an overview and preliminary results[C]//Proceedings of the 2013 Picture Coding Symposium (PCS), 2013: 390-393.
[26] GROIS D, NGUYEN T, MARPE D. Coding efficiency comparison of AV1/VP9, H.265/MPEG-HEVC, and H.264/MPEG-AVC encoders[C]//Proceedings of the 2016 Picture Coding Symposium (PCS), 2016: 1-5.
[27] SULLIVAN G J. Video coding standards progress report: joint video experts team launches the versatile video coding project[J]. SMPTE Motion Imaging Journal, 2018, 127(8): 94-98.
[28] WIEN M, BOYCE J M, STOCKHAMMER T, et al. Standardization status of immersive video coding[J]. IEEE Journal on Emerging and Selected Topics in Circuits and Systems, 2019, 9(1): 5-17.
[29] MOSS J D, MUTH E R. Characteristics of head-mounted displays and their effects on simulator sickness[J]. Human Factors, 2011, 53(3): 308-319.
[30] CORBILLON X, SIMON G, DEVLIC A, et al. Viewport-adaptive navigable 360-degree video delivery[C]//Proceedings of the 2017 IEEE International Conference on Communications (ICC), 2017: 1-7.
[31] PAL U, KING H. Effect of UHD high frame rates (HFR) on DVB-S2 bit error rate (BER)[C]//SMPTE15: Persistence of Vision-Defining the Future, 2015: 1-11.
[32] HANNUKSELA M M, WANG Y K. An overview of omnidirectional media format (OMAF)[J]. Proceedings of the IEEE, 2021, 109(9): 1590-1606.
[33] QIAN F, JI L, HAN B, et al. Optimizing 360 video delivery over cellular networks[C]//Proceedings of the 5th Workshop on All Things Cellular: Operations, Applications and Challenges, 2016: 1-6.
[34] YU M, LAKSHMAN H, GIROD B. Content adaptive representations of omnidirectional videos for cinematic virtual reality[C]//Proceedings of the 3rd International Workshop on Immersive Media Experiences, 2015: 1-6.
[35] CHEN S, ZHANG Y, LI Y, et al. Spherical structural similarity index for objective omnidirectional video quality assessment[C]//Proceedings of the 2018 IEEE International Conference on Multimedia and Expo (ICME), 2018: 1-6.
[36] SUN Y, LU A, YU L. Weighted-to-spherically-uniform quality evaluation for omnidirectional video[J]. IEEE Signal Processing Letters, 2017, 24(9): 1408-1412.
[37] ZHOU Y, YU M, MA H, et al. Weighted-to-spherically-uniform SSIM objective quality evaluation for panoramic video[C]//Proceedings of the 14th IEEE International Conference on Signal Processing (ICSP), 2018: 54-57.
[38] SHEN W, ZHOU M, LIAO X, et al. An end-to-end no-reference video quality assessment method with hierarchical spatiotemporal feature representation[J]. IEEE Transactions on Broadcasting, 2022, 68(3): 651-660.
[39] LIU Y, YIN X, WAN Z, et al. Toward a no-reference omnidirectional image quality evaluation by using multi-perceptual features[J]. ACM Transactions on Multimedia Computing, Communications and Applications, 2023, 19(2): 1-19.
[40] QI Y, JIANG G, YU M, et al. Viewport perception based blind stereoscopic omnidirectional image quality assessment[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2021, 31(10): 3926-3941.
[41] ZHANG C, LIU S. No-reference omnidirectional image quality assessment based on joint network[C]//Proceedings of the 30th ACM International Conference on Multimedia, 2022: 943-951.
[42] TAN T K, WEERAKKODY R, MRAK M, et al. Video quality evaluation methodology and verification testing of HEVC compression performance[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2016, 26(1): 76-90.
[43] SESHADRINATHAN K, SOUNDARARAJAN R, BOVIK A C, et al. Study of subjective and objective quality assessment of video[J]. IEEE Transactions on Image Processing, 2010, 19(6): 1427-1441.
[44] CORBILLON X, DE SIMONE F, SIMON G. 360-degree video head movement dataset[C]//Proceedings of the 8th ACM on Multimedia Systems Conference, 2017: 199-204.
[45] DE ABREU A, OZCINAR C, SMOLIC A. Look around you: saliency maps for omnidirectional images in VR applications[C]//Proceedings of the 2017 Ninth International Conference on Quality of Multimedia Experience (QoMEX), 2017: 1-6.
[46] LO W C, FAN C L, LEE J, et al. 360 video viewing dataset in head-mounted virtual reality[C]//Proceedings of the 8th ACM on Multimedia Systems Conference, 2017: 211-216.
[47] SITZMANN V, SERRANO A, PAVEL A, et al. Saliency in VR: how do people explore virtual environments?[J]. IEEE Transactions on Visualization and Computer Graphics, 2018, 24(4): 1633-1642.
[48] RAI Y, GUTIÉRREZ J, LE CALLET P. A dataset of head and eye movements for 360 degree images[C]//Proceedings of the 8th ACM on Multimedia Systems Conference, 2017: 205-210.
[49] DAVID E J, GUTIÉRREZ J, COUTROT A, et al. A dataset of head and eye movements for 360 videos[C]//Proceedings of the 9th ACM Multimedia Systems Conference, 2018: 432-437.
[50] GUTIÉRREZ J, DAVID E J, COUTROT A, et al. Introducing UN Salient360! benchmark: a platform for evaluating visual attention models for 360 contents[C]//Proceedings of the 10th International Conference on Quality of Multimedia Experience (QoMEX), 2018: 1-3.
[51] GUTIÉRREZ J, DAVID E, RAI Y, et al. Toolbox and dataset for the development of saliency and scanpath models for omnidirectional/360° still images[J]. Signal Processing: Image Communication, 2018, 69: 35-42.
[52] AGTZIDIS I, STARTSEV M, DORR M. 360-degree video gaze behaviour: a ground-truth data set and a classification algorithm for eye movements[C]//Proceedings of the 27th ACM International Conference on Multimedia, 2019: 1007-1015.
[53] LI B J, BAILENSON J N, PINES A, et al. A public database of immersive VR videos with corresponding ratings of arousal, valence, and correlations between head movements and self report measures[J]. Frontiers in Psychology, 2017, 8: 2116.
[54] REN X, DUAN H, MIN X, et al. Where are the children with autism looking in reality?[C]//Artificial Intelligence: Second CAAI International Conference, 2023: 588-600.
[55] UPENIK E, ŘEŘÁBEK M, EBRAHIMI T. Testbed for subjective evaluation of omnidirectional visual content[C]//Proceedings of the 2016 Picture Coding Symposium (PCS), 2016: 1-5.
[56] FAN C L, LEE J, LO W C, et al. Fixation prediction for 360 video streaming in head-mounted virtual reality[C]//Proceedings of the 27th Workshop on Network and Operating Systems Support for Digital Audio and Video, 2017: 67-72.
[57] WU C, TAN Z, WANG Z, et al. A dataset for exploring user behaviors in VR spherical video streaming[C]//Proceedings of the 8th ACM on Multimedia Systems Conference, 2017: 193-198.
[58] ZHANG Z, XU Y, YU J, et al. Saliency detection in 360 videos[C]//Proceedings of the European Conference on Computer Vision (ECCV), 2018: 488-503.
[59] OZCINAR C, SMOLIC A. Visual attention in omnidirectional video for virtual reality applications[C]//Proceedings of the 10th International Conference on Quality of Multimedia Experience (QoMEX), 2018: 1-6.
[60] CHENG H T, CHAO C H, DONG J D, et al. Cube padding for weakly-supervised saliency prediction in 360 videos[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018: 1420-1429.
[61] XU Y, DONG Y, WU J, et al. Gaze prediction in dynamic 360 immersive videos[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018: 5333-5342.
[62] FREMEREY S, SINGLA A, MESEBERG K, et al. AVtrack360: an open dataset and software recording people's head rotations watching 360° videos on an HMD[C]//Proceedings of the 9th ACM Multimedia Systems Conference, 2018: 403-408.
[63] DUANMU F, MAO Y, LIU S, et al. A subjective study of viewer navigation behaviors when watching 360-degree videos on computers[C]//Proceedings of the 2018 IEEE International Conference on Multimedia and Expo (ICME), 2018: 1-6.
[64] LI C, XU M, DU X, et al. Bridge the gap between VQA and human behavior on omnidirectional video: a large-scale dataset and a deep learning model[C]//Proceedings of the 26th ACM International Conference on Multimedia, 2018: 932-940.
[65] NASRABADI A T, SAMIEI A, MAHZARI A, et al. A taxonomy and dataset for 360 videos[C]//Proceedings of the 10th ACM Multimedia Systems Conference, 2019: 273-278.
[66] XU M, SONG Y, WANG J, et al. Predicting head movement in panoramic video: a deep reinforcement learning approach[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018, 41(11): 2693-2708.
[67] XU M, YANG L, TAO X, et al. Saliency prediction on omnidirectional image with generative adversarial imitation learning[J]. IEEE Transactions on Image Processing, 2021, 30: 2087-2102.
[68] ZHU Y, ZHAI G, YANG Y, et al. Viewing behavior supported visual saliency predictor for 360 degree videos[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2021, 32(7): 4188-4201.
[69] GUIMARD Q, ROBERT F, BAUCE C, et al. On the link between emotion, attention and content in virtual immersive environments[C]//Proceedings of the 2022 IEEE International Conference on Image Processing (ICIP), 2022: 2521-2525.
[70] YANG L, XU M, GUO Y, et al. Hierarchical Bayesian LSTM for head trajectory prediction on omnidirectional images[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021, 44(11): 7563-7580.
[71] XU Z, ZHANG X, ZHANG K, et al. Probabilistic viewport adaptive streaming for 360-degree videos[C]//Proceedings of the 2018 IEEE International Symposium on Circuits and Systems (ISCAS), 2018: 1-5.
[72] JIANG X, NAAS S A, CHIANG Y H, et al. SVP: sinusoidal viewport prediction for 360-degree video streaming[J]. IEEE Access, 2020, 8: 164471-164481.
[73] HOU X, DEY S, ZHANG J, et al. Predictive view generation to enable mobile 360-degree and VR experiences[C]//Proceedings of the 2018 Morning Workshop on Virtual Reality and Augmented Reality Network. New York, NY, USA: Association for Computing Machinery, 2018: 20-26.
[74] YU J, LIU Y. Field-of-view prediction in 360-degree videos with attention-based neural encoder-decoder networks[C]//Proceedings of the 11th ACM Workshop on Immersive Mixed and Virtual Environment Systems, 2019: 37-42.
[75] NGUYEN H, DAO T N, PHAM N S, et al. An accurate viewport estimation method for 360 video streaming using deep learning[J]. EAI Endorsed Transactions on Industrial Networks and Intelligent Systems, 2022, 9(4): e2.
[76] LU Y, ZHU Y, WANG Z. Personalized 360-degree video streaming: a meta-learning approach[C]//Proceedings of the 30th ACM International Conference on Multimedia. New York, NY, USA: Association for Computing Machinery, 2022: 3143-3151.
[77] JIANG Y, POULARAKIS K, KIEDANSKI D, et al. Robust and resource-efficient machine learning aided viewport prediction in virtual reality[C]//Proceedings of the IEEE International Conference on Big Data (Big Data), 2022: 1002-1013.
[78] THRUN S, PRATT L. Learning to learn[M]. [s.l.]: Springer Science & Business Media, 2012.
[79] BAN Y, XIE L, XU Z, et al. Cub360: exploiting cross-users behaviors for viewport prediction in 360 video adaptive streaming[C]//Proceedings of the 2018 IEEE International Conference on Multimedia and Expo (ICME), 2018: 1-6.
[80] XIE L, ZHANG X, GUO Z. CLS: a cross-user learning based system for improving QoE in 360-degree video adaptive streaming[C]//Proceedings of the 26th ACM International Conference on Multimedia. New York, NY, USA: Association for Computing Machinery, 2018: 564-572.
[81] PETRANGELI S, SIMON G, SWAMINATHAN V. Trajectory-based viewport prediction for 360-degree virtual reality videos[C]//Proceedings of the 2018 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR), 2018: 157-160.
[82] FU J, CHEN Z, CHEN X, et al. Sequential reinforced 360-degree video adaptive streaming with cross-user attentive network[J]. IEEE Transactions on Broadcasting, 2021, 67(2): 383-394.
[83] YANG J, LUO J, WANG J, et al. CMUVP: cooperative multicast and unicast with viewport prediction for VR video streaming in 5G H-CRAN[J]. IEEE Access, 2019, 7: 134187-134197.
[84] ROSSI S, DE SIMONE F, FROSSARD P, et al. Spherical clustering of users navigating 360° content[C]//Proceedings of the 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2019: 4020-4024.
[85] NASRABADI A T, SAMIEI A, PRAKASH R. Viewport prediction for 360° videos: a clustering approach[C]//Proceedings of the 30th ACM Workshop on Network and Operating Systems Support for Digital Audio and Video. New York, NY, USA: Association for Computing Machinery, 2020: 34-39.
[86] CHEN J, LUO X, HU M, et al. Sparkle: user-aware viewport prediction in 360-degree video streaming[J]. IEEE Transactions on Multimedia, 2021, 23: 3853-3866.
[87] DONG P, SHEN R, XIE X, et al. Predicting long-term field of view in 360-degree video streaming[J]. IEEE Network, 2022(1): 1-8.
[88] VAN DAMME S, VEGA M T, DE TURCK F. Machine learning based content-agnostic viewport prediction for 360-degree video[J]. ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM), 2022, 18(2): 1-24.
[89] ZHOU Q, YANG Z, GUO H, et al. 360BroadView: viewer management for viewport prediction in 360-degree video live broadcast[C]//Proceedings of the 4th ACM International Conference on Multimedia in Asia. New York, NY, USA: Association for Computing Machinery, 2022.
[90] ZHANG R, LIU J, LIU F, et al. Buffer-aware virtual reality video streaming with personalized and private viewport prediction[J]. IEEE Journal on Selected Areas in Communications, 2022, 40(2): 694-709.
[91] CHAO F Y, OZCINAR C, SMOLIC A. Privacy-preserving viewport prediction using federated learning for 360° live video streaming[C]//Proceedings of the IEEE 24th International Workshop on Multimedia Signal Processing (MMSP), 2022: 1-6.
[92] BORJI A, CHENG M M, HOU Q, et al. Salient object detection: a survey[J]. Computational Visual Media, 2019, 5: 117-150.
[93] CHAO F Y, ZHANG L, HAMIDOUCHE W, et al. A multi-FoV viewport-based visual saliency model using adaptive weighting losses for 360° images[J]. IEEE Transactions on Multimedia, 2021, 23: 1811-1826.
[94] CHEN D, QING C, XU X, et al. SalBiNet360: saliency prediction on 360° images with local-global bifurcated deep network[C]//Proceedings of the IEEE Conference on Virtual Reality and 3D User Interfaces (VR), 2020: 92-100.
[95] COHEN T S, GEIGER M, KÖHLER J, et al. Spherical CNNs[J]. arXiv:1801.10130, 2018.
[96] COORS B, CONDURACHE A P, GEIGER A. SphereNet: learning spherical representations for detection and classification in omnidirectional images[C]//Proceedings of the European Conference on Computer Vision (ECCV), 2018: 518-533.
[97] LV H, YANG Q, LI C, et al. SaLGCN: saliency prediction for 360-degree images based on spherical graph convolutional networks[C]//Proceedings of the 28th ACM International Conference on Multimedia. New York, NY, USA: Association for Computing Machinery, 2020: 682-690.
[98] ZHAO P, ZHANG Y, BIAN K, et al. LadderNet: knowledge transfer based viewpoint prediction in 360° video[C]//Proceedings of the 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2019: 1657-1661.
[99] XU Y, ZHANG Z, GAO S. Spherical DNNs and their applications in 360° images and videos[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022, 44(10): 7235-7252.
[100] FAN C L, YEN S C, HUANG C Y, et al. Optimizing fixation prediction using recurrent neural networks for 360° video streaming in head-mounted virtual reality[J]. IEEE Transactions on Multimedia, 2020, 22(3): 744-759.
[101] NGUYEN A, YAN Z, NAHRSTEDT K. Your attention is unique: detecting 360-degree video saliency in head-mounted display for head movement prediction[C]//Proceedings of the 26th ACM International Conference on Multimedia. New York, NY, USA: Association for Computing Machinery, 2018: 1190-1198.
[102] FENG X, LIU Y, WEI S. LiveDeep: online viewport prediction for live virtual reality streaming using lifelong deep learning[C]//Proceedings of the IEEE Conference on Virtual Reality and 3D User Interfaces (VR), 2020: 800-808.
[103] HU Q, ZHOU J, ZHANG X, et al. Viewport-adaptive 360-degree video coding[J]. Multimedia Tools and Applications, 2020, 79: 12205-12226.
[104] AMBADKAR T, MAZUMDAR P. Deep reinforcement learning approach to predict head movement in 360 videos[J]. Electronic Imaging, 2022, 34: 1-5.
[105] LI C, ZHANG W, LIU Y, et al. Very long term field of view prediction for 360-degree video streaming[C]//Proceedings of the IEEE Conference on Multimedia Information Processing and Retrieval (MIPR), 2019: 297-302.
[106] ZHANG X, CHEUNG G, ZHAO Y, et al. Graph learning based head movement prediction for interactive 360 video streaming[J]. IEEE Transactions on Image Processing, 2021, 30: 4622-4636.
[107] LI J, HAN L, ZHANG C, et al. Spherical convolution empowered viewport prediction in 360 video multicast with limited FoV feedback[J]. ACM Transactions on Multimedia Computing, Communications and Applications, 2023, 19(1): 1-23.
[108] ZHANG L, XU W, LU D, et al. MFVP: mobile-friendly viewport prediction for live 360-degree video streaming[C]//Proceedings of the IEEE International Conference on Multimedia and Expo (ICME), 2022.
[109] WANG M, PENG S, CHEN X, et al. CoLive: an edge-assisted online learning framework for viewport prediction in 360° live streaming[C]//Proceedings of the 2022 IEEE International Conference on Multimedia and Expo (ICME), 2022.
[110] LENG Y, CHEN C C, SUN Q, et al. Semantic-aware virtual reality video streaming[C]//Proceedings of the 9th Asia-Pacific Workshop on Systems. New York, NY, USA: Association for Computing Machinery, 2018.
[111] FENG X, SWAMINATHAN V, WEI S. Viewport prediction for live 360-degree mobile video streaming using user-content hybrid motion tracking[J]. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 2019, 3(2): 1-22.
[112] FENG X, BAO Z, WEI S. LiveObj: object semantics-based viewport prediction for live mobile virtual reality streaming[J]. IEEE Transactions on Visualization and Computer Graphics, 2021, 27(5): 2736-2745.
[113] JING C, DUC T N, TAN P X, et al. Subtitle-based viewport prediction for 360-degree virtual tourism video[C]//Proceedings of the 13th International Conference on Information, Intelligence, Systems & Applications (IISA), 2022: 1-8.
[114] DOAN L, DUC T N, JING C, et al. Automatic keyword extraction for viewport prediction of 360-degree virtual tourism video[C]//Proceedings of the IEEE International Conference on Computing (ICOCO), 2022: 386-391.
[115] ZHANG Y, CHAO F Y, HAMIDOUCHE W, et al. PAV-SOD: a new task towards panoramic audiovisual saliency detection[J]. ACM Transactions on Multimedia Computing, Communications and Applications, 2023, 19(3): 1-26.