[1] DUAN H, ZHENG Y, WANG C, et al. Treasure collection on foggy islands: building secure network archives for Internet of things[J]. IEEE Internet of Things Journal, 2019, 6(2): 2637-2650.
[2] CHENG Z, MIN M, LIWANG M, et al. Multiagent DDPG-based joint task partitioning and power control in fog computing networks[J]. IEEE Internet of Things Journal, 2022, 9(1): 104-116.
[3] YAN J, BI S, ZHANG Y. Offloading and resource allocation with general task graph in mobile edge computing: a deep reinforcement learning approach[J]. IEEE Transactions on Wireless Communications, 2020, 19(8): 5404-5419.
[4] XIAO L, LU X, XU T, et al. Reinforcement learning based mobile offloading for edge computing against jamming and interference[J]. IEEE Transactions on Communications, 2020, 68(10): 6114-6126.
[5] WU Y C, DINH T Q, FU Y, et al. A hybrid DQN and optimization approach for strategy and resource allocation in MEC networks[J]. IEEE Transactions on Wireless Communications, 2021, 20(7): 4282-4295.
[6] SONG S, FANG Z, ZHANG Z, et al. Semi-online computational offloading by dueling deep-Q network for user behavior prediction[J]. IEEE Access, 2020, 8: 118192-118204.
[7] YU L, ZHENG J, WU Y, et al. A DQN-based joint spectrum and computing resource allocation algorithm for MEC networks[C]//Proceedings of the 2022 IEEE Global Communications Conference, 2022.
[8] CHENG N, LYU F, QUAN W, et al. Space/aerial-assisted computing offloading for IoT applications: a learning-based approach[J]. IEEE Journal on Selected Areas in Communications, 2019, 37(5): 1117-1129.
[9] LU H, GU C, LUO F, et al. Optimization of lightweight task offloading strategy for mobile edge computing based on deep reinforcement learning[J]. Future Generation Computer Systems, 2020, 102: 847-861.
[10] GUO H, LIU J, REN J, et al. Intelligent task offloading in vehicular edge computing networks[J]. IEEE Wireless Communications, 2020, 27(4): 126-132.
[11] MIN M, XIAO L, CHEN Y, et al. Learning-based computation offloading for IoT devices with energy harvesting[J]. IEEE Transactions on Vehicular Technology, 2019, 68(2): 1930-1941.
[12] LI M, YU F R, SI P, et al. Resource optimization for delay-tolerant data in blockchain-enabled IoT with edge computing: a deep reinforcement learning approach[J]. IEEE Internet of Things Journal, 2020, 7(10): 9399-9412.
[13] NATH S, WU J. Deep reinforcement learning for dynamic computation offloading and resource allocation in cache-assisted mobile edge computing systems[J]. Intelligent and Converged Networks, 2020, 1(2): 181-198.
[14] CHEN Z, WANG X. Decentralized computation offloading for multi-user mobile edge computing: a deep reinforcement learning approach[J]. EURASIP Journal on Wireless Communications and Networking, 2020(1): 1-21.
[15] CHEN J, WU Z L. Dynamic computation offloading with energy harvesting devices: a graph-based deep reinforcement learning approach[J]. IEEE Communications Letters, 2021, 25(9): 2968-2972.
[16] CHEN J, XING H, XIAO Z, et al. A DRL agent for jointly optimizing computation offloading and resource allocation in MEC[J]. IEEE Internet of Things Journal, 2021, 8(24): 17508-17524.
[17] ALE L, ZHANG N, FANG X, et al. Delay-aware and energy-efficient computation offloading in mobile edge computing using deep reinforcement learning[J]. IEEE Transactions on Cognitive Communications and Networking, 2021, 7(3): 881-892.
[18] XIONG J, WANG Q, YANG Z, et al. Parametrized deep Q-networks learning: reinforcement learning with discrete-continuous hybrid action space[J]. arXiv preprint arXiv:1810.06394, 2018.
[19] FU H, TANG H, HAO J, et al. Deep multi-agent reinforcement learning with discrete-continuous hybrid action spaces[C]//Proceedings of the 28th International Joint Conference on Artificial Intelligence, 2019: 2329-2335.
[20] RAZA S, LIN M. Constructive policy: reinforcement learning approach for connected multi-agent systems[C]//Proceedings of the 2019 IEEE 15th International Conference on Automation Science and Engineering, 2019.
[21] MUNOZ O, PASCUAL-ISERTE A, VIDAL J. Optimization of radio and computational resources for energy efficiency in latency-constrained application offloading[J]. IEEE Transactions on Vehicular Technology, 2015, 64(10): 4738-4755.
[22] MOLINA M, MUNOZ O, PASCUAL-ISERTE A, et al. Joint scheduling of communication and computation resources in multiuser wireless application offloading[C]//Proceedings of the 2014 IEEE 25th Annual International Symposium on Personal, Indoor, and Mobile Radio Communication, 2014: 1093-1098.
[23] ZHANG J, DU J, SHEN Y, et al. Dynamic computation offloading with energy harvesting devices: a hybrid-decision-based deep reinforcement learning approach[J]. IEEE Internet of Things Journal, 2020, 7(10): 9303-9317.