Dynamic Computation Offloading Based on Deep Reinforcement Learning

Cheng, BaiChuan and Zhang, ZhiLong and Liu, DanPu (2019) Dynamic Computation Offloading Based on Deep Reinforcement Learning. In: Mobimedia 2019, 29-30 Oct 2019, Weihai, China.

Text (PDF): eai.29-6-2019.2282108.pdf (Published Version)
Available under License: Creative Commons Attribution No Derivatives.


Mobile edge computing (MEC) provides computation capability at the edge of the wireless network. To reduce execution delay, computation-intensive multimedia tasks can be offloaded from user equipments (UEs) to the MEC server. How to allocate the computational and wireless resources is one of the key issues in guaranteeing quality of service, and is especially challenging when tasks are generated dynamically. In this paper, we address this problem. To minimize the sum execution delay of multiple users, we jointly optimize the offloading decision and the allocation of both computational and wireless resources. We propose a deep policy gradient (DPG) algorithm based on deep reinforcement learning. Simulation results show that our proposed DPG method achieves lower latency than the baselines under different numbers of users, computation capacities and wireless bandwidths.
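The paper's DPG algorithm itself is not reproduced on this page; as a rough illustration of the underlying idea, the following minimal sketch trains a REINFORCE-style Bernoulli policy over binary offloading decisions (execute locally vs. offload to the MEC server). All delay values, the congestion model, and the single-layer policy are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy per-user delays (assumed values, not from the paper)
local_delay = np.array([4.0, 1.0, 3.0, 0.5])    # execute on the UE
offload_delay = np.array([1.5, 2.0, 1.0, 2.5])  # transmit + execute on MEC server
N = len(local_delay)

theta = np.zeros(N)  # per-user logits of the Bernoulli "offload" policy

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def episode_delay(actions):
    # Sum execution delay; offloaded tasks share the MEC server,
    # modeled here as a crude congestion factor on the offload delay.
    k = actions.sum()
    d = np.where(actions == 1, offload_delay * (1 + 0.1 * k), local_delay)
    return d.sum()

lr = 0.1
baseline = -episode_delay(np.zeros(N, dtype=int))  # all-local reward as baseline
for _ in range(2000):
    p = sigmoid(theta)
    a = (rng.random(N) < p).astype(int)   # sample offloading decisions
    reward = -episode_delay(a)            # minimizing delay = maximizing reward
    baseline = 0.95 * baseline + 0.05 * reward
    # REINFORCE: grad of log-probability for a Bernoulli policy is (a - p)
    theta += lr * (a - p) * (reward - baseline)

learned = (sigmoid(theta) > 0.5).astype(int)
print("offload decisions:", learned, "delay:", episode_delay(learned))
```

With these toy numbers, users whose local execution is slow (users 0 and 2) should learn to offload, while users with fast local execution keep their tasks. The actual paper replaces the per-user logits with a deep network and jointly allocates computational and wireless resources, which this sketch omits.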

Item Type: Conference or Workshop Item (Paper)
Uncontrolled Keywords: computation offloading; mobile edge computing; reinforcement learning; policy gradients
Subjects: Q Science > QA Mathematics > QA75 Electronic computers. Computer science
Depositing User: EAI Editor I.
Date Deposited: 10 Sep 2020 08:50
Last Modified: 10 Sep 2020 08:50
URI: https://eprints.eudl.eu/id/eprint/167
