Carlos Silva, Naercio Magaia, and António Grilo, “Task offloading optimization in Mobile Edge Computing based on Deep Reinforcement Learning”, in the 26th International Conference on Modeling, Analysis and Simulation of Wireless and Mobile Systems.

The Cloud Computing (CC) paradigm has risen in recent years as a solution to the need of computation- and battery-constrained User Equipment (UE) to run increasingly computation-intensive tasks. Nevertheless, given the centralized nature of the CC paradigm, this option introduces significant network congestion and unpredictable communication delays that are unsuitable for real-time applications. To cope with these problems, the Mobile Edge Computing (MEC) concept has been introduced, which brings computation resources closer to the edge of the mobile network in a distributed way. However, since these edge computation resources are limited, this paradigm comes with its own set of challenges that must be solved to make it viable. This work presents a network management agent capable of making offloading decisions for a heterogeneous network of UEs served by a heterogeneous network of MEC servers. The agent acts as the orchestrator of a group of 5G Small Cells (SCeNBs) enhanced with computation and storage capabilities. To solve this high-complexity problem, an Advantage Actor-Critic (A2C) agent is implemented and tested against several baselines. The proposed solution outperforms the baselines by making intelligent decisions that account for the computation, battery, delay, and communication constraints the baselines ignore. The solution is also shown to be scalable, data-efficient, robust, stable, and adjustable, addressing not only overall system performance but also the worst-case scenario.
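To make the A2C-based offloading idea concrete, the sketch below implements a minimal one-step actor-critic agent in Python/NumPy that chooses between local execution and offloading to one of several edge servers, rewarded by negative completion delay. All environment parameters (task sizes, CPU rates, uplink rate, server loads) and the linear actor/critic are illustrative assumptions for a toy setting, not the paper's actual system model, simulator, or network architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

N_SERVERS = 3                      # hypothetical number of MEC servers
N_ACTIONS = N_SERVERS + 1          # action 0 = compute locally on the UE
STATE_DIM = 2 + N_SERVERS          # [task cycles, task bits, server loads]

theta = np.zeros((STATE_DIM, N_ACTIONS))   # actor: linear softmax policy weights
w = np.zeros(STATE_DIM)                    # critic: linear state-value weights
ALPHA_PI, ALPHA_V = 0.05, 0.1              # actor / critic learning rates

def sample_state():
    task = rng.uniform(0.5, 2.0, size=2)           # cycles (Gcycles), size (Mbit)
    loads = rng.uniform(0.0, 1.0, size=N_SERVERS)  # normalized server loads
    return np.concatenate([task, loads])

def delay(state, action):
    cycles, bits = state[0], state[1]
    if action == 0:                        # local execution
        return cycles / 1.0                # 1 Gcycle/s local CPU (assumed)
    load = state[2 + action - 1]
    tx = bits / 10.0                       # 10 Mbit/s uplink (assumed)
    exec_t = cycles / (5.0 * (1.0 - 0.9 * load))   # faster, load-dependent server
    return tx + exec_t

def policy(state):
    logits = state @ theta
    p = np.exp(logits - logits.max())      # numerically stable softmax
    return p / p.sum()

def train(episodes=3000):
    global theta, w
    for _ in range(episodes):
        s = sample_state()
        p = policy(s)
        a = rng.choice(N_ACTIONS, p=p)
        r = -delay(s, a)                   # reward = negative completion delay
        adv = r - s @ w                    # one-step advantage vs. critic value
        w += ALPHA_V * adv * s             # critic: step on squared value error
        grad = -np.outer(s, p)             # actor: grad of log softmax policy
        grad[:, a] += s
        theta += ALPHA_PI * adv * grad

train()
```

After training, acting greedily on the learned policy should, on average, beat the always-local baseline by offloading to lightly loaded servers, mirroring (in miniature) how the paper's agent exploits state information that the baselines ignore.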