IEICE Transactions on Communications
Online ISSN : 1745-1345
Print ISSN : 0916-8516
Special Section on Communication Quality in Wireless Networks
A Deep Reinforcement Learning Based Approach for Cost- and Energy-Aware Multi-Flow Mobile Data Offloading
Cheng ZHANG, Zhi LIU, Bo GU, Kyoko YAMORI, Yoshiaki TANAKA

2018 Volume E101.B Issue 7 Pages 1625-1634

Abstract

With the rapid increase in demand for mobile data, mobile network operators are trying to expand wireless network capacity by deploying wireless local area network (LAN) hotspots onto which they can offload mobile traffic. However, such network-centric methods usually do not serve the interests of mobile users (MUs). Given issues such as application deadlines, monetary cost, and energy consumption, how an MU decides whether to offload its traffic to a complementary wireless LAN is an important problem. Previous studies assume that the MU's mobility pattern is known in advance, which is not always true. In this paper, we study the MU's policy for minimizing its monetary cost and energy consumption when the mobility pattern is unknown. We propose using a reinforcement learning technique, the deep Q-network (DQN), to let the MU learn the optimal offloading policy from past experience. In the proposed DQN-based offloading algorithm, the MU's mobility pattern is no longer needed. Furthermore, the MU's remaining-data state is fed directly into the convolutional neural network in the DQN without discretization. This not only eliminates the discretization error present in previous work, but also enables the proposed algorithm to generalize from past experience, which is especially effective when the number of states is large. Extensive simulations are conducted to validate the proposed offloading algorithm.
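As a rough illustration of the learning loop the abstract describes, the following Python sketch implements a standard DQN agent for a simplified offloading decision. The state layout (remaining data, time to deadline, WLAN availability), the three-action set (idle, cellular, WLAN), the small MLP standing in for the paper's convolutional network, and all hyperparameters are illustrative assumptions, not taken from the paper.

```python
# Minimal DQN sketch for a simplified offloading decision.
# Assumptions (not from the paper): state = (remaining data, time to
# deadline, WLAN availability flag); actions = {idle, cellular, WLAN};
# an MLP replaces the paper's CNN; hyperparameters are illustrative.
import random
from collections import deque

import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS = 3, 3            # assumed state/action sizes
GAMMA, EPSILON, BATCH = 0.99, 0.1, 32  # assumed hyperparameters

class QNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS))

    def forward(self, s):
        return self.net(s)

q_net = QNet()
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)  # experience replay buffer

def act(state):
    """Epsilon-greedy action over the continuous (undiscretized) state."""
    if random.random() < EPSILON:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        s = torch.tensor(state, dtype=torch.float32)
        return int(q_net(s).argmax())

def train_step():
    """One gradient step on the standard DQN temporal-difference target."""
    if len(replay) < BATCH:
        return
    # Each replay entry is (state, action, reward, next_state, done).
    s, a, r, s2, done = map(torch.tensor, zip(*random.sample(replay, BATCH)))
    q = q_net(s.float()).gather(1, a.long().unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = r.float() + GAMMA * q_net(s2.float()).max(1).values * (1 - done.float())
    loss = nn.functional.mse_loss(q, target)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
```

In the paper's setting, the per-step reward would encode the negative of the monetary cost and energy consumption of the chosen interface; the separate target network that standard DQN uses to stabilize training is omitted here for brevity.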

© 2018 The Institute of Electronics, Information and Communication Engineers