DOI

10.1109/ACCESS.2021.3073902

Date of this Version

4-19-2021

Keywords

Robots, Heuristic algorithms, Cloud computing, Robot sensing systems, Task analysis, Reinforcement learning, Mobile handsets

Abstract

Robots come with a variety of computing capabilities, and running computationally intensive applications on robots is sometimes challenging because of limited onboard computing, storage, and power. Meanwhile, cloud computing provides on-demand computing capabilities, so combining robots with cloud computing can overcome the resource constraints robots face. The key to effective task offloading is a solution that does not underutilize the robot's own computational capabilities and makes decisions based on crucial cost parameters such as latency and CPU availability. In this paper, we formulate the application offloading problem as a Markov decision process and propose a deep reinforcement learning-based deep Q-network (DQN) approach. The state space is formulated under the assumption that input data size directly impacts application execution time. The proposed algorithm is designed as a continuous task problem with a discrete action space; i.e., we apply a choice of action at each time step and use the corresponding outcome to train the DQN to acquire the maximum possible reward. To validate the proposed algorithm, we designed and implemented a robot navigation testbed. The results demonstrate that, for the given state-space values, the proposed algorithm learns to take appropriate actions to reduce application latency and learns a policy that selects actions based on input data size. Finally, we compared the proposed DQN algorithm with a long short-term memory (LSTM) algorithm in terms of accuracy. When trained and validated on the same dataset, the proposed DQN algorithm obtained at least 9 percentage points greater accuracy than the LSTM algorithm.
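To make the formulation concrete, the following is a minimal sketch of the offloading decision as a reinforcement learning problem. It uses tabular Q-learning as a simplified stand-in for the paper's DQN, with discretized input data sizes as states, two actions (run locally vs. offload to the cloud), and reward equal to negative latency. The latency model, state range, and all hyperparameters here are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Sketch: tabular Q-learning stand-in for the paper's DQN-based offloading.
# States: discretized input data sizes; actions: 0 = run locally, 1 = offload.
# Reward is negative latency; the latency model below is an assumption.

SIZES = np.arange(1, 9)          # discretized input-data-size states (assumed range)
N_ACTIONS = 2                    # 0: local execution, 1: offload to cloud

def latency(size, action):
    """Illustrative latency model (not from the paper)."""
    if action == 0:
        return 2.0 * size                # local: cost grows quickly with input size
    return 3.5 + 0.5 * size              # offload: network overhead + cloud compute

def train(episodes=5000, alpha=0.1, eps=0.2, seed=0):
    rng = np.random.default_rng(seed)
    q = np.zeros((len(SIZES), N_ACTIONS))
    for _ in range(episodes):
        s = rng.integers(len(SIZES))                 # random input size each step
        if rng.random() < eps:                       # epsilon-greedy exploration
            a = int(rng.integers(N_ACTIONS))
        else:
            a = int(np.argmax(q[s]))
        r = -latency(SIZES[s], a)                    # reward: negative latency
        q[s, a] += alpha * (r - q[s, a])             # one-step (bandit-style) update
    return q

q = train()
policy = np.argmax(q, axis=1)
print(policy)  # local (0) for small inputs, offload (1) for large, under this model
```

Under this assumed latency model, the learned policy mirrors the behavior reported in the abstract: the agent executes small inputs locally (avoiding network overhead) and offloads large inputs (exploiting faster cloud compute), i.e., the decision is driven by input data size.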

Comments

This is the publisher PDF of M. Penmetcha and B.-C. Min, "DRL-Based Dynamic Computational Offloading Method for Cloud Robotics," IEEE Access, vol. 9, 2021. The article is distributed under a CC-BY license and is available at DOI: 10.1109/ACCESS.2021.3073902.
