Reinforcement Learning memory adaptation using neural networks
Abstract
Reinforcement learning uses experience gained from interaction with an environment to learn appropriate actions for specific states of a model. This knowledge is typically stored in matrix form for easy reference, and storing these numerous state-action values requires a large memory bank. In this work, neural networks are used for function approximation, mapping the relationship between a desired action and the state configuration of a robotic manipulator. Each network maps a section of the workspace, reducing the memory that must be stored and acting as an aid to the reinforcement learning algorithm. The neural networks are trained on sets of similar motions, represented as state-action pairs, and the approximations they generate are essentially interpolated trajectories. Because the manipulator's behavior is localized, multiple networks are trained on separate sections of the workspace. Generating interpolated motions in this way reduces the amount of learning the reinforcement learning algorithm must perform by more than 85% across these multiple networks. Several implementations of interpolation are explored: first a trivial solution through matrix manipulation, then two different neural network structures. The feed-forward and radial basis function networks provide similar interpolation capability, with the latter being trained significantly faster.
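The following is a minimal sketch, not the thesis's actual implementation, of the general idea the abstract describes: replacing a stored state-action lookup table with a radial basis function network that interpolates actions within one workspace section. The state dimensions, training data, number of centers, and kernel width below are illustrative assumptions; the fast training follows from the fact that an RBF network with fixed centers reduces to a single linear least-squares solve for the output weights.

```python
import numpy as np

# Hypothetical example: approximate a state -> action mapping for a 2-joint
# manipulator with a radial basis function (RBF) network instead of storing
# every state-action pair in a matrix. All values here are assumptions for
# illustration, not the thesis's configuration.

rng = np.random.default_rng(0)

# Recorded state-action pairs from a set of similar motions in one
# workspace section (stand-in data for the learned policy).
states = rng.uniform(-np.pi, np.pi, size=(200, 2))        # (N, state_dim)
actions = np.column_stack([np.sin(states[:, 0]),
                           np.cos(states[:, 1])])          # (N, action_dim)

# Fixed Gaussian centers spread over the workspace section.
centers = rng.uniform(-np.pi, np.pi, size=(25, 2))         # (M, state_dim)
width = 1.0                                                 # kernel width

def design_matrix(x):
    """Gaussian activation of every center for every input state."""
    d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

# Training: one linear least-squares solve for the output weights,
# which is why an RBF network trains much faster than backpropagating
# a feed-forward network on the same data.
Phi = design_matrix(states)                                 # (N, M)
W, *_ = np.linalg.lstsq(Phi, actions, rcond=None)           # (M, action_dim)

def predict(x):
    """Interpolated action for an unseen state in the same workspace section."""
    return design_matrix(np.atleast_2d(x)) @ W

print(predict(np.array([0.3, -1.2])))
```

In this sketch the network, rather than a stored matrix of state-action values, supplies an interpolated action for states that were never visited during learning, which is the memory-reduction role the abstract attributes to the trained networks.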
Degree
M.S.M.E.
Advisors
Meckl, Purdue University.
Subject Area
Mechanical engineering, Computer science