Decentralized and partially decentralized Multi-Agent Reinforcement Learning
Abstract
Multi-Agent systems naturally arise in a variety of domains such as robotics, distributed control, and communication systems. The dynamic and complex nature of these systems makes it difficult for agents to achieve optimal performance with predefined strategies. Instead, the agents can perform better by adapting their behavior and learning optimal strategies as the system evolves. We use the Reinforcement Learning paradigm for learning optimal behavior in Multi-Agent systems. A reinforcement learning agent learns by trial-and-error interaction with its environment. A central component of Multi-Agent Reinforcement Learning systems is the inter-agent communication performed to learn optimal solutions. In this thesis, we study different patterns of communication and their use in different configurations of Multi-Agent systems. Communication between agents can be completely centralized, completely decentralized, or partially decentralized. The interaction between the agents is modeled using notions from Game theory. Thus, the agents could interact with each other in a fully cooperative, fully competitive, or mixed setting. In this thesis, we propose novel learning algorithms for Multi-Agent Reinforcement Learning in the context of Learning Automata. By combining different modes of communication with the various types of game configurations, we obtain a spectrum of learning algorithms. We study the applications of these algorithms for solving various optimization and control problems.
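To make the Learning Automata setting concrete, the following is a minimal sketch (not from the dissertation) of a single automaton using the classical linear reward-inaction (L_R-I) update: when the environment rewards the chosen action, probability mass shifts toward it; on a penalty, the probabilities are left unchanged. The learning rate, action count, and reward probabilities below are illustrative assumptions.

```python
import random

def lri_update(probs, action, reward, lr=0.1):
    """Linear reward-inaction (L_R-I) update.
    On reward: move probability mass toward the chosen action.
    On penalty: leave the action probabilities unchanged (inaction)."""
    if reward:
        probs = [p + lr * (1.0 - p) if i == action else p * (1.0 - lr)
                 for i, p in enumerate(probs)]
    return probs

# Illustrative two-action automaton in a stationary random environment
# where action 0 is rewarded with probability 0.9 and action 1 with 0.2;
# probs[0] typically approaches 1 as the automaton converges.
random.seed(0)
probs = [0.5, 0.5]
for _ in range(2000):
    action = 0 if random.random() < probs[0] else 1
    reward = random.random() < (0.9 if action == 0 else 0.2)
    probs = lri_update(probs, action, reward)
```

In a multi-agent game, each player would run such an automaton on its own action set, with the reward signal determined by the joint action; the dissertation's algorithms vary how (and how much) the automata communicate while learning.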
Degree
Ph.D.
Advisors
Mukhopadhyay, Purdue University.
Subject Area
Computer science