Keywords: Reinforcement Learning, Building Management Systems, controls, Agent-based models
It is increasingly common to design buildings with advanced sensing and control systems to improve energy efficiency and indoor air quality, which in turn affect occupant health and productivity. However, progress in making building automation systems "intelligent" has been limited: the performance of such buildings is often constrained by reactive control strategies based primarily on setpoint limits and fixed operating schedules. The complexity of building control problems motivates the application of state-of-the-art software engineering methods and techniques. Agent-based models (ABMs) are well suited to controlling complex engineered systems such as those employed in building heating, ventilation, and air-conditioning (HVAC) systems. In this paradigm, a collection of interacting autonomous components (i.e., agents) adapts and makes decisions in a changing environment. There is a growing body of literature on adaptive agents in ABMs across many industries, but few studies have examined the compatibility of ABMs with artificial intelligence (AI) optimization approaches; in most cases, conventional optimization techniques, such as mixed-integer linear programming and gradient descent, have been used to find an optimal solution. This paper explores the use of an actor-critic, model-free algorithm based on a deterministic policy gradient that provides continuous control to generate the desired supply air temperature. The case study develops a thermal energy storage (TES) agent that determines the optimal valve position to manage the temperature of the cooling-water flow, and was carried out using the Intelligent Building Agents Laboratory at the National Institute of Standards and Technology. Future work will use multiple agents (e.g., air handling unit, TES, chiller) acting in cooperation or competition.
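To make the actor-critic idea concrete, the sketch below applies a deterministic policy gradient to a toy version of the TES valve-position problem. Everything here is an illustrative assumption, not the laboratory's actual system or the paper's algorithm: the static plant map `water_temp`, the temperature setpoint, the stateless scalar policy, and the quadratic least-squares critic are all stand-ins chosen so the example stays self-contained. The actor follows the critic's gradient with respect to the action (the core of a deterministic policy gradient), and learning is model-free in the sense that only sampled action-reward pairs are used.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical static plant model (illustrative only): cooling-water
# temperature as a function of the TES valve position a in [0, 1].
def water_temp(a):
    return 10.0 + 8.0 * a  # degC

TARGET = 14.8  # degC setpoint; the optimal valve position is then 0.6

def reward(a):
    # Penalize squared deviation from the cooling-water setpoint.
    return -(water_temp(a) - TARGET) ** 2

w = 0.5           # deterministic policy: a = w (stateless, for brevity)
actor_lr = 0.005  # actor step size
for step in range(50):
    # Explore around the current policy and observe rewards (model-free).
    acts = np.clip(w + 0.2 * rng.standard_normal(32), 0.0, 1.0)
    rews = reward(acts)
    # Critic: least-squares fit of Q(a) ≈ v0 + v1*a + v2*a^2.
    X = np.stack([np.ones_like(acts), acts, acts ** 2], axis=1)
    v, *_ = np.linalg.lstsq(X, rews, rcond=None)
    # Deterministic policy gradient: step the actor along dQ/da at a = w.
    w += actor_lr * (v[1] + 2.0 * v[2] * w)

print(round(w, 3))  # prints 0.6, the optimal valve position
```

Because the toy reward is exactly quadratic in the action, the fitted critic is exact and the actor converges to the optimum; in a realistic HVAC setting the critic would instead be a learned function of both state (e.g., tank and loop temperatures) and action, updated by temporal-difference learning rather than batch regression.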