Reinforcement Learning Building Control: An Online Approach With Guided Exploration Using Surrogate Models
Source: ASME Journal of Engineering for Sustainable Buildings and Cities, 2024, Volume 5, Issue 1, Page 11005-1
DOI: 10.1115/1.4064842
Publisher: The American Society of Mechanical Engineers (ASME)
Abstract: With the incorporation of emerging technologies into buildings, including solar photovoltaics, electric vehicles, battery energy storage, smart devices, Internet-of-Things devices, and sensors, desirable control objectives are becoming increasingly complex, calling for advanced control approaches. Reinforcement learning (RL) is a powerful method for this purpose: it can adapt and learn from environmental interaction, but it can take a long time to learn and can be unstable initially because of its limited knowledge of the environment. In this research, we propose an online RL approach for buildings in which data-driven surrogate models guide the RL agent during its early exploratory training stage, helping the controller learn a near-optimal policy faster and with more stable training progress than a traditional direct plug-and-learn online RL approach. The agents are assisted in their learning and action selection with information gained from the surrogate models, which generate multiple artificial trajectories starting from the current state. An exploration of various surrogate model-assisted training methods revealed that models focusing on artificial trajectories around rule-based controls yielded the most stable performance, whereas models employing random exploration with a one-step look-ahead approach demonstrated superior overall performance.
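The one-step look-ahead guidance described in the abstract can be sketched as follows; this is a minimal illustration, not the paper's implementation, and the toy surrogate dynamics, candidate action set, and reward here are hypothetical stand-ins for the data-driven building models the authors train:

```python
import random

def surrogate_model(state, action):
    """Hypothetical surrogate: predicts (next_state, reward) for a
    state-action pair. A stand-in for a trained data-driven model."""
    next_state = state + 0.1 * (action - state)  # toy building dynamics
    reward = -abs(next_state)                    # toy cost: stay near setpoint 0
    return next_state, reward

def guided_action(state, candidate_actions, epsilon=0.1):
    """One-step look-ahead guided exploration: score each candidate
    action with the surrogate's predicted reward and pick the best,
    occasionally falling back to random exploration."""
    if random.random() < epsilon:
        return random.choice(candidate_actions)
    return max(candidate_actions,
               key=lambda a: surrogate_model(state, a)[1])

# From state 1.0, the action pulling the state toward 0 is preferred.
best = guided_action(1.0, [-1.0, 0.0, 1.0], epsilon=0.0)
```

The surrogate lets the agent rank actions before committing to one in the real building, which is the source of the faster, more stable early training reported in the paper.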
contributor author | Dey, Sourav
contributor author | Henze, Gregor P.
date accessioned | 2024-04-24T22:35:48Z
date available | 2024-04-24T22:35:48Z
date copyright | 2024-03-12
date issued | 2024
identifier issn | 2642-6641
identifier other | jesbc_5_1_011005.pdf
identifier uri | http://yetl.yabesh.ir/yetl1/handle/yetl/4295506
publisher | The American Society of Mechanical Engineers (ASME)
title | Reinforcement Learning Building Control: An Online Approach With Guided Exploration Using Surrogate Models
type | Journal Paper
journal volume | 5
journal issue | 1
journal title | ASME Journal of Engineering for Sustainable Buildings and Cities
identifier doi | 10.1115/1.4064842
journal firstpage | 11005-1
journal lastpage | 11005-12
page | 12
tree | ASME Journal of Engineering for Sustainable Buildings and Cities; 2024; Volume 5; Issue 1
contenttype | Fulltext