Automated Design of Energy Efficient Control Strategies for Building Clusters Using Reinforcement Learning
Source: Journal of Mechanical Design, 2019, Volume 141, Issue 2, Page 021704
DOI: 10.1115/1.4041629
Publisher: The American Society of Mechanical Engineers (ASME)
Abstract: The control of shared energy assets within building clusters has traditionally been confined to a discrete action space, owing in part to a computationally intractable decision space. In this work, we leverage the current state of the art in reinforcement learning (RL) for continuous control tasks, the deep deterministic policy gradient (DDPG) algorithm, to address this limitation. The goals of this paper are twofold: (i) to design an efficient charge/discharge dispatch policy for a shared battery system within a building cluster and (ii) to address the continuous domain task of determining how much energy should be charged or discharged at each decision cycle. Experimentally, our results demonstrate the ability to exploit factors such as energy arbitrage and to leverage the continuous action space for demand peak minimization. This approach is shown to be computationally tractable, achieving efficient results after only 5 h of simulation. Additionally, the agent showed an ability to adapt to different building clusters, designing unique control strategies to address the energy demands of the clusters studied.
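The continuous-action dispatch setting described in the abstract can be illustrated with a minimal sketch. This is an illustrative toy, not the paper's actual simulation environment or DDPG implementation: the battery capacity, power limits, demand profile, and the simple threshold policy below are all assumptions made here for clarity.

```python
# Minimal sketch of a shared-battery dispatch task with a continuous action
# space, as motivated in the abstract. All parameter values, the demand
# profile, and the threshold policy are illustrative assumptions.

class SharedBattery:
    """Shared battery for a building cluster.

    Action a in [-1, 1]: fraction of maximum power to charge (a > 0)
    or discharge (a < 0) over one decision cycle (here, one hour).
    """

    def __init__(self, capacity_kwh=100.0, max_power_kw=25.0):
        self.capacity = capacity_kwh
        self.max_power = max_power_kw
        self.soc = 0.5 * capacity_kwh  # state of charge, start half full

    def step(self, action, cluster_demand_kw):
        # Clip to the continuous action space, then to physical SoC limits.
        action = max(-1.0, min(1.0, action))
        power = action * self.max_power          # +charge / -discharge (kW)
        power = max(-self.soc, min(self.capacity - self.soc, power))
        self.soc += power                        # 1 h cycle: kW -> kWh
        # Net demand seen by the grid: charging adds load, discharging shaves it.
        return cluster_demand_kw + power


battery = SharedBattery()
# Toy peak-shaving policy (assumption): discharge hard when cluster demand
# is high, recharge when it is low. A DDPG actor would instead output the
# continuous action directly from the observed state.
demands = [10.0, 40.0, 15.0]
net = [battery.step(-0.8 if d > 30 else 0.4, d) for d in demands]
```

Running the sketch, the 40 kW demand peak is shaved to 25 kW by discharging the battery, while the low-demand cycles absorb the recharge load, which is the peak-minimization behavior a learned continuous policy would be trained to produce.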
contributor author | Odonkor, Philip | |
contributor author | Lewis, Kemper | |
date accessioned | 2019-03-17T11:06:53Z | |
date available | 2019-03-17T11:06:53Z | |
date copyright | 2018-12-20 | |
date issued | 2019 | |
identifier issn | 1050-0472 | |
identifier other | md_141_02_021704.pdf | |
identifier uri | http://yetl.yabesh.ir/yetl1/handle/yetl/4256676 | |
publisher | The American Society of Mechanical Engineers (ASME) | |
title | Automated Design of Energy Efficient Control Strategies for Building Clusters Using Reinforcement Learning | |
type | Journal Paper | |
journal volume | 141 | |
journal issue | 2 | |
journal title | Journal of Mechanical Design | |
identifier doi | 10.1115/1.4041629 | |
journal firstpage | 021704 | |
journal lastpage | 021704-9 | |
tree | Journal of Mechanical Design; 2019; Volume 141; Issue 2 | |
contenttype | Fulltext |