
contributor author: Odonkor, Philip
contributor author: Lewis, Kemper
date accessioned: 2019-03-17T11:06:53Z
date available: 2019-03-17T11:06:53Z
date copyright: 2018-12-20
date issued: 2019
identifier issn: 1050-0472
identifier other: md_141_02_021704.pdf
identifier uri: http://yetl.yabesh.ir/yetl1/handle/yetl/4256676
description abstract: The control of shared energy assets within building clusters has traditionally been confined to a discrete action space, owing in part to a computationally intractable decision space. In this work, we leverage the current state of the art in reinforcement learning (RL) for continuous control tasks, the deep deterministic policy gradient (DDPG) algorithm, to address this limitation. The goals of this paper are twofold: (i) to design an efficient charge/discharge dispatch policy for a shared battery system within a building cluster and (ii) to address the continuous-domain task of determining how much energy should be charged or discharged at each decision cycle. Experimentally, our results demonstrate the agent's ability to exploit factors such as energy arbitrage, along with the continuous action space, toward minimizing demand peaks. The approach is shown to be computationally tractable, achieving efficient results after only 5 h of simulation. Additionally, the agent adapted to different building clusters, designing unique control strategies to address the energy demands of each cluster studied.
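
The abstract names DDPG as the continuous-control method. As context, the following is a minimal sketch of a DDPG-style actor-critic update in PyTorch, not the paper's implementation: the state/action dimensions, network sizes, and hyperparameters are illustrative assumptions, and the battery-dispatch environment itself is omitted.

# Minimal DDPG sketch for a continuous charge/discharge action, assuming
# PyTorch. STATE_DIM/ACTION_DIM and all hyperparameters are assumptions
# for illustration, not values from the paper.
import torch
import torch.nn as nn

STATE_DIM = 4    # e.g., hour, cluster demand, battery level, price (assumed)
ACTION_DIM = 1   # continuous charge(-)/discharge(+) rate in [-1, 1]

class Actor(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, ACTION_DIM), nn.Tanh())  # bounded continuous action
    def forward(self, s):
        return self.net(s)

class Critic(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.ReLU(),
            nn.Linear(64, 1))  # Q(s, a)
    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1))

actor, critic = Actor(), Critic()
actor_tgt, critic_tgt = Actor(), Critic()
actor_tgt.load_state_dict(actor.state_dict())
critic_tgt.load_state_dict(critic.state_dict())
a_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)
c_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)
GAMMA, TAU = 0.99, 0.005

def update(batch):
    """One DDPG update from a replay batch of (s, a, r, s') tensors."""
    s, a, r, s2 = batch
    with torch.no_grad():  # bootstrapped target from the target networks
        q_target = r + GAMMA * critic_tgt(s2, actor_tgt(s2))
    c_loss = nn.functional.mse_loss(critic(s, a), q_target)
    c_opt.zero_grad(); c_loss.backward(); c_opt.step()
    a_loss = -critic(s, actor(s)).mean()  # deterministic policy gradient
    a_opt.zero_grad(); a_loss.backward(); a_opt.step()
    for net, tgt in ((actor, actor_tgt), (critic, critic_tgt)):
        for p, pt in zip(net.parameters(), tgt.parameters()):
            pt.data.mul_(1 - TAU).add_(TAU * p.data)  # soft target update

# Smoke test with a random batch (purely illustrative):
B = 32
update((torch.randn(B, STATE_DIM), torch.rand(B, ACTION_DIM) * 2 - 1,
        torch.randn(B, 1), torch.randn(B, STATE_DIM)))

In an actual dispatch setting, each transition would come from simulating the building cluster for one decision cycle, with the reward reflecting objectives such as peak demand and arbitrage revenue, as described in the abstract.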
publisher: The American Society of Mechanical Engineers (ASME)
title: Automated Design of Energy Efficient Control Strategies for Building Clusters Using Reinforcement Learning
type: Journal Paper
journal volume: 141
journal issue: 2
journal title: Journal of Mechanical Design
identifier doi: 10.1115/1.4041629
journal firstpage: 021704
journal lastpage: 021704-9
tree: Journal of Mechanical Design; 2019; volume 141; issue 2
contenttype: Fulltext

