Show simple item record

contributor author: Simeng Liu
contributor author: Gregor P. Henze
date accessioned: 2017-05-09T00:25:46Z
date available: 2017-05-09T00:25:46Z
date copyright: May, 2007
date issued: 2007
identifier issn: 0199-6231
identifier other: JSEEDO-28403#215_1.pdf
identifier uri: http://yetl.yabesh.ir/yetl/handle/yetl/136814
description abstract: This paper describes an investigation of machine learning for supervisory control of active and passive thermal storage capacity in buildings. Previous studies show that using active or passive thermal storage, or both, can yield significant peak cooling load reductions and associated electrical demand and operating cost savings. In this study, a model-free learning controller is investigated for operating electrically driven chilled-water systems in heavy-mass commercial buildings. The reinforcement learning controller learns to operate the building and cooling plant from the reinforcement feedback (here, the monetary cost of each action) it receives for past control actions. The learning agent interacts with its environment by commanding the global zone temperature setpoints and the thermal energy storage charging/discharging rate. The controller extracts information about the environment solely from the reinforcement signal; it contains no predictive or system model. Over time, by exploring the environment, the reinforcement learning controller builds a statistical summary of plant operation, which is continuously updated as operation continues. The analysis shows that learning control is a feasible methodology for finding a near-optimal strategy for exploiting active and passive building thermal storage capacity, and that learning performance is affected by the dimensionality of the action and state spaces, the learning rate, and several other factors. Learning control strategies for tasks with large state and action spaces is found to take a long time.
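The model-free scheme the abstract describes — actions commanding setpoints and storage charge/discharge, reinforcement equal to monetary cost, and a continuously updated statistical summary of operation — corresponds to tabular Q-learning. A minimal sketch follows; the state and action discretization, the toy cost function, and all names here are illustrative assumptions, not the paper's actual simulation setup:

```python
import random

# Minimal tabular Q-learning sketch for cost-minimizing supervisory
# control. The state/action discretization and the toy cost function
# are illustrative assumptions, not the paper's environment.
STATES = list(range(4))                        # coarse time-of-day bins
ACTIONS = [(sp, r) for sp in (22.0, 24.0)      # zone setpoint [deg C]
                   for r in (-1, 0, 1)]        # TES: charge(-1)/idle(0)/discharge(+1)

ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2              # learning rate, discount, exploration

def toy_cost(state, action):
    """Hypothetical monetary cost: on-peak states (2, 3) are expensive,
    and discharging storage (r = +1) during them lowers the bill."""
    _, r = action
    on_peak = state in (2, 3)
    base = 3.0 if on_peak else 1.0
    return base - (0.5 * r if on_peak else -0.2 * r)

def train(episodes=2000, seed=0):
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        s = rng.choice(STATES)
        for _ in range(24):                    # one simulated "day"
            if rng.random() < EPS:             # explore
                a = rng.choice(ACTIONS)
            else:                              # exploit: cheapest known action
                a = min(ACTIONS, key=lambda a: Q[(s, a)])
            c = toy_cost(s, a)                 # reinforcement = monetary cost
            s_next = (s + 1) % len(STATES)
            # Q-learning update, minimizing discounted cost
            target = c + GAMMA * min(Q[(s_next, a2)] for a2 in ACTIONS)
            Q[(s, a)] += ALPHA * (target - Q[(s, a)])
            s = s_next
    return Q

Q = train()
# The greedy policy in an on-peak state should discharge storage (r = +1).
best = min(ACTIONS, key=lambda a: Q[(2, a)])
```

Because the controller only ever sees the cost signal, no building or plant model is needed: the Q table is exactly the "statistical summary of plant operation" the abstract refers to, and its size grows with the product of the state and action space cardinalities, which is why learning slows for large spaces.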
publisher: The American Society of Mechanical Engineers (ASME)
title: Evaluation of Reinforcement Learning for Optimal Control of Building Active and Passive Thermal Storage Inventory
type: Journal Paper
journal volume: 129
journal issue: 2
journal title: Journal of Solar Energy Engineering
identifier doi: 10.1115/1.2710491
journal firstpage: 215
journal lastpage: 225
identifier eissn: 1528-8986
keywords: Temperature
keywords: Control equipment
keywords: Stress
keywords: Optimal control
keywords: Thermal energy storage
keywords: Cooling
keywords: Simulation
keywords: Algorithms
tree: Journal of Solar Energy Engineering; 2007; volume 129; issue 2
contenttype: Fulltext

