YaBeSH Engineering and Technology Library

    Evaluation of Reinforcement Learning for Optimal Control of Building Active and Passive Thermal Storage Inventory

    Source: Journal of Solar Energy Engineering, 2007, Volume 129, Issue 2, page 215
    Author: Simeng Liu, Gregor P. Henze
    DOI: 10.1115/1.2710491
    Publisher: The American Society of Mechanical Engineers (ASME)
    Abstract: This paper describes an investigation of machine learning for supervisory control of active and passive thermal storage capacity in buildings. Previous studies show that the utilization of active or passive thermal storage, or both, can yield significant peak cooling load reduction and associated electrical demand and operational cost savings. In this study, a model-free learning control is investigated for the operation of electrically driven chilled water systems in heavy-mass commercial buildings. The reinforcement learning controller learns to operate the building and cooling plant based on the reinforcement feedback (monetary cost of each action, in this study) it receives for past control actions. The learning agent interacts with its environment by commanding the global zone temperature setpoints and thermal energy storage charging/discharging rate. The controller extracts information about the environment based solely on the reinforcement signal; the controller does not contain a predictive or system model. Over time and by exploring the environment, the reinforcement learning controller establishes a statistical summary of plant operation, which is continuously updated as operation continues. The present analysis shows that learning control is a feasible methodology to find a near-optimal control strategy for exploiting the active and passive building thermal storage capacity, and also shows that the learning performance is affected by the dimensionality of the action and state space, the learning rate and several other factors. It is found that it takes a long time to learn control strategies for tasks associated with large state and action spaces.
    Keyword(s): Temperature, Control equipment, Stress, Optimal control, Thermal energy storage, Cooling, Simulation, Algorithms
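    The abstract describes a model-free controller that commands zone temperature setpoints and a thermal-storage charging/discharging rate, receives the monetary cost of each action as its reinforcement signal, and incrementally builds a statistical summary of plant operation. The sketch below illustrates that general idea with a minimal tabular Q-learning loop in Python; the discretized state and action space, the toy_plant cost model, and all parameter values are illustrative assumptions rather than the authors' implementation.

import random
from collections import defaultdict

# Hypothetical discrete action space: (zone setpoint offset in K, TES rate in kW).
ACTIONS = [(dT, rate) for dT in (-2, 0, 2) for rate in (-50, 0, 50)]

ALPHA = 0.1    # learning rate
GAMMA = 0.9    # discount factor for future cost
EPSILON = 0.1  # exploration probability

# Q-table: maps (state, action index) -> estimated cost-to-go; this plays the role
# of the controller's continuously updated statistical summary of plant operation.
q_table = defaultdict(float)


def choose_action(state):
    """Epsilon-greedy selection: usually pick the cheapest known action."""
    if random.random() < EPSILON:
        return random.randrange(len(ACTIONS))
    return min(range(len(ACTIONS)), key=lambda a: q_table[(state, a)])


def update(state, action, cost, next_state):
    """One Q-learning step; the observed monetary cost is the reinforcement."""
    best_next = min(q_table[(next_state, a)] for a in range(len(ACTIONS)))
    q_table[(state, action)] += ALPHA * (cost + GAMMA * best_next
                                         - q_table[(state, action)])


def toy_plant(state, action):
    """Stand-in for the building/cooling-plant simulation (purely illustrative)."""
    hour, soc = state
    dT, rate = ACTIONS[action]
    soc = max(0, min(10, soc + (1 if rate > 0 else -1 if rate < 0 else 0)))
    on_peak = 12 <= hour < 18
    # Cooling to a lower setpoint costs more; on-peak energy is priced higher.
    cost = (2.0 if on_peak else 1.0) * max(0.0, 5 - dT) + (0.5 if rate > 0 else 0.0)
    if on_peak and rate < 0 and soc > 0:
        cost *= 0.5  # discharging storage on-peak offsets expensive cooling
    return ((hour + 1) % 24, soc), cost


if __name__ == "__main__":
    state = (0, 5)  # (hour of day, storage state of charge)
    for _ in range(24 * 365):  # one simulated year of hourly decisions
        action = choose_action(state)
        next_state, cost = toy_plant(state, action)
        update(state, action, cost, next_state)
        state = next_state
    print("states visited:", len({s for s, _ in q_table}))

    With only (hour of day, storage state of charge) as the state and nine actions, the table stays small; enlarging either space illustrates the slow learning that the abstract reports for tasks with large state and action spaces.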
    • Download: (1.004 MB)
    • Get RIS
    • Item Order
    • Go To Publisher
    • Price: 5000 Rial
    • Statistics


    URI: http://yetl.yabesh.ir/yetl1/handle/yetl/136814
    Collections
    • Journal of Solar Energy Engineering

    Show full item record

    contributor author: Simeng Liu
    contributor author: Gregor P. Henze
    date accessioned: 2017-05-09T00:25:46Z
    date available: 2017-05-09T00:25:46Z
    date copyright: May, 2007
    date issued: 2007
    identifier issn: 0199-6231
    identifier other: JSEEDO-28403#215_1.pdf
    identifier uri: http://yetl.yabesh.ir/yetl/handle/yetl/136814
    publisher: The American Society of Mechanical Engineers (ASME)
    title: Evaluation of Reinforcement Learning for Optimal Control of Building Active and Passive Thermal Storage Inventory
    type: Journal Paper
    journal volume: 129
    journal issue: 2
    journal title: Journal of Solar Energy Engineering
    identifier doi: 10.1115/1.2710491
    journal firstpage: 215
    journal lastpage: 225
    identifier eissn: 1528-8986
    keywords: Temperature
    keywords: Control equipment
    keywords: Stress
    keywords: Optimal control
    keywords: Thermal energy storage
    keywords: Cooling
    keywords: Simulation
    keywords: Algorithms
    tree: Journal of Solar Energy Engineering, 2007, Volume 129, Issue 2
    contenttype: Fulltext
    DSpace software copyright © 2002-2015  DuraSpace
    The DSpace digital library software has been localized into Persian by YaBeSH for Iranian libraries | Contact YaBeSH