YaBeSH Engineering and Technology Library


    Archive

    Fill-and-Spill: Deep Reinforcement Learning Policy Gradient Methods for Reservoir Operation Decision and Control

    Source: Journal of Water Resources Planning and Management, 2024, Vol. 150, Issue 7, Page 04024022-1
    Authors: Sadegh Sadeghi Tabas, Vidya Samadi
    DOI: 10.1061/JWRMD5.WRENG-6089
    Publisher: American Society of Civil Engineers
    Abstract: Changes in demand, varying hydrological inputs, and environmental stressors are among the issues that reservoir managers and policymakers face on a regular basis. These concerns have sparked interest in applying different techniques to determine reservoir operation policy decisions. As the resolution of the analysis increases, it becomes more difficult to represent a real-world system effectively using traditional methods such as dynamic programming and stochastic dynamic programming for determining the best reservoir operation policy. One of the challenges is the “curse of dimensionality”: the number of samples needed to estimate an arbitrary function to a given level of accuracy grows exponentially with the number of input variables (i.e., the dimensionality) of the function. Deep reinforcement learning (DRL) is an intelligent approach to overcoming these curses in stochastic optimization problems for reservoir operation policy decisions. To our knowledge, this study is the first to examine several novel DRL continuous-action policy gradient methods, including deep deterministic policy gradient (DDPG), twin delayed DDPG (TD3), and two versions of Soft Actor-Critic (SAC18 and SAC19), for optimizing reservoir operation policy. In this study, multiple DRL techniques were implemented to find an optimal operation policy for Folsom Reservoir in California. The reservoir system supplies agricultural, municipal, hydropower, and environmental flow demands and provides flood control for the City of Sacramento. The analysis suggests that TD3 and SAC are robust in meeting Folsom Reservoir’s demands and optimizing reservoir operation policies. Experiments on continuous action spaces of reservoir policy decisions demonstrated that DRL techniques can efficiently learn strategic policies in such spaces and can overcome the curses of dimensionality and modeling.
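    To make the setup the abstract describes concrete, here is a minimal, illustrative sketch of a fill-and-spill reservoir environment with a continuous release action, trained with TD3 (one of the methods the paper examines). It assumes the Gymnasium and Stable-Baselines3 Python libraries, which this record does not name, and every physical quantity (capacity, demand, inflow statistics, penalty weights) is a hypothetical placeholder rather than Folsom Reservoir data.

    import numpy as np
    import gymnasium as gym
    from gymnasium import spaces
    from stable_baselines3 import TD3

    class ReservoirEnv(gym.Env):
        """Toy fill-and-spill reservoir: observe (storage, inflow), choose a release."""

        CAPACITY = 1000.0  # hypothetical storage capacity (volume units)
        DEMAND = 40.0      # hypothetical downstream demand per time step

        def __init__(self):
            super().__init__()
            # State: current storage and inflow; action: continuous release volume.
            self.observation_space = spaces.Box(0.0, np.inf, shape=(2,), dtype=np.float32)
            self.action_space = spaces.Box(0.0, 100.0, shape=(1,), dtype=np.float32)

        def _obs(self):
            return np.array([self.storage, self.inflow], dtype=np.float32)

        def reset(self, seed=None, options=None):
            super().reset(seed=seed)
            self.storage = 0.5 * self.CAPACITY
            self.inflow = 50.0
            self.t = 0
            return self._obs(), {}

        def step(self, action):
            release = float(np.clip(action[0], 0.0, self.storage + self.inflow))
            self.storage += self.inflow - release
            # "Spill": storage above capacity leaves the reservoir uncontrolled.
            spill = max(self.storage - self.CAPACITY, 0.0)
            self.storage -= spill
            # Stand-in objective: track demand, penalize uncontrolled spill.
            reward = -abs(release - self.DEMAND) - 10.0 * spill
            # Stochastic inflow (illustrative lognormal process, not Folsom data).
            self.inflow = float(self.np_random.lognormal(np.log(50.0), 0.3))
            self.t += 1
            return self._obs(), reward, False, self.t >= 365, {}

    env = ReservoirEnv()
    model = TD3("MlpPolicy", env, verbose=0)  # twin delayed DDPG on the continuous action space
    model.learn(total_timesteps=10_000)       # demo-scale run; real training needs far more

    Because the action (release volume) is continuous, tabular value methods would require discretizing the state-action space; the policy gradient agents the paper studies act directly on the continuous space, which is what lets them sidestep the curse of dimensionality.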
    • Download: PDF (4.426 MB)


    URI: http://yetl.yabesh.ir/yetl1/handle/yetl/4298380
    Collections
    • Journal of Water Resources Planning and Management

    Full item record

    contributor author: Sadegh Sadeghi Tabas
    contributor author: Vidya Samadi
    date accessioned: 2024-12-24T10:08:39Z
    date available: 2024-12-24T10:08:39Z
    date copyright: 2024-07-01
    date issued: 2024
    identifier other: JWRMD5.WRENG-6089.pdf
    identifier uri: http://yetl.yabesh.ir/yetl1/handle/yetl/4298380
    publisher: American Society of Civil Engineers
    title: Fill-and-Spill: Deep Reinforcement Learning Policy Gradient Methods for Reservoir Operation Decision and Control
    type: Journal Article
    journal volume: 150
    journal issue: 7
    journal title: Journal of Water Resources Planning and Management
    identifier doi: 10.1061/JWRMD5.WRENG-6089
    journal firstpage: 04024022-1
    journal lastpage: 04024022-18
    pages: 18
    tree: Journal of Water Resources Planning and Management, 2024, Volume 150, Issue 7
    contenttype: Fulltext
    DSpace software copyright © 2002-2015 DuraSpace
    The “DSpace” digital library software has been localized into Persian by Yabesh for Iranian libraries | Contact Yabesh
     