Show simple item record

contributor author: Sadegh Sadeghi Tabas
contributor author: Vidya Samadi
date accessioned: 2024-12-24T10:08:39Z
date available: 2024-12-24T10:08:39Z
date copyright: 7/1/2024 12:00:00 AM
date issued: 2024
identifier other: JWRMD5.WRENG-6089.pdf
identifier uri: http://yetl.yabesh.ir/yetl1/handle/yetl/4298380
description abstract: Changes in demand, varying hydrological inputs, and environmental stressors are among the issues that reservoir managers and policymakers face on a regular basis. These concerns have sparked interest in applying different techniques to determine reservoir operation policy decisions. As the resolution of the analysis increases, it becomes more difficult to effectively represent a real-world system using traditional methods, such as dynamic programming and stochastic dynamic programming, for determining the best reservoir operation policy. One of the challenges is the “curse of dimensionality,” meaning that the number of samples needed to estimate an arbitrary function with a given level of accuracy grows exponentially with the number of input variables (i.e., the dimensionality) of the function. Deep reinforcement learning (DRL) is an intelligent approach to overcoming these curses in stochastic optimization problems for reservoir operation policy decisions. To our knowledge, this study is the first attempt to examine various novel DRL continuous-action policy gradient methods, including deep deterministic policy gradient (DDPG), twin delayed DDPG (TD3), and two versions of Soft Actor-Critic (SAC18 and SAC19), for optimizing reservoir operation policy. In this study, multiple DRL techniques were implemented to find an optimal operation policy for Folsom Reservoir in California. The reservoir system supplies agricultural, municipal, hydropower, and environmental flow demands, and provides flood control operations, for the City of Sacramento. Analysis suggests that TD3 and SAC are robust in meeting the Folsom Reservoir’s demands and optimizing reservoir operation policies. Experiments on continuous-action spaces of reservoir policy decisions demonstrated that the DRL techniques can efficiently learn strategic policies in such spaces and can overcome the curses of dimensionality and modeling.
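The fill-and-spill dynamics and continuous release action described in the abstract can be illustrated with a toy simulation. This is only a minimal sketch of the problem formulation, not the study's model: the capacity, demand, inflow distribution, and penalty weights below are hypothetical and are not taken from the Folsom Reservoir data.

```python
import random

class ToyReservoirEnv:
    """Toy fill-and-spill reservoir environment with a continuous
    release action (illustrative only; all numbers are hypothetical)."""

    def __init__(self, capacity=100.0, demand=8.0, seed=0):
        self.capacity = capacity          # maximum storage (volume units)
        self.demand = demand              # target daily release
        self.rng = random.Random(seed)
        self.storage = capacity / 2.0     # start half full

    def step(self, release):
        # Clip the continuous release action to the feasible range.
        release = max(0.0, min(release, self.storage))
        inflow = self.rng.uniform(0.0, 15.0)   # stochastic hydrologic input
        self.storage = self.storage - release + inflow
        # Fill-and-spill: storage above capacity spills uncontrolled.
        spill = max(0.0, self.storage - self.capacity)
        self.storage = min(self.storage, self.capacity)
        # Penalize unmet demand and spills (hypothetical reward shaping).
        reward = -abs(release - self.demand) - 2.0 * spill
        return self.storage, reward

# A naive constant-release policy over one simulated year; a DRL agent
# (e.g., TD3 or SAC) would instead learn release as a function of state.
env = ToyReservoirEnv()
total_reward = 0.0
for _ in range(365):
    state, reward = env.step(release=8.0)
    total_reward += reward
```

In the DRL setting studied in the paper, the scalar `release` would be the output of a learned continuous-action policy conditioned on the observed state, rather than a fixed constant.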
publisher: American Society of Civil Engineers
title: Fill-and-Spill: Deep Reinforcement Learning Policy Gradient Methods for Reservoir Operation Decision and Control
type: Journal Article
journal volume: 150
journal issue: 7
journal title: Journal of Water Resources Planning and Management
identifier doi: 10.1061/JWRMD5.WRENG-6089
journal firstpage: 04024022-1
journal lastpage: 04024022-18
page: 18
tree: Journal of Water Resources Planning and Management; 2024; Volume 150; Issue 7
contenttype: Fulltext


