Fill-and-Spill: Deep Reinforcement Learning Policy Gradient Methods for Reservoir Operation Decision and Control
Source: Journal of Water Resources Planning and Management, 2024, Volume 150, Issue 7, page 04024022-1
DOI: 10.1061/JWRMD5.WRENG-6089
Publisher: American Society of Civil Engineers
Abstract: Changes in demand, variable hydrological inputs, and environmental stressors are among the issues that reservoir managers and policymakers face on a regular basis. These concerns have sparked interest in applying different techniques to determine reservoir operation policies. As the resolution of the analysis increases, it becomes more difficult to represent a real-world system effectively using traditional methods, such as dynamic programming and stochastic dynamic programming, for determining the best reservoir operation policy. One of the challenges is the “curse of dimensionality”: the number of samples needed to estimate an arbitrary function to a given accuracy grows exponentially with the number of input variables (i.e., the dimensionality) of the function. Deep reinforcement learning (DRL) is an intelligent approach for overcoming this curse in stochastic optimization problems for reservoir operation policy decisions. To our knowledge, this study is the first to examine several novel DRL continuous-action policy gradient methods, including deep deterministic policy gradient (DDPG), twin delayed DDPG (TD3), and two versions of Soft Actor-Critic (SAC18 and SAC19), for optimizing reservoir operation policy. Multiple DRL techniques were implemented to find an optimal operation policy for Folsom Reservoir in California, which supplies agricultural, municipal, hydropower, and environmental flow demands and provides flood control for the City of Sacramento. The analysis suggests that TD3 and SAC are robust in meeting the Folsom Reservoir’s demands and optimizing reservoir operation policies. Experiments on continuous action spaces of reservoir policy decisions demonstrated that DRL techniques can efficiently learn strategic policies in these spaces and can overcome the curses of dimensionality and modeling.
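The abstract describes framing reservoir operation as a continuous-action DRL problem. A minimal sketch of such an environment is shown below, assuming a simple daily mass balance (storage plus inflow minus release) with a "fill-and-spill" rule; the class name, capacity, demand, inflow trace, and penalty weights are all illustrative assumptions, not the paper's actual Folsom Reservoir configuration.

```python
# Toy reservoir-operation environment in the Gym reset/step style.
# Assumed mass balance: storage' = storage + inflow - release, with
# any storage above capacity spilled ("fill-and-spill").

class ReservoirEnv:
    """Illustrative continuous-action environment for a DRL agent."""

    def __init__(self, capacity=100.0, demand=5.0, inflows=None):
        self.capacity = capacity  # maximum storage (illustrative units)
        self.demand = demand      # constant downstream demand per step
        # Hypothetical inflow trace; a real study would use hydrological data.
        self.inflows = inflows if inflows is not None else [6.0, 4.0, 8.0, 2.0]
        self.reset()

    def reset(self):
        self.t = 0
        self.storage = 0.5 * self.capacity
        return (self.storage, self.inflows[self.t])  # observation

    def step(self, release):
        # The action is a continuous release, bounded by available water.
        release = max(0.0, min(release, self.storage + self.inflows[self.t]))
        self.storage = self.storage + self.inflows[self.t] - release

        # Spill anything above capacity.
        spill = max(0.0, self.storage - self.capacity)
        self.storage -= spill

        # Reward: penalize demand shortfall quadratically and spill linearly
        # (illustrative weights standing in for multi-objective terms).
        shortfall = max(0.0, self.demand - release)
        reward = -(shortfall ** 2) - 10.0 * spill

        self.t += 1
        done = self.t >= len(self.inflows)
        obs = (self.storage, self.inflows[self.t] if not done else 0.0)
        return obs, reward, done
```

A policy gradient agent such as DDPG, TD3, or SAC would map the observation to a continuous release and be trained against this reward signal; the continuous action space is precisely what makes these methods applicable where tabular dynamic programming suffers from the curse of dimensionality.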
contributor author | Sadegh Sadeghi Tabas | |
contributor author | Vidya Samadi | |
date accessioned | 2024-12-24T10:08:39Z | |
date available | 2024-12-24T10:08:39Z | |
date copyright | 7/1/2024 | |
date issued | 2024 | |
identifier other | JWRMD5.WRENG-6089.pdf | |
identifier uri | http://yetl.yabesh.ir/yetl1/handle/yetl/4298380 | |
description abstract | Changes in demand, variable hydrological inputs, and environmental stressors are among the issues that reservoir managers and policymakers face on a regular basis. These concerns have sparked interest in applying different techniques to determine reservoir operation policies. As the resolution of the analysis increases, it becomes more difficult to represent a real-world system effectively using traditional methods, such as dynamic programming and stochastic dynamic programming, for determining the best reservoir operation policy. One of the challenges is the “curse of dimensionality”: the number of samples needed to estimate an arbitrary function to a given accuracy grows exponentially with the number of input variables (i.e., the dimensionality) of the function. Deep reinforcement learning (DRL) is an intelligent approach for overcoming this curse in stochastic optimization problems for reservoir operation policy decisions. To our knowledge, this study is the first to examine several novel DRL continuous-action policy gradient methods, including deep deterministic policy gradient (DDPG), twin delayed DDPG (TD3), and two versions of Soft Actor-Critic (SAC18 and SAC19), for optimizing reservoir operation policy. Multiple DRL techniques were implemented to find an optimal operation policy for Folsom Reservoir in California, which supplies agricultural, municipal, hydropower, and environmental flow demands and provides flood control for the City of Sacramento. The analysis suggests that TD3 and SAC are robust in meeting the Folsom Reservoir’s demands and optimizing reservoir operation policies. Experiments on continuous action spaces of reservoir policy decisions demonstrated that DRL techniques can efficiently learn strategic policies in these spaces and can overcome the curses of dimensionality and modeling. | |
publisher | American Society of Civil Engineers | |
title | Fill-and-Spill: Deep Reinforcement Learning Policy Gradient Methods for Reservoir Operation Decision and Control | |
type | Journal Article | |
journal volume | 150 | |
journal issue | 7 | |
journal title | Journal of Water Resources Planning and Management | |
identifier doi | 10.1061/JWRMD5.WRENG-6089 | |
journal firstpage | 04024022-1 | |
journal lastpage | 04024022-18 | |
page | 18 | |
tree | Journal of Water Resources Planning and Management; 2024; Volume 150; Issue 7 | |
contenttype | Fulltext |