YaBeSH Engineering and Technology Library


    Archive

    Leveraging Deep Reinforcement Learning for Water Distribution Systems with Large Action Spaces and Uncertainties: DRL-EPANET for Pressure Control

    Source: Journal of Water Resources Planning and Management, 2024, Volume 150, Issue 2, Page 04023076-1
    Author: Anas Belfadil, David Modesto, Jordi Meseguer, Bernat Joseph-Duran, David Saporta, Jose Antonio Martin Hernandez
    DOI: 10.1061/JWRMD5.WRENG-6108
    Publisher: ASCE
    Abstract: Deep reinforcement learning (DRL) has undergone a revolution in recent years, enabling researchers to tackle a variety of previously inaccessible sequential decision problems. However, its application to the control of water distribution systems (WDS) remains limited. This research demonstrates the successful application of DRL for pressure control in WDS by simulating an environment using EPANET version 2.2, a popular open-source hydraulic simulator. We highlight the ability of DRL-EPANET to handle large action spaces, with more than 1 million possible actions in each time step, and its capacity to deal with uncertainties such as random pipe breaks. We employ the Branching Dueling Q-Network (BDQ) algorithm, which can learn in this context, and enhance it with an algorithmic modification called BDQ with fixed actions (BDQF) that achieves better rewards, especially when manipulated actions are sparse. The proposed methodology was validated using the hydraulic models of 10 real WDS, one of which integrated transmission and distribution systems operated by Hidralia, and the rest of which were operated by Aigües de Barcelona. This research presents the DRL-EPANET framework, which combines deep reinforcement learning and EPANET to optimize water distribution systems. Although the focus of this paper is on pressure control, the approach is highly versatile and can be applied to various sequential decision-making problems within WDS, such as pump optimization, energy management, and water quality control. DRL-EPANET was tested and proven effective on 10 real-world WDS, resulting in as much as a 26% improvement in mean pressure compared with the reference solutions. The framework offers real-time control solutions, enabling water utility operators to react quickly to changes in the network. Additionally, it is capable of handling stochastic scenarios, such as random pipe bursts, demand uncertainty, contamination, and component failures, making it a valuable tool for managing complex and unpredictable situations. This method can be developed further with model-based deep reinforcement learning for enhanced sample efficiency, graph neural networks for better representation, and the quantification of agent action uncertainty for improved decision-making in uncharted situations. Overall, DRL-EPANET has the potential to revolutionize the management and operation of water distribution systems, leading to more-efficient use of resources and improved service for consumers.
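    The abstract's key scaling claim is that factoring the action space per controlled element lets one Q-network cover more than a million joint actions per time step. The following is a minimal sketch of that idea in the spirit of the Branching Dueling Q-Network (BDQ) named above; it is not the authors' implementation, and the PyTorch module name, layer sizes, and the 10-valve, 4-setting example are illustrative assumptions.

```python
import torch
import torch.nn as nn


class BranchingDuelingQNet(nn.Module):
    """Factored Q-network: one advantage head (branch) per controlled element,
    for example one per pressure-reducing valve, plus a shared state-value head.
    The state could be, e.g., the vector of node pressures read from EPANET."""

    def __init__(self, state_dim, n_branches, actions_per_branch, hidden=256):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.value = nn.Linear(hidden, 1)  # shared state value V(s)
        self.advantages = nn.ModuleList(
            [nn.Linear(hidden, actions_per_branch) for _ in range(n_branches)]
        )

    def forward(self, state):
        h = self.trunk(state)
        v = self.value(h)                               # (batch, 1)
        q_branches = []
        for head in self.advantages:
            a = head(h)                                 # (batch, actions_per_branch)
            # Dueling aggregation per branch: Q_d = V + (A_d - mean over branch actions)
            q_branches.append(v + a - a.mean(dim=1, keepdim=True))
        return torch.stack(q_branches, dim=1)           # (batch, n_branches, actions_per_branch)


# With 10 controlled valves and 4 discrete settings each, the joint action space
# has 4**10 = 1,048,576 combinations, yet the greedy policy needs only one argmax
# per branch rather than a search over the full joint space.
net = BranchingDuelingQNet(state_dim=64, n_branches=10, actions_per_branch=4)
q = net(torch.randn(1, 64))
joint_action = q.argmax(dim=2).squeeze(0)  # 10 per-valve setting indices
```

    The BDQF variant ("BDQ with fixed actions") mentioned in the abstract reportedly achieves better rewards when manipulated actions are sparse; the sketch above covers only the base BDQ-style branching head.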
    • Download: (1.416Mb)
    • Show Full Metadata
    • Get RIS
    • Item Order
    • Go To Publisher
    • Price: 5000 Rial
    • Statistics


    URI
    http://yetl.yabesh.ir/yetl1/handle/yetl/4296974
    Collections
    • Journal of Water Resources Planning and Management

    Show full item record

    contributor author: Anas Belfadil
    contributor author: David Modesto
    contributor author: Jordi Meseguer
    contributor author: Bernat Joseph-Duran
    contributor author: David Saporta
    contributor author: Jose Antonio Martin Hernandez
    date accessioned: 2024-04-27T22:34:23Z
    date available: 2024-04-27T22:34:23Z
    date issued: 2024/02/01
    identifier other: 10.1061-JWRMD5.WRENG-6108.pdf
    identifier uri: http://yetl.yabesh.ir/yetl1/handle/yetl/4296974
    publisher: ASCE
    title: Leveraging Deep Reinforcement Learning for Water Distribution Systems with Large Action Spaces and Uncertainties: DRL-EPANET for Pressure Control
    type: Journal Article
    journal volume: 150
    journal issue: 2
    journal title: Journal of Water Resources Planning and Management
    identifier doi: 10.1061/JWRMD5.WRENG-6108
    journal firstpage: 04023076-1
    journal lastpage: 04023076-9
    page: 9
    tree: Journal of Water Resources Planning and Management, 2024, Volume 150, Issue 2
    contenttype: Fulltext
    DSpace software copyright © 2002-2015 DuraSpace
    "DSpace" digital library software localized into Persian by Yabesh for Iranian libraries | Contact Yabesh