YaBeSH Engineering and Technology Library


    Sustainable Life-Cycle Maintenance Policymaking for Network-Level Deteriorating Bridges with a Convolutional Autoencoder–Structured Reinforcement Learning Agent

    Source: Journal of Bridge Engineering, 2023, Volume 28, Issue 9, Page 04023063-1
    Author: Xiaoming Lei, You Dong, Dan M. Frangopol
    DOI: 10.1061/JBENF2.BEENG-6159
    Publisher: ASCE
    Abstract: Bridges play a significant role in urban areas, and their performance and safety are closely tied to the carbon emissions of infrastructure systems. Previous studies have mainly offered maintenance policies that balance structural safety with overall costs. Considering the goal of achieving near-zero global carbon emissions by 2050, this study proposes a policymaking agent based on a convolutional autoencoder–structured deep Q-network (ConvAE-DQN) for managing deteriorating bridges at the network level while accounting for sustainability performance. The agent considers environmental, economic, and safety metrics, including spatially correlated structural failure probability, traffic volume, and bridge size, among others, which are transformed into a multiattribute utility model to form the reward function. Reinforcement learning is employed to optimize life-cycle maintenance planning, minimizing total carbon emissions and economic costs while maximizing regional safety performance. The proposed method is substantiated by developing sustainable life-cycle maintenance policies for an existing bridge network in Northern China. It is found that the proposed ConvAE-DQN policymaking agent can produce efficient and sustainable life-cycle maintenance policies that are stable across years and easy to schedule. The utility-based reward function enhances the stability and convergence efficiency of the algorithm. This study also assesses the impact of budget levels on network-level bridge safety and carbon footprint.
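    As a concrete illustration of the architecture the abstract describes, the sketch below outlines a convolutional autoencoder–structured deep Q-network with a multiattribute-utility reward, written in PyTorch. It is only an assumed outline, not the authors' implementation: the state layout (bridges by condition attributes), layer sizes, action count, and utility weights are hypothetical placeholders.

    # Minimal sketch (not the authors' code): a convolutional autoencoder encodes the
    # network-level bridge condition "image" (bridges x condition attributes), and a
    # DQN head maps the latent code to Q-values over maintenance actions per bridge.
    # The reward combines carbon, cost, and safety terms via a weighted multiattribute
    # utility. All dimensions and weights below are illustrative assumptions.
    import torch
    import torch.nn as nn

    class ConvAE_DQN(nn.Module):
        def __init__(self, n_bridges=20, n_attrs=4, latent_dim=32, n_actions=5):
            super().__init__()
            # Encoder: compresses a (1, n_bridges, n_attrs) condition map to a latent code
            self.encoder = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.Flatten(),
                nn.Linear(32 * n_bridges * n_attrs, latent_dim),
            )
            # Decoder: reconstructs the condition map (autoencoder pretraining/regularization)
            self.decoder = nn.Sequential(
                nn.Linear(latent_dim, 32 * n_bridges * n_attrs), nn.ReLU(),
                nn.Unflatten(1, (32, n_bridges, n_attrs)),
                nn.Conv2d(32, 1, kernel_size=3, padding=1),
            )
            # Q-head: one Q-value per maintenance action for each bridge
            self.q_head = nn.Sequential(
                nn.Linear(latent_dim, 64), nn.ReLU(),
                nn.Linear(64, n_bridges * n_actions),
            )
            self.n_bridges, self.n_actions = n_bridges, n_actions

        def forward(self, state):
            z = self.encoder(state)                     # latent representation
            recon = self.decoder(z)                     # reconstructed condition map
            q = self.q_head(z).view(-1, self.n_bridges, self.n_actions)
            return q, recon

    def utility_reward(u_carbon, u_cost, u_safety, weights=(0.3, 0.3, 0.4)):
        """Illustrative additive multiattribute utility: each argument is assumed to be
        an attribute utility already normalized to [0, 1] (lower emissions/cost and
        higher safety map to larger values); weights are hypothetical."""
        w_c, w_m, w_s = weights
        return w_c * u_carbon + w_m * u_cost + w_s * u_safety

    A full agent would wrap this model in a standard DQN training loop (experience replay, target network, epsilon-greedy exploration), optionally adding the decoder's reconstruction loss as a regularizer.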


    URI: http://yetl.yabesh.ir/yetl1/handle/yetl/4293341
    Collections: Journal of Bridge Engineering

    Full item record

    contributor author: Xiaoming Lei
    contributor author: You Dong
    contributor author: Dan M. Frangopol
    date accessioned: 2023-11-27T23:09:33Z
    date available: 2023-11-27T23:09:33Z
    date issued: 2023-09-01
    identifier other: JBENF2.BEENG-6159.pdf
    identifier uri: http://yetl.yabesh.ir/yetl1/handle/yetl/4293341
    publisher: ASCE
    title: Sustainable Life-Cycle Maintenance Policymaking for Network-Level Deteriorating Bridges with a Convolutional Autoencoder–Structured Reinforcement Learning Agent
    type: Journal Article
    journal volume: 28
    journal issue: 9
    journal title: Journal of Bridge Engineering
    identifier doi: 10.1061/JBENF2.BEENG-6159
    journal first page: 04023063-1
    journal last page: 04023063-15
    page: 15
    tree: Journal of Bridge Engineering; 2023; Volume 28; Issue 9
    contenttype: Fulltext