YaBeSH Engineering and Technology Library


    A Decision-Making Framework for Load Rating Planning of Aging Bridges Using Deep Reinforcement Learning

    Source: Journal of Computing in Civil Engineering, 2021, Volume 35, Issue 6, page 04021024-1
    Author: Minghui Cheng, Dan M. Frangopol
    DOI: 10.1061/(ASCE)CP.1943-5487.0000991
    Publisher: ASCE
    Abstract: Load rating is gaining popularity as a method for inspecting the structural performance of aging bridges and determining maintenance actions. Cost-effective condition-based strategies have been developed in previous studies to balance the additional costs and structural safety. However, because they lack the constant replacement thresholds usually stipulated in governmental guidelines, they may not be suitable in practice. Furthermore, those studies neglected the preferences of decision makers, which influence the choice of optimal plans. This paper proposes a decision-making framework incorporating risk attitudes and time preference for a cost-effective load rating strategy. This strategy utilizes replacement thresholds from current guidelines and determines the time of the next load rating adaptively based on the observation results. It is formulated as a Markov decision process (MDP) compatible with discounted utility theory. Deep reinforcement learning (DRL) is employed to solve the MDP efficiently for a bridge system with a large state space. Special focus is given to hyperbolic discounting, one popular type of time preference. Its inconsistency with the MDP formulation is addressed by DRL implemented with auxiliary tasks that simultaneously learn multiple Q functions. An existing multigirder bridge was used as an illustrative example. Results showed that DRL can obtain cost-efficient load rating plans tailored to the preferences of decision makers.
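    To make the hyperbolic-discounting step in the abstract concrete: one known way to reconcile a hyperbolic discount 1/(1 + k·t) with standard Q-learning is to express it as a mixture of exponential discounts and combine several exponentially discounted Q functions, learned as auxiliary tasks, with fixed weights. The minimal sketch below shows only that combination step under that assumption; the rate k, the grid of discount factors, the Q estimates, and the action labels are illustrative placeholders, not details taken from the paper.

```python
import numpy as np

def hyperbolic_weights(k: float, gammas: np.ndarray) -> np.ndarray:
    """Weights w_i such that sum_i w_i * Q_{gamma_i}(s, a) approximates a
    hyperbolically discounted value with discount 1 / (1 + k*t).

    Uses the identity (for k > 0):
        1 / (1 + k*t) = (1/k) * integral_0^1 gamma**(1/k - 1) * gamma**t d(gamma)
    discretized with a midpoint-style rule over the supplied (increasing) gamma grid.
    """
    edges = np.concatenate(([0.0], (gammas[:-1] + gammas[1:]) / 2.0, [1.0]))
    widths = np.diff(edges)                        # bin width around each gamma
    density = (1.0 / k) * gammas ** (1.0 / k - 1.0)
    return density * widths


def hyperbolic_q(q_heads: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Combine per-gamma Q estimates, shape (n_gammas, n_actions), into
    hyperbolically discounted Q values, shape (n_actions,)."""
    return weights @ q_heads


# Tiny usage example with made-up numbers (all values are placeholders).
k = 0.1                                            # hypothetical hyperbolic discount rate
gammas = np.linspace(0.05, 0.99, 20)               # discount factors of the auxiliary Q heads
w = hyperbolic_weights(k, gammas)

rng = np.random.default_rng(0)
q_heads = rng.normal(size=(len(gammas), 3))        # stand-in for Q estimates of 3 actions,
                                                   # e.g. rate now / rate later / do nothing
q_hyp = hyperbolic_q(q_heads, w)
print("hyperbolic Q per action:", q_hyp)
print("greedy action index:", int(np.argmax(q_hyp)))
```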


    URI: http://yetl.yabesh.ir/yetl1/handle/yetl/4272051
    Collections: Journal of Computing in Civil Engineering

    Full item record

    contributor author: Minghui Cheng
    contributor author: Dan M. Frangopol
    date accessioned: 2022-02-01T21:47:59Z
    date available: 2022-02-01T21:47:59Z
    date issued: 2021-11-01
    identifier other: %28ASCE%29CP.1943-5487.0000991.pdf
    identifier uri: http://yetl.yabesh.ir/yetl1/handle/yetl/4272051
    publisher: ASCE
    title: A Decision-Making Framework for Load Rating Planning of Aging Bridges Using Deep Reinforcement Learning
    type: Journal Paper
    journal volume: 35
    journal issue: 6
    journal title: Journal of Computing in Civil Engineering
    identifier doi: 10.1061/(ASCE)CP.1943-5487.0000991
    journal first page: 04021024-1
    journal last page: 04021024-16
    pages: 16
    tree: Journal of Computing in Civil Engineering; 2021; Volume 035; Issue 006
    content type: Fulltext