A Reinforcement Learning Method for Multiasset Roadway Improvement Scheduling Considering Traffic Impacts

Source: Journal of Infrastructure Systems, 2022, Volume 28, Issue 4, page 04022033

Authors: Weiwen Zhou, Elise Miller-Hooks, Kostas G. Papakonstantinou, Shelley Stoffels, Sue McNeil
DOI: 10.1061/(ASCE)IS.1943-555X.0000702
Publisher: ASCE
Abstract: Maintaining roadway pavements and bridge decks is key to providing high levels of service for road users. However, improvement actions incur downtime. These actions are typically scheduled by asset class, yet, whichever asset type they are implemented on, they have network-wide impacts on traffic performance. This paper presents a bilevel program in which the upper level involves a Markov decision process (MDP) through which potential roadway improvement actions across asset classes are prioritized and scheduled. The MDP approach considers uncertainty in component deterioration effects while incorporating the benefits of implemented improvement actions. The upper level takes as input traffic flow estimates obtained from a lower-level user equilibrium traffic formulation that recognizes changes in capacities determined by decisions taken in the upper level. Because an exact solution of this bilevel, stochastic, dynamic program is computationally formidable, a deep reinforcement learning (DRL) method is developed. The model and solution methodology were tested on a hypothetical problem from the literature. The results demonstrate the importance of obtaining optimal activity plans that account for downtime effects, traffic congestion impacts, uncertainty in deterioration processes, and multiple asset classes.
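To make the bilevel interaction in the abstract concrete, the sketch below is an illustrative toy example, not the paper's implementation: an upper-level maintenance action temporarily halves a link's capacity (work-zone downtime), a lower-level user equilibrium is solved on the degraded network with the standard Frank-Wolfe algorithm and BPR volume-delay functions, and the resulting extra total travel time is the congestion cost the upper-level MDP would weigh against the condition benefit of the action. All network data (two parallel links, free-flow times, capacities, demand) are hypothetical.

```python
def bpr_time(t0, cap, flow):
    """BPR volume-delay function: t0 * (1 + 0.15 * (flow/cap)**4)."""
    return t0 * (1.0 + 0.15 * (flow / cap) ** 4)

def user_equilibrium(t0s, caps, demand, iters=200):
    """Frank-Wolfe UE assignment on parallel links serving one O-D pair."""
    n = len(t0s)
    flows = [demand / n] * n                  # feasible starting point
    for k in range(iters):
        times = [bpr_time(t0s[i], caps[i], flows[i]) for i in range(n)]
        best = min(range(n), key=lambda i: times[i])
        target = [demand if i == best else 0.0 for i in range(n)]  # all-or-nothing
        step = 2.0 / (k + 2.0)                # standard FW step size
        flows = [(1 - step) * flows[i] + step * target[i] for i in range(n)]
    return flows

def total_travel_time(t0s, caps, flows):
    """Total vehicle-time on the network at the given link flows."""
    return sum(f * bpr_time(t0s[i], caps[i], f) for i, f in enumerate(flows))

# Hypothetical network: two parallel links; maintaining link 0 halves its
# capacity for the duration of the work.
t0s, caps, demand = [10.0, 12.0], [5.0, 5.0], 8.0
caps_work = [caps[0] * 0.5, caps[1]]
flows_base = user_equilibrium(t0s, caps, demand)
flows_work = user_equilibrium(t0s, caps_work, demand)
delay = (total_travel_time(t0s, caps_work, flows_work)
         - total_travel_time(t0s, caps, flows_base))
# 'delay' is the downtime congestion cost the upper-level MDP would trade off
# against the deterioration-state improvement when scoring this action.
```

The design point this illustrates is the one the abstract emphasizes: the lower-level equilibrium must be re-solved under the capacities implied by each upper-level decision, which is what makes the exact bilevel program expensive and motivates the DRL approximation.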
| contributor author | Weiwen Zhou |
| contributor author | Elise Miller-Hooks |
| contributor author | Kostas G. Papakonstantinou |
| contributor author | Shelley Stoffels |
| contributor author | Sue McNeil |
| date accessioned | 2022-12-27T20:39:33Z |
| date available | 2022-12-27T20:39:33Z |
| date issued | 2022-12-01 |
| identifier other | (ASCE)IS.1943-555X.0000702.pdf |
| identifier uri | http://yetl.yabesh.ir/yetl1/handle/yetl/4287740 |
| publisher | ASCE |
| title | A Reinforcement Learning Method for Multiasset Roadway Improvement Scheduling Considering Traffic Impacts |
| type | Journal Article |
| journal volume | 28 |
| journal issue | 4 |
| journal title | Journal of Infrastructure Systems |
| identifier doi | 10.1061/(ASCE)IS.1943-555X.0000702 |
| journal first page | 04022033 |
| journal last page | 04022033_15 |
| pages | 15 |
| tree | Journal of Infrastructure Systems; 2022; Volume 28; Issue 4 |
| content type | Fulltext |