YaBeSH Engineering and Technology Library

    Modeling Motorcyclist–Pedestrian Near Misses: A Multiagent Adversarial Inverse Reinforcement Learning Approach

Source: Journal of Computing in Civil Engineering, 2022, Volume 36, Issue 6, page 04022038
Author: Gabriel Lanzaro, Tarek Sayed, Rushdi Alsaleh
    DOI: 10.1061/(ASCE)CP.1943-5487.0001053
    Publisher: ASCE
Abstract: Several studies have used surrogate safety measures obtained from microsimulation packages, such as VISSIM, for safety assessments. However, this approach has shortcomings: (1) microsimulation models are built on rules designed to avoid collisions; and (2) existing models do not realistically capture road users’ behavior and collision avoidance strategies. Moreover, most of these models rely on the single-agent assumption, in which the remaining agents are treated as components of a fixed, stationary environment. This assumption is unrealistic and can limit how well the models represent the real world. This study used a Markov game (MG) to model concurrent road user behavior and evasive actions in near misses. Unlike the conventional game-theoretic approach, which considers a single time step, the MG framework models sequences of road user decisions. In this framework, road users are modeled as rational agents that take actions to maximize their own utility functions. These utility functions are recovered from observed conflict trajectories using a multiagent adversarial inverse reinforcement learning (MAAIRL) framework. Trajectories from conflicts between motorcyclists and pedestrians in Shanghai, China, were used. Road user policies and collision avoidance strategies in near misses were obtained with multiagent actor–critic deep reinforcement learning, and a multiagent simulation platform was implemented to emulate pedestrian and motorcyclist trajectories. The results demonstrated that the multiagent model outperformed a single-agent Gaussian process inverse reinforcement learning model in predicting road user trajectories and their evasive actions, and the MAAIRL model predicted the interactions’ postencroachment time with high accuracy. Moreover, unlike the single-agent framework, the recovered multiagent reward function captured the equilibrium concept in road user interactions. The multiagent model thus provides a deeper understanding of road user behavior in conflict interactions and captures the nonstationarity of the environment.
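Postencroachment time (PET), the surrogate safety measure the abstract evaluates, is the time gap between the moment the first road user leaves the conflict area and the moment the second road user reaches it. A minimal point-based sketch of this computation over two timestamped trajectories is shown below; the function name and the point-based simplification (using the pair of spatially closest positions as a proxy for the conflict area) are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def post_encroachment_time(traj_a, traj_b):
    """Point-based PET between two road users.

    traj_a, traj_b : arrays of shape (n, 3) with columns (t, x, y).
    Returns the absolute time gap between the two users' passages
    through their spatially closest pair of positions, used here
    as a proxy for the conflict point.
    """
    # pairwise distances between every position of A and every position of B
    d = np.linalg.norm(traj_a[:, None, 1:] - traj_b[None, :, 1:], axis=-1)
    # indices of the closest approach -> proxy conflict point
    i, j = np.unravel_index(np.argmin(d), d.shape)
    return abs(traj_b[j, 0] - traj_a[i, 0])
```

In practice, PET is often defined over a conflict area (e.g., a small zone around the crossing point of the two paths) rather than a single point; the point-based version above is the simplest variant that still captures the idea of a smaller PET indicating a more severe near miss.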

    URI
    http://yetl.yabesh.ir/yetl1/handle/yetl/4287562
    Collections
    • Journal of Computing in Civil Engineering

Full item record

contributor author: Gabriel Lanzaro
contributor author: Tarek Sayed
contributor author: Rushdi Alsaleh
date accessioned: 2022-12-27T20:33:20Z
date available: 2022-12-27T20:33:20Z
date issued: 2022/11/01
identifier other: (ASCE)CP.1943-5487.0001053.pdf
identifier uri: http://yetl.yabesh.ir/yetl1/handle/yetl/4287562
publisher: ASCE
title: Modeling Motorcyclist–Pedestrian Near Misses: A Multiagent Adversarial Inverse Reinforcement Learning Approach
type: Journal Article
journal volume: 36
journal issue: 6
journal title: Journal of Computing in Civil Engineering
identifier doi: 10.1061/(ASCE)CP.1943-5487.0001053
journal firstpage: 04022038
journal lastpage: 04022038_17
page: 17
tree: Journal of Computing in Civil Engineering, 2022, Volume 36, Issue 6
contenttype: Fulltext
    DSpace software copyright © 2002-2015  DuraSpace
DSpace digital library software localized into Persian by Yabesh for Iranian libraries | Contact Yabesh
     