YaBeSH Engineering and Technology Library


    Policy Analysis of Adaptive Traffic Signal Control Using Reinforcement Learning

Source: Journal of Computing in Civil Engineering, 2020, Vol. 34, Issue 1
Authors: Wade Genders, Saiedeh Razavi
    DOI: 10.1061/(ASCE)CP.1943-5487.0000859
    Publisher: ASCE
Abstract: Previous research studies have successfully developed adaptive traffic signal controllers using reinforcement learning; however, few have focused on analyzing what reinforcement learning specifically does differently from other traffic signal control methods. This study proposes and develops two reinforcement learning adaptive traffic signal controllers, analyzes their learned policies, and compares them to a Webster's controller. The asynchronous Q-learning and advantage actor-critic algorithms are used to develop reinforcement learning traffic signal controllers using neural network function approximation with two action spaces. Using an aggregate statistic state representation (i.e., vehicle queue and density), the proposed reinforcement learning traffic signal controllers develop the optimal policy in a dynamic, stochastic traffic microsimulation. Results show that the reinforcement learning controllers increase red and yellow times but ultimately achieve superior performance compared to the Webster's controller, reducing mean queues, stopped time, and travel time. The reinforcement learning controllers exhibit goal-oriented behavior, developing a policy that excludes many phases found in a traditional phase cycle (i.e., protected turning movements), instead choosing phases that maximize reward, whereas the Webster's controller is constrained by cyclical logic that diminishes performance.
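
The abstract describes the approach concretely enough to sketch. Below is a minimal, illustrative Python sketch (not the authors' code) of the two pieces it names: Webster's cycle-length formula as the fixed-time baseline, and a Q-learning agent with neural-network function approximation that selects signal phases from an aggregate queue/density state. The toy environment, lane counts, network size, and all parameter values are assumptions made for illustration; the paper's agents ran against a dynamic, stochastic traffic microsimulation.

    # Minimal sketch, assuming a 4-way intersection with 2 lanes per approach.
    # Not the authors' implementation; all dimensions and constants are illustrative.
    import numpy as np

    rng = np.random.default_rng(0)

    N_LANES = 8               # assumed: 4 approaches x 2 lanes
    N_PHASES = 4              # assumed action space: one green phase per approach pair
    STATE_DIM = 2 * N_LANES   # aggregate state: queue + density per lane

    def webster_cycle(lost_time_s, flow_ratios):
        """Webster's optimal cycle length C = (1.5L + 5) / (1 - Y), where L is
        total lost time per cycle and Y is the sum of critical flow ratios."""
        return (1.5 * lost_time_s + 5.0) / (1.0 - sum(flow_ratios))

    C = webster_cycle(lost_time_s=12.0, flow_ratios=[0.20, 0.15, 0.20, 0.10])  # ~66 s

    class QNetwork:
        """One-hidden-layer Q-value approximator trained by semi-gradient TD."""
        def __init__(self, state_dim, n_actions, hidden=32, lr=1e-3):
            self.W1 = rng.normal(0.0, 0.1, (state_dim, hidden))
            self.W2 = rng.normal(0.0, 0.1, (hidden, n_actions))
            self.lr = lr

        def q(self, s):
            self._h = np.tanh(s @ self.W1)     # cache hidden activations
            return self._h @ self.W2

        def update(self, s, a, target):
            q = self.q(s)
            td_err = target - q[a]
            grad_out = np.zeros_like(q)
            grad_out[a] = -td_err              # d(0.5 * td_err^2) / dq[a]
            grad_h = (self.W2 @ grad_out) * (1.0 - self._h ** 2)
            self.W2 -= self.lr * np.outer(self._h, grad_out)
            self.W1 -= self.lr * np.outer(s, grad_h)

    def step_env(state, action):
        """Toy stand-in for a traffic microsimulator: serving a phase drains
        that phase's lane queues; reward is the negative total queue."""
        queues = state[:N_LANES].copy()
        served = slice(2 * action, 2 * action + 2)              # lanes of chosen phase
        queues[served] = np.maximum(queues[served] - 3.0, 0.0)  # departures on green
        queues = queues + rng.poisson(0.5, N_LANES)             # stochastic arrivals
        density = queues / 20.0                                 # crude occupancy proxy
        return np.concatenate([queues, density]), -queues.sum()

    net = QNetwork(STATE_DIM, N_PHASES)
    state = np.zeros(STATE_DIM)
    eps, gamma = 0.1, 0.95
    for _ in range(5000):
        if rng.random() < eps:
            a = int(rng.integers(N_PHASES))                     # explore
        else:
            a = int(np.argmax(net.q(state)))                    # exploit
        next_state, reward = step_env(state, a)
        target = reward + gamma * np.max(net.q(next_state))     # Q-learning target
        net.update(state, a, target)
        state = next_state

The single-threaded Q-learner above only illustrates the shared idea of learning phase values from aggregate traffic statistics; per the abstract, the paper's controllers additionally used asynchronous learning and, in one variant, an advantage actor-critic update rather than the max-based Q-learning target.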


URI: http://yetl.yabesh.ir/yetl1/handle/yetl/4268361
Collections: Journal of Computing in Civil Engineering

Full item record

contributor author: Wade Genders
contributor author: Saiedeh Razavi
date accessioned: 2022-01-30T21:31:40Z
date available: 2022-01-30T21:31:40Z
date issued: 2020-01-01
identifier other: %28ASCE%29CP.1943-5487.0000859.pdf
identifier uri: http://yetl.yabesh.ir/yetl1/handle/yetl/4268361
description abstract: (abstract as shown above)
publisher: ASCE
title: Policy Analysis of Adaptive Traffic Signal Control Using Reinforcement Learning
type: Journal Paper
journal volume: 34
journal issue: 1
journal title: Journal of Computing in Civil Engineering
identifier doi: 10.1061/(ASCE)CP.1943-5487.0000859
page: 10
tree: Journal of Computing in Civil Engineering; 2020; Volume 34; Issue 1
contenttype: Fulltext