YaBeSH Engineering and Technology Library


    Autonomous Navigation for Cellular-Connected UAV in Highly Dynamic Environments: A Deep Reinforcement Learning Approach

    Source: Journal of Aerospace Engineering, 2024, Volume 37, Issue 5, page 04024063-1
    Author: Di Wu, Zhiyi Shi, Yibo Zhang, Mengxing Huang
    DOI: 10.1061/JAEEEZ.ASENG-5265
    Publisher: American Society of Civil Engineers
    Abstract: This study investigates the navigation problem for cellular-connected unmanned aerial vehicles (UAVs), particularly in highly dynamic urban environments. To address this problem, the UAV is required not only to evade high-speed obstacles in the airspace but also to avoid the coverage holes of cellular base stations (BSs). Moreover, the UAV needs to reach the destination to complete the navigation task. Hence, it is imperative to balance action selection between collision evasion and destination approaching, while also treating the expected communication outage duration as a crucial reference. To overcome this multiobjective optimization challenge, we propose a deep reinforcement learning (DRL)-based algorithm that enables the UAV to acquire an optimal decision-making policy. Specifically, we formulate the navigation problem as a Markov decision process (MDP) and develop a layered recurrent soft actor–critic (RSAC)-based DRL framework that drives the UAV to resolve two fundamental subtasks of UAV navigation. Furthermore, we develop a multilayer perceptron (MLP)-based integrated evaluation network to select a particular action from the two subsolutions, satisfying the demands of the entire navigation problem. The layered architecture simplifies the navigation problem, thereby improving the convergence speed of the proposed algorithm. Numerical results indicate that the layered-RSAC-based UAV can autonomously perform scheduled navigation tasks in our designed simulated urban environments with superior effectiveness.
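    The layered structure described in the abstract — two subpolicies, each handling one subtask (collision evasion and destination approaching), arbitrated by an integrated evaluation network — can be sketched in miniature. The functions, thresholds, and scoring rule below are illustrative assumptions standing in for the paper's trained RSAC subpolicies and MLP evaluator, not the authors' actual networks.

    ```python
    import math

    def evasion_policy(obstacle_bearing):
        """Subtask 1 (stand-in): propose a heading directly away from the obstacle."""
        return obstacle_bearing + math.pi

    def goal_policy(goal_bearing):
        """Subtask 2 (stand-in): propose a heading toward the destination."""
        return goal_bearing

    def select_action(obstacle_dist, outage_risk, obstacle_bearing, goal_bearing,
                      safe_dist=50.0):
        """Stand-in for the MLP-based integrated evaluation network.

        Scores the current danger from obstacle proximity and expected
        communication-outage risk, then picks one of the two subsolutions.
        The 0.5 threshold and the linear danger score are assumptions.
        """
        proximity = max(0.0, 1.0 - obstacle_dist / safe_dist)
        danger = proximity + outage_risk
        if danger > 0.5:
            return evasion_policy(obstacle_bearing)
        return goal_policy(goal_bearing)

    # Obstacle far away, low outage risk: the evaluator defers to the goal policy.
    heading = select_action(obstacle_dist=200.0, outage_risk=0.1,
                            obstacle_bearing=0.0, goal_bearing=1.2)
    ```

    In the paper, both subpolicies are recurrent actor–critic networks trained within the MDP formulation, and the evaluator is itself learned; the hard-coded rule here only illustrates the arbitration pattern that makes the layered architecture converge faster than a single monolithic policy.
    
    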


    URI: http://yetl.yabesh.ir/yetl1/handle/yetl/4298542
    Collections: Journal of Aerospace Engineering

    Full item record

    contributor author: Di Wu
    contributor author: Zhiyi Shi
    contributor author: Yibo Zhang
    contributor author: Mengxing Huang
    date accessioned: 2024-12-24T10:14:06Z
    date available: 2024-12-24T10:14:06Z
    date copyright: 9/1/2024 12:00:00 AM
    date issued: 2024
    identifier other: JAEEEZ.ASENG-5265.pdf
    identifier uri: http://yetl.yabesh.ir/yetl1/handle/yetl/4298542
    publisher: American Society of Civil Engineers
    title: Autonomous Navigation for Cellular-Connected UAV in Highly Dynamic Environments: A Deep Reinforcement Learning Approach
    type: Journal Article
    journal volume: 37
    journal issue: 5
    journal title: Journal of Aerospace Engineering
    identifier doi: 10.1061/JAEEEZ.ASENG-5265
    journal firstpage: 04024063-1
    journal lastpage: 04024063-14
    page: 14
    tree: Journal of Aerospace Engineering; 2024; Volume 37; Issue 5
    contenttype: Fulltext
    DSpace software copyright © 2002-2015 DuraSpace
    The "DSpace" digital library software, localized into Persian by Yabesh for Iranian libraries | Contact Yabesh