YaBeSH Engineering and Technology Library


    Investigation of Deep Reinforcement Learning for Longitudinal-Axis Flight Control

    Source: Journal of Aerospace Engineering, 2024, Volume 37, Issue 2, page 04023111-1
    Authors: Ke Xu, Shuling Tian, Jian Xia
    DOI: 10.1061/JAEEEZ.ASENG-5007
    Publisher: ASCE
    Abstract: Traditional aerodynamic modeling methods face difficulties in multiparameter unsteady state-space modeling and control law design, which pose significant challenges for flight control. The rapid development of machine learning offers a new approach to unsteady aerodynamic modeling and control law design, with significant value for both theoretical research and engineering application. In this paper, an unsteady aerodynamic model environment was established based on a deep neural network (DNN), and a dynamic computational fluid dynamics (CFD) virtual environment was taken as an approximation of the real environment. A deep reinforcement learning (DRL) longitudinal-axis flight control method was studied in both environments. The deep deterministic policy gradient (DDPG) algorithm was used to implement flight control, and the effects of different constraints in the reward function on the DRL results were also studied. The results show that the DDPG agent effectively achieves longitudinal-axis flight control in the model environment. The agent trained in the model environment was then applied to longitudinal-axis flight control in the dynamic CFD virtual environment, providing a reference for coupling dynamic CFD with DRL. A comparison of the control results from the two environments shows that the DDPG agent is robust in both. These results suggest that machine learning and deep reinforcement learning can effectively solve complicated multiconstraint, uncertain control tasks. The DNN-based unsteady aerodynamic model, built from dynamic CFD calculations and machine learning, shows advantages in handling highly nonlinear data under unsteady conditions. In addition, the DDPG algorithm applies well to longitudinal-axis flight control: the agent can be trained in the DNN-based unsteady aerodynamic model environment and then deployed in the dynamic CFD virtual environment, where it remains robust. This opens the possibility of achieving longitudinal-axis flight control with such an agent in a real environment.
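    The pipeline the abstract describes — train a controller on a cheap surrogate of the aerodynamics, then evaluate it in a higher-fidelity environment — can be illustrated with a deliberately tiny sketch. Everything below is an illustrative assumption, not material from the article: a damped linear pitch model stands in for the paper's DNN surrogate, a proportional policy with a hill-climb search stands in for the DDPG actor and its gradient update, and a squared tracking-error reward mirrors the kind of reward-function constraint the paper studies.

```python
# Hypothetical stand-in for a DNN surrogate: a damped second-order
# model of longitudinal (pitch-axis) dynamics. All gains and reward
# weights are illustrative assumptions, not values from the article.
def surrogate_step(theta, q, elevator, dt=0.02):
    """One step of a toy pitch model: theta = pitch angle, q = pitch rate."""
    q_dot = -2.0 * q - 4.0 * theta + 6.0 * elevator
    q_new = q + q_dot * dt           # semi-implicit Euler integration
    theta_new = theta + q_new * dt
    return theta_new, q_new

def episode_return(gain, target=0.1, steps=200):
    """Roll out a deterministic policy u = gain * error and score it
    with a tracking-error reward, as in DRL reward shaping."""
    theta, q, total = 0.0, 0.0, 0.0
    for _ in range(steps):
        error = target - theta
        elevator = gain * error      # deterministic policy (the "actor")
        theta, q = surrogate_step(theta, q, elevator)
        total += -(error ** 2)       # reward penalizes tracking error
    return total

# Crude policy improvement: hill-climb the gain on the surrogate,
# mimicking how an agent is trained in the cheap model environment
# before being evaluated in a more expensive one.
best_gain, best_ret = 0.5, episode_return(0.5)
for cand in [0.5 + 0.25 * i for i in range(1, 20)]:
    r = episode_return(cand)
    if r > best_ret:
        best_gain, best_ret = cand, r
```

    In the paper's setting, the surrogate would be the trained DNN, the policy a neural actor updated by DDPG, and the final evaluation would run against the dynamic CFD environment instead of the surrogate.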


    URI: http://yetl.yabesh.ir/yetl1/handle/yetl/4297173
    Collections: Journal of Aerospace Engineering

    Full item record

    contributor author: Ke Xu
    contributor author: Shuling Tian
    contributor author: Jian Xia
    date accessioned: 2024-04-27T22:39:12Z
    date available: 2024-04-27T22:39:12Z
    date issued: 2024/03/01
    identifier other: 10.1061-JAEEEZ.ASENG-5007.pdf
    identifier uri: http://yetl.yabesh.ir/yetl1/handle/yetl/4297173
    publisher: ASCE
    title: Investigation of Deep Reinforcement Learning for Longitudinal-Axis Flight Control
    type: Journal Article
    journal volume: 37
    journal issue: 2
    journal title: Journal of Aerospace Engineering
    identifier doi: 10.1061/JAEEEZ.ASENG-5007
    journal firstpage: 04023111-1
    journal lastpage: 04023111-20
    page: 20
    tree: Journal of Aerospace Engineering; 2024; Volume 37; Issue 2
    contenttype: Fulltext
    DSpace software copyright © 2002-2015 DuraSpace
    "DSpace" digital library software localized into Persian by Yabesh for Iranian libraries | Contact Yabesh