This paper presents a reinforcement learning (RL)-based adaptive fault-tolerant control scheme for quadrotor unmanned aerial vehicles (UAVs) subject to external disturbances, input uncertainties, and structural uncertainties. In practical engineering, UAV systems are often influenced by such multiple-source coupled uncertainties, which makes effective controller design challenging. First, by introducing a penalty function, a critic network is established to evaluate control performance. Subsequently, the output signals of the critic network are introduced into the update law of the actor network, acting as a reinforcement signal that drives the actor network to approximate unknown nonlinearities. Moreover, an adaptive disturbance boundary estimator is constructed to attenuate the external disturbances and network approximation errors, which are collected into a lumped disturbance term. Additionally, a series of adaptive compensating laws is developed to handle the input uncertainties. Finally, to tackle the multisource coupled uncertainties, a novel RL-based adaptive fault-tolerant controller is proposed that integrates the RL framework, the adaptive disturbance boundary estimator, and the adaptive input-uncertainty compensating laws. Lyapunov analysis proves that the closed-loop system is asymptotically stable and that all signals are bounded. Numerical simulations demonstrate the effectiveness and superiority of the proposed method.