
contributor author: Borah, Kaustav Jyoti
date accessioned: 2024-12-24T18:48:53Z
date available: 2024-12-24T18:48:53Z
date copyright: 3/13/2024
date issued: 2024
identifier issn: 0022-0434
identifier other: ds_146_03_031009.pdf
identifier uri: http://yetl.yabesh.ir/yetl1/handle/yetl/4302795
description abstract: This paper introduces a novel approach for designing estimators that achieve consensus in uncertain multi-agent systems (MAS) in the presence of various fault conditions, under the assumption of an undirected and connected communication topology. The method combines an adaptive fault detection technique with an adaptation of the unscented Kalman filter (UKF), formulated within a Q-learning framework, that adjusts the noise covariance matrices and reconstructs the uncertain states of the MAS. In addition, the internal parameters of the neural network are trained using previous measurements. A Chebyshev neural network (CNN) is employed to model the uncertain plant, and a hyperbolic tangent-based robust control term mitigates the neural network approximation errors. The resulting approach is termed the reinforced UKF (RUKF). The paper also establishes the asymptotic stability of the proposed method and presents numerical simulations demonstrating its effectiveness with reduced computational load.
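
As an illustration only (not taken from the paper), the minimal Python sketch below shows the kind of Chebyshev functional-link expansion that a CNN approximator of this type builds on: the input is expanded into Chebyshev polynomials of the first kind and a weight vector is fit over that basis. The helper names (chebyshev_basis, cnn_output) and the least-squares fitting step are assumptions made for the example; they do not reflect the RUKF implementation described in the abstract.

import numpy as np

def chebyshev_basis(x, order):
    # Chebyshev polynomials of the first kind via the recurrence
    # T_0(x) = 1, T_1(x) = x, T_{n+1}(x) = 2x*T_n(x) - T_{n-1}(x)
    T = [np.ones_like(x), x]
    for _ in range(2, order + 1):
        T.append(2.0 * x * T[-1] - T[-2])
    return np.stack(T[:order + 1], axis=-1)

def cnn_output(x, w, order):
    # Functional-link output: weighted sum of the Chebyshev basis terms
    # (hypothetical helper standing in for the CNN approximator).
    return chebyshev_basis(x, order) @ w

# Example: fit a one-dimensional nonlinear map from noisy samples.
rng = np.random.default_rng(0)
xs = np.linspace(-1.0, 1.0, 200)
ys = np.sin(2.5 * xs) + 0.05 * rng.standard_normal(xs.shape)
order = 6
phi = chebyshev_basis(xs, order)
w, *_ = np.linalg.lstsq(phi, ys, rcond=None)  # least-squares weight estimate
print("fit RMSE:", np.sqrt(np.mean((cnn_output(xs, w, order) - ys) ** 2)))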
publisher: The American Society of Mechanical Engineers (ASME)
title: Nonlinear Filtering and Reinforcement Learning Based Consensus Achievement of Uncertain Multi-Agent Systems
type: Journal Paper
journal volume: 146
journal issue: 3
journal title: Journal of Dynamic Systems, Measurement, and Control
identifier doi: 10.1115/1.4064601
journal firstpage: 31009-1
journal lastpage: 31009-11
page: 11
tree: Journal of Dynamic Systems, Measurement, and Control; 2024; volume 146; issue 3
contenttype: Fulltext