contributor author | Andreas A. Malikopoulos | |
contributor author | Panos Y. Papalambros | |
contributor author | Dennis N. Assanis | |
date accessioned | 2017-05-09T00:32:10Z | |
date available | 2017-05-09T00:32:10Z | |
date copyright | July 2009 | |
date issued | 2009 | |
identifier issn | 0022-0434 | |
identifier other | JDSMAA-26497#041010_1.pdf | |
identifier uri | http://yetl.yabesh.ir/yetl/handle/yetl/140199 | |
description abstract | Modeling dynamic systems subject to stochastic disturbances in order to derive a control policy is a ubiquitous task in engineering. However, in some instances obtaining a model of a system may be impractical or impossible. Alternative approaches have been developed using a simulation-based stochastic framework, in which the system interacts with its environment in real time and obtains information that can be processed to produce an optimal control policy. In this context, the problem of developing a policy for controlling the system’s behavior is formulated as a sequential decision-making problem under uncertainty. This paper considers the problem of deriving, in real time, a control policy for a dynamic system with unknown dynamics, formulated as a sequential decision-making problem under uncertainty. The evolution of the system is modeled as a controlled Markov chain. A new state-space representation model and a learning mechanism are proposed that can be used to improve system performance over time. The major difference between existing methods and the proposed learning model is that the latter utilizes an evaluation function that considers the expected cost achievable by state transitions forward in time. The model allows decision-making based on gradually enhanced knowledge of the system’s response as it transitions from one state to another, in conjunction with the actions taken at each state. The proposed model is demonstrated on the single cart-pole balancing problem and a vehicle cruise-control problem. | |
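The abstract describes real-time decision-making on a controlled Markov chain with unknown dynamics, using an evaluation function over the expected costs of state transitions forward in time. The following is a minimal illustrative sketch in Python, not the paper's algorithm: it assumes a discrete state and action space, empirical estimates of transition probabilities and one-step costs, and a hypothetical one-step-lookahead evaluation; all names, the discount factor, and the update rule are assumptions for illustration.

    # Illustrative sketch only (hypothetical, not the authors' method):
    # learn empirical transition/cost estimates online and pick actions
    # by a forward-looking expected-cost evaluation.
    import random
    from collections import defaultdict

    class ForwardLookingLearner:
        def __init__(self, states, actions, discount=0.95):
            self.states = states
            self.actions = actions
            self.discount = discount
            self.counts = defaultdict(int)          # (s, a, s') -> visit count
            self.action_counts = defaultdict(int)   # (s, a) -> visit count
            self.cost_sums = defaultdict(float)     # (s, a) -> accumulated cost
            self.value = defaultdict(float)         # s -> estimated expected cost-to-go

        def expected_cost(self, s, a):
            # Empirical one-step cost plus expected cost of reachable next
            # states: the forward-looking evaluation of taking a in s.
            n = self.action_counts[(s, a)]
            if n == 0:
                return 0.0  # unexplored pairs look attractive (optimistic)
            step_cost = self.cost_sums[(s, a)] / n
            future = sum(self.counts[(s, a, s2)] / n * self.value[s2]
                         for s2 in self.states)
            return step_cost + self.discount * future

        def choose_action(self, s, epsilon=0.1):
            # Mostly pick the lowest expected-cost action; explore occasionally.
            if random.random() < epsilon:
                return random.choice(self.actions)
            return min(self.actions, key=lambda a: self.expected_cost(s, a))

        def observe(self, s, a, cost, s_next):
            # Update empirical estimates from one observed transition, then
            # refresh the evaluation of the visited state.
            self.counts[(s, a, s_next)] += 1
            self.action_counts[(s, a)] += 1
            self.cost_sums[(s, a)] += cost
            self.value[s] = min(self.expected_cost(s, a2) for a2 in self.actions)

    # Example use on a hypothetical two-state, two-action chain:
    #   learner = ForwardLookingLearner(states=[0, 1], actions=[0, 1])
    #   a = learner.choose_action(s=0)
    #   learner.observe(s=0, a=a, cost=1.0, s_next=1)

In this toy setup the interaction loop (observe a cost, update estimates, re-evaluate the visited state) stands in for learning from gradually enhanced knowledge of the system's response; the paper's actual state-space representation and evaluation function are defined in the full text.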
publisher | The American Society of Mechanical Engineers (ASME) | |
title | A Real-Time Computational Learning Model for Sequential Decision-Making Problems Under Uncertainty | |
type | Journal Paper | |
journal volume | 131 | |
journal issue | 4 | |
journal title | Journal of Dynamic Systems, Measurement, and Control | |
identifier doi | 10.1115/1.3117200 | |
journal firstpage | 41010 | |
identifier eissn | 1528-9028 | |
tree | Journal of Dynamic Systems, Measurement, and Control; 2009; volume 131; issue 4 | |
contenttype | Fulltext | |