When Is a Linear Control System Optimal?
Source: Journal of Fluids Engineering, 1964, Volume 86, Issue 1, page 51
Author: R. E. Kalman
DOI: 10.1115/1.3653115
Publisher: The American Society of Mechanical Engineers (ASME)
Abstract: The purpose of this paper is to formulate, study, and (in certain cases) resolve the Inverse Problem of Optimal Control Theory, which is the following: Given a control law, find all performance indices for which this control law is optimal. Under the assumptions of (a) linear constant plant, (b) linear constant control law, (c) measurable state variables, (d) quadratic loss functions with constant coefficients, (e) single control variable, we give a complete analysis of this problem and obtain various explicit conditions for the optimality of a given control law. An interesting feature of the analysis is the central role of frequency-domain concepts, which have been ignored in optimal control theory until very recently. The discussion is presented in rigorous mathematical form. The central conclusion is the following (Theorem 6): A stable control law is optimal if and only if the absolute value of the corresponding return difference is at least equal to one at all frequencies. This provides a beautifully simple connecting link between modern control theory and the classical point of view which regards feedback as a means of reducing component variations.
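The abstract's central result (Theorem 6) — a stable control law is optimal for some quadratic cost if and only if the magnitude of the return difference is at least one at all frequencies — can be checked numerically. The sketch below is a hypothetical illustration, not code from the paper: it assumes a double-integrator plant and the state-feedback gain k = [1, √3], which is the LQR-optimal gain for that plant under unit state and control weights, and samples |1 + k(jωI − A)⁻¹b| on a frequency grid.

```python
import numpy as np

# Hypothetical example (not from the paper): double-integrator plant
# dx/dt = A x + b u, with the LQR-optimal gain for Q = I, R = 1.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
b = np.array([[0.0], [1.0]])
k = np.array([[1.0, np.sqrt(3.0)]])  # optimal gain for this plant/cost

def return_difference(omega):
    """Return difference F(jw) = 1 + k (jwI - A)^{-1} b."""
    s = 1j * omega
    return 1.0 + (k @ np.linalg.solve(s * np.eye(2) - A, b)).item()

# Theorem 6 check: |F(jw)| >= 1 at all frequencies (sampled on a grid).
omegas = np.logspace(-2, 2, 400)
mags = np.abs([return_difference(w) for w in omegas])
print(bool(np.all(mags >= 1.0)))
```

For this example the condition holds with margin: |F(jω)|² = 1 + 1/ω² + 1/ω⁴ exceeds one at every frequency, consistent with the gain being optimal; a gain violating the bound at some frequency would, by Theorem 6, be optimal for no quadratic cost of the stated form.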
Keywords: Theorems (Mathematics), Control theory, Control systems, Optimal control, Feedback, Frequency, Functions, Industrial plants, Inverse problems
Full item record:
contributor author: R. E. Kalman
date accessioned: 2017-05-08T23:23:34Z
date available: 2017-05-08T23:23:34Z
date copyright: March, 1964
date issued: 1964
identifier issn: 0098-2202
identifier other: JFEGA4-27253#51_1.pdf
identifier uri: http://yetl.yabesh.ir/yetl/handle/yetl/101779
publisher: The American Society of Mechanical Engineers (ASME)
title: When Is a Linear Control System Optimal?
type: Journal Paper
journal volume: 86
journal issue: 1
journal title: Journal of Fluids Engineering
identifier doi: 10.1115/1.3653115
journal first page: 51
journal last page: 60
identifier eissn: 1528-901X
keywords: Theorems (Mathematics)
keywords: Control theory
keywords: Control systems
keywords: Optimal control
keywords: Feedback
keywords: Frequency
keywords: Functions
keywords: Industrial plants
keywords: Inverse problems
tree: Journal of Fluids Engineering, 1964, Volume 86, Issue 1
contenttype: Fulltext