Control (optimal control theory)

From Wikipedia, the free encyclopedia

This is an old revision of this page, as edited by Bender235 (talk | contribs) at 23:22, 18 July 2020.

In optimal control theory, a control is a variable chosen by the controller or agent to manipulate state variables, much like an actual control valve. Unlike a state variable, it does not have a predetermined equation of motion.[1] The goal of optimal control theory is to find a sequence of controls (within an admissible set) that achieves an optimal path for the state variables (with respect to a loss function).
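As a concrete sketch of these ideas, consider a toy discrete-time problem. The dynamics x_{t+1} = x_t + u_t, the quadratic loss, and the small admissible set below are all illustrative assumptions, not taken from the cited source; real problems are typically solved with dynamic programming or Pontryagin's principle rather than brute-force enumeration.

```python
from itertools import product

def trajectory_cost(x0, controls):
    # Hypothetical equation of motion: x_{t+1} = x_t + u_t.
    # The state x has a predetermined update rule; the controls u do not --
    # they are freely chosen from the admissible set.
    x, cost = x0, 0.0
    for u in controls:
        cost += x**2 + u**2   # running loss on state and control
        x = x + u             # state update driven by the chosen control
    cost += x**2              # terminal loss on the final state
    return cost

# Admissible set of control values at each step (small and discrete
# here so that every control sequence can be enumerated)
admissible = (-1.0, -0.5, 0.0, 0.5, 1.0)
horizon = 4
x0 = 2.0

# Brute-force search for the optimal control sequence
best = min(product(admissible, repeat=horizon),
           key=lambda seq: trajectory_cost(x0, seq))
print("optimal controls:", best)
print("optimal cost:", trajectory_cost(x0, best))
```

The optimal sequence drives the state toward zero while penalizing large control effort, which is the trade-off the loss function encodes.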

References

  1. ^ Ferguson, Brian S.; Lim, G. C. (1998). Introduction to Dynamic Economic Problems. Manchester: Manchester University Press. p. 162. ISBN 0-7190-4996-2.