Model predictive control


Model predictive control (MPC) is an advanced method of process control that has been used in the process industries, such as chemical plants and oil refineries, since the 1980s. In recent years it has also been used in power system balancing models.[1] Model predictive controllers rely on dynamic models of the process, most often linear empirical models obtained by system identification. The main advantage of MPC is that it optimizes the current timeslot while taking future timeslots into account. This is achieved by optimizing over a finite time horizon but implementing only the current timeslot. MPC can therefore anticipate future events and take control actions accordingly; PID and LQR controllers do not have this predictive ability. MPC is a digital control method.

Overview

The models used in MPC are generally intended to represent the behavior of complex dynamical systems. The additional complexity of the MPC control algorithm is not generally needed to provide adequate control of simple systems, which are often controlled well by generic PID controllers. Common dynamic characteristics that are difficult for PID controllers include large time delays and high-order dynamics.

MPC models predict the change in the dependent variables of the modeled system that will be caused by changes in the independent variables. In a chemical process, independent variables that can be adjusted by the controller are often either the setpoints of regulatory PID controllers (pressure, flow, temperature, etc.) or the final control elements (valves, dampers, etc.). Independent variables that cannot be adjusted by the controller are treated as disturbances. Dependent variables in these processes are other measurements that represent either control objectives or process constraints.

MPC uses the current plant measurements, the current dynamic state of the process, the MPC models, and the process variable targets and limits to calculate future changes in the dependent variables. These changes are calculated to hold the dependent variables close to target while honoring constraints on both independent and dependent variables. The MPC typically sends out only the first change in each independent variable to be implemented, and repeats the calculation when the next change is required.

While many real processes are not linear, they can often be considered to be approximately linear over a small operating range. Linear MPC approaches are used in the majority of applications with the feedback mechanism of the MPC compensating for prediction errors due to structural mismatch between the model and the process. In model predictive controllers that consist only of linear models, the superposition principle of linear algebra enables the effect of changes in multiple independent variables to be added together to predict the response of the dependent variables. This simplifies the control problem to a series of direct matrix algebra calculations that are fast and robust.
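For instance, with a linear state-space model the predictions can be stacked over the whole horizon, reducing the finite-horizon problem to one direct least-squares solve. A minimal sketch of this idea (the double-integrator model, horizon length, and weights below are illustrative assumptions, not taken from any particular MPC package):

```python
import numpy as np

# Discrete double integrator: x = [position, velocity], u = acceleration
# (illustrative model, 0.1 s sample time)
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.005],
              [0.1]])
N = 20                      # prediction horizon
Q = np.diag([10.0, 1.0])    # state tracking weights
R = 0.1                     # input weight

# Stack the predictions over the horizon: X = F x0 + G U
n, m = A.shape[0], B.shape[1]
F = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(N)])
G = np.zeros((N * n, N * m))
for i in range(N):
    for j in range(i + 1):
        G[i*n:(i+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, i - j) @ B

# Minimize sum_k x_k'Q x_k + u_k'R u_k: an ordinary least-squares solve in U
Qbar = np.kron(np.eye(N), Q)
Rbar = R * np.eye(N * m)
x0 = np.array([1.0, 0.0])   # start 1 m from the setpoint, at rest
H = G.T @ Qbar @ G + Rbar
f = G.T @ Qbar @ (F @ x0)
U = np.linalg.solve(H, -f)  # optimal input sequence over the horizon
u0 = U[0]                   # only this first move is sent to the plant
```

Because the problem is an unconstrained quadratic, the solve is a single factorization of a fixed matrix, which is what makes linear MPC fast and robust in practice.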

When linear models are not sufficiently accurate to represent the real process nonlinearities, several approaches can be used. In some cases, the process variables can be transformed before and/or after the linear MPC model to reduce the nonlinearity. The process can be controlled with nonlinear MPC that uses a nonlinear model directly in the control application. The nonlinear model may be in the form of an empirical data fit (e.g. artificial neural networks) or a high-fidelity dynamic model based on fundamental mass and energy balances. The nonlinear model may be linearized to derive a Kalman filter or specify a model for linear MPC.
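One common way to obtain such a linearization is by finite differences around an operating point, yielding the Jacobians needed for a Kalman filter or a linear MPC model. A minimal sketch (the pendulum-like model is an illustrative assumption):

```python
import numpy as np

def linearize(f, x0, u0, eps=1e-6):
    """Jacobians A = df/dx, B = df/du of x+ = f(x, u) by finite differences."""
    n, m = len(x0), len(u0)
    fx0 = f(x0, u0)
    A = np.zeros((n, n))
    B = np.zeros((n, m))
    for i in range(n):
        dx = np.zeros(n); dx[i] = eps
        A[:, i] = (f(x0 + dx, u0) - fx0) / eps
    for j in range(m):
        du = np.zeros(m); du[j] = eps
        B[:, j] = (f(x0, u0 + du) - fx0) / eps
    return A, B

# Illustrative nonlinear model (pendulum-like), linearized at the origin
f = lambda x, u: np.array([x[0] + 0.1 * x[1],
                           x[1] + 0.1 * (-np.sin(x[0]) + u[0])])
A, B = linearize(f, np.zeros(2), np.zeros(1))
```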

Theory behind MPC

Figure: A discrete MPC scheme.

MPC is based on iterative, finite-horizon optimization of a plant model. At time t the current plant state is sampled and a cost-minimizing control strategy is computed (via a numerical minimization algorithm) for a relatively short time horizon in the future: [t, t+T]. Specifically, an online or on-the-fly calculation is used to explore state trajectories that emanate from the current state and to find (via the solution of Euler–Lagrange type equations) a cost-minimizing control strategy until time t+T. Only the first step of the control strategy is implemented; then the plant state is sampled again and the calculation is repeated starting from the new current state, yielding a new control and a new predicted state path. Because the prediction horizon is repeatedly shifted forward, MPC is also called receding horizon control. Although this approach is not optimal, in practice it has given very good results. Much academic research has been done to find fast methods of solution of Euler–Lagrange type equations, to understand the global stability properties of MPC's local optimization, and in general to improve the MPC method. To some extent, the theoreticians have been trying to catch up with the control engineers when it comes to MPC.[2]
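The receding-horizon loop described above can be sketched for a scalar plant, where each finite-horizon problem reduces to a small least-squares solve (the plant parameters, horizon, and weights are illustrative assumptions):

```python
import numpy as np

# Unstable scalar plant x_{t+1} = a x_t + b u_t, regulated to the origin
a, b = 1.2, 1.0
N = 5                       # prediction horizon
q, r = 1.0, 0.1             # stage-cost weights

def solve_horizon(x0):
    """Minimize sum_k q x_k^2 + r u_k^2 over [t, t+N] via least squares."""
    F = np.array([a ** (k + 1) for k in range(N)])
    G = np.array([[a ** (i - j) * b if j <= i else 0.0
                   for j in range(N)] for i in range(N)])
    H = q * G.T @ G + r * np.eye(N)
    return np.linalg.solve(H, -q * G.T @ F * x0)

x, traj = 2.0, [2.0]
for t in range(30):
    U = solve_horizon(x)    # re-optimize from the newly sampled state
    x = a * x + b * U[0]    # implement only the first step of the strategy
    traj.append(x)
```

Even though each individual horizon is truncated, the repeated re-optimization from the measured state regulates the (open-loop unstable) plant to the origin.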

Principles of MPC

Model Predictive Control (MPC) is a multivariable control algorithm that uses:

  • an internal dynamic model of the process
  • a history of past control moves and
  • an optimization cost function J over the receding prediction horizon,

to calculate the optimum control moves.

The optimization cost function is given by

J = \sum_{i=1}^N w_{x_i} (r_i - x_i)^2 + \sum_{i=1}^N w_{u_i} (\Delta u_i)^2,

minimized without violating constraints (low/high limits), with:

  • x_i = i-th controlled variable (e.g. measured temperature)
  • r_i = i-th reference variable (e.g. required temperature)
  • u_i = i-th manipulated variable (e.g. control valve position)
  • w_{x_i} = weighting coefficient reflecting the relative importance of x_i
  • w_{u_i} = weighting coefficient penalizing relatively large changes in u_i
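As a numerical illustration, the cost J for one candidate move plan can be evaluated directly from the definition above (all values below are made-up illustrative numbers):

```python
import numpy as np

# Evaluate J = sum_i w_x_i (r_i - x_i)^2 + sum_i w_u_i (Delta u_i)^2
r_sp = np.array([350.0, 2.0])   # setpoints (e.g. temperature, flow)
x    = np.array([348.0, 2.1])   # predicted controlled variables
du   = np.array([0.5, -0.2])    # proposed manipulated-variable moves
w_x  = np.array([1.0, 10.0])    # tracking weights
w_u  = np.array([0.1, 0.1])     # move-suppression weights

J = np.sum(w_x * (r_sp - x) ** 2) + np.sum(w_u * du ** 2)
# J = 1*(2)^2 + 10*(-0.1)^2 + 0.1*(0.5)^2 + 0.1*(-0.2)^2 = 4.129
```

The optimizer searches over the move plan du (subject to the low/high limits) for the plan that minimizes this J.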

Nonlinear MPC

Nonlinear model predictive control (NMPC) is a variant of model predictive control that is characterized by the use of nonlinear system models in the prediction. As in linear MPC, NMPC requires the iterative solution of optimal control problems on a finite prediction horizon. While these problems are convex in linear MPC, in nonlinear MPC they are in general no longer convex. This poses challenges for both NMPC stability theory and numerical solution.[3]

The numerical solution of the NMPC optimal control problems is typically based on direct optimal control methods using Newton-type optimization schemes, in one of several variants: direct single shooting, direct multiple shooting, or direct collocation. NMPC algorithms typically exploit the fact that consecutive optimal control problems are similar to each other.

This allows the Newton-type solution procedure to be initialized efficiently by a suitably shifted guess from the previously computed optimal solution, saving considerable computation time. The similarity of subsequent problems is exploited even further by path-following algorithms (or "real-time iterations") that never attempt to iterate any optimization problem to convergence, but instead take only one iteration towards the solution of the current NMPC problem before proceeding to the next one, which is suitably initialized.
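The shift-initialization idea can be sketched as follows, using scipy.optimize.minimize as a stand-in for a Newton-type NLP solver (the nonlinear plant, horizon, and weights are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative nonlinear plant: x_{t+1} = x_t + 0.1 * (-x_t^3 + u_t)
def step(x, u):
    return x + 0.1 * (-x ** 3 + u)

N = 10  # prediction horizon

def cost(U, x0):
    """Finite-horizon cost along the predicted nonlinear trajectory."""
    x, J = x0, 0.0
    for u in U:
        x = step(x, u)
        J += x ** 2 + 0.01 * u ** 2
    return J

def shift(U):
    # Warm start: drop the move just applied, repeat the last one
    return np.concatenate([U[1:], U[-1:]])

x, U_guess = 2.0, np.zeros(N)
for t in range(20):
    sol = minimize(cost, U_guess, args=(x,))  # warm-started NLP solve
    x = step(x, sol.x[0])                     # apply only the first move
    U_guess = shift(sol.x)                    # shifted guess for next problem
```

Because consecutive problems differ only in the initial state, the shifted guess typically starts the solver very close to the new optimum.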

While NMPC applications have in the past mostly been used in the process and chemical industries with comparatively slow sampling rates, NMPC is increasingly being applied in applications with high sampling rates, e.g., in the automotive industry, or even when the states are distributed in space (distributed parameter systems).[4]

Robust MPC

Robust variants of model predictive control are able to account for set-bounded disturbances while still ensuring state constraints are met. There are three main approaches to robust MPC:

  • Min-max MPC. In this formulation, the optimization is performed with respect to all possible evolutions of the disturbance.[5] This is the optimal solution to linear robust control problems; however, it carries a high computational cost.
  • Constraint-tightening MPC. Here the state constraints are tightened by a given margin so that a trajectory can be guaranteed to be found under any evolution of the disturbance.[6]
  • Tube MPC. This uses an independent nominal model of the system, and uses a feedback controller to ensure the actual state converges to the nominal state.[7] The amount of separation required from the state constraints is determined by the robust positively invariant (RPI) set, which is the set of all possible state deviations that may be introduced by the disturbance acting through the feedback controller.
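The tube idea can be illustrated for a scalar system: the actual input is the nominal input plus a feedback correction on the deviation from the nominal state, which keeps the error inside a bounded set despite the disturbance (all gains, bounds, and the nominal control law below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Scalar plant x+ = a x + u + w with set-bounded disturbance |w| <= w_max
a, w_max = 1.1, 0.1
K = -0.6                  # ancillary feedback gain, |a + K| = 0.5 < 1

z, x = 1.0, 1.0           # nominal state and true state start together
errors = []
for t in range(100):
    v = -0.5 * z                      # illustrative nominal control law
    u = v + K * (x - z)               # tube controller: nominal + correction
    w = rng.uniform(-w_max, w_max)    # bounded disturbance realization
    x = a * x + u + w                 # true plant, disturbed
    z = a * z + v                     # nominal model, disturbance-free
    errors.append(abs(x - z))

# Error dynamics e+ = (a + K) e + w stay inside an invariant tube of radius
bound = w_max / (1.0 - abs(a + K))    # = 0.2
```

Tightening the nominal problem's state constraints by this tube radius then guarantees that the true, disturbed state also satisfies the original constraints.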

Commercially available MPC software

Commercial MPC packages are available and typically contain tools for model identification and analysis, controller design and tuning, as well as controller performance evaluation.

A survey of commercially available packages has been provided by S.J. Qin and T.A. Badgwell in Control Engineering Practice 11 (2003) 733–764.


References

  1. ^ Michèle Arnold, Göran Andersson. "Model Predictive Control for energy storage including uncertain forecasts". http://www.eeh.ee.ethz.ch/uploads/tx_ethpublications/PSCC2011_Arnold.pdf
  2. ^ Michael Nikolaou, Model predictive controllers: A critical synthesis of theory and industrial needs, Advances in Chemical Engineering, Academic Press, 2001, Volume 26, Pages 131-204
  3. ^ An excellent overview of the state of the art (in 2008) is given in the proceedings of the two large international workshops on NMPC, by Zheng and Allgower (2000) and by Findeisen, Allgöwer, and Biegler (2006).
  4. ^ M.R. García; C. Vilas, L.O. Santos, A.A. Alonso (2012). "A Robust Multi-Model Predictive Controller for Distributed Parameter Systems". Journal of Process Control 22 (1): 60–71. doi:10.1016/j.jprocont.2011.10.008. 
  5. ^ Scokaert, P. O.; Mayne, D.Q. (1998). "Min-max feedback model predictive control for constrained linear systems". IEEE Transactions on Automatic Control 43 (8): 1136–1142. doi:10.1109/9.704989. 
  6. ^ Richards, A.; How, J. (2006). "Robust stable model predictive control with constraint tightening". Proceedings of the American Control Conference. 
  7. ^ Langson, W.; Chryssochoos I.; Rakovic S. V.; Mayne, D.Q. (2004). "Robust model predictive control using tubes". Automatica 40 (1): 125–133. doi:10.1016/j.automatica.2003.08.009. 

Further reading

  • Kwon, W. H.; Bruckstein; Kailath (1983). "Stabilizing state feedback design via the moving horizon method". International Journal of Control 37 (3): 631–643. doi:10.1080/00207178308932998.
  • Garcia, C.; Prett; Morari (1989). "Model predictive control: theory and practice". Automatica 25 (3): 335–348. doi:10.1016/0005-1098(89)90002-2.
  • Mayne, D. Q.; Michalska (1990). "Receding horizon control of nonlinear systems". IEEE Transactions on Automatic Control 35 (7): 814–824. doi:10.1109/9.57020.
  • Mayne, D. Q.; Rawlings; Rao; Scokaert (2000). "Constrained model predictive control: stability and optimality". Automatica 36 (6): 789–814. doi:10.1016/S0005-1098(99)00214-9.
  • Allgöwer; Zheng (2000). Nonlinear Model Predictive Control. Progress in Systems Theory 26. Birkhäuser.
  • Camacho; Bordons (2004). Model Predictive Control. Springer Verlag.
  • Findeisen; Allgöwer; Biegler (2006). Assessment and Future Directions of Nonlinear Model Predictive Control. Lecture Notes in Control and Information Sciences. Springer.
  • Diehl, M.; Bock; Schlöder; Findeisen; Nagy; Allgöwer (2002). "Real-time optimization and nonlinear model predictive control of processes governed by differential-algebraic equations". Journal of Process Control 12 (4): 577–585. doi:10.1016/S0959-1524(01)00023-3.
