A Heuristic Parameter-dependent Open-loop Model Predictive Control

For uncertain linear systems described by a linear parameter varying (LPV) model, a parameter-dependent open-loop model predictive control (MPC) is proposed. The controller uses a tree trajectory to generate the vertices of the uncertain state predictions. Based on the state prediction tree, the future free control moves are parameter-dependent, with vertices corresponding to those of the state predictions. The cost function penalizes the deviations of all state/input vertices from their steady-state target values. It is shown that the offset-free property is achieved by this method. A simulation example is given to demonstrate the effectiveness of the approach.


I. INTRODUCTION
Model predictive control (MPC) has been widely applied as a representative advanced process control (APC) algorithm in industrial circles since 1978 [1]. Nowadays, MPC continues to expand in both theoretical and industrial directions, yielding insightful results (e.g., see [2]-[10] for several good algorithms). The main feature of MPC is its ability to handle physical constraints and multiple variables in a systematic manner (i.e., by posing the control task as a receding-horizon optimization problem). At each control interval, MPC optimizes a cost function associated with the future state/output/input predictions based on an explicit model of the system, subject to the physical constraints, and yields a sequence of control moves. However, only the first control move of this sequence is implemented, and the optimization is refreshed and re-solved at the next control interval. Since the future predictions of state/output/input are obtained from the system model, the accuracy of this model is crucial for the predictions and, as a result, influences the control performance.
An efficient system model should be representative of a wide class of systems. In MPC studies, there is a widely accepted system model called the linear parameter varying (LPV) model. The LPV model can capture the dynamics of both nonlinear and uncertain systems by combining a family of linear models through a time-varying parameter vector [11]. Thanks to the convexity property of the LPV model, applying it in MPC usually yields a convex optimization problem, which can be solved very efficiently using current optimization methods such as the interior point method. Another advantage of the LPV model is the local linearity of each sub-model, which allows the application of powerful linear design tools. MPC for LPV models with guaranteed recursive feasibility (of the optimization problem) and stability has been referred to as the synthesis approach. Usually, a synthesis approach is designed based on state- or output-feedback laws (see, e.g., [12]-[16]). Free perturbation terms have been added to the state-feedback law in order to enlarge the region of attraction [17]. A class of linearly parameter-dependent Lyapunov functions is proposed for MPC of LPV models in [18], which gives rise to less conservative stability conditions than those arising from classical quadratic Lyapunov functions in, e.g., [19]. In [20], the authors presented a class of nonlinearly parameterized Lyapunov functions instrumental to achieving more efficient relaxed stability conditions. In [21], an efficient algorithm is given which constructs the maximal admissible set for LPV systems. Considering high-speed control for constrained LPV models, some explicit MPCs have been developed [22], [23]. MPC for LPV models with bounded parameter variations has been investigated in [24]-[26]. In [27], an output feedback MPC is proposed for LPV models based on the quasi-min-max algorithm.
In [28], the authors considered robust MPC for LPV models in which the scheduling parameter of the LPV model is known online (which is advantageous for feedback).
Although there are excellent theoretical results on the synthesis approaches of MPC, they have not seen wide application in real industrial systems. Instead, the widely applied MPC is the heuristic approach, which does not guarantee stability or recursive feasibility. For practical applications, one of the major drawbacks of synthesis MPC is its high computational burden compared with the heuristic one. Another disadvantage of the synthesis approach is its conservativeness, since a min-max optimization is usually formulated which takes into account all possible realizations of the system model. Hence, heuristic MPC typically addresses a nominal linear model. However, this can be risky for intrinsically uncertain systems. This paper considers the LPV model, with state and input constraints, and adopts a parameter-dependent open-loop MPC scheme based on the heuristic MPC. The proposed controller uses a tree trajectory to forecast the vertices of future state predictions, which is inspired by [29]. The optimization problem is formulated as a classic quadratic programming (QP) problem whose cost function involves all vertices of state predictions, input predictions and steady-state targets. Solving this QP yields the vertex control moves. With this scheme, the computational burden is lower than that of synthesis MPC, and offset-free control is achieved.
Notation: R^n is the n-dimensional Euclidean space, and R^{m×n} the m×n-dimensional real matrix space. For any matrix A, A^T denotes its transpose. For the variable x, x(i|k) denotes the value at the future time k+i, predicted at time k. The symbol ⋆ denotes an element that can be deduced from the symmetry of the matrix. A variable with * as a superscript indicates the optimal solution of the optimization problem. For the column vectors x and y, [x; y] = [x^T, y^T]^T. The time-dependence of MPC decision variables is often omitted for simplicity.

II. PROBLEM STATEMENT
Consider the discrete-time LPV model, i.e.,

x(k+1) = A(k)x(k) + B(k)u(k), y(k) = Cx(k), (1)

where x(k) ∈ R^n and u(k) ∈ R^m are the measurable state and input, respectively. We assume that

[A(k)|B(k)] ∈ Ω := Co{[A_1|B_1], [A_2|B_2], …, [A_L|B_L]}, (2)

i.e., there exist L time-varying nonnegative combining parameters ω_l(k), with Σ_{l=1}^{L} ω_l(k) = 1, such that [A(k)|B(k)] = Σ_{l=1}^{L} ω_l(k)[A_l|B_l], where [A_l|B_l] are the known vertices of the polytope.
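To make the polytopic description concrete, the following minimal numpy sketch propagates a hypothetical two-state, one-input LPV system one step: the vertex matrices [A_l|B_l] and the combining weights ω are invented for illustration and are not taken from the paper.

```python
import numpy as np

# Hypothetical L = 2 vertex pairs [A_l | B_l] of the polytope.
A_vertices = [np.array([[0.9, 0.1], [0.0, 0.8]]),
              np.array([[0.7, 0.2], [0.1, 0.9]])]
B_vertices = [np.array([[0.0], [1.0]]),
              np.array([[0.1], [0.8]])]

def lpv_step(x, u, omega):
    """One step x(k+1) = A(k)x(k) + B(k)u(k), where [A(k)|B(k)] is the
    convex combination of the vertices with weights omega_l >= 0 summing
    to 1."""
    A = sum(w * Al for w, Al in zip(omega, A_vertices))
    B = sum(w * Bl for w, Bl in zip(omega, B_vertices))
    return A @ x + B @ u

# The true combining parameters are unknown to the controller; here we
# simply draw a random valid realization to simulate the plant.
rng = np.random.default_rng(0)
x = np.array([1.0, -0.5])
u = np.array([0.2])
w = rng.random(2)
omega = w / w.sum()
x_next = lpv_step(x, u, omega)
```

With omega placed entirely on one vertex, `lpv_step` reduces to that vertex's linear model, which is the local-linearity property mentioned above.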
The input and state constraints are

u ≤ u(k) ≤ ū, x ≤ x(k) ≤ x̄, (3)

where u := [u_1, u_2, …, u_m]^T and ū := [ū_1, ū_2, …, ū_m]^T denote the elementwise lower and upper input bounds, and x, x̄ are defined analogously. In practice, (1) is short for

∇x(k+1) = A(k)∇x(k) + B(k)∇u(k). (4)

Namely, (1) neglects the symbol ∇ in (4), where ∇x = x − x_eq and ∇u = u − u_eq; x_eq and u_eq denote the steady-state operating point (equilibrium) of the system. The system is said to be at steady state at time T if

x(T+i) = x_ss, u(T+i) = u_ss, y(T+i) = y_ss, ∀i ≥ 0, (5)

where y_ss, u_ss and x_ss are the steady-state targets (setpoints) of y, u and x, respectively. Since there is no uncertainty in the matrix C, we have, at the steady state, y_ss = Cx_ss.

III. THE OPEN-LOOP CONTROL PROBLEM
In this section, in order to effectively counteract the time-varying uncertainty, a method is designed which calculates the vertex control moves for all corners of the uncertainty evolution. Define the vertex control moves

π_u := {u(0|k), u_{l_0}(1|k), u_{l_1l_0}(2|k), …, u_{l_{N−2}···l_1l_0}(N−1|k) | l_h ∈ {1, …, L}}, (6)

where N is the control horizon. Note that the future control moves are based on the vertices of the uncertain polytope; as N increases, the number of vertices increases dramatically (there are L^i vertex branches at prediction step i). Then, the corresponding vertex state predictions are

π_x := {x_{l_0}(1|k), x_{l_1l_0}(2|k), …, x_{l_{N−1}···l_1l_0}(N|k) | l_h ∈ {1, …, L}}. (7)

The true control move u(i|k) for i > 0 is defined by

u(i|k) = Σ_{l_0=1}^{L} ··· Σ_{l_{i−1}=1}^{L} [Π_{h=0}^{i−1} ω_{l_h}(k+h)] u_{l_{i−1}···l_1l_0}(i|k), (8)

where u(i|k) is parameter-dependent, i.e., each u(i|k) is a convex combination of the vertex control moves through the parameters ω_{l_h}(k+h). According to (1) and (7), the future state predictions are found as

x_{l_0}(1|k) = A_{l_0}x(k) + B_{l_0}u(0|k),
x_{l_1l_0}(2|k) = A_{l_1}x_{l_0}(1|k) + B_{l_1}u_{l_0}(1|k),
. . .
x_{l_{N−1}···l_1l_0}(N|k) = A_{l_{N−1}}x_{l_{N−2}···l_0}(N−1|k) + B_{l_{N−1}}u_{l_{N−2}···l_0}(N−1|k). (9)
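The branching of the vertex state predictions can be enumerated programmatically. The sketch below, with hypothetical vertex matrices and a constant vertex control move, stores each node of the tree under its branch index tuple (l_0, …, l_{i−1}); the L^i growth of the tree with depth i is visible in the node count.

```python
import itertools
import numpy as np

# Hypothetical L = 2 vertex pairs [A_l | B_l] for a 2-state, 1-input model.
A_v = [np.array([[0.9, 0.1], [0.0, 0.8]]),
       np.array([[0.7, 0.2], [0.1, 0.9]])]
B_v = [np.array([[0.0], [1.0]]),
       np.array([[0.1], [0.8]])]

def prediction_tree(x0, u_tree, N):
    """Enumerate the vertex state predictions x_{l_{i-1}...l_0}(i|k).

    u_tree maps a branch tuple (l_0, ..., l_{i-1}) of length i to the vertex
    control move applied on that branch at step i; the root move u(0|k) is
    u_tree[()].  Returns {branch: state} for all branches up to depth N,
    with the measured state x0 stored at the root ()."""
    x_tree = {(): x0}
    for i in range(N):
        for branch in [b for b in x_tree if len(b) == i]:
            for l in range(len(A_v)):
                x_tree[branch + (l,)] = (A_v[l] @ x_tree[branch]
                                         + B_v[l] @ u_tree[branch])
    return x_tree

# A constant (illustrative) vertex control move on every branch.
u_tree = {b: np.array([0.1]) for i in range(3)
          for b in itertools.product(range(2), repeat=i)}
tree = prediction_tree(np.array([1.0, 0.0]), u_tree, N=3)
```

With L = 2 and N = 3 the tree holds 1 + 2 + 4 + 8 = 15 nodes, matching the dramatic growth of vertices noted above.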
Note that since the parameters ω(k) are completely unknown, the accurate value of the state is also unknown. Hence, we utilize the vertex state predictions for the MPC controller design, where the actual state is allowed to vary within the polytope. We define the following positive definite quadratic function with respect to the vertices π_x and π_u:

Ĵ_N^0(k) = Σ_{i=1}^{N} Σ_{l_0,…,l_{i−1}} ||Cx_{l_{i−1}···l_1l_0}(i|k) − y_ss||²_{Q_•} + Σ_{i=0}^{N−1} Σ_{l_0,…,l_{i−1}} ||u_{l_{i−1}···l_1l_0}(i|k) − u_ss||²_{R_•}, (10)

where, for i = 0, the inner sum reduces to the single term ||u(0|k) − u_ss||²_{R_•}. In (10), Q_• and R_• are nonnegative weighting matrices. The steady-state targets need not always equal the equilibrium.
The objective of the control problem is to find the control actions that, once implemented, drive all branches (vertices) of the tree trajectory to converge to x_ss and u_ss. Accordingly, let the vertices π_x and π_u be the decision variables. The optimization problem at each control interval k is formulated as the QP problem

min_{π_x, π_u} Ĵ_N^0(k), (11a)
s.t. (9), u ≤ u_{l_{i−1}···l_1l_0}(i|k) ≤ ū, (11b)
x ≤ x_{l_{i−1}···l_1l_0}(i|k) ≤ x̄. (11c)

After the optimization problem is solved, only u(0|k) is implemented on the plant. The approach based on the optimization problem (11) is called the open-loop model predictive heuristic control (MPHC).

Remark 1: Vertex control moves, vertex state predictions and the cost function Ĵ_N^0(k) are found in [29]. This open-loop MPHC has the following features: i) the computational load is less than that of synthesis MPC; ii) stability of the closed-loop system cannot be proved theoretically; iii) supposing the weighting matrices Q_• and R_• are positive definite, when y_ss ≠ 0 and u_ss ≠ 0 it is not easy to achieve Ĵ_N^0(∞) = 0, i.e., there may exist an offset. This is because, usually, Cx_{l_{i−1}···l_1l_0}(i|k) = y_ss cannot hold for all i = 1, 2, …, N and all l_{i−1}···l_1l_0 even if the closed-loop system is stable.
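A full implementation would stack all vertex predictions and vertex moves of the tree into one QP of the form (11). As a minimal, self-contained stand-in for the QP solver, the sketch below minimizes a generic box-constrained quadratic by projected gradient descent; the toy H, f and bounds are invented for illustration, and any off-the-shelf QP solver could replace this routine.

```python
import numpy as np

def box_qp(H, f, lb, ub, iters=5000):
    """Minimize 0.5 z'Hz + f'z subject to lb <= z <= ub by projected
    gradient descent (H symmetric positive definite).  The projection onto
    a box is a simple elementwise clip."""
    step = 1.0 / np.linalg.norm(H, 2)   # 1/L, L = Lipschitz const. of grad
    z = np.clip(np.zeros_like(f), lb, ub)
    for _ in range(iters):
        z = np.clip(z - step * (H @ z + f), lb, ub)
    return z

# Toy separable instance: the unconstrained minimizer is z = [2, -3],
# so the box [-1, 1]^2 clips both coordinates.
H = np.diag([2.0, 4.0])
f = np.array([-4.0, 12.0])
z_star = box_qp(H, f, lb=np.array([-1.0, -1.0]), ub=np.array([1.0, 1.0]))
# z_star -> [1, -1]
```

Because (11b)-(11c) are pure box constraints on the vertex moves and states, this projection-based scheme matches the constraint structure of the QP, although a production solver (active-set or interior point) would converge far faster.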

IV. AN IMPROVED OPEN-LOOP MPHC
Since we cannot give a deterministic [A(k)|B(k)], y_ss and u_ss do not, in general, satisfy the following equations:

x_ss = A_ss x_ss + B_ss u_ss, y_ss = Cx_ss. (12)

However, there is an exception: when k → ∞, [A(k)|B(k)] may converge to a fixed [A_ss|B_ss]. In this case, y_ss and u_ss in the optimization problem (11a) must satisfy (12). In general, in order to achieve offset-free control, we can assume that x_ss and u_ss satisfy the steady-state nonlinear equation

g(x_ss + x_eq, u_ss + u_eq) = 0, (13)

where g(·) is assumed to be Lipschitz continuous and differentiable with respect to x and u in X × U, with g(0, 0) = 0.
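In practice, the state target for a chosen input target can be computed from the steady-state equation with Newton's method. The sketch below uses a hypothetical nonlinear map g (invented for illustration, not the paper's plant) and a finite-difference Jacobian; for a given u_ss it returns an x_ss with g(x_ss, u_ss) ≈ 0.

```python
import numpy as np

def g(x, u):
    """Hypothetical steady-state map standing in for the plant's nonlinear
    equilibrium equations g(x_ss, u_ss) = 0."""
    return np.array([-x[0] + 0.5 * x[1] + u,
                     -x[1] + x[0] ** 2])

def steady_state(u_ss, x0=None, tol=1e-10, max_iter=50):
    """Newton iteration on x -> g(x, u_ss) with a finite-difference
    Jacobian, returning the state target x_ss for the chosen input target."""
    x = np.zeros(2) if x0 is None else np.asarray(x0, float)
    for _ in range(max_iter):
        r = g(x, u_ss)
        if np.linalg.norm(r) < tol:
            break
        eps = 1e-7
        J = np.column_stack([(g(x + eps * e, u_ss) - r) / eps
                             for e in np.eye(2)])
        x = x - np.linalg.solve(J, r)
    return x

x_ss = steady_state(0.3)   # the output target then follows as y_ss = C @ x_ss
```

The Lipschitz continuity and differentiability assumed for g in (13) are exactly what make this Newton iteration well defined near the equilibrium.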
When x_ss and u_ss are obtained from (13), we calculate the corresponding output target

y_ss = Cx_ss. (14)

Then, the cost function Ĵ_N^0(k) is replaced by

J̃_N^0(k) = Σ_{i=1}^{N} Σ_{l_0,…,l_{i−1}} ||x_{l_{i−1}···l_1l_0}(i|k) − x_ss||²_{Q_•} + Σ_{i=0}^{N−1} Σ_{l_0,…,l_{i−1}} ||u_{l_{i−1}···l_1l_0}(i|k) − u_ss||²_{R_•}.

The optimization problem (11) is updated to

min_{π_x, π_u} J̃_N^0(k), (15a)
s.t. (9), (11b), (11c). (15b)

Remark 2: In [29], if Ĵ_N^0(k) is utilized, offset-free control is achieved because x_ss and u_ss are obtained by a special procedure. Otherwise, the proof of Theorem 5.2 in [29] must take advantage of the cost function J̃_N^0(k) in this paper. The open-loop MPHC algorithm is summarized as follows.

V. NUMERICAL EXAMPLE
A nonlinear model of a continuous stirred tank reactor (CSTR) is adopted in this simulation (see Figure 1). With constant volume, the CSTR for an exothermic, irreversible reaction A → B is described by

Ċ_A = (q/V)(C_Af − C_A) − k_0 exp(−E/(RT)) C_A,
Ṫ = (q/V)(T_f − T) + ((−∆H)/(ρC_p)) k_0 exp(−E/(RT)) C_A + (UA/(VρC_p))(T_c − T), (16)

where C_A is the concentration of material A in the reactor, T the reactor temperature, and T_c the coolant stream temperature. V and UA denote the volume of the reactor and the rate of heat input, respectively. k_0, E, and ∆H denote the pre-exponential constant, the activation energy, and the enthalpy of the reaction, respectively. C_p and ρ stand for the heat capacity and density of the fluid in the reactor, respectively. The objective is to regulate T by manipulating T_c subject to 328 K ≤ T_c ≤ 348 K. Denote the non-zero equilibrium as {C_A^eq, T^eq, T_c^eq}. Choose C_A^eq = 0.5 mol/l, T^eq = 350 K, T_c^eq = 338 K, 340 K ≤ T ≤ 360 K, 0 ≤ C_A ≤ 1 mol/l, q = 100 l/min, C_Af = 0.9 mol/l, T_f = 350 K, V = 100 l, ρ = 1000 g/l, C_p = 0.239 J/(g K), ∆H = −2.5 × 10^4 J/mol, E/R = 8750 K, k_0 = 3.456 × 10^10 min^−1, UA = 5 × 10^4 J/(min K).
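The CSTR dynamics can be simulated directly. The sketch below implements the two ODEs with the stated parameter values (assuming the standard exothermic A → B CSTR model these parameters suggest) and integrates them by forward Euler at the 0.05 min sampling period used in the example; it illustrates only the open-loop plant, not the controller.

```python
import numpy as np

# CSTR parameters from the example (units as stated in the text).
q, V, rho, Cp = 100.0, 100.0, 1000.0, 0.239
dH, E_R, k0, UA = -2.5e4, 8750.0, 3.456e10, 5e4
C_Af, T_f = 0.9, 350.0

def cstr_rhs(CA, T, Tc):
    """Right-hand side of the CSTR ODEs: material balance on C_A and
    energy balance on T, with Arrhenius reaction rate."""
    k = k0 * np.exp(-E_R / T)
    dCA = q / V * (C_Af - CA) - k * CA
    dT = (q / V * (T_f - T) - dH / (rho * Cp) * k * CA
          + UA / (V * rho * Cp) * (Tc - T))
    return dCA, dT

# Forward-Euler simulation from the equilibrium candidate, with the
# coolant held at its equilibrium value.
Ts, Tc = 0.05, 338.0
CA, T = 0.5, 350.0
traj = []
for _ in range(200):
    dCA, dT = cstr_rhs(CA, T, Tc)
    CA, T = CA + Ts * dCA, T + Ts * dT
    traj.append((CA, T))
```

Forward Euler is used here only for simplicity; the paper's discretization of the LPV embedding at T_s = 0.05 min serves the same purpose for controller design.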
Define the deviation variables x = [C_A − C_A^eq; T − T^eq] and u = T_c − T_c^eq. Denote the bounds on u and x as −10 ≤ u ≤ 10, −0.5 ≤ x_1 ≤ 0.5, and −10 ≤ x_2 ≤ 10. By defining suitable scheduling variables of the state, (16) can be exactly represented by a continuous-time LPV model (17). By discretizing the continuous system (17) with sampling period T_s = 0.05 min, we obtain the discrete-time LPV model (1). Based on (16), letting Ċ_A = 0 and Ṫ = 0, we obtain the steady-state model g(x_ss, u_ss) of (16). Choose the input steady-state setpoint T_c^ss = 330 K, and find C_A^ss and T^ss satisfying (13). To illustrate the effectiveness of the proposed approach, we take the synthesis MPC in [30], which also uses the tree trajectory approach, for comparison. The simulation results are shown in Figures 2 and 3. From the figures we find that the values of the input and state finally converge to the steady-state setpoints, and the deviations between the actual steady-state values and the steady-state setpoints are zero. This reveals that the offset-free property is achieved. However, since [30] needs to guarantee stability and considers all possible realizations of the system model in the robust worst-case manner, its optimization inevitably suffers from a high computational burden (see Table 1).

FIGURE 3: Control input
TABLE 1: Total time for 100 simulation steps (seconds)

Approach in this paper: 230.4209
Approach in [30]: 884.2202

VI. CONCLUSION
This paper has proposed a parameter-dependent open-loop MPC for constrained LPV systems, in which a tree trajectory generates the vertices of the uncertain state predictions. Based on these predictions, the optimization problem is transformed into a QP involving all vertices of state predictions, input predictions and steady-state targets. Under this scheme, the offset-free control is achieved with a low computational burden.