Volumetric Barrier Cutting Plane Algorithms for Stochastic Linear Semi-Infinite Optimization

In this paper, we study two-stage stochastic linear semi-infinite programming with recourse, which handles uncertainty in the data defining (deterministic) linear semi-infinite programming. We develop and analyze volumetric barrier cutting plane interior point methods for solving this class of optimization problems, and present a complexity analysis of the proposed algorithms. We establish our convergence analysis by showing that the volumetric barrier associated with the recourse function of stochastic linear semi-infinite programs is a strongly self-concordant barrier and forms a self-concordant family on the first-stage solutions. The dominant terms in the complexity expressions obtained in this paper are given in terms of the problem dimension and the number of realizations. The novelty of our algorithms lies in their ability to eliminate the effect of the radii of the largest Euclidean balls contained in the feasibility sets on the dominant complexity terms.


I. INTRODUCTION
The purpose of this paper is to introduce and analytically study two-stage stochastic linear semi-infinite programming (SLSIP for short) with recourse in the dual standard form, where x ∈ R^{m_1} is the first-stage decision variable, Γ is an index set, the vectors c, a_γ ∈ R^{m_1} and the scalars b_γ ∈ R (γ ∈ Γ) are deterministic data, and Q(x, ω) is the maximum value of the problem max d(ω)^T y, where y ∈ R^{m_2} is the second-stage variable, Λ is an index set, the vectors d(ω), w_λ(ω) ∈ R^{m_2}, t_λ(ω) ∈ R^{m_1} and the scalars h_λ(ω) ∈ R (λ ∈ Λ) are random data whose realizations depend on an underlying outcome ω in an event space Ω with a known probability function P, and E[Q(x, ω)] := ∫_Ω Q(x, ω) P(dω).
To the best of our knowledge, the optimization problem introduced in (1) and (2) has not been studied yet, although it is the simplest nontrivial stochastic semi-infinite optimization problem. A linear semi-infinite program is an optimization problem with a linear objective function and linear constraints in which either the number of unknowns or the number of constraints (but not both) is infinite. Clearly, the SLSIP problem (1) and (2) has a finite number of unknowns and an infinite number of constraints.
It can be seen that the SLSIP generalizes ordinary stochastic linear programming by allowing an infinite number of constraints on one hand, and deterministic linear semi-infinite programming by allowing uncertainty in the data on the other hand. The SLSIP is also a special case of stochastic semi-infinite programming (or, more generally, stochastic infinite-dimensional programming) obtained by enforcing linearity in the objective function and the constraints. The broad and direct applications of linear programs, semi-infinite programs (see, e.g., [1]–[7]) and two-stage stochastic programs (see, e.g., [8]–[14]) attracted us to study and analyze SLSIPs as a promising class of optimization problems applicable to a wide range of real-life problems. See the models in [15]–[17], which describe applications in radiation transfer theory, neutron transport theory, and waste management.
Interior-point algorithms [18]–[30] are among the most intensively developed methods of convex optimization. Luo et al. [22] (see also [23], [24]) derived a logarithmic barrier decomposition-based interior point algorithm for deterministic linear semi-infinite programming. An alternative to the logarithmic barrier is the volumetric barrier of Vaidya [25] (see also [26]). It has been found [14] that some cutting plane algorithms for stochastic linear programming problems based on the volumetric barrier perform better in practice than those based on the logarithmic barrier. For this reason, a number of volumetric barrier Benders' decomposition-based interior point algorithms have been developed recently for solving stochastic (linear, second-order, and semidefinite) cone optimization problems. We briefly outline these algorithmic results below. In 2007, Ariyawansa and Zhu [27] derived a volumetric barrier decomposition interior point algorithm for two-stage stochastic (convex) quadratic linear programming. In 2011, Ariyawansa and Zhu [28] generalized their work in [27] to derive a volumetric barrier decomposition interior point algorithm for two-stage stochastic semidefinite programming. In 2015, Alzalg [29] exploited the work of Ariyawansa and Zhu in [27], [28] to derive a volumetric barrier decomposition interior point algorithm for two-stage stochastic second-order cone programming.
Note that the setting in this paper is similar to that of Ariyawansa and Zhu [27] for stochastic quadratic linear programs, except that linearity is assumed in the objective function and, most notably, semi-infiniteness is involved in the constraints of our setting. The current setting is also similar to that of Luo et al. [22] for deterministic linear semi-infinite programming, except that stochasticity with discrete support is assumed here. In this paper, we utilize the work of Luo et al. [22] for deterministic linear semi-infinite programming on one hand and the work of Ariyawansa and Zhu [27] for ordinary stochastic quadratic linear programming on the other hand to derive volumetric barrier cutting plane decomposition algorithms for the two-stage SLSIP problem with recourse.
We establish our convergence analysis by showing that the volumetric barrier associated with the recourse function of stochastic linear semi-infinite programs behaves as a strongly self-concordant barrier and forms a self-concordant family on the first-stage solutions. We will see that the self-concordance analysis is established in this work in a different way than that in Section 3 of [27]. In fact, since (convex) quadratic linear programming is a special case of semidefinite programming, the authors in [27] rewrote the stochastic quadratic linear programming problem as a stochastic semidefinite programming problem (by formulating the linear inequalities as linear matrix inequalities) and made heavy use of their own results in [28, Section 4] for stochastic semidefinite programming. In comparison, we approach the self-concordance proofs from a linear programming point of view, which gives more explicit and direct proofs because we use more elementary arguments.
We establish polynomial complexity of the resulting methods. The dominant terms of the complexity expressions obtained in this work are given in terms of the problem dimension and the number of realizations. Unlike the complexity expression obtained for the logarithmic barrier algorithm in [22], the dominant complexity terms for our volumetric barrier algorithms are not affected by the radii of the largest Euclidean balls contained in the feasibility sets. This advantage underlies the importance of the development in this paper. We will see that it stems from using the volumetric barrier instead of the logarithmic barrier. We will also see that the ''rich flavor'' hidden inside the volumetric barrier can be tasted in the proposed algorithms of this work more than in their counterpart algorithms in [27]–[29].
This paper is structured as follows. In Section II, we present our problem formulation, discretization, and some assumptions, and introduce the volumetric barrier problem associated with the SLSIP problem. In Section III, we first show that the problem of finding an approximate minimizer of the SLSIP problem can be reduced to that of finding an approximate minimizer of the discretized problem under a certain condition on the measure of proximity of the current point x to the central path; we then compute the gradients and Hessians of the barrier functions in the second part of Section III. In Section IV, we show that the set of volumetric barrier functions for positive values of the barrier parameter forms a self-concordant family. Based on this property, we present a class of volumetric barrier cutting plane interior point algorithms and provide their convergence and complexity analysis in Section V. Section VI contains some concluding remarks. The proofs of the convergence and complexity results are given in the Appendix.
We end this section by introducing some notation that will be used in the sequel. Let R^{m×n} denote the vector space of real m × n matrices. We use ∘ to denote the Hadamard product of matrices; i.e., (U ∘ V)_{ij} = u_{ij} v_{ij} for U, V ∈ R^{m×n}. Let S^n denote the vector space of real symmetric n × n matrices. For U, V ∈ S^n, we write U ⪰ 0 (U ≻ 0) to mean that U is positive semidefinite (positive definite), and U ⪰ V or V ⪯ U to mean that U − V ⪰ 0. For any strictly positive vector x ∈ R^n, we define ln x := (ln x_1, …, ln x_n)^T. We also use X := diag(x_1, …, x_n) to denote the n × n diagonal matrix whose diagonal entries are x_1, …, x_n.
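As a small illustration of this notation (our own sketch, not part of the paper's development), the Hadamard product, the entrywise logarithm, and the diag operator can be realized in NumPy as follows:

```python
import numpy as np

# Hadamard (entrywise) product: (U ∘ V)_ij = u_ij * v_ij.
U = np.array([[1., 2.], [3., 4.]])
V = np.array([[5., 6.], [7., 8.]])
H = U * V                      # NumPy's * on arrays is exactly the Hadamard product

# For a strictly positive vector x, ln x is taken entrywise,
# and X = diag(x_1, ..., x_n) is the diagonal matrix built from x.
x = np.array([1.0, np.e, np.e ** 2])
lnx = np.log(x)
X = np.diag(x)
```

Note the contrast with the matrix product `U @ V`, which is not entrywise.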

II. PROBLEM FORMULATION AND ASSUMPTIONS
In this section, we first present the extensive formulation of the SLSIP problem (1) and (2) with discretization. Then, we present the volumetric barrier problem for SLSIPs.

A. THE SLSIP PROBLEM FORMULATION AND DISCRETIZATION
We examine (1) and (2) when the event space Ω is finite. Let {(d^{(k)}, w_λ^{(k)}, t_λ^{(k)}, h_λ^{(k)}) : k = 1, …, K, λ ∈ Λ} be the set of possible values of the random variables (d(ω), w_λ(ω), t_λ(ω), h_λ(ω)), and let p_k be the associated probability for k = 1, 2, …, K. Then the two-stage SLSIP problem (1) and (2) becomes problem (3), where, for k = 1, 2, …, K, Φ^{(k)}(x) is the maximum value of the problem (4). We redefine d^{(k)} := p_k d^{(k)} for k = 1, 2, …, K. Further, in order to conveniently make use of the results in [26], we rewrite the SLSIP problem (3) and (4) in the equivalent form (5), where, for k = 1, 2, …, K, Φ^{(k)}(x) is the minimum value of the problem (6). Without loss of generality, we assume the vectors c and a_γ, γ ∈ Γ, are normalized so that ‖c‖ = ‖a_γ‖ = 1 for all γ ∈ Γ. We also assume the vectors d^{(k)} and w_λ^{(k)} are normalized, for all λ ∈ Λ and k = 1, 2, …, K. Let F_1 denote the first-stage feasible set, F_2^{(k)}(x) the second-stage feasible set for realization k, and F the feasible set of the SLSIP problem (5) and (6). Throughout the paper, we make the assumptions that F_1 is contained in the unit hypercube [0, 1]^{m_1}, that F_2^{(k)}(x) is contained in the unit hypercube [0, 1]^{m_2}, and that F has nonempty interior.
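For concreteness, the extensive (finite-event) form just described can be sketched as follows; this is a reconstruction from the surrounding text, and the value-function symbol Φ^{(k)} is our own since the original symbol is not recoverable here:

```latex
\begin{aligned}
\min_{x \in \mathbb{R}^{m_1}}\quad & c^{T}x + \sum_{k=1}^{K} \Phi^{(k)}(x)
  \qquad \text{s.t.}\quad a_{\gamma}^{T}x \ge b_{\gamma},\ \ \gamma \in \Gamma,\\[2pt]
\Phi^{(k)}(x) \;=\; \min_{y^{(k)} \in \mathbb{R}^{m_2}}\quad & d^{(k)T}y^{(k)}
  \qquad \text{s.t.}\quad w_{\lambda}^{(k)T}y^{(k)} \ge h_{\lambda}^{(k)} - t_{\lambda}^{(k)T}x,
  \ \ \lambda \in \Lambda,
\end{aligned}
```

where the probabilities p_k have been absorbed into d^{(k)} via the redefinition above, and the second-stage constraint direction matches the form W^{(k)T} y^{(k)} ≥ h^{(k)} − T^{(k)T} x used later in Section V.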
For any subsets Q 1 ⊂ and Q 2 ⊂ , we can define a corresponding discretization (or relaxation) of the SLSIP problem (5) and (6) by considering only those constraints indexed by Q 1 in the subproblem (5) and only those constraints indexed by Q 2 in the subproblem (6). So, we can consider the following discretization of the SLSIP problem (5) and (6).
where, for k = 1, 2, …, K, Φ^{(k,n_2)}(x) is the minimum value of the problem (8). We construct the above discretization by choosing n_1 (n_1 ≥ 2m_1) linear constraints from the constraint set {a_γ^T x ≥ b_γ : γ ∈ Γ} and n_2 (n_2 ≥ 2m_2) linear constraints from the constraint set {w_λ^{(k)T} y^{(k)} ≥ h_λ^{(k)} − t_λ^{(k)T} x : λ ∈ Λ} for each k = 1, 2, …, K. As will be illustrated thoroughly in the next section, the problem of finding an approximate minimizer of the SLSIP problem (5) and (6) can be reduced to that of finding an approximate solution x ∈ F, y^{(1)} ∈ F_2^{(1)}(x), …, y^{(K)} ∈ F_2^{(K)}(x) of the discretized problem (7) and (8), provided a certain bound on the measure of proximity of the current point x to the central path holds.
For the sake of simplicity in presentation, we write A^{(n_1)} ∈ R^{m_1×n_1} to denote the matrix whose i-th column is the vector a_i ∈ R^{m_1} for i = 1, 2, …, n_1. Likewise, W^{(k,n_2)} ∈ R^{m_2×n_2} and T^{(k,n_2)} ∈ R^{m_1×n_2} denote the matrices whose j-th columns are the vectors w_j^{(k)} ∈ R^{m_2} and t_j^{(k)} ∈ R^{m_1}, respectively, for j = 1, 2, …, n_2 and k = 1, 2, …, K. We also write b^{(n_1)} ∈ R^{n_1} for the vector whose i-th component is b_i for i = 1, 2, …, n_1, and h^{(k,n_2)} ∈ R^{n_2} for the vector whose j-th component is h_j^{(k)}, for j = 1, 2, …, n_2 and k = 1, 2, …, K. With these simplified notations, the SLSIP problem (5) and (6) becomes problem (9), where, for k = 1, 2, …, K, Φ^{(k,n_2)}(x) is the minimum value of the problem (10). The SLSIP problem (9) and (10) can be equivalently written as the deterministic semi-infinite linear program (DSILP) (11). Note that the dual of the SLSIP problem (11) involves a first-stage dual multiplier ν and a second-stage dual multiplier (ν^{(1)}; …; ν^{(K)}).

B. THE VOLUMETRIC BARRIER PROBLEM FOR SLSIPs
First, we define the first- and second-stage slack vectors
s_1^{(n_1)}(x) := A^{(n_1)T} x − b^{(n_1)} and s_2^{(k,n_2)}(x, y^{(k)}) := W^{(k,n_2)T} y^{(k)} − h^{(k,n_2)} + T^{(k,n_2)T} x, for k = 1, 2, …, K.
For simplicity, we write s_1 and s_2 for s_1^{(n_1)}(x) and s_2^{(k,n_2)}(x, y^{(k)}), respectively, when this does not lead to confusion. Then we make the following assumptions: Assumption 1: The matrices A^{(n_1)}, T^{(k,n_2)} and W^{(k,n_2)} for all k have full row rank.
Assumption 2: The set F^• is nonempty. Assumption 1 is made for convenience. Assumption 2 guarantees strong duality for the first- and second-stage SLSIPs.
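The slack vectors and the full-row-rank requirement of Assumption 1 can be illustrated with hypothetical toy data (the variable names below are our own, mirroring A^{(n_1)}, W^{(k,n_2)}, T^{(k,n_2)}, b^{(n_1)} and h^{(k,n_2)} for a single realization k):

```python
import numpy as np

# Hypothetical toy data for one realization k.
rng = np.random.default_rng(0)
m1, m2, n1, n2 = 2, 3, 5, 7
A = rng.standard_normal((m1, n1))   # columns a_i
b = rng.standard_normal(n1)
W = rng.standard_normal((m2, n2))   # columns w_j^(k)
T = rng.standard_normal((m1, n2))   # columns t_j^(k)
h = rng.standard_normal(n2)

def s1(x):
    """First-stage slack s1(x) = A^T x - b, from a_i^T x >= b_i."""
    return A.T @ x - b

def s2(x, y):
    """Second-stage slack s2(x, y) = W^T y - h + T^T x,
    from w_j^T y >= h_j - t_j^T x."""
    return W.T @ y - h + T.T @ x

# Assumption 1: A, T and W have full row rank.
assert np.linalg.matrix_rank(A) == m1
assert np.linalg.matrix_rank(T) == m1
assert np.linalg.matrix_rank(W) == m2
```

A point (x, y) lies in the interior F^• exactly when both slack vectors are strictly positive.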

III. RELATIONSHIPS AND COMPUTATIONS
In this section, we first study the relationship between the SLSIP problem and its discretization. Then, we obtain expressions for the derivatives of the recourse functions required in the rest of the paper.

A. RELATIONSHIP OF THE SLSIP PROBLEM TO THE DISCRETIZED PROBLEM
In this part, we show that the problem of finding an approximate minimizer of the SLSIP problem (5) and (6) can be reduced to that of finding an approximate solution x ∈ F, y^{(1)} ∈ F_2^{(1)}(x), …, y^{(K)} ∈ F_2^{(K)}(x) of the discretized problem (7) and (8) under a certain condition on the measure of proximity δ of the current point x to the central path.
We need the following lemma (used in the proof of Lemma 2) in order to relate approximate solutions of the SLSIP problem to those of the discretized problem.
Proof: By Lemma 1, (x; y^{(1)}; …; y^{(K)}) is an ε-minimizer of the discretized problem (7) and (8). Let q_1 and q_2 be the cardinalities of Q_1 and Q_2, respectively. We claim that (x; y^{(1)}; …; y^{(K)}) is an ε-minimizer of the discretization of problem (5) and (6) defined by the index sets Q_1 and Q_2. To see this, we construct the dual multiplier (ν̄^{(q_1)}(µ, x); ν̄^{(1,q_2)}(µ, x, y^{(1)}); …; ν̄^{(K,q_2)}(µ, x, y^{(K)})), where A^{(q_1)}, T^{(k,q_2)} and W^{(k,q_2)} (respectively, b^{(q_1)} and h^{(k,q_2)}) denote the matrices (vectors) whose columns (components) are indexed by Q_1 and Q_2. Note that A^{(q_1)}, T^{(k,q_2)} and W^{(k,q_2)} contain A^{(n_1)}, T^{(k,n_2)} and W^{(k,n_2)}, respectively, as submatrices. It follows that (ν̄^{(n_1)}; ν̄^{(1,n_2)}; …; ν̄^{(K,n_2)}) is dual feasible. In addition, the duality gap remains unchanged. By Lemma 1 again, we obtain the corresponding ε-optimality bounds for the minimum value of the discretized subproblem (8) and for the minimum value of the discretized problem (7) defined by the index sets Q_1 and Q_2.
Recall that Φ^{(k)} denotes the global minimum value of the original problem (6), and let Φ denote the global minimum value of the original problem (5). We are now ready to apply a theorem of Gustafson [32], which concludes that, under compactness assumptions on the index sets Γ and Λ and the assumption that the problem data depend continuously on the indices, the minimum value of the discretized subproblem (8) converges to Φ^{(k)} along a certain sequence of index sets Q_2 with increasing cardinality, and the minimum value of the discretized problem (7) converges to Φ along a certain sequence of index sets Q_1 and Q_2 with increasing cardinality.
Taking the limit along the sequences of index sets Q_1 and Q_2 in the above inequality, we conclude that (x, y^{(1)}, …, y^{(K)}) is indeed an ε-minimizer of the SLSIP problem (5) and (6). The proof is complete.
By Lemma 2, we can reduce the problem of finding an approximate minimizer of problem (5) and (6) to that of finding an approximate solution (x, y^{(1)}, …, y^{(K)}) of the discretized problem (9) and (10) such that x ∈ F, y^{(1)} ∈ F_2^{(1)}(x), …, y^{(K)} ∈ F_2^{(K)}(x) and δ(µ, x) ≤ 1. So, in the proposed algorithms, we always maintain these two conditions, and we must be able to guarantee that we can successfully reduce the barrier parameter µ and that the algorithms terminate finitely. Resolving this is, in fact, the substance of Section V.
B. DERIVATIVES OF THE BARRIER FUNCTIONS
In this part, we compute the gradient and the Hessian of η^{(n_1,n_2)}(µ, x), which in turn requires obtaining representations for the gradients and Hessians of the barrier functions.
In order to compute the derivatives of η, we need to determine the derivatives of the functions ρ^{(k)}, k = 1, 2, …, K. This requires computing the derivatives of ℓ_1^{(n_1)} and v_1^{(n_1)} and the partial derivatives of ℓ_2^{(k,n_2)} and v_2^{(k,n_2)} with respect to x, for k = 1, 2, …, K. Throughout the rest of this section and the next, we drop the superscripts (k), (n_1) and (k, n_2) when this does not lead to confusion.
First, we compute the gradients and the Hessians of the logarithmic barriers ℓ_1(x) and ℓ_2(x, y) with respect to x and y.
Note that
∇_x ℓ_1(x) = −A S_1^{−1} e,  ∇²_{xx} ℓ_1(x) = A S_1^{−2} A^T,
and
∇_x ℓ_2(x, y) = −T S_2^{−1} e,  ∇_y ℓ_2(x, y) = −W S_2^{−1} e,
∇²_{xx} ℓ_2(x, y) = T S_2^{−2} T^T,  ∇²_{yy} ℓ_2(x, y) = W S_2^{−2} W^T,  ∇²_{xy} ℓ_2(x, y) = T S_2^{−2} W^T,
where e denotes the all-ones vector of the appropriate dimension. Note that these Hessian matrices are positive definite under Assumption 1 and the assumptions that s_1 = s_1(x) > 0 and s_2 = s_2(x, y) > 0.
Next, we compute the gradient and the Hessian of the volumetric barriers v 1 (x) and v 2 (x, y) with respect to x.
Throughout this section, we define
P_1 := S_1^{−1} A^T (A S_1^{−2} A^T)^{−1} A S_1^{−1}  and  P_2 := S_2^{−1} W^T (W S_2^{−2} W^T)^{−1} W S_2^{−1}.
Note that P_1 and P_2 act as the orthogonal projections onto the ranges of S_1^{−1} A^T and S_2^{−1} W^T, respectively. Let σ_1 := σ_1(s_1) and σ_2 := σ_2(s_2) denote the vectors equal to the diagonals of the projection matrices P_1 and P_2, respectively; in other words, σ_{1i} = P_{1,ii} and σ_{2j} = P_{2,jj} for i = 1, 2, …, n_1 and j = 1, 2, …, n_2. Following our notation in Section I, let Σ_1 := diag(σ_1) and Σ_2 := diag(σ_2). Derivations in the Appendix of Anstreicher [33] can be reconstructed, and similar details contained therein adopted for our setting, to obtain
∇_x v_1(x) = −A S_1^{−1} σ_1  and  ∇_x v_2(x, y) = −T S_2^{−1} σ_2,
together with the corresponding expressions for the Hessians in terms of A S_1^{−1}, T S_2^{−1} and W S_2^{−1}. We now compute the first- and second-order derivatives of ρ with respect to x.
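To make the volumetric-barrier quantities above concrete, the following sketch (our own illustration, with randomly generated data and variable names of our own choosing) evaluates v(x) = ½ ln det(A S^{−2} A^T) for a single-stage constraint system and checks the gradient expression −A S^{−1}σ against central finite differences:

```python
import numpy as np

# Random full-row-rank constraint data (a toy instance): a_i are the
# columns of A, and x0 is a strictly feasible point (all slacks positive).
rng = np.random.default_rng(1)
m, n = 2, 6
A = rng.standard_normal((m, n))
x0 = np.zeros(m)
b = A.T @ x0 - rng.uniform(0.5, 1.5, size=n)   # guarantees s(x0) > 0

def s(x):
    return A.T @ x - b                          # slack vector

def v(x):
    """Volumetric barrier v(x) = 0.5 * ln det(A S^{-2} A^T)."""
    As = A / s(x)                               # A S^{-1} (columns a_i / s_i)
    return 0.5 * np.linalg.slogdet(As @ As.T)[1]

def grad_v(x):
    """Analytic gradient -A S^{-1} sigma, with sigma the diagonal of the
    projection P = S^{-1} A^T (A S^{-2} A^T)^{-1} A S^{-1}."""
    sx = s(x)
    As = A / sx
    P = As.T @ np.linalg.solve(As @ As.T, As)
    return -(A / sx) @ np.diag(P)

# Central finite differences agree with the analytic gradient.
eps = 1e-6
fd = np.array([(v(x0 + eps * np.eye(m)[i]) - v(x0 - eps * np.eye(m)[i])) / (2 * eps)
               for i in range(m)])
assert np.allclose(fd, grad_v(x0), atol=1e-5)
```

Since P is an orthogonal projection onto an m-dimensional space, the entries of σ also sum to m, a fact used repeatedly in volumetric-barrier analyses.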

IV. FUNDAMENTAL PROPERTIES OF THE VOLUMETRIC BARRIER RECOURSE
In this section, we establish fundamental properties of the recourse function η(µ, x) that lead to nice performance of Newton's method used for the proposed algorithms. More specifically, we prove that the recourse function with volumetric barrier is a strongly self-concordant function leading to a strongly self-concordant family with appropriate parameters. This allows us to develop volumetric barrier decomposition interior point algorithms for solving SLSIPs and establish their convergence and complexity analysis. As we mentioned in the introduction, our proofs in this section are different from those in Section 3 of [27], for which the authors heavily made use of their own results in [28, Section 4] after re-writing the underlying problem as a stochastic semidefinite programming problem. In comparison, although our proofs are not totally self-contained, the self-concordance results for the current setting are completely proven in the context of linear programming, which has the advantage of allowing very explicit and direct proofs. First, we prove that η(µ, ·) is a strongly self-concordant barrier on F • . We have the following definition.

Definition 1 (Nesterov and Nemirovskii [30, Definition 2.1.1]): Let E be a finite-dimensional real vector space, G be an open nonempty convex subset of E, and let g be a C³, convex mapping from G to R. Then g is called α-self-concordant on G with the parameter α > 0 if for every x ∈ G and h ∈ E the following inequality holds:
|D³g(x)[h, h, h]| ≤ 2α^{−1/2} (D²g(x)[h, h])^{3/2}.
An α-self-concordant function g on G is called strongly α-self-concordant if g tends to infinity along any sequence approaching a boundary point of G. We note that in the above definition the set G is assumed to be open; however, relative openness is sufficient for the definition to apply. See also [30, Item A, Page 57].
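As a quick sanity check of Definition 1 (our own illustrative example, not part of the original development), the univariate logarithmic barrier g(x) = −ln x on G = (0, ∞) is strongly 1-self-concordant:

```latex
g(x) = -\ln x,\qquad g''(x) = \frac{1}{x^{2}},\qquad g'''(x) = -\frac{2}{x^{3}},
```

so, for every h ∈ R,

```latex
\bigl|D^{3}g(x)[h,h,h]\bigr| = \frac{2|h|^{3}}{x^{3}}
= 2\left(\frac{h^{2}}{x^{2}}\right)^{3/2}
= 2\,\alpha^{-1/2}\bigl(D^{2}g(x)[h,h]\bigr)^{3/2}
\quad\text{with } \alpha = 1,
```

and g(x) → ∞ as x → 0⁺, so the boundary condition for strong self-concordance also holds.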
Throughout this section, we define the matrix Q(x, y) used in the lemmas below. The proof of self-concordance of η(µ, ·) depends on the following three lemmas. For proofs of Lemmas 3 and 4, similar details contained in the derivations of Equations (2.3) and (2.18) in [26] can be adopted and reconstructed for the current setting. The proof of Lemma 5 is very similar to that of Theorem 2.3 in [26], except that our setting involves stochasticity, which brings in the second-stage variable y besides x.
We can now state and prove the characterization of the self-concordance of v_1(·) and v_2(·, ·) on F_1^• and F_2^•, respectively. Then, we state and prove the following results, which characterize the self-concordance of ρ(µ, ·) and η(µ, ·) on F^•.
Proof: Using (22), we obtain the required estimate, and the claim follows. This completes the proof. Next, we show that the family of functions {η(µ, ·) : µ > 0} is a strongly self-concordant family with appropriate parameters. We have the following definition.
Lemma 7: For any µ > 0 and x ∈ F^•, the bound displayed below holds for all p ∈ R^{m_1}.
Proof: Since P_2 is a projection onto an m_2-dimensional space, we can write P_2 = Σ_{i=1}^{m_2} u_i u_i^T, where u_1, u_2, …, u_{m_2} are orthonormal eigenvectors of P_2 corresponding to the nonzero eigenvalues of P_2. Using Lemma 3, we have {∇²_{xx} v_2(x, y)}^{−1} ⪯ {Q(x, y)}^{−1}. This implies the stated estimate, and the above bound is equivalent to (25). Similarly, we can show (26). By differentiating ∇_x η(µ, x) in (23) with respect to µ and using (25) and (26), we obtain, for all p ∈ R^{m_1}, the bound on |∇_x η′(µ, x)[p]| shown at the bottom of this page. The proof is complete.

V. VOLUMETRIC BARRIER CUTTING PLANE ALGORITHMS AND COMPLEXITY
Based on the self-concordance analysis established in Section IV, we develop a volumetric barrier cutting plane algorithm for SLSIPs, which is formally stated in Algorithm 1.
In Algorithm 1, we use µ = µ_0 as the starting value of the barrier parameter, ε as the desired accuracy of the final solution, and γ as the reduction parameter. We also use β as a threshold for the measure of proximity δ of the current point x to the central path. We start with x_0 as a given (possibly infeasible) first-stage interior point that satisfies the initial set of 2m_1 constraints A^{(2m_1)T} x ≥ b^{(2m_1)}, with an initial constraint matrix A^{(2m_1)} and an initial right-hand-side vector b^{(2m_1)}. We obtain (possibly infeasible) second-stage interior points ȳ^{(1)}, ȳ^{(2)}, …, ȳ^{(K)} by solving problem (10) restricted to the initial set of 2m_2 constraints W^{(k,2m_2)T} y^{(k)} ≥ h^{(k,2m_2)} − T^{(k,2m_2)T} x, with initial constraint matrices W^{(k,2m_2)} and T^{(k,2m_2)} and an initial right-hand-side vector h^{(k,2m_2)}, for each k = 1, 2, …, K.

For convenience, because it is our assumption that F_1 ⊆ [0, 1]^{m_1}, we take the bound constraints {0 ≤ x_i ≤ 1 : i = 1, …, m_1} as our initial set of 2m_1 constraints; these constraints can be written as A^{(2m_1)T} x ≥ b^{(2m_1)}. Likewise, since F_2^{(k)}(x) ⊆ [0, 1]^{m_2}, we take {0 ≤ y_i^{(k)} ≤ 1 : i = 1, …, m_2} as our initial set of 2m_2 constraints; these constraints can be written as W^{(k,2m_2)T} y^{(k)} ≥ h^{(k,2m_2)} − T^{(k,2m_2)T} x, for each k = 1, 2, …, K.
Depending on the manner of selecting the parameter γ in Algorithm 1, we obtain two variants of the algorithm: the short-step algorithm and the long-step algorithm. Below we identify suitable values for the algorithmic parameters γ and β introduced in Algorithm 1.
In (27), the value 1/6, in particular, applies in the long-step algorithm. Note that if the current point x is too far from the central path, in the sense that δ > β, Newton's method is applied to find a point close to the central path; the value of µ is then reduced by the factor γ, and the whole process is repeated until the value of µ is within the tolerance ε. Algorithm 1 thus approximately traces the central path as µ approaches zero, and terminates with a strictly feasible ε-optimal solution of the problem. Note also that when an iterate becomes infeasible, a new cut is introduced and the algorithm attempts to move to a new central point.
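The overall scheme can be sketched in code. The following is a minimal, self-contained illustration of our own construction, not the paper's Algorithm 1: it is a deterministic, single-stage toy instance with no cut generation, using the analytic volumetric-barrier gradient −A S^{−1}σ and the matrix Q = A S^{−1}Σ S^{−1}A^T (scaled by 3) as a Hessian proxy, to show how a path-following loop reduces µ and recenters with damped Newton steps:

```python
import numpy as np

# Toy instance: min c^T x  s.t.  A^T x >= b  (the unit box in R^2).
A = np.array([[1., 0., -1., 0.],
              [0., 1., 0., -1.]])           # columns a_i
b = np.array([0., 0., -1., -1.])
c = np.array([1., 1.])                       # optimal solution: x = (0, 0)

def slacks(x):
    return A.T @ x - b

def eta(x, mu):
    """Barrier objective c^T x / mu + v(x), with the volumetric barrier
    v(x) = 0.5 * ln det(A S^{-2} A^T); +inf outside the feasible region."""
    s = slacks(x)
    if np.any(s <= 0):
        return np.inf
    As = A / s                               # A S^{-1}
    return c @ x / mu + 0.5 * np.linalg.slogdet(As @ As.T)[1]

def newton_data(x, mu):
    s = slacks(x)
    As = A / s
    P = As.T @ np.linalg.solve(As @ As.T, As)
    sigma = np.diag(P)                       # diagonal of the projection
    grad = c / mu - (A / s) @ sigma          # uses grad v = -A S^{-1} sigma
    Q = (A * (sigma / s**2)) @ A.T           # Hessian proxy for v (up to constants)
    return grad, 3.0 * Q

def center(x, mu, inner=40):
    """Damped Newton steps with backtracking to (approximately) recenter."""
    for _ in range(inner):
        grad, H = newton_data(x, mu)
        step = np.linalg.solve(H, grad)
        fx, t = eta(x, mu), 1.0
        while eta(x - t * step, mu) >= fx and t > 1e-12:
            t *= 0.5                         # stay feasible, force descent
        if eta(x - t * step, mu) < fx:
            x = x - t * step
    return x

# Outer loop: shrink the barrier parameter at a linear rate and recenter.
x, mu, gamma = np.array([0.5, 0.5]), 1.0, 0.5
while mu > 1e-8:
    mu *= gamma
    x = center(x, mu)

print(x, c @ x)   # x should end up very close to the optimum (0, 0)
```

In the paper's algorithms, this loop is additionally interleaved with cut generation (adding violated constraints indexed by Γ and Λ) and with the second-stage subproblems for each realization k; the sketch above only illustrates the µ-reduction and recentering mechanics.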
Theorems 5 and 6 present the complexity analysis for the short-step algorithm and the long-step algorithm, respectively.
In the short-step algorithm, the barrier parameter in each iteration is decreased by the factor γ given in (27). The k-th iteration of the short-step algorithm is performed as follows. At the beginning of the iteration, we have µ^{(k−1)} and x^{(k−1)} on hand, and x^{(k−1)} is close to the central path, i.e., δ(µ^{(k−1)}, x^{(k−1)}) ≤ β. After we reduce the barrier parameter from µ^{(k−1)} to µ_k := γµ^{(k−1)}, we have δ(µ_k, x^{(k−1)}) ≤ 2β. Then we take a full Newton step with size θ = 1 to produce a new point x_k with δ(µ_k, x_k) ≤ β. The following theorem presents the complexity result for the short-step algorithm.

TABLE 1. Comparison of complexities between the logarithmic and volumetric barriers for SLSIP with dimensions m_1 and m_2 in the first- and second-stage problems, respectively, and for K realizations, when the maximum numbers of cuts generated are n_{1max} and n_{2max} in the first- and second-stage problems, respectively.
Theorem 5: Assume that the maximum numbers of cuts generated by Algorithm 1 are finite and are denoted by n_{1max} and n_{2max} in the first- and second-stage problems, respectively. If the starting point x_0 is sufficiently close to the central path, i.e., δ(µ_0, x_0) ≤ β, then the short-step algorithm reduces the barrier parameter µ at a linear rate and terminates within at most O(√((1 + K)(m_1ς_1 + m_2ς_2)) ln((1 + K)(n_{1max} + n_{2max})µ_0/ε)) Newton iterations.
Proof: See Sub-appendix VI-A.
In the long-step algorithm, we decrease the barrier parameter by an arbitrary constant factor γ ∈ (0, 1). This allows a potentially larger decrease in the objective function value; however, several damped Newton steps may be needed to restore proximity to the central path. The k-th iteration of the long-step algorithm is performed as follows. At the beginning of the iteration we have a point x^{(k−1)} that is sufficiently close to x(µ^{(k−1)}), where x(µ^{(k−1)}) is the solution to (12) for µ := µ^{(k−1)}. We reduce the barrier parameter from µ^{(k−1)} to µ_k := γµ^{(k−1)}, where γ ∈ (0, 1), and then search for a point x_k that is sufficiently close to x(µ_k). The long-step algorithm generates a finite sequence of N points in F^•, and we take x_k to be the last point of this sequence. The following theorem presents the complexity result for the long-step algorithm.
Theorem 6: Assume that the maximum numbers of cuts generated by Algorithm 1 are finite and are denoted by n_{1max} and n_{2max} in the first- and second-stage problems, respectively. If the starting point x_0 is sufficiently close to the central path, i.e., δ(µ_0, x_0) ≤ β, then the long-step algorithm reduces the barrier parameter µ at a linear rate and terminates within at most O((1 + K)(m_1ς_1 + m_2ς_2) ln((1 + K)(n_{1max} + n_{2max})µ_0/ε)) Newton iterations.
Proof: See Sub-appendix VI-B.
It is clear that the dominant terms in the complexity expressions in Theorems 5 and 6 are given in terms of the problem dimension and the number of realizations and, most notably, are not given in terms of the maximum numbers of cuts to be generated. Table 1 compares the complexities of the logarithmic and volumetric barriers for SLSIP. If the logarithmic barriers ℓ_1 and ℓ_2 are used instead of the volumetric barriers v_1 and v_2 in the first- and second-stage problems, respectively, it can be shown that the complexity expressions in Theorems 5 and 6 become those shown in the middle column of Table 1; these are worse because n_{1max} and n_{2max} then contribute to the leading terms, and bounding them is generally difficult.
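As a back-of-the-envelope illustration (with hypothetical parameter values of our own choosing), the number of outer iterations needed to drive the barrier parameter from µ_0 down to a tolerance ε at the linear rate µ_k = γµ_{k−1} is ⌈ln(µ_0/ε)/ln(1/γ)⌉:

```python
import math

def outer_iterations(mu0: float, eps: float, gamma: float) -> int:
    """Number of reductions mu <- gamma * mu needed so that mu <= eps."""
    assert 0 < gamma < 1 and 0 < eps < mu0
    return math.ceil(math.log(mu0 / eps) / math.log(1.0 / gamma))

# Hypothetical values: mu0 = 1, target accuracy eps = 1e-6.
print(outer_iterations(1.0, 1e-6, 0.9))   # gentle (short-step-like) reduction
print(outer_iterations(1.0, 1e-6, 0.5))   # aggressive (long-step-like) reduction
```

The per-iteration Newton work differs between the two variants, matching Theorems 5 and 6: one full Newton step per reduction in the short-step algorithm versus several damped Newton steps per reduction in the long-step algorithm.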
Note that the complexity results in Theorems 5 and 6 are the counterparts of those in Theorems 6 and 7 in [27] for two-stage stochastic quadratic linear programs with recourse, those in Theorems 3 and 4 in [29] for two-stage stochastic second-order cone programs with recourse, and those in Theorems 4 and 5 in [28] for two-stage stochastic semidefinite programs with recourse. Note also that the ''rich flavor'' hidden inside the volumetric barrier can be tasted in Theorems 5 and 6 more than in their counterpart theorems in [27]–[29]. The reason is that no cuts are generated in the optimization problems studied in [27]–[29], so replacing m_1 and m_2 with n_1 and n_2 makes little difference there even when the volumetric barrier is not used.
Since the maximum numbers of cuts to be generated contribute to the logarithmic terms in the complexity expressions, we also bound the numbers n_{1max} and n_{2max}. Luo et al. [22] proved the existence of an upper bound on the number of cuts generated by the short-step algorithm. Taking such a bound from [22] into account and applying it to our problem setting, we obtain the following result, which bounds the number of cuts generated by the short-step algorithm; it follows directly from the definition of self-concordance and [22, Theorem 4.2].
Theorem 7: Assume that the hypothesis of Lemma 2 holds. Let r_1 be the radius of the largest Euclidean ball contained in F_1 and r_2 be the radius of the largest Euclidean ball contained in F_2^{(k)} for each k = 1, 2, …, K. Assume also that the short-step variant is used in Algorithm 1. Then, for ε > 0, Algorithm 1 terminates with an ε-minimizer of problem (5) and (6) after generating a number of cuts bounded in terms of the problem dimension, r_1, r_2 and ε. We conclude from Theorem 7 that the logarithmic terms in the complexity expressions of Theorems 5 and 6 are given in terms of the problem dimension and the radii of the largest Euclidean balls contained in the feasibility sets. On the other hand, we also conclude from the above discussion that the dominant complexity terms are not affected by these radii.

VI. CONCLUSION
In this paper, we have studied the two-stage stochastic linear semi-infinite programming problem with discrete support. We have developed and analyzed volumetric barrier cutting plane interior point algorithms for solving this class of optimization problems. We have proved the convergence results by showing that the volumetric barrier associated with the recourse function forms a self-concordant family. We have also described and analyzed short- and long-step variants of the algorithm that follow the primal central trajectory of the first-stage problem.
We have seen that, for a stochastic linear semi-infinite program with dimensions m_1 and m_2 in the first- and second-stage problems, respectively, with K realizations, and with maximum numbers of generated cuts n_{1max} and n_{2max} in the first- and second-stage problems, respectively, the short-step algorithm needs at most O(√((1 + K)(m_1ς_1 + m_2ς_2)) ln((1 + K)(n_{1max} + n_{2max})µ_0/ε)) Newton iterations to follow the central path from a starting value µ_0 of the barrier parameter to a terminating value ε/((K + 1)(n_{1max} + n_{2max} + √(n_{1max} + n_{2max}))), and the long-step algorithm needs at most O((1 + K)(m_1ς_1 + m_2ς_2) ln((1 + K)(n_{1max} + n_{2max})µ_0/ε)) Newton iterations for this recentering, where ε is the desired accuracy of the final solution. The dominant terms in these complexity expressions are given in terms of the problem dimension and the number of realizations and, most notably, not in terms of the maximum numbers of cuts to be generated; consequently, they are not affected by the radii of the largest Euclidean balls contained in the feasibility sets.
Our framework is attractive from the analytical and complexity points of view. Nevertheless, several issues need further work before practical implementations can be developed. These include, but are not limited to, developing a practical first-stage step-length selection procedure and adapting the addition of scenarios. A rigorous computational treatment of these and other issues is a topic of future research.

APPENDIX COMPLEXITY PROOFS
In this appendix, we present proofs for the complexity results stated in Section V that bound the number of iterations. The general scheme of our proofs follows the lines of the proofs in [29] and [28]. The proof of Theorem 5 for the short-step algorithm is given in Sub-appendix VI-A, and the proof of Theorem 6 for the long-step algorithm is given in Sub-appendix VI-B. Throughout this appendix, we drop the superscripts (k), (n_1) and (k, n_2) when this does not lead to confusion.
The proofs make use of the following proposition, which is due to [30, Theorem 2.1.1].
By Proposition 3, we have δ(µ + , x) ≤ κ. From Lemmas 8(i) and 9, we deduce that we can reduce the parameter µ by the factor γ given in (27), at each iteration, and that only one Newton step is sufficient to restore proximity to the central path. Hence, Theorem 5 follows.

B. COMPLEXITY PROOF OF THE LONG-STEP ALGORITHM
For x ∈ F^• and µ > 0, we define the function φ(µ, x) := η(µ, x(µ)) − η(µ, x), which measures the gap between the objective value η(µ_k, x^{(k)}) at the end of the k-th iteration and the minimum objective value η(µ_k, x(µ_{k−1})) at the beginning of the k-th iteration. The task is then to find an upper bound on φ(µ, x). To do so, we first give upper bounds on φ(µ, x) and φ′(µ, x), respectively.
The proof of Theorem 6 makes use of the following lemma, whose proof is quite similar to that of [28, Lemma 6].
Observe that the previous lemma requires δ̄ < 1. However, evaluating δ̄ explicitly may not be possible. We now show that δ̄ is in fact proportional to δ, which can be evaluated. The following lemma is due to [28, Lemma 8].