Gradient Methods with Dynamic Inexact Oracles

We show that the primal-dual gradient method, also known as the gradient descent ascent method, for solving convex-concave minimax problems can be viewed as an inexact gradient method applied to the primal problem. The gradient, whose exact computation relies on solving the inner maximization problem, is computed approximately by another gradient method. To model the approximate computational routine implemented by iterative algorithms, we introduce the notion of dynamic inexact oracles, which are discrete-time dynamical systems whose output asymptotically approaches the output of an exact oracle. We present a unified convergence analysis for dynamic inexact oracles realized by general first-order methods and demonstrate its use in creating new accelerated primal-dual algorithms.


I. INTRODUCTION
We consider algorithms for solving the unconstrained minimax problem

min_{x ∈ R^n} max_{y ∈ R^m} L(x, y) := f(x) + y^T Ax − g(y). (1)

We assume that f is smooth and convex (but not necessarily strongly convex), g is smooth and strongly convex, and A ∈ R^{m×n} has full column rank. For convenience, we define p(x) := max_y L(x, y) and write problem (1) as

min_x p(x), (2)

which we refer to as the primal problem. We also define d(y) := min_x L(x, y) and refer to the problem

max_y d(y) (3)

as the dual problem. Under the given assumptions, it follows from standard results (see, e.g., [12, Ch. X]) in convex analysis that both p and −d are strictly convex (in fact, strongly convex). Therefore, the primal-dual optimal solution of problems (2) and (3) is unique, and we denote it by (x⋆, y⋆).

The minimax problem (1) has a number of applications. For example, when f(x) = −b^T x for some b ∈ R^n, the dual problem (3) becomes equivalent to the equality-constrained convex optimization problem

max_y −g(y) subject to A^T y = b. (4)
Other applications include image processing [5] and empirical risk minimization [22]. More broadly, when the function L is a general convex-concave function, the minimax problem formulation also arises in game theory [16] and robust optimization [2].
The author is with the Department of Electrical and Computer Engineering, University of Illinois, Chicago, IL 60607. hanshuo@uic.edu.
One important algorithm for computing the primal-dual optimal solution (x⋆, y⋆) is the primal-dual gradient method (PDGM):

x^{k+1} = x^k − η_1 ∇_1 L(x^k, y^k),
y^{k+1} = y^k + η_2 ∇_2 L(x^k, y^k), (5)

where η_1 and η_2 are step sizes, and ∇_1 L(x^k, y^k) = ∇f(x^k) + A^T y^k and ∇_2 L(x^k, y^k) = Ax^k − ∇g(y^k) are the partial derivatives of L with respect to the first and second arguments, respectively. The PDGM is also known by various other names, such as the Arrow-Hurwicz gradient method [1, p. 155] and the (simultaneous) gradient descent ascent method (see, e.g., [7]). It has also been generalized to the case where L is non-differentiable [17] and the case where the dynamics in (5) are in continuous time [6], [11], [19].

Convergence of the PDGM has been studied extensively in the literature. Under the assumptions we made on f, g, and A, it has been shown [9] that the PDGM converges exponentially to the optimal solution (x⋆, y⋆). Because the update rule (5) of the PDGM performs gradient descent/ascent on the primal/dual variable, a natural question arises as to whether these gradient updates can be substituted by other first-order methods (e.g., Nesterov's accelerated gradient method) to create new primal-dual algorithms. Our paper attempts to address this question by providing a unified convergence analysis that allows the gradient updates to be replaced by a class of first-order methods. The analysis hinges on an alternative view of the PDGM: We show that the PDGM is equivalent to applying an inexact gradient method to the primal problem (2), where the gradient ∇p is computed approximately by a dynamic inexact oracle (see Definition 3). A dynamic inexact oracle is only required to compute the exact gradient asymptotically, and the transient results of such an oracle may be inexact. For the case of the PDGM, the inexact oracle is realized by running one iteration of gradient descent with warm starts (see Section III-A).
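As a concrete illustration, the coupled updates (5) can be sketched in a few lines. The quadratic instance below (f(x) = 0.5‖x‖², g(y) = 0.5‖y‖², random A) is a hypothetical choice, picked only so that the unique saddle point is the origin; it is not an example from the paper.

```python
import numpy as np

# Toy instance of (1): f(x) = 0.5||x||^2, g(y) = 0.5||y||^2 (hypothetical
# choices; g is 1-strongly convex), with a random full-column-rank A.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))

def grad_f(x):
    return x

def grad_g(y):
    return y

def pdgm(x0, y0, eta1, eta2, iters):
    """Primal-dual gradient method: simultaneous descent in x, ascent in y."""
    x, y = x0.copy(), y0.copy()
    for _ in range(iters):
        gx = grad_f(x) + A.T @ y      # grad_1 L(x, y)
        gy = A @ x - grad_g(y)        # grad_2 L(x, y)
        x, y = x - eta1 * gx, y + eta2 * gy
    return x, y

x, y = pdgm(np.ones(3), np.ones(4), 0.05, 0.05, 2000)
# For this instance the unique saddle point is (0, 0).
```

Because the updates are simultaneous, both partial gradients are evaluated at the current pair (x^k, y^k) before either variable is overwritten.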
This abstract view using dynamic inexact oracles leads to a unified convergence analysis that does not rely on the detailed realization of the oracle.
Contribution: While the notion of inexact oracles has long existed in the study of optimization algorithms, including approximating the gradient mapping (see, e.g., [3, Ch. 3.3]) and the proximal operator [20], these inexact oracles are static mappings and hence less general than our proposed notion of dynamic inexact oracles, which are permitted to have internal states and are necessary for modeling the warm starts used in iterative algorithms. The introduction of dynamics also demands a new analysis for understanding the dynamical interaction between the gradient method and the inexact oracle used therein. By modeling this interaction as a feedback interconnection of two dynamical systems, we derive a convergence analysis (Theorems 8 and 10) using the small-gain principle. The convergence analysis also enables us to build new primal-dual algorithms by simply changing the realization of the inexact oracle used in the PDGM to other first-order methods in a "plug-and-play" manner.

II. MATHEMATICAL PRELIMINARIES
For a vector x, we denote by ‖x‖ its ℓ_2-norm and by ‖x‖_P := (x^T P x)^{1/2} its P-quadratic norm, where P is a positive definite matrix (written as P ≻ 0). For a function f(·, ·) with two arguments, we denote by ∇_i f (i = 1, 2) the partial derivative of f with respect to the ith argument. Unless noted otherwise, we reserve the use of superscripts for indexing an infinite sequence {x^k}_{k=0}^∞. For a real-valued function f, we denote by f* its convex conjugate, defined by f*(s) := sup_x {s^T x − f(x)}. We denote by S(µ, β) the set of µ-strongly convex and β-smooth functions. By convention, we use S(0, β) to denote the set of β-smooth convex functions. Recall the following basic properties of functions in S(µ, β).

Proposition 1. Let f ∈ S(µ, β). Then:
1) for all x and y,
(∇f(x) − ∇f(y) − µ(x − y))^T (∇f(x) − ∇f(y) − β(x − y)) ≤ 0,
which is called the sector constraint. Furthermore, if µ > 0, then
2) f* ∈ S(1/β, 1/µ);
3) ∇f is invertible and (∇f)^{−1} = ∇f*, where ∇f* is the gradient of f*.
A proof of item 1 can be found in [18, Thm. 2.1.12]. Proofs of items 2 and 3 can be found in [12, Ch. X].
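Items 2 and 3 can be checked numerically on a quadratic, for which the conjugate is available in closed form; the matrix Q below is a hypothetical instance, not taken from the paper.

```python
import numpy as np

# For f(x) = 0.5 x^T Q x with Q positive definite, the conjugate is
# f*(s) = 0.5 s^T Q^{-1} s, so grad f(x) = Qx and grad f*(s) = Q^{-1}s.
Q = np.diag([1.0, 2.0, 5.0])          # f in S(1, 5): mu = 1, beta = 5
Q_inv = np.linalg.inv(Q)

def grad_f(x):
    return Q @ x

def grad_f_conj(s):
    return Q_inv @ s

# Item 3: grad f* is the inverse map of grad f.
x = np.array([3.0, -1.0, 0.5])
x_roundtrip = grad_f_conj(grad_f(x))   # should equal x

# Item 2: f* in S(1/beta, 1/mu) -- eigenvalues of Q^{-1} lie in [1/5, 1].
eigs = np.linalg.eigvalsh(Q_inv)
```

For this diagonal Q the round trip is exact, and the eigenvalue range of Q^{-1} matches the swapped constants (1/β, 1/µ).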

III. DYNAMIC INEXACT ORACLES
We begin by considering another way to solve the primal problem (2) by directly applying the gradient method. By allowing inexact gradient computation, we reveal that the PDGM can be viewed alternatively as an inexact gradient method applied to the primal problem. An abstraction of the inexact gradient computation leads to the definition of dynamic inexact oracles, the central topic of study in this paper.

A. The PDGM as inexact gradient descent
Consider solving the primal problem (2) using the gradient method:

x_ex^{k+1} = x_ex^k − η_1 ∇p(x_ex^k), (6)

where η_1 is the step size. (The subscript "ex" stands for exact, in comparison to the inexact gradient method to be presented shortly.) Define g̃(y, x) := f(x) − L(x, y) = g(y) − y^T Ax. Using Danskin's theorem (see, e.g., [4, p. 245]), we obtain ∇p(x_ex^k) = ∇f(x_ex^k) + A^T y_ex^k, where y_ex^k = arg min_y g̃(y, x_ex^k) (unique because g̃ is strongly convex). Therefore, the gradient method (6) can be rewritten as

y_ex^k = arg min_y g̃(y, x_ex^k), (7a)
x_ex^{k+1} = x_ex^k − η_1 (∇f(x_ex^k) + A^T y_ex^k). (7b)

Remark 2. The equality-constrained optimization problem (4) can be viewed as the dual problem of an instance of (1); applying the method above to that instance recovers the augmented Lagrangian method (see, e.g., [3, p. 262]).
Under an appropriate choice of the step size η_1, the sequence {x_ex^k} generated by (7) converges to the optimal solution x⋆ of the primal problem (2). However, because the gradient mapping ∇p depends on y_ex, each iteration requires solving the minimization problem in (7a). This is undesirable because the minimization problem does not generally admit a closed-form solution.
We now show that the PDGM can be derived from (7) by allowing the minimization problem in (7a) to be solved approximately. Suppose the approximate solution, denoted by {y^k}, is generated by applying one iteration of the gradient method (with step size η_2) to the problem in (7a). This yields

y^{k+1} = y^k − η_2 ∇_1 g̃(y^k, x^k) = y^k + η_2 (Ax^k − ∇g(y^k)). (8)

Note that the update rule (8) uses a warm start: It uses the approximate solution y^k at iteration k to initialize iteration k + 1. The approximate solution {y^k} is then used in place of {y_ex^k} in (7b), yielding

x^{k+1} = x^k − η_1 (∇f(x^k) + A^T y^k), (9)

where we have replaced x_ex with x to distinguish from the sequence generated by the exact gradient method. It can be seen that the update rules (8) and (9) recover the PDGM; namely, the PDGM can be viewed as an inexact gradient method applied to the primal problem.
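The equivalence between (8)-(9) and the PDGM can be checked numerically: the PDGM and the warm-started one-step oracle produce identical iterates. The quadratic instance below is a hypothetical choice made only for the check.

```python
import numpy as np

# Toy instance: f(x) = 0.5||x||^2, g(y) = 0.5||y||^2 (hypothetical choices).
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 3))
eta1, eta2, iters = 0.05, 0.1, 50

# (a) PDGM: simultaneous gradient descent/ascent on L.
x_pd, y_pd = np.ones(3), np.ones(4)
traj_pd = []
for _ in range(iters):
    gx = x_pd + A.T @ y_pd                  # grad_1 L
    gy = A @ x_pd - y_pd                    # grad_2 L
    x_pd, y_pd = x_pd - eta1 * gx, y_pd + eta2 * gy
    traj_pd.append(x_pd.copy())

# (b) Inexact gradient method (9) with the warm-started oracle (8):
# one gradient step on g~(y, x) = g(y) - y^T A x per outer iteration.
x_in, y_in = np.ones(3), np.ones(4)
traj_in = []
for _ in range(iters):
    inexact_grad_p = x_in + A.T @ y_in      # grad f(x) + A^T y, approximating grad p(x)
    y_in = y_in - eta2 * (y_in - A @ x_in)  # (8): one step on g~, warm-started
    x_in = x_in - eta1 * inexact_grad_p     # (9)
    traj_in.append(x_in.copy())
# The two primal trajectories coincide.
```

The key detail is ordering: the inexact gradient is evaluated with the oracle's current state y^k before the oracle takes its warm-started step, exactly mirroring the simultaneous updates of the PDGM.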

B. Dynamic inexact oracles
It is not difficult to imagine that the gradient method (8) is not the only iterative algorithm for generating an approximate solution to the minimization problem in (7a), which is needed for computing the gradient ∇p. To facilitate discussion, we introduce the notion of dynamic inexact oracles as a high-level description of iterative algorithms used for approximation.

Definition 3 (Dynamic inexact oracles). A (discrete-time) dynamical system G is called a dynamic inexact oracle for computing a mapping φ if for any input sequence u = {u^k}_{k=0}^∞ converging to u⋆, the output Gu converges to φ(u⋆).

If G is a dynamic inexact oracle, even when the input sequence u ≡ u⋆ is constant, the output of G is not required to immediately match the exact oracle output φ(u⋆), hence the term inexact; the only requirement is that G must compute φ(u⋆) asymptotically.
In the remainder of this paper, we focus on dynamic inexact oracles that approximately solve the optimization problem in (7a). Denote by {x^k} and {y^k} the input and output of the oracle, respectively. For any {x^k} converging to x⋆, the output of the inexact oracle must asymptotically converge to the optimal solution y⋆ = arg min_y g̃(y, x⋆). We will show in Section IV-A that the update rule (8) given by the gradient method is one such inexact oracle. Furthermore, by constructing the inexact oracle from different first-order optimization algorithms, we can create new primal-dual first-order methods beyond the PDGM (see Section IV-B).
The notion of dynamic inexact oracles is fundamentally different from the inexact oracles studied in the existing literature, which are static inexact oracles. For a static oracle, the output at any iteration k depends only on the instantaneous input u^k. Incorporating dynamics into inexact oracles is necessary because a static oracle cannot model iterative optimization algorithms with warm starts, in which the solution from the current iteration needs to be memorized to initialize the next iteration, such as in (8). One example of static inexact oracles is approximate gradient mappings used in first-order methods, such as in the ε-(sub)gradient method (see, e.g., [3, Ch. 3.3]). Other examples include approximate proximal operators used in the proximal point algorithm [20, p. 880] and in the Douglas-Rachford splitting method [10, Thm. 8]. A general treatment of static inexact oracles in first-order methods can be found in [8].

IV. CONVERGENCE ANALYSIS
We show that the convergence of a gradient method with a dynamic inexact oracle can be analyzed by viewing the method as a feedback interconnection of two dynamical systems. By applying the small-gain principle, we present a unified convergence analysis that only depends on the input-output behavior of the inexact oracle. We begin with the oracle realized by gradient descent, after which we extend the analysis to oracles realized by general first-order methods.
We shall make the following assumptions on f, g, and A.

Assumption 4. Let f, g, and A in the minimax problem (1) be such that f ∈ S(0, β_f), g ∈ S(µ_g, β_g), and A has full column rank.
Let σ_max and σ_min be the maximum and minimum singular values of A, respectively. Recall that the primal objective function p is given by p(x) = max_y L(x, y) = f(x) + g*(Ax). From Proposition 1, we have p ∈ S(µ_p, β_p), where µ_p = σ_min²/β_g and β_p = σ_max²/µ_g + β_f.
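These constants can be verified on a quadratic instance, where p has an explicit Hessian; the matrices below are hypothetical choices for illustration.

```python
import numpy as np

# Quadratic instance: f(x) = 0.5 x^T F x with F >= 0 (so f in S(0, beta_f)),
# g(y) = 0.5 y^T H y (so g in S(mu_g, beta_g)); both hypothetical.
rng = np.random.default_rng(2)
A = rng.standard_normal((5, 3))                 # full column rank w.p. 1
F = np.diag([0.0, 1.0, 2.0])                    # beta_f = 2
H = np.diag([1.0, 2.0, 2.0, 3.0, 4.0])          # mu_g = 1, beta_g = 4
beta_f, mu_g, beta_g = 2.0, 1.0, 4.0

# For quadratic g, g*(s) = 0.5 s^T H^{-1} s, so p(x) = f(x) + g*(Ax) has
# Hessian F + A^T H^{-1} A.
hess_p = F + A.T @ np.linalg.inv(H) @ A
eigs = np.linalg.eigvalsh(hess_p)

sig = np.linalg.svd(A, compute_uv=False)
mu_p = sig[-1] ** 2 / beta_g                    # lower bound on lambda_min
beta_p = sig[0] ** 2 / mu_g + beta_f            # upper bound on lambda_max
# The bounds mu_p <= eigs[0] and eigs[-1] <= beta_p from the text hold.
```

The bounds follow from Weyl's inequality: adding the positive semidefinite F can only raise the smallest eigenvalue, while H^{-1} has eigenvalues between 1/β_g and 1/µ_g.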

A. The oracle based on gradient descent
We begin by showing that the recursion (8) based on gradient descent, which can be viewed as a dynamical system G_gd with input x and output y, is indeed an inexact oracle for computing the optimal solution of the minimization problem in (7a). The optimality condition of the minimization problem gives ∇g(y) = Ax. Because g ∈ S(µ_g, β_g), using Proposition 1, we obtain

y = ∇g*(Ax) =: φ(x). (10)

In other words, we need to show that G_gd asymptotically computes the mapping φ.
Let us now analyze the convergence of the gradient method (9) with the dynamic inexact oracle G_gd defined by (8). For convenience, we define the error e such that e^k := y^k − ∇g*(Ax^k) and rewrite (8) and (9) as

x^{k+1} = x^k − η_1 (∇p(x^k) + A^T e^k), (11a)
y^{k+1} = y^k + η_2 (Ax^k − ∇g(y^k)). (11b)

Although the recursion (11a) converges when the error e ≡ 0, and the recursion (11b) converges when x ≡ x⋆ (Proposition 5), the joint recursion (11) is not guaranteed to converge. Indeed, the joint recursion (11) can be viewed as a feedback interconnection of the two dynamical systems (11a) and (11b), as illustrated in Fig. 1, and it is well known in control theory that a feedback interconnection of two internally stable systems may be unstable.

A powerful method for analyzing the stability of feedback interconnections of dynamical systems is the small-gain principle. The small-gain principle can take various forms depending on the specific setup; the following is what we will use in this paper. See the Appendix for a detailed proof. The small-gain lemma (Lemma 7) shows that, in order for the feedback interconnection of two (nonnegative) systems to be stable, aside from the stability of the individual systems (γ_11 < 1 and γ_22 < 1), the coupling coefficients γ_12 and γ_21 must be small enough. We now apply the small-gain lemma to establish the convergence of (11).
Although exponential convergence of the PDGM has already been established [9], the technique used in the proof of Theorem 8 is different from what is used in the existing literature. The proof reveals two attractive features of the small-gain principle in the analysis of the inexact gradient method. First, it is capable of incorporating existing convergence results, i.e., internal stability of the gradient dynamics (11a) and (11b) as manifested in Lemma 6. This avoids the need to find a Lyapunov function from scratch; in comparison, typical convergence proofs of first-order algorithms in the literature involve constructing a Lyapunov function, which is often nontrivial except for the simplest algorithms. Second, the small-gain analysis only relies on a coarse description of the input-output behavior such as the one given in (14). Therefore, when the dynamic inexact oracle is realized by another iterative algorithm G_io, the small-gain analysis can be readily applied as long as a relationship between the input x and the error e of G_io similar to (14) can be derived (which, incidentally, often makes use of the fact that G_io is a dynamic inexact oracle and hence internally stable). The "plug-and-play" nature of this approach allows us to easily generalize the analysis to a wide range of dynamic inexact oracles, which we discuss shortly in Section IV-B.

B. Oracles based on general first-order algorithms
As we pointed out in Section III-B, dynamic inexact oracles for solving the minimization problem in (7a) can be constructed from iterative optimization algorithms. Inspired by the work in [21], we consider inexact oracles constructed from algorithms in a state-space form (15), parameterized by matrices A_io, B_io, C_io, and E_io. Here, F is the (convex) objective function to be minimized, ξ = (ξ_1, ξ_2) is the state, v is the feedback output, z is the output of the algorithm, η_2 is the step size, and c_1, c_2, and c_3 are constants. The form (15) captures a number of important first-order optimization algorithms. For example, setting c_1 = c_2 = c_3 = 0 recovers the gradient method, and setting c_1 = c_2 ≠ 0 with c_3 = 0 recovers Nesterov's accelerated gradient method. Interested readers can refer to [21, Table I] for more examples. Similar to G_gd defined by (8), we construct a dynamic inexact oracle G_io, given in (16), by replacing ∇F(v^k) and the output z^k in (15) with ∇_1 g̃(v^k, x^k) = ∇g(v^k) − Ax^k and y^k, respectively; here {x^k} and {y^k} are the input and output of G_io. Similar to Lemma 6, we shall make the following assumption on the algorithm given in (15).
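Since the matrices in (15) are not reproduced above, the sketch below uses a common two-state parametrization in the spirit of [21] as an assumed realization: ξ_1 holds the current iterate, ξ_2 the previous one, and c_1, c_2, c_3 weigh the momentum terms. All constants zero gives gradient descent; c_1 = c_2 = γ, c_3 = 0 gives constant-momentum Nesterov.

```python
import numpy as np

def first_order_method(grad_F, xi0, eta2, c1, c2, c3, iters):
    """Two-state family of first-order methods (assumed realization of (15)):
        v^k      = (1 + c2) xi1^k - c2 xi2^k        # feedback output
        z^k      = (1 + c3) xi1^k - c3 xi2^k        # algorithm output
        xi1^{k+1} = (1 + c1) xi1^k - c1 xi2^k - eta2 * grad_F(v^k)
        xi2^{k+1} = xi1^k
    """
    xi1, xi2 = xi0.copy(), xi0.copy()
    z = xi0.copy()
    for _ in range(iters):
        v = (1 + c2) * xi1 - c2 * xi2
        z = (1 + c3) * xi1 - c3 * xi2
        xi1, xi2 = (1 + c1) * xi1 - c1 * xi2 - eta2 * grad_F(v), xi1
    return z

# Minimize F(y) = 0.5 y^T H y - b^T y (hypothetical instance; minimizer H^{-1} b).
H = np.diag([1.0, 10.0])                 # mu = 1, beta = 10
b = np.array([1.0, 1.0])
grad_F = lambda y: H @ y - b
y_star = np.linalg.solve(H, b)

gamma = (np.sqrt(10) - 1) / (np.sqrt(10) + 1)
z_gd = first_order_method(grad_F, np.zeros(2), 0.1, 0.0, 0.0, 0.0, 300)
z_nes = first_order_method(grad_F, np.zeros(2), 0.1, gamma, gamma, 0.0, 300)
# Both parameter settings converge to y_star.
```

With c_1 = c_2 the update collapses to the familiar form xi1^{k+1} = v^k − η_2 ∇F(v^k) with v^k the momentum extrapolation, which is why this choice recovers Nesterov's method.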
Assumption 9. Let µ and β be constants satisfying 0 < µ ≤ β. Then there exist P ≻ 0, η_2 > 0, and ρ_2 ∈ [0, 1) such that, for every F ∈ S(µ, β), the state of the algorithm (15) converges to the minimizer of F at the exponential rate ρ_2 in the norm ‖·‖_P.

Assumption 9 ensures that G_io is a dynamic inexact oracle that asymptotically computes the mapping φ defined in (10); the proof is similar to that of Proposition 5. Recall that we can recover the gradient method by setting c_1 = c_2 = c_3 = 0 in (15). In this case, the second component ξ_2 of ξ becomes irrelevant and can be dropped, so that we obtain (with an abuse of notation) A_io = I, B_io = −η_2 I, and C_io = I. Therefore, by Lemma 6, the gradient method satisfies Assumption 9 with P = I. For other first-order algorithms, while we are unable to provide conditions under which Assumption 9 holds, numerical methods [15, Figs. 3 and 5] have been used to show the existence of P, η_2, and ρ_2 for both Nesterov's accelerated gradient method and the heavy-ball method, at least when β/µ is small.
Convergence of the inexact gradient method (9) using a dynamic inexact oracle G_io can be established using a small-gain analysis similar to the proof of Theorem 8. Details of the proof can be found in the Appendix.
Theorem 10. Consider the gradient method given by (9), where {y^k} is given by a dynamic inexact oracle G_io of the form (16). Suppose f, g, and A satisfy Assumption 4, and A_io, B_io, and C_io satisfy Assumption 9. Then there exists η_1 > 0 such that {x^k} and {y^k} converge exponentially to the primal and dual optimal solutions x⋆ and y⋆, respectively.
As an application of Theorem 10, we give a convergence result for the case where G_io is realized by Nesterov's accelerated gradient method.

Corollary 11. Let γ = (√β_g − √µ_g)/(√β_g + √µ_g) and η_2 = 1/β_g. Consider the gradient method given by (9), where {y^k} is given by a dynamic inexact oracle realized by Nesterov's accelerated gradient method:

v^k = y^k + γ(y^k − y^{k−1}),
y^{k+1} = v^k − η_2 (∇g(v^k) − Ax^k). (17)

Suppose f, g, and A satisfy Assumption 4. Then, when β_g/µ_g is small enough, there exists η_1 such that {x^k} and {y^k} converge exponentially to the primal and dual optimal solutions x⋆ and y⋆, respectively.
Proof: The recursion (17) can be derived from (16) by setting c_1 = c_2 = γ and c_3 = 0, followed by eliminating ξ. Under the given choice of γ and η_2, it has been shown in [15, Fig. 3] that Assumption 9 holds when β_g/µ_g is small enough. The corollary then follows from Theorem 10.
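A minimal sketch of the resulting "PD-Nesterov" iteration follows, on a hypothetical quadratic instance (f(x) = 0.5‖x‖², g(y) = 0.5 y^T H y, A = I) whose saddle point is the origin; the step size η_1 is a hand-picked assumption, not a tuned value from the paper.

```python
import numpy as np

# Hypothetical instance: f(x) = 0.5||x||^2, g(y) = 0.5 y^T H y, A = I,
# so the unique saddle point of (1) is (0, 0).
H = np.diag([1.0, 2.0])
mu_g, beta_g = 1.0, 2.0
gamma = (np.sqrt(beta_g) - np.sqrt(mu_g)) / (np.sqrt(beta_g) + np.sqrt(mu_g))
eta1, eta2 = 0.05, 1.0 / beta_g          # eta1 is a hand-picked assumption

x = np.ones(2)
y = np.ones(2)
y_prev = y.copy()
for _ in range(2000):
    inexact_grad_p = x + y               # grad f(x) + A^T y with A = I
    v = y + gamma * (y - y_prev)         # momentum extrapolation (warm-started)
    y_prev = y
    y = v - eta2 * (H @ v - x)           # Nesterov step: v - eta2 (grad g(v) - A x)
    x = x - eta1 * inexact_grad_p        # primal gradient step (9)
# (x, y) converges exponentially to the saddle point (0, 0).
```

As in the PDGM case, the inexact gradient is evaluated at the oracle's current output y^k before the oracle advances, so the primal and oracle updates use the same iteration index.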
For a numerical comparison between the method in Corollary 11 and the PDGM, we considered a simple case where f is linear and g is convex quadratic. For both methods, we chose η_2 = 1/β_g and numerically searched for the η_1 that achieved the best exponential convergence rate. Fig. 2 shows the convergence rate for different condition numbers β_g/µ_g. It can be seen that the method in Corollary 11 (referred to as "PD-Nesterov") not only ensures convergence but also achieves a faster convergence rate than the PDGM.

V. CONCLUSIONS
We have studied the convergence of inexact gradient methods in which the gradient is provided by what we refer to as a dynamic inexact oracle. When the gradient corresponds to the solution of a parametric optimization problem, dynamic inexact oracles can be realized by iterative optimization algorithms. In minimax problems, when the oracle is realized by one step of gradient descent with warm starts, the corresponding inexact gradient method recovers the PDGM. We have shown that the interaction between the gradient method and the inexact oracle can be viewed as a feedback interconnection of two dynamical systems. Using the small-gain principle, we have derived a unified convergence analysis that only depends on a high-level description of the input-output behavior of the oracle. The convergence analysis is applicable to a range of dynamic inexact oracles that are realized by first-order methods. Furthermore, we have shown how this analysis can be used as a guideline in choosing realizations of the inexact oracle for creating new algorithms.
Proof of Lemma 7: Consider a single-input single-output linear system whose input u and output y are described by y^{k+1} = a y^k + b u^k, where a ∈ [0, 1) and b ≥ 0. It can be shown that the ℓ_2-gain of the system is given by b/(1 − a). The result then follows from the (usual) small-gain theorem for feedback interconnections (see, e.g., [13, Thm. 5.6]) and the (discrete-time) comparison lemma (see, e.g., [14, Thm. 1.9.1]).
The second term on the right-hand side of (18) can be further bounded by making use of (11a). Letting x̃^k := x^k − x⋆, we have

‖x^{k+1} − x^k‖ = η_1 ‖∇p(x^k) + A^T e^k‖ = η_1 ‖∇p(x^k) + A^T E_io ξ̃^k‖ ≤ η_1 (β_p ‖x̃^k‖ + c_ξ ‖ξ̃^k‖_P)

for some c_ξ > 0, where we have used the equivalence of norms again. Substituting this into (18), we have

‖ξ̃^{k+1}‖_P ≤ η_1 c_φ β_p ‖x̃^k‖ + (ρ_2 + η_1 c_ξ) ‖ξ̃^k‖_P.